Prediction Centric Warfare
Notes and slides from my talk at DSEI yesterday.
Context
Yesterday, at DSEI, I spoke on the Cyber and Specialist Operations Command Stage, alongside my former Fujitsu colleague Alexandra Bailey, newly appointed Head of the Fujitsu UK Defence and National Security Business Group (congrats Alexandra!), and the MOD’s Chief Data and Artificial Intelligence Officer (CDAIO), Caroline Bellamy.
My comments on ‘prediction centric warfare’ opened the panel. Below are my remarks and slides, edited lightly for publication. They are my remarks alone, and I do not mean to imply any endorsement from Caroline or Alexandra.
Prediction Centric Warfare
Information Technology transformation is decision transformation. We adopt information technologies in order to collect, process, protect, store and move information to enable better decisions to be made.
Digital Transformation is the means by which such information is moved around.
Digital Transformation, then, is decision transformation. It should be assessed on outcomes – clear metrics as to whether it has, in fact, enabled better decisions – those that are more often right, or less wrong, and faster, than before any given technology is deployed.
At Fujitsu, where I was Managing Director of the Centre for Cognitive Technologies, we focused on revolutionising decision-making because decisions are where cognition manifests.
And decisions, as I argued on the Cassi blog last week, are predictions.[1]
Indeed everything is.
A decision is a conditional forecast: if we do X, we expect Y.
An action is a bet on a payoff. I do X, expecting Y.
Tactics, operational design and strategy are predictions at different scales and time horizons. If we do these things, we expect this to be the outcome.[2]
When prediction quality improves, outcomes improve. When it degrades, we pay in blood, treasure and time.
We live in the exponential age – where technological change is continuously accelerating – creating profound disruption.
The UK’s Strategic Defence Review tells us we live in an age of uncertainty.
Uncertainty and change can only be bounded and managed with prediction.
The science of prediction – human and algorithmic – has never been more important.
Here we will discuss seven things:
1. How Prediction wins Wars.
2. The necessity of prediction to victory, despite its difficulty.
3. How Operating Concepts are themselves predictions, and why this matters.
4. How ‘prediction centric warfare’ should be the UK’s New Operating Concept and Theory of Winning, enabled by the Digital Targeting Web.
5. A brief overview of the science of prediction – human and algorithmic.
6. The Strategic Defence Review’s predictions for the Digital Targeting Web.
7. How Programmatic Prediction would significantly increase the chances of the Digital Targeting Web achieving the SDR’s aims.
How Prediction wins Wars.
If Clausewitz is right and war is politics by other means, then violence is a tool of persuasion and coercion. Its only purpose, Schelling tells us, outside sport or revenge, is to influence behaviour: to coerce choices.
Everything we do in conflict therefore aims to shape an adversary’s decisions.
Since decisions are predictions, the task is to predict what the adversary will decide in response to our actions.[3]
We aim to influence an adversary’s forecast, so that the path we prefer dominates their expected value calculation.
Once you accept that, the centre of gravity is cognitive. Fires, manoeuvre, sanctions and narratives are instruments; the target is the decision process of people.
Since decisions are predictions, the task is to predict what they will predict in response to what we do—and to set the conditions so that our preferred option is the least‑bad option they can see.
The nuclear extremity makes the point.
The US dropped nuclear bombs on Hiroshima and Nagasaki predicting they would make peace more likely, and so a devastating and costly ground invasion less likely.
The decisive effect was not the blast pattern but the expectation it created in Japan’s senior leadership—the Imperial Conference, and ultimately the Emperor.
The prediction they were forced to make…
more bombs could follow;
American resolve was unshakeable;
the Soviet entry had closed the last exits;
…collapsed the case for “fight on.”
Our preferred option became their least bad outcome.
The logic generalises. Whether we are driving soldiers off a hill, coaxing insurgents to a ceasefire or coercing governments towards an unconditional surrender, we are in the business of changing behaviour by changing expectations. It’s just as true in grey zone conflict and deterrence.
The necessity of prediction to victory, despite its difficulty
We fight over forecasts more than over map squares.
That is uncomfortable, because modelling human decision‑making under fire is damnably hard.
But discomfort does not free us from the obligation.
We cannot not predict.
We either model and influence those decisions – or we leave outcomes to inadequate heuristics.
How Operating Concepts are themselves predictions, and why this matters.
The exponential pace of technological change is such that we can no longer afford merely to follow technological trends.
Indeed, the exponential curve of Moore’s Law means this has been true since the 1970s at least, and, as I have argued elsewhere, perhaps since the First World War.
Prior to this, for 1,700 years of human history – perhaps for its entirety – weapons came first, and then we figured out the tactics. We predicted that a tool or an invention might have a military use, and then worked out the most effective tactics by predicting, getting it wrong, and predicting again, until we got there.
Certainly by the 1970s this was no longer a viable way to proceed. The risk was that the technology you were iterating towards tactical usefulness would be obsolete and outdated before you figured out how to use it. The opportunity was that now, technological and scientific progress were rapid enough that you could predict what technologies you needed to win and set about building them.
In the Cold War the United States applied this logic with DARPA’s ASSAULT BREAKER, which underpinned AirLand Battle doctrine: define the theory of victory first—how to break the Soviet second echelons—and then assemble the sensors, networks and shooters to make it real. The strategy came first; the technology followed.
That is the spirit of the Strategic Defence Review’s Digital Targeting Web. Call it the ‘kill web’ if you prefer the extant literature; the point is not IT modernisation but a single, integrated force design built backwards from how we win.
In the literature, a kill web is a theory of victory – the central component of DARPA’s Mosaic Warfare concept, presented here in 2018. As its Director put it: “The idea will be to send so many weapon and sensor platforms at the enemy that its forces are overwhelmed. The goal is to take complexity and to turn that into an asymmetric advantage.” Operating Concepts are predictions too – of what tactics and operational approach will win, and of what technologies we will need.
How ‘prediction centric warfare’ should be the UK’s New Operating Concept and Theory of Winning, enabled by the Digital Targeting Web.
I argue the Digital Targeting Web must be the central component in our theory of victory.
It must be more than just a digital network connecting every sensor to every shooter.
More than a platform to coordinate across domains.
More than a modern extension of Boyd’s OODA loop and Brose’s Kill Chain.
The aim is not just faster kills or more options. These are the means, not the ends.
Its aim is to predict faster and more accurately which combination of weapons and tactics will make our preferred option our adversary’s least bad action or outcome.
The Digital Targeting Web must enable Prediction Centric Warfare.
A brief overview of the science of prediction – human and algorithmic.
If prediction sits at the heart of victory, we must use the best methods we have.
Most humans are unreliable forecasters. Philip Tetlock’s work showed that expert judgement in geopolitics and economics often performs no better than chance – the median forecast of a crowd is more reliable. Some, whom he calls superforecasters, consistently do better through disciplined probabilistic methods.
Today, AI systems can match and in places exceed that standard. At Cassi we’ve shown materially higher accuracy than human baselines on meaningful questions from ‘Will Russia control Pokrovsk in X months’ time?’ to ‘Will the UK have to conduct a non-combatant evacuation operation in Somalia before end FY, and if they do, will it succeed?’, and—more importantly—shown that AI can identify the variables that move outcomes.
The practical lesson is straightforward. Define the outcome in measurable – we would say resolvable (how would you know you’d succeeded?) – terms; set resolution criteria; then work back to the decisions and data you need.
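As a minimal sketch of what that might look like in practice – the schema, question wording, target figure and dates below are illustrative assumptions, not Cassi’s or the MOD’s actual criteria – a resolvable question can be written down so that success or failure is later scorable without argument:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ResolvableQuestion:
    """A forecastable outcome with explicit, checkable resolution criteria."""
    question: str             # what we are predicting, in plain language
    resolution_criteria: str  # how we will know, unambiguously, whether it happened
    resolution_date: date     # when the question resolves, one way or the other
    data_needed: list[str]    # the decisions and data we must work back to

# Illustrative example only - the target figure and data sources are placeholders.
q = ResolvableQuestion(
    question="Will the Digital Targeting Web halve median sensor-to-decision time?",
    resolution_criteria=(
        "Median latency measured across joint exercises in the resolution quarter, "
        "compared against the pre-deployment baseline."
    ),
    resolution_date=date(2027, 3, 31),
    data_needed=["baseline latency logs", "exercise telemetry", "tasking records"],
)
```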
Build the web to do two things: predict what the adversary will do next, and predict what we must do to shift that next move in our favour—at tactical, operational and strategic levels.
Factorise and Forecast should be the new discipline – both human and algorithmic.
Base rates beat bravado. Score your people, and compare their performance to AI. No estimate or planned action from the Kill Web goes to the commander without a confidence-defined forecast[4]; no decision is delegated to an algorithm until it demonstrably outperforms your people.
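To make ‘score your people, and compare their performance to AI’ concrete, here is a minimal sketch using the Brier score – the standard accuracy measure in the forecasting literature. Every probability and outcome below is invented for illustration:

```python
# Brier score: mean squared error between probabilistic forecasts and outcomes.
# 0 is perfect; 0.25 is what you get by always saying 50%; 1 is perfectly wrong.
from statistics import median

def brier(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Resolved outcomes for five questions (1 = it happened, 0 = it did not).
outcomes = [1, 0, 1, 1, 0]

# Invented probabilities from three human analysts and one algorithmic forecaster.
analysts = {
    "analyst_a": [0.70, 0.40, 0.60, 0.55, 0.30],
    "analyst_b": [0.90, 0.60, 0.50, 0.40, 0.20],
    "analyst_c": [0.60, 0.30, 0.80, 0.70, 0.50],
}
ai_model = [0.80, 0.20, 0.75, 0.65, 0.15]

# The median of the crowd, question by question - often beats any single expert.
crowd = [median(vals) for vals in zip(*analysts.values())]

for name, fs in {**analysts, "crowd_median": crowd, "ai_model": ai_model}.items():
    print(f"{name:14s} Brier = {brier(fs, outcomes):.3f}")
```

On this logic, a decision is delegated to an algorithm only once its score is consistently better (lower) than your people’s on held-out, resolved questions.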
Ask yourself: when did you last see a plan fail because people were too explicit about their assumptions and updated them too quickly?
Quite.
Plans fail because we predict poorly, hide our confidence levels, and refuse to revise in time. We must predict, publish, and adapt.
The Strategic Defence Review’s predictions for the Digital Targeting Web
The Strategic Defence Review tells us we are living in an age of uncertainty. Surely, then, it is now more important than ever that we apply the science of prediction to bound and manage that uncertainty as best we are able?
On the Kill Web, the SDR says that “the targeting web epitomises how the Integrated Force must fight and adapt. Its very existence contributes to deterrence.”
We are betting that the DTW:
1. Increases our probability of winning future wars.
2. Deters by its very existence, influencing the adversary’s forecast of the future.
Here we see how prediction centric warfare begins at the Strategic level, and how central it could be to our success at the Programmatic level.
We should test both these predictions explicitly and probabilistically, not assume them. We should factorise them, so that we can assign resources to those factors that will most increase our chances of success.
We should factorise and forecast at the Programme level too. Programmatic Predictions, if you will – ensuring we are always optimising, prioritising those actions that see us build the Kill Web as effectively and efficiently as possible. Probabilistically optimised.
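A minimal sketch of what programmatic factorise-and-forecast could look like – the factor names, probabilities and independence assumption below are invented placeholders, not real SDR estimates – is to decompose the headline bet into required conditions, compute the joint probability, and ask which factor most moves the answer:

```python
# Factorise the headline bet into required factors, then forecast each one.
# All probabilities are illustrative placeholders, not real estimates.
factors = {
    "data_standards_adopted_across_domains": 0.70,
    "sensor_to_shooter_integration_delivered_on_time": 0.55,
    "commanders_trained_to_act_on_machine_forecasts": 0.60,
    "network_survives_a_contested_electromagnetic_environment": 0.65,
}

def joint(probs: dict[str, float]) -> float:
    # Simplifying assumption: factors treated as independent and all required.
    p = 1.0
    for value in probs.values():
        p *= value
    return p

baseline = joint(factors)
print(f"P(Digital Targeting Web achieves its aims) ~ {baseline:.2f}")

# Sensitivity: which factor, if raised to 0.9, most improves the headline forecast?
for name, prob in factors.items():
    uplift = joint({**factors, name: 0.9}) - baseline
    print(f"Raise {name} to 0.9 -> +{uplift:.2f}")
```

Resources then flow first to the factors whose improvement most moves the headline forecast – probabilistic optimisation at the programme level.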
To close I offer a reminder, a suggestion and a participatory starting point:
A reminder – everything is prediction: decisions, operating concepts, tactics, operations, strategy.
A suggestion – the Digital Targeting Web is a bet that better, faster, smarter decisions will both deter and, if needed, win. It should be the heart of a new UK Operating Concept, a Theory of Winning: Prediction Centric Warfare.
A starting point – judge it with outcomes that are resolvable, defining how you would know you had succeeded. Make a start here yourself, in the comments, on the proposals I offer: what probability would you assign to each of the following questions? What factors would most move your prediction one way or the other?
Programmatic Prediction would significantly increase the chances of the Digital Targeting Web achieving the SDR’s aims.
To optimally assign resources, we must factorise and forecast. Prediction-Centric Warfare is the key to our future success.
[1] Oxford English Dictionary. predict …2 verb trans. announce as an event that will happen in the future; say that (a thing) will happen; foretell. b Of a theory, observation etc.: have as a deducible consequence; imply. [underline emphasis added – here, to predict is to conclude via Bayesian prediction, as a consequence of priors, or from inductive premises to inductive conclusion].
[2] Importantly, because it justifies the wider claim – perception and agency are prediction too. See last week’s post for a fuller justification.
[3] In this sense, morale – maintaining ours or undermining theirs – is the ability to maintain one’s own aims in the face of adversary pressure.
[4] In theory this is our actual doctrine, but it is honoured more in the breach than in the observance.


