Everything is Prediction
From Ramsey to Yudkowsky, the case for making our bets explicit.
Everything we do is a wager on what comes next. Reading this is a bet that it will repay your time. Reaching for a coffee mug assumes it contains coffee. Strategy, policy, all decisions, even routine admin, each is an “if…then” about the future. The only real choice is whether to keep those bets tacit and unexamined, or to state them, price them, and improve them.
That is why “we don’t do prediction” is the most misleading sentence in public life. Historians sometimes say it, central bankers occasionally imply it, ministers often hide behind it.
Even explanation – at face value a solely retrospective exercise – is a forecast in disguise. To say "A influenced B" is to present a causal model that, applied tomorrow, shifts our expectations about C.
Historian Lawrence Freedman is rightly sceptical about crystal-ball claims. He writes thoughtfully on prediction and annually reviews his own calls, yet still somehow concludes that "…it is always going to be difficult forecasting future developments. …it is probably best not to do so or, at least, not without qualifications and caveats." But his own (excellent) explanations move readers' inductive probabilities, and are valuable precisely because they do. An account of the past that cannot change how we bet on the future is merely decoration; it serves no purpose.
When President Harry S. Truman said "There is nothing new in the world except the history you do not know", or when General James Mattis, himself a prediction sceptic (see his opposition to 'Effects-Based Operations'), tells us that history and reading "…doesn't give me all the answers, but it lights what is often a dark path ahead," I suggest what their words mean in practice is that reading and history give them new mental models and shift the inductive probabilities implicit in their reasoning about what comes next. The eminent Kori Schake, another prediction sceptic, surely does not think her work has no relevance to the future? That when we read her work it should have no effect at all on our models of the world? If Schake, Mattis or Freedman concede their work is useful, they are with Truman, and admit the necessity of prediction and their own roles in informing it. We cannot not predict. Prediction is unavoidable, essential. Historians are no exception: they propose causal models – the best explanation – of the past, and their analyses reshape our models in the present as we seek to shape the future.
Former Bank of England Governor Mervyn King devotes a whole book (Radical Uncertainty, written with John Kay) to arguing that we should resist prediction, that "…to ascribe probabilities is misconceived…",[1] and that narratives and scenarios are better. His account is half-right but wholly incomplete. Scenarios are essential, but again, you cannot not predict. Not all scenarios are equally likely. Choosing one narrative over another is itself an implicit probability judgement. Refusing to put numbers on beliefs does not make those beliefs less numerical; it merely makes them unaccountable. The responsible move is not to abandon probabilities, but to calibrate them, and to say what would change our minds. When King says that no one could have predicted a Covid-class virus would emerge in Wuhan in 2019, his counsel of despair over-reaches. After SARS (2003), MERS (2012) and repeated H5N1 scares, we could not name the city, but we could price the odds of a novel respiratory pandemic in the coming years, and the most likely places it might emerge from. Those probabilities would change with new information; they may be low everywhere, but they would not be everywhere equal, and they are not useless. King says "The better strategy is to be prepared." That is the point of probabilities: not clairvoyance, but disciplined readiness.
What should that discipline look like? There are four key elements:
(1) Find the base rate: the prior odds for events in a class (e.g. novel pathogen emergence, cross-border spread, health-system overload).
(2) Pair scenarios with forecasts as inductive premises, not a single point estimate. Accept that decisions are if/then predictions and you see that each 'if' is a factor in your decision model. The ifs need probabilities, from which the probability of the 'thens' can be mathematically derived. In putting these forward, you are crystallising your thoughts, moving beyond fuzzy thinking and the tendency all humans have to hold incompatible beliefs – to believe, like Lewis Carroll's White Queen, six impossible things before breakfast. (At Cassi, we describe this as "decision legibility" – you'd think it was everywhere. It is rarer than you imagine!)
(3) Pre-commit to triggers for action, e.g. surge testing when the forecast of transmission exceeds a probability threshold in two regions; release stockpiles when the forecast of hospital utilisation crosses x%; tighten or loosen measures as defined indicators move.
(4) Transparency. Show how each tranche of evidence shifts the odds over time. Narratives are a communication tool, often designed to obscure more than they reveal; probabilities confront us with reality, build trust, and tell us how, and when, to act.
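To make the discipline concrete, here is a minimal sketch in Python of steps (1)–(4). Every number, likelihood ratio, threshold and region name is a hypothetical placeholder invented for illustration, not a real epidemiological estimate.

```python
# Illustrative sketch of the four-step discipline. All figures are
# hypothetical placeholders, not real epidemiological estimates.

def update_with_evidence(p: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = p / (1 - p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# (1) Base rate: prior probability of a novel respiratory pandemic this year.
p_emergence = 0.02

# (4) Transparency: show how each tranche of evidence shifts the odds.
evidence = [
    ("cluster of unexplained pneumonia cases reported", 4.0),          # LR > 1 raises the odds
    ("sequencing rules out known pathogens", 3.0),
    ("negative follow-up surveillance in neighbouring regions", 0.5),  # LR < 1 lowers them
]
for description, lr in evidence:
    p_emergence = update_with_evidence(p_emergence, lr)
    print(f"{description}: P(emergence) now {p_emergence:.1%}")

# (2) Scenarios as chained if/then premises: the probability of the 'then'
# (health-system overload) is derived from the probabilities of the 'ifs'.
p_spread_given_emergence = 0.6
p_overload_given_spread = 0.3
p_overload = p_emergence * p_spread_given_emergence * p_overload_given_spread
print(f"P(health-system overload) = {p_overload:.1%}")

# (3) Pre-committed trigger: act when forecast transmission exceeds a
# threshold in at least two regions (regions and forecasts are invented).
forecast_transmission = {"region_a": 0.35, "region_b": 0.28, "region_c": 0.12}
THRESHOLD = 0.25
if sum(p > THRESHOLD for p in forecast_transmission.values()) >= 2:
    print("Trigger met: start surge testing.")
```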
This is the hinge where philosophy helps. Frank Ramsey, Cambridge prodigy, solved the practical problem a century ago: degrees of belief are what you are prepared to act on. If your behaviour would not change when the odds change, you didn't believe the odds – or your model – in the first place. Ludwig Wittgenstein stripped the mystique further: probability is not a mood, it is a relation between propositions and the world, a logic of uncertainty. John Maynard Keynes added a crucial caution: evidence carries different weights; some uncertainties are harder to forecast because, even when the evidence all points one way, there may be little of it, or doubt about its reliability. Here confidence – and thus your forecast – should reflect that thinness of evidence (closer to 50% than to zero or 100%), even when you are directionally convinced. Together they point to a discipline: state your beliefs as subjective probabilities and update as new evidence arrives.
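One crude way to honour Keynes's caution in practice is to shrink a directional estimate toward 50% in proportion to how thin or unreliable the underlying evidence is. The linear shrinkage below is our own illustrative convention, not a formula from Keynes, and the numbers are invented.

```python
def weighted_forecast(directional_estimate: float, evidence_weight: float) -> float:
    """Shrink a forecast toward 0.5 when the weight of evidence is low.

    evidence_weight is a judgement in [0, 1]: 0 = almost no reliable evidence,
    1 = abundant, reliable evidence. Linear shrinkage is an illustrative
    convention, not a standard formula.
    """
    return 0.5 + evidence_weight * (directional_estimate - 0.5)

# Directionally convinced (estimate 0.9), but on thin evidence (weight 0.3):
print(weighted_forecast(0.9, 0.3))   # 0.62 - still above 50%, but hedged
# The same conviction backed by substantial evidence (weight 0.9):
print(weighted_forecast(0.9, 0.9))   # 0.86
```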
Friedrich Hayek supplies the institutional corollary. Knowledge is dispersed; no planner sees the full picture. Markets work because prices compress scattered expectations into a single, continuously updated signal. In that sense, markets are prediction engines: not oracles, but social machines for aggregating beliefs about the future. Eliezer Yudkowsky pushes the point in Inadequate Equilibria: where incentives are weak or misaligned, feedback is slow, or entry is blocked, equilibria can remain wrong for a long time. This is the pathology at the heart of bureaucratic failure in government and business alike. The remedy is to improve the prediction machinery: introduce real feedback (markets, skin in the game), confront issues, invite in contrarians, and seek out what you need to know, not what you want to hear. In policy terms: use auctions not allocations, prediction markets not pundit panels, prize challenges not closed committees. This is what the UK Government tried to do with Cosmic Bazaar, and the US with IARPA's ACE and the Intelligence Community Prediction Market (ICPM): forecasting tournaments that demonstrated promise, and then stalled. As Professor Philip Tetlock – doyen of crowd-based 'Superforecasting' – put it, we have kept "fumbling the crystal ball": forecasts sit outside power, with limited ownership, little skin in the game, no budget consequences, and under attack from bureaucratic antibodies. This must change. If, as we argue here, everything is prediction, we must start optimising our organisations to be better at it.
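As a toy illustration of the aggregation a market or tournament performs, the sketch below pools individual probability estimates by averaging them in log-odds space. The forecasts are invented, and real markets aggregate through prices rather than an explicit formula; this is simply one common pooling rule.

```python
import math

def pool_forecasts(probabilities: list[float]) -> float:
    """Aggregate probability forecasts by averaging in log-odds space.

    Equivalent to a geometric mean of odds; used here purely to illustrate
    belief aggregation. A real market aggregates via prices.
    """
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

# Hypothetical dispersed estimates of the same event from different forecasters,
# compressed into a single aggregated signal.
print(pool_forecasts([0.2, 0.35, 0.5, 0.6]))
```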
If prediction frames decisions, it also underwrites agency. On Karl Friston’s free-energy view, organisms survive by minimising “surprise”: they either revise their internal model to fit the data (perception) or act so the data fit the model (action). In ordinary terms: when expectations and evidence clash, you can change your mind or change your circumstances. Perception is inference; action is hypothesis-testing. The Duke of Wellington put it more tersely for the real world: “All the business of war, and indeed all the business of life, is to endeavour to find out what you don't know by what you do; that's what I called ‘guessing what was at the other side of the hill.’” Active inference before the label.
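To make the perception/action distinction concrete, here is a deliberately crude toy, not Friston's formalism: an agent whose model predicts a room at 21°C observes 17°C and can close the gap either by revising its belief or by acting on the world. The variables and thresholds are invented for illustration.

```python
# A crude toy of the perception/action trade-off, not Friston's free-energy
# formalism. All quantities are invented for illustration.

predicted_temp = 21.0   # the agent's model of the room
observed_temp = 17.0    # what its sensors report
surprise = abs(observed_temp - predicted_temp)
print(f"Prediction error: {surprise}°C")

confidence_in_model = 0.8   # how strongly the agent trusts its prediction (0-1)

if confidence_in_model < 0.5:
    # Perception: revise the internal model to fit the data.
    predicted_temp = observed_temp
    print(f"Updated belief: the room really is {predicted_temp}°C.")
else:
    # Action: change the world so the data fit the model.
    print(f"Turn on the heater until the room reaches {predicted_temp}°C.")
```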
Similarly, evolutionary psychologist Michael Tomasello argues that human and animal agency is layered if/then prediction: a hierarchy in which each tier makes more complex predictions and exerts itself to suppress the simpler decision calculus of the tiers below.
At the bottom of the agency hierarchy are stimulus-driven simple organisms such as the ~1 mm long worm C. elegans. They perform chemotaxis: they track nutrient gradients, effectively eating their way up a calorie signal, predicting only that it will continue with the next 'bite', and stopping when sated or when the signal ends. The prediction is: if nutrients, then eat in that direction. No other calculus.
Next, goal-directed agents (early vertebrates such as lizards) pursue an immediate objective, but are able to inhibit action when predicted predation or other risk outweighs reward. If prediction of predation X is greater than predicted reward Y, inhibit. If not, pursue reward.
Intentional agents (mammals such as squirrels, and corvids) represent alternatives – that feeder, this route, those acorns and that jump – intuitively predicting complex if/then alternatives from multiple paths and multiple options.
Rational agents (great apes) plan with counterfactuals: they stack boxes to reach greater heights, make or pick tools, and trade immediate impulses for a better expected payoff, imagining and testing abstract alternative futures.
Finally, socially normative agents (humans) align predictions through joint attention, perspective-taking and obligation: an "I" that can join a "we" – by observing "you" and predicting how you are predicting – intuitively pooling information and coordinating on shared futures.
Each layer of agency is not replaced but demoted, as the newly evolved tier above learns to predict when to suppress its subordinate's more simplistic calculations for a higher payoff among more complex options. At every tier, behaviour is guided by forecasts – of the world, of partners, of the self, and of the subordinate levels of agentic impulse – and corrected by the surprises that reality returns.
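The layering can be caricatured in code: each tier computes a richer if/then prediction and may veto the simpler calculus of the tier below. This is a loose paraphrase of Tomasello's hierarchy, with invented names, numbers and rules, not a model drawn from his work.

```python
# A caricature of layered agency: each tier makes a richer if/then prediction
# and can suppress the simpler calculus of the tier below. Names, numbers and
# rules are invented for illustration.

def stimulus_driven(nutrient_gradient: float) -> str:
    # Tier 1 (C. elegans): if nutrients ahead, then eat in that direction.
    return "feed" if nutrient_gradient > 0 else "stop"

def goal_directed(nutrient_gradient: float, p_predation: float, reward: float) -> str:
    # Tier 2 (lizard): inhibit the feeding impulse if predicted risk outweighs reward.
    if p_predation > reward:
        return "inhibit"
    return stimulus_driven(nutrient_gradient)

def intentional(options: dict[str, float], p_predation: float) -> str:
    # Tier 3 (squirrel, corvid): compare several predicted if/then alternatives
    # and suppress the lower tiers in favour of the best-valued option.
    best_option, best_value = max(options.items(), key=lambda kv: kv[1])
    lower_tier_choice = goal_directed(nutrient_gradient=1.0,
                                      p_predation=p_predation,
                                      reward=best_value)
    if lower_tier_choice == "inhibit":
        return "wait"
    return best_option

print(intentional({"that feeder": 0.4, "this route": 0.7, "those acorns": 0.5},
                  p_predation=0.2))   # -> "this route"
```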
The same stack runs through human organisations. Perception is predictive (brains pre-empt and correct). Operations are predictive, their tactics micro "if…then" loops. Strategy is predictive (a scaffold of contingent moves). Science is the most explicit version: if condition X holds, outcome Y will follow. Diplomacy and business are messier but structurally the same. A sanctions package assumes particular channels will bite before spoilers adapt. A product roadmap assumes rivals will not leapfrog a core feature in the next two quarters. Prices, policies and laws encode expectations; they are predictions by another name. Even organisational design, per the doctrines of 'adaptive strategy', is a prediction: you predict the optimal system structure to sense, adapt, and respond to changing circumstances. More succinctly, if a bit meta, you predict which system will make the best predictions. To govern or to run a firm is to predict. You can't not.
What follows? First, stop hiding forecasts inside prose. Make them legible. When presenting an analysis, say: "We assess a 35–45% chance of X by June; here are the three variables that would move that by ±15 points." Second, adopt Keynes's "weight of evidence" explicitly, with Ramsey's insistence on making your beliefs legible in order to discover what is true: what are your inductive premises? What are the probabilities they are true? Have you reflected your doubt about the sufficiency of the evidence in your forecast? Third, import Hayekian aggregation into decision-making: add prediction markets (internal if need be) and tournament-style scoring for analysts, and score and reward calibration and accuracy. Fourth, use Ramsey's test: if your actions don't shift with the odds, your statements were theatre.
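The scoring in that third point is the easiest part to operationalise. Below is a minimal sketch of tournament-style accuracy scoring using the Brier score (the mean squared gap between stated probabilities and 0/1 outcomes; lower is better). The analyst's forecasts and the outcomes are invented.

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared difference between forecast probabilities and 0/1 outcomes.

    Lower is better: 0.0 is perfect; 0.25 is what an uninformative constant
    forecast of 50% would score.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical analyst record: stated probabilities and what actually happened.
forecasts = [0.40, 0.35, 0.80, 0.10, 0.65]
outcomes  = [1,    0,    1,    0,    1]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```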
For historians and commentators, the standard is similar. If an explanation cannot be operationalised into a forward expectation - what would be more or less likely next time a similar configuration appears - then it’s not yet an explanation. “We don’t predict” is not humility: it is a missed chance to help readers update.
For central bankers and officials, the discipline is harsher because the stakes are higher. Put numbers on your beliefs, say what a policy assumes, publish its probability of success. State the triggers that will force a pivot. Score your calibration and accuracy, so we can learn how much we should trust your judgement. The public can forgive error; we should not forgive opacity and obfuscation.
Everything is prediction: in minds, in markets, in ministries. The choice isn't between predicting and not predicting; it is between implicit, untested bets and explicit, accountable ones. We should choose the latter, and build the habits and institutions that make our next bets a little less wrong than the last.
This is what we do at Cassi. Contact us at admin@cassi-ai.org.uk
[1] Kay, J. and King, M. (2020). Radical Uncertainty: Decision-Making Beyond the Numbers. W. W. Norton & Company, especially pp. 71–84; quotations from pp. 223–224.



This is an excellent and very informative post. Thank you.
I think there's some great stuff in here, but a few things seem muddled, conflated and confused.
Firstly, many (Tetlock, Silver, Taleb, Popper, King) demonstrate clearly that prediction is only useful for a limited set of problems. Tetlock calls it the Goldilocks Zone. This is where probability works and/or works as a proxy. Beyond those conditions, treating a problem as predictable is (in their words) dangerous!
When you say ascribing numbers is essential, what I think you are conflating here (assuming this is adopted from Superforecasting practice) is that the act of doing so forces the person to unpack their reasoning (showing their mental working). So it is the act of making your reasoning explicit, not ascribing numbers, that is the value here. When things are truly uncertain (in the Knightian sense), i.e. beyond the Goldilocks Zone, the axioms of probability don't hold, so you can't use numbers, and attempting to do so creates an "illusion of concreteness".
The description of the use of "inductive probabilities" and "inductive premises" sounds like it's being confused with abductive reasoning. Abductive reasoning is the type of thinking used in thought experiments and speculation, and is essential in consideration of the future. As Fischer (2001) says, it's illogical for the future to be based purely on the past (i.e. induction).
Having said that, I think you are right that governments are obsessed with predictions – in spite of forecasting tournaments like Cosmic Bazaar, etc. – but are not very good at forecasting (making predictions).
However, I am unconvinced by the argument for treating everything as predictable. It is unsurprising that the AI folks quoted claim their tools are useful here, but they are not experts in psychology and their reasoning is flawed. In my opinion it is a cause for grave concern to do so in conditions of true (radical, deep, etc.) uncertainty, i.e. most policy, and governments are even worse at thinking about this than they are at forecasting.