If you think about the distribution of events, some will have very low probabilities. So low, in fact, that on average they occur less than once. But of course events either happen or they don't, so if you think in purely probabilistic terms an average of less than once tends to get rounded down to not-at-all. Yet if you look at the class of all those things with average frequency less than one, there's a good chance of one, or some, of them happening. And when they do, they happen at a frequency far greater than the average predicted (by necessity: the average is between zero and one, but if an event occurs at all its count is an integer of at least one).
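A quick simulation makes the point concrete. The numbers below (1,000 event types, each with an expected count of 0.5 over the run) are illustrative choices, not anything from the original: each individual event "should" happen less than once, yet collectively many of them do happen, and each one that happens has a count of at least one, well above its own average.

```python
import random

random.seed(1)

N_EVENTS = 1000   # many distinct rare event types (hypothetical figure)
P = 0.0005        # per-trial probability for each event
TRIALS = 1000     # expected count per event = P * TRIALS = 0.5 < 1

counts = [0] * N_EVENTS
for _ in range(TRIALS):
    for i in range(N_EVENTS):
        if random.random() < P:
            counts[i] += 1

occurred = [c for c in counts if c > 0]
# Each event's average is 0.5, yet every event that occurred has an
# integer count of at least 1 -- double its own predicted frequency.
```
With these parameters roughly 1 - e^(-0.5), i.e. about 40%, of the event types occur at least once, even though every single one of them was "expected" not to.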
I was thinking these thoughts while watching this demo of 'The Galton Machine', which illustrates the central limit theorem and, peripherally, provides an example of a system which can be approximately described by one 'rule' (a Gaussian distribution) but follows quite different mechanistic rules (you see, it's always about minds and brains round here). Look at the extremes of the distribution. Soon enough a single ball will fall somewhere there, and when it does it will far exceed the predicted (average) frequency of it occurring.
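The Galton machine is easy to sketch in code, assuming the usual idealisation (the row and ball counts here are my own illustrative choices): each ball bounces left or right at every peg, so its final bin is a sum of independent coin flips, which the central limit theorem says will look Gaussian in aggregate. The mechanism, though, is just discrete bounces, and the extreme bins each have an expected count below one ball.

```python
import random
from collections import Counter

random.seed(2)

ROWS = 14     # pegs each ball hits; bin = number of rightward bounces
BALLS = 5000  # balls dropped through the machine

def drop_ball() -> int:
    """One ball: sum of 14 left/right (0/1) bounces, a binomial draw."""
    return sum(random.random() < 0.5 for _ in range(ROWS))

bins = Counter(drop_ball() for _ in range(BALLS))

# The histogram peaks near ROWS / 2, approximating a Gaussian,
# while the extreme bin 0 has an expected count of only
# BALLS * 0.5**ROWS ~= 0.3 balls -- yet if a ball lands there,
# it lands as a whole ball, not 0.3 of one.
```
The Gaussian is a good description of the bulk, but it is a description, not the mechanism: the same few lines of left/right bouncing also produce the rare extreme ball whose single arrival dwarfs its fractional "average".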
It occurred to me that all this was analogous to an interesting piece in the Guardian, an extract from Worst Cases: Terror and Catastrophe in the Popular Imagination, by Lee Clarke, published by the University of Chicago Press. Clarke says that thinking about probabilities lets you get away with thinking about what is most likely to happen, whereas thinking about possibilities lets you plan for rare events of serious magnitude.
The trouble is that when it comes to real worst cases – actual disasters – there are no “average events”. How could we talk about a normal distribution of extreme events? If we imagine the future in terms of probabilities, then risks look safe. That’s because almost any future big event is unlikely.
counterfactuals …help us see how power and interest mould what is considered legitimate to worry about. One lesson is that we cannot necessarily trust high-level decision-makers to learn from their mistakes. They could. But they often have an interest in not learning.
Our governments and institutions almost always have a vested interest in not engaging in possibilistic thinking, and in offering us instead the reassuring palliative of probabilistic scenarios.
I wonder if this is part of the story of why modern government seems to be about the management of existing trends rather than the envisioning of alternative, possibly radically alternative, future states that society could exist in.