I see a lot of poor thinking and decision-making because people, perhaps many people, do not understand the concept that ‘more things can happen than will happen.’
It is a problem with men particularly. The specific way in which they fail to get it is a desire for events to prove them right, or to be ‘right’, in quite binary ways. Dogmatic statements, ex cathedra, and a rather moody, cynical or sceptical manner when it comes to other scenarios are a giveaway of this psychology.
[Later insert]: I meant to say that I think it is to do with the exertion of power and its importance to many men. I wonder if being right is less important than being able to impose ‘being right’ on others. That includes minimising or in some other way diminishing the occasions when an outcome differs from the prediction. It’s not just that forgetfulness of when you were wrong causes this; it’s also a useful personal and (projected by those in power onto an organisation) institutional method of maintaining your rightness. You can impose that forgetfulness on others, or make it costly for them to call it out.
There are other methods of preserving rightness that go along with this:
- Constant caveating, so that you can always point out you were right really
- Aggressive assertion of extremely binary views, but chaotically and varying from time to time, even within the sentence-memory of, say, a meeting
It’s extraordinarily psychologically and institutionally unhealthy.
This post was prompted in part by a footnote to Helen DeWitt’s excellent short story My Heart Belongs to Bertie.
I began reading obsessively about statistics and probability. Peter Bernstein’s Against the Gods: The Remarkable Story of Risk was one inspiration; he says: “The revolutionary idea that defines the boundary between modern times and the past is the mastery of risk: the notion that the future is more than a whim of the gods and that men and women are not passive before nature.” Analysis of probability seemed more compelling than ever for fiction; I spent endless hours grappling with R, a programming language with strength in statistical graphics.
R is open source, and it has come a long way since I first downloaded the DMG.
What hasn’t changed, I think, is the gap between people who see why understanding chance matters and people who just don’t get it—people who don’t see why this is crucial to the most basic questions of ethics. I have more glamorous plots in my portfolio than the primitive efforts on display in this story, but the philosophical issue was what I hoped to bring into the open.
DeWitt, Helen. Some Trick (pp. 41-42). New Directions.
(I mentioned in the previous post my second happiest birthday, and in fact this specific story has a direct connection with my happiest birthday, in that it was published in an art gallery exhibition catalogue that I picked up visiting the deserted exhibition on my birthday. One of the exhibits was a stack of the catalogues. The story was one of the pieces in the catalogue.)
This sent me back to Peter Bernstein’s Against the Gods: The Remarkable Story of Risk, which is a very good book, and which contains the sentence:
The Greeks understood that more things might happen in the future than actually will happen.
Bernstein, Peter L. Against the Gods (p. 64). Wiley.
Another way of looking at it is described in Superforecasting by Philip Tetlock: just because a thing has happened, it does not mean it wasn’t the less likely outcome.
If a meteorologist says there is a 70% chance of rain and it doesn’t rain, is she wrong? Not necessarily. Implicitly, her forecast also says there is a 30% chance it will not rain. So if it doesn’t rain, her forecast may have been off, or she may have been exactly right. It’s not possible to judge with only that one forecast in hand. The only way to know for sure would be to rerun the day hundreds of times. If it rained in 70% of those reruns, and didn’t rain in 30%, she would be bang on. Of course we’re not omnipotent beings, so we can’t rerun the day—and we can’t judge. But people do judge. And they always judge the same way: they look at which side of “maybe”—50%—the probability was on. If the forecast said there was a 70% chance of rain and it rains, people think the forecast was right; if it doesn’t rain, they think it was wrong. This simple mistake is extremely common.
Tetlock, Philip; Gardner, Dan. Superforecasting (pp. 57-58). Random House. Kindle Edition.
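Tetlock’s “rerun the day” thought experiment can be made concrete with a small simulation (my sketch, not something from the book; the function name and rerun counts are my own invention). A single day gives you a 0 or a 1 either way, which is consistent with a correct 70% forecast; only across many reruns does the frequency converge towards the stated probability, and that convergence is the only sense in which “70% chance of rain” can be judged right or wrong.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def rerun_day(p_rain: float, reruns: int) -> float:
    """Re-run the same day many times; on each rerun it rains
    with probability p_rain. Return the observed fraction of
    rainy reruns."""
    rainy = sum(1 for _ in range(reruns) if random.random() < p_rain)
    return rainy / reruns

# One rerun tells us almost nothing: the result is 0.0 or 1.0,
# and both are compatible with a correct 70% forecast.
one_day = rerun_day(0.7, 1)

# Over many reruns the frequency settles near the forecast.
many_days = rerun_day(0.7, 100_000)
print(f"one rerun: {one_day}, 100,000 reruns: {many_days:.3f}")
```

The point of the sketch is exactly the asymmetry Tetlock describes: judging the forecaster on the single observed outcome is like judging `rerun_day(0.7, 1)`, when the claim she made was about `rerun_day(0.7, 100_000)`.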
The subject is in the news again with the imminent publication of William MacAskill’s What We Owe the Future and the general salience of effective altruism.