I am reading a very useful primer, “Cognitive biases potentially affecting judgment of global risks,” by Eliezer Yudkowsky, one of the contributors to the blog Overcoming Bias (we made use of one of his posts yesterday). He focuses on existential risk, meaning risks to human existence. Since many people would regard an economic collapse as The End of Life as We Know It, his area of expertise has elements in common with the study of financial risk and market failure.
His article does not claim to be exhaustive, but it gives a good layman’s description of some major types of cognitive bias. While all the topics are useful knowledge (and it’s sobering to realize how poorly we humans integrate logic into our decision processes), a couple of sections jumped out as particularly relevant to the events of the last few weeks.
One was on the “availability heuristic,” which says that people tend to judge the likelihood of an event by the ease with which they can bring it to mind. Thus, subjects will almost without exception say that homicides are more frequent than suicides, when the reverse is true, because murders are reported obsessively in the press and are also a staple plot driver in novels and movies.
Availability skews the assessment of large-scale risk:
People refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. (1993) suggest underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred… Men on flood plains appear to be very much prisoners of their experience… Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.” Burton et al. (1978) report that when dams and levees are built, they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions. While building dams decreases the frequency of floods, damage per flood is so much greater afterward that the average yearly damage increases.
It seems that people do not extrapolate from experienced small hazards to a possibility of large risks; rather, the past experience of small hazards sets a perceived upper bound on risks. A society well-protected against minor hazards will take no action against major risks (building on flood plains once the regular minor floods are eliminated). A society subject to regular minor hazards will treat those minor hazards as an upper bound on the size of the risks (guarding against regular minor floods but not occasional major floods).
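The dam result above is just frequency-times-severity arithmetic. A minimal sketch with made-up numbers (not figures from the paper) shows how cutting flood frequency can still raise expected yearly losses once each remaining flood is far more destructive:

```python
# Hypothetical illustration of Burton et al.'s point: a dam makes floods
# rarer but concentrates development behind it, so damage per flood rises
# enough that expected annual loss goes up. All numbers are invented.

def expected_yearly_damage(floods_per_year, damage_per_flood):
    """Expected annual loss = frequency x severity."""
    return floods_per_year * damage_per_flood

# Before the dam: one minor flood a year, $1M each.
before = expected_yearly_damage(1.0, 1_000_000)

# After the dam: a flood every 20 years, but $40M each.
after = expected_yearly_damage(0.05, 40_000_000)

print(before)  # 1000000.0
print(after)   # 2000000.0 -- average yearly damage doubles
```

The point is that the intuitive cue (“floods almost never happen now”) tracks only the frequency term, while the risk that matters is the product of both terms.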
In keeping with this, Nobel Prize winner and LTCM alumnus Robert Merton made this observation about the supposed advantages of risk dispersion:
[I]f you invent an advanced braking system for a car, it can reduce road accidents – but it only works if drivers do not react by driving faster.
Another important type of bias is overconfidence:
Suppose I ask you for your best guess as to an uncertain quantity, such as the number of “Physicians and Surgeons” listed in the Yellow Pages of the Boston phone directory, or total U.S. egg production in millions. You will generate some value, which surely will not be exactly correct; the true value will be more or less than your guess. Next I ask you to name a lower bound such that you are 99% confident that the true value lies above this bound, and an upper bound such that you are 99% confident the true value lies beneath this bound. These two bounds form your 98% confidence interval. If you are well-calibrated, then on a test with one hundred such questions, around 2 questions will have answers that fall outside your 98% confidence interval.
Alpert and Raiffa (1982) asked subjects a collective total of 1000 general-knowledge questions like those described above; 426 of the true values lay outside the subjects’ 98% confidence intervals. If the subjects were properly calibrated there would have been approximately 20 surprises. Put another way: events to which subjects assigned a probability of 2% happened 42.6% of the time.
Humans seem hard-wired to underestimate tail risk. Even experts make the same mistake, setting confidence intervals too tight on questions within their own area of expertise.
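The calibration failure above is easy to reproduce in simulation. A sketch, under the simplifying assumption that true values are normally distributed around the subject’s guess: a well-calibrated 98% interval is surprised about 2% of the time, but if the subject’s interval is roughly three times too narrow, the surprise rate lands near the 42.6% Alpert and Raiffa observed. The `overconfidence` factor here is my own hypothetical parameter, not something from the paper:

```python
import random

random.seed(0)

def surprise_rate(n_questions, overconfidence):
    """Fraction of true values falling outside a nominal 98% interval
    whose width the subject has shrunk by `overconfidence`
    (1.0 = well calibrated; larger = tighter, overconfident intervals)."""
    z98 = 2.326  # two-sided 98% z-score under a normal model
    half_width = z98 / overconfidence
    surprises = 0
    for _ in range(n_questions):
        truth = random.gauss(0, 1)  # true value relative to the guess
        if abs(truth) > half_width:
            surprises += 1
    return surprises / n_questions

print(surprise_rate(100_000, 1.0))  # roughly 0.02: calibrated
print(surprise_rate(100_000, 2.9))  # roughly 0.42: intervals ~3x too tight
```

Under this toy model, reproducing a 42.6% surprise rate on “98% confident” intervals requires intervals only about a third as wide as they should be, which gives a feel for just how severe the measured overconfidence is.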
And then there is the simple danger of not knowing what you don’t know:
[S]omeone … should also know how terribly dangerous it is to have an answer in your mind before you finish asking the question. … [R]emember the reply of Enrico Fermi to Leo Szilard’s proposal that a fission chain reaction could be used to build nuclear weapons. (The reply was “Nuts!” – Fermi considered the possibility so remote as to not be worth investigating.) … [R]emember the history of errors in physics calculations: the Castle Bravo nuclear test that produced a 15-megaton explosion, instead of 4 to 8, because of an unconsidered reaction in lithium-7: They correctly solved the wrong equation, failed to think of all the terms that needed to be included, and at least one person in the expanded fallout radius died. … [R]emember Lord Kelvin’s careful proof, using multiple, independent quantitative calculations from well-established theories, that the Earth could not possibly have existed for so much as forty million years. …
[W]hen an expert says the probability is “a million to one” without using actuarial data or calculations from a precise, precisely confirmed model, the calibration is probably more like twenty to one (though this is not an exact conversion).
More good stuff here, along with a bibliography and recommended reading.