Yves here. This post makes some very good observations about the nature of uncertainty and the value, as well as the cost, of additional information. But it uses a personal pet peeve as its point of departure: the so-called Trolley Problem. The people who pose it argue that the two options (saving a net four lives by throwing a lever, so that five people survive at the cost of one other person dying, versus saving the same net four lives by throwing a fat man off a bridge) are morally equivalent, yet claim that the fact that most people say they would throw the lever but would reject throwing the fat person off the bridge reflects a cognitive bias.
Hogwash. They aren’t comparable. First, the fat person (whose bulk is to stop the train) is presumably meant to eliminate the “what about me jumping off the bridge instead?” option. Second, I’d wonder whether I could in fact succeed in shoving someone over. And third, and potentially most important, if you do succeed in pushing the fat person in front of the train, you are unquestionably guilty of first degree murder. Tell me how you talk your way out of it if you are caught: you knowingly planned to have the man serve as a human brake for the train, knowing that would be fatal to him. By contrast, if you flip the lever, you can say “I was trying to save five people” and profess uncertainty as to what would happen to the other person who winds up getting killed.
In fairness, the article does treat the two cases as representing more differences from an informational perspective than most who use it as a device do, but not as pointedly as I’d like. So please try to take the horrible Trolley Problem in stride and focus on the meat of the article.
By Cameron Murray, a professional economist with a background in property development, environmental economics research and economic regulation. Cross posted from Fresh Economic Thinkings
Ignorance of the distinction between risk and uncertainty lies at the heart of many economic conundrums, particularly dynamic behaviours through time. Yet the critical importance of this distinction in predicting economic behaviour was clear to prominent economists of the 1930s, including Shackle and Knight. Knight wrote that
… that a measurable uncertainty, or ‘risk’ proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.
Economists have now forgotten that a world of uncertainty generates a strong incentive to delay choices. We do not make immediate choices informed by some probabilistic expectation of future outcomes. We usually can’t even know the potential scope of future outcomes. That means we delay choices to keep options alive, miss good opportunities, and sometimes commit to poor investments. Because time is irreversible, unlike in economic models, when we commit to decisions matters as much as the decisions themselves.
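The value of delay can be seen even in a toy real-options calculation. The numbers below are hypothetical, chosen only for illustration, and the example deliberately uses a known probability, i.e. risk rather than true Knightian uncertainty; when even the probabilities are unknowable, the pull toward delay is stronger still.

```python
# Hypothetical investment: numbers chosen only to illustrate the value of waiting.
cost = 100.0
payoff_good, payoff_bad = 150.0, 50.0
p_good = 0.5  # a *known* probability -- this is risk, not true uncertainty

# Commit now: you take the gamble before the state is revealed.
npv_commit_now = p_good * payoff_good + (1 - p_good) * payoff_bad - cost

# Wait one period, observe the state, and invest only if it pays.
npv_wait = p_good * (payoff_good - cost) + (1 - p_good) * 0.0

print(npv_commit_now)  # 0.0  -- committing immediately is worthless in expectation
print(npv_wait)        # 25.0 -- the option value of delaying the choice
```

Waiting dominates here precisely because the decision is irreversible: committing now forecloses the option of acting on next period’s information.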
But the ignorance of uncertainty is evident in other social sciences as well. Approaching problems in philosophy and ethics without acknowledging uncertainty has led to many seemingly intractable puzzles that are easily resolved in a world of uncertainty.
I hope that observing the crucial role uncertainty plays in these contexts encourages economists to take the concept more seriously and see the economy as a dynamic environment, rather than a static one.
In Scenario A a trolley is barreling down the tracks toward five people who will be killed unless the trolley is stopped. Luckily, there is a fork in the tracks, and by simply pulling a lever, the trolley can be diverted onto a second set of tracks. Unfortunately, there is a single person in the path of the trolley on this track who will be killed if you pull the lever.
The dilemma: should you pull the lever and save five people by sacrificing one? In surveys most people say they would.
In Scenario B you find yourself on a bridge next to a fat man, below whom the same dilemma is playing out, with a trolley hurtling down the tracks towards five people. The question here is whether it is permissible to push the person next to you onto the tracks if you knew it would stop the trolley and save the five people.
Most people in this scenario would not push the man off the bridge, even though the welfare gain in terms of lives saved would be the same as in Scenario A (for what it’s worth, 68.2% of philosophers would push the man to save the five). Some philosophers and psychologists put this down to a ‘dual-process’ theory, whereby the two different setups invoke “the operations of at least two distinct psychological/neural systems”.
Fundamentally, the incompatibility of these two outcomes arises because we are presented with a dilemma framed in terms of risk, or knowable probabilities. In fact we have point distributions: perfect certainty for each outcome. If you push the fat man off the bridge (assuming away the logical problem that a man fat enough to stop a runaway trolley is somehow easily pushed off a bridge), the probability is 1 that the man will die and the trolley will be stopped. If you don’t, the probability is 1 that the five people on the tracks will be killed.
When you add risk by looking at possible probability distributions of choice outcomes, you can generate a balance of risks that predicts survey responses. This is a step in the right direction, since we know it is very difficult for people to comprehend the idea of perfect certainty. But it still overlooks the dynamic nature of true uncertainty.
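One way this balance of risks could rationalize the survey results is sketched below. The probabilities are entirely hypothetical, made up for illustration: the lever is assumed to work reliably, while a body stopping a trolley is assumed far less certain even though the pushed man dies regardless.

```python
# Hypothetical probabilities, chosen for illustration only.

# Scenario A: pulling the lever diverts the trolley with high confidence.
p_divert_works = 0.9
exp_deaths_pull = p_divert_works * 1 + (1 - p_divert_works) * 5
exp_deaths_do_nothing = 5.0

# Scenario B: pushing the man kills him for certain, but stopping a
# trolley with a body is far less reliable.
p_body_stops_trolley = 0.3
exp_deaths_push = 1 + (1 - p_body_stops_trolley) * 5

print(exp_deaths_pull)   # ~1.4 expected deaths: pulling looks clearly better
print(exp_deaths_push)   # ~4.5 expected deaths: pushing barely beats doing nothing
```

Under these assumed distributions, pulling the lever dominates while pushing the man offers almost no expected gain, matching the asymmetry in survey responses. But the probabilities themselves are doing all the work, which is exactly the limitation of a pure-risk framing.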
Let us now look at the question in terms of uncertainty. For a start, how do we know the trolley is out of control? Is it possible to delay the decision to get more information?
A very simple resolution arises when we add a time dimension to the problem, which is what is required under uncertainty. We can think in terms of an option-tree expanding over time, with choices unable to be fully anticipated in advance.
We can see in the diagram below that in Scenario A, switching the tracks leads to a new situation that opens up the set of possible choices in the grey shaded area while eliminating others. Switching the trolley onto the side track buys time and keeps options open without killing anyone.
In Scenario B, most people choose not to push the fat man. Here what they are doing is buying time before anyone gets killed. Even after the decision is made not to push the man, there will be time available for many other as-yet-unknowable situations to arise.
People are making choices in a way that allows them to navigate through a choice space over the irreversible dimension of time. I’ve highlighted in red a possible path for each scenario that could be envisaged in the mind of someone making choices in a world of uncertainty. In both cases there is an unknowable chance that a resolution to the dilemma will involve no deaths if the dynamic choices that arise are navigated appropriately. But choosing to push the fat man in Scenario B eliminates the option of resolving the situation without any deaths.
The whole rationale of making decisions in a world of uncertainty revolves around keeping options for desirable outcomes open, and often this involves buying time by not making a decision at all.
We know that buying time to keep an option open is a strong impulse. In experiments where participants are given the choice of which of two identical drowning swimmers to save, knowing they can only save one, many are unable to make the decision in a timely enough manner and instead spend their time searching for better information in the hope of maintaining the option of saving both, but in doing so let them both drown. Because the choice to commit to saving one swimmer is also a commitment to allowing the other to drown, the logical choice is to delay in order to maintain the option of saving both.
In military training, overcoming this instinct to delay choices to keep options open forms an integral part of the psychological training. Soldiers are known to delay making any choice in high-stakes combat dilemmas, which amounts to ‘freezing’, or in many cases they shoot to deter rather than to kill, to keep open the option of finishing a battle with fewer deaths overall.
In criminal behaviour, Becker’s expected utility framework has been called into question due to the radical difference between human behaviour in a world of uncertainty versus a world of risk. Increasing the chances of being caught and increasing the punishment if caught are substitute methods for changing probability distributions of expected outcomes in a world of risk, but in a world of uncertainty they will have far different effects on criminal decisions.
The same logic of uncertainty can be applied in social psychology to understand the bystander effect, in which there seems to be an inverse relationship between the number of people witnessing a victim in need and the number of people offering help. Various explanations for this empirical phenomenon have emerged, with the idea of a diffusion of responsibility dominating.
But when we dig a little deeper we can see the logic of uncertainty at play. Repeated experiments on the bystander effect show that the degree of ambiguity is a crucial determinant of the willingness to assist, with reaction times being much slower in more ambiguous situations. The logic of how ambiguity, or uncertainty, results in the bystander effect is as follows:
…most emergencies are, or at least begin as, ambiguous events. As the bystanders are deciding whether an event is an emergency, each bystander looks to the others for guidance before acting.
… Seeing others remain passive causes the bystander to interpret the ambiguous situation as non-serious.
So it is not that no one wants to help, but as each person individually chooses to delay their actions to gain new information, they observe others doing the same thing. By observing others they gain the new information that the situation is non-serious, and hence as a group they ultimately choose a path through the choice space over time that resolves to a belief that the situation is a non-emergency.
What happens as people delay choices here is a cascade of new information that changes the decisions of each individual and the group as a whole. In sociology there are many simulation models of this type of choice cascade, from standing ovations, to riots, and other herding behaviour including musical tastes, and crucially for economists, asset market speculation.
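A minimal sketch of such a cascade model, in the spirit of Granovetter-style threshold dynamics (the thresholds and group size here are illustrative, not drawn from any study): each person acts only once the fraction of others already acting meets their personal threshold, so tiny differences in the threshold distribution can flip the group between full participation and total inaction.

```python
def cascade(thresholds):
    """Iterate threshold dynamics to a fixed point.

    Each agent joins in once the fraction of the group already acting
    reaches that agent's personal threshold. Returns how many are acting
    when no one else changes their mind.
    """
    n = len(thresholds)
    acting = 0
    while True:
        new_acting = sum(1 for t in thresholds if t <= acting / n)
        if new_acting == acting:
            return acting
        acting = new_acting

n = 100
# Evenly spread thresholds 0.00, 0.01, ..., 0.99: one zero-threshold
# instigator tips the next person, who tips the next, and so on.
print(cascade([i / n for i in range(n)]))        # 100 -- full cascade
# Shift every threshold up by a hair, removing the instigator: no one moves.
print(cascade([(i + 1) / n for i in range(n)]))  # 0
```

The same knife-edge structure is why a few early bystanders interpreting a situation as non-serious can lock an entire crowd into inaction.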
Uncertainty is primarily a concept about choices in a dynamic environment. Here I have shown that human behaviour is adapted to our dynamic, irreversible environment, and as such, uncertainty is required to understand behavioural logic, morality, and sociability. Moral puzzles resolve easily in an environment of uncertainty, and many psychological phenomena, from soldiers freezing in battle, to the bystander effect, to our taste in music, can be seen to arise from human tendencies to delay decisions in order to cope with uncertainty.
Economists are not alone in having recognised that uncertainty is tremendously important and then all but ignoring the concept in their analysis. Given the high stakes arising from political choices based on economic analysis, putting uncertainty front and centre in a new dynamic economics is critical.