One of the odd things I observe is the way some posts or issues regularly elicit heated reactions. For instance, early in the days of euro wobbliness, some readers in Europe would go a bit off the deep end at the suggestion that the Eurozone has serious structural weaknesses. It wasn’t so much that these readers found weaknesses or shortcomings in the post; it was that its conclusion was clearly deeply offensive to them. While many of the upset reactions still addressed the substance of the argument, others, when you cut to the chase, simply attacked the source or were otherwise incoherent.
The problem is the difficulty of recognizing when one’s mental model of how the world works maps reasonably well onto currently available information, and the difficulty of dealing with “information” (which can include statistics, anecdotes, opinion from Credible Experts) that is inconsistent with that framework. Few of us have the intellectual flexibility of Keynes, who defended his repudiation of some of his earlier work by saying, “When the facts change, I change my mind. What do you do, sir?” When dissonant facts start showing up, is it the data that is suspect, or the model that is out of whack?
A disconcerting tendency that may also impair adaptability (and this seems to be particularly pronounced in the US) is the tendency to engage in black-and-white thinking. If (in someone’s mind) the only alternative to one view is its polar opposite, that makes it hard to adjust one’s perspective.
Ars Technica presents a more specific example of this phenomenon: how people defend their mental models in the face of confounding evidence. A study from the Journal of Applied Social Psychology looked into some of the mechanisms that individuals use to reject scientific information that is at odds with their views. Admittedly, this is a small-scale study, so one has to be cautious in generalizing from it. But it does seem consistent with some of the strategies I routinely see in comments.
From Ars Technica:
It’s hardly a secret that large segments of the population choose not to accept scientific data because it conflicts with their predefined beliefs: economic, political, religious, or otherwise. But many studies have indicated that these same people aren’t happy with viewing themselves as anti-science, which can create a state of cognitive dissonance. That has left psychologists pondering the methods that these people use to rationalize the conflict.
A study published in the Journal of Applied Social Psychology takes a look at one of these methods, which the authors term “scientific impotence”—the decision that science can’t actually address the issue at hand properly. It finds evidence that not only supports the scientific impotence model, but suggests that it could be contagious. Once a subject has decided that a given topic is off limits to science, they tend to start applying the same logic to other issues…
Munro polled a set of college students about their feelings about homosexuality, and then exposed them to a series of generic scientific abstracts that presented evidence that it was or wasn’t a mental illness (a control group read the same abstracts with nonsense terms in place of sexual identities). By chance, these either challenged or confirmed the students’ preconceptions. The subjects were then given the chance to state whether they accepted the information in the abstracts and, if not, why not.
Regardless of whether the information presented confirmed or contradicted the students’ existing beliefs, all of them came away from the reading with their beliefs strengthened. As expected, a number of the subjects that had their beliefs challenged chose to indicate that the subject was beyond the ability of science to properly examine. This group then showed a weak tendency to extend that same logic to other areas, like scientific data on astrology and herbal remedies.
A second group went through the same initial abstract-reading process, but were then given an issue to research (the effectiveness of the death penalty as a deterrent to violent crime), and offered various sources of information on the issue. The group that chose to discount scientific information on the human behavior issue were more likely than their peers to evaluate nonscientific material when it came to making a decision about the death penalty.
Yves here. I’m not certain whether the authors are being tongue in cheek in this section:
….it might explain why doubts about mainstream science seem to travel in packs. For example, the Discovery Institute, famed for hosting a petition that questions our understanding of evolution, has recently taken up climate change as an additional issue (they don’t believe the scientific community on that topic, either). The Oregon Institute of Science and Medicine is best known for hosting a petition that questions the scientific consensus on climate change, but the people who run it also promote creationism and question the link between HIV and AIDS.
Yves again. It is worth considering whether some of this “science can’t evaluate this area” meme exists at least in part because it is being marketed. Perhaps I lead a cloistered life, but I can’t recall encountering this line of argument when I was younger, say 20 years ago.
The book Agnotology: The Making and Unmaking of Ignorance gives a detailed account of how the tobacco industry first tried to keep research about smoking-related cancers out of the public eye, and when that started to fail, to attack the science (“Doubt is our product”). One of its late-stage techniques was to promote the idea that the topic wasn’t settled when a tally of the then-available research would say otherwise. Given that knowledge is often the product of political and cultural battles, promoting higher-order anti-science ideas (“science has very considerable limits, there are a lot of areas outside its ken”) gives those who would seek to reshape mass opinion more freedom of action.