When Copernicus presented his heliocentric model, in which the Earth was not the centre of the universe, there was strong pushback against it. Not only did the Church refuse to accept it because it posed theological problems, but other astronomers also rejected it, arguing that geocentrism better explained certain phenomena.
This was despite the fact that heliocentric ideas had already been discussed in other traditions, such as ancient Greece and the Islamic world, and that there was empirical data challenging geocentrism. What it took for the West to begin this paradigm shift wasn’t the existence of data, but someone willing to think differently, to ask a question that went against the established consensus.
In theory, an LLM could have arrived at that conclusion if fed all the necessary information. These AI models excel at analysing data and recognising patterns, and on that basis they can generate predictive hypotheses and even run simulations. However, they could only have done so when asked the appropriate question, when prompted to do so.
Because LLMs are trained on huge amounts of text and optimised to predict what text is likely to come next, they inherit the distribution of beliefs in their training data. If most of the sources say geocentrism is correct, a model trained only on those texts will strongly favour geocentrism too. The way the models are trained actively rewards agreeing with the majority in the data, not inventing radically new theories to explain it. Most LLMs are further tuned to be helpful and safe (whatever that means for the developer), which often nudges them to respect expert consensus.
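To make that mechanism concrete, here is a deliberately toy sketch in Python, not a description of any real LLM's internals: a "model" that simply outputs the most frequent continuation of a prompt in its training text. The corpus, prompt, and counts are invented purely for illustration.

```python
# A toy sketch (not how real LLMs are built): a model that predicts the most
# frequent next word in its training text will echo whatever the majority of
# that text says. The corpus and prompt below are invented for illustration.
from collections import Counter

# Hypothetical training corpus: nine "geocentric" sentences, one "heliocentric".
corpus = (
    ["the centre of the universe is the earth"] * 9
    + ["the centre of the universe is the sun"]
)

# Count which word follows the prompt in the training data.
prompt = "the centre of the universe is the"
continuations = Counter()
for sentence in corpus:
    if sentence.startswith(prompt + " "):
        continuations[sentence[len(prompt) + 1:].split()[0]] += 1

# The "model" outputs the statistically dominant continuation.
print(continuations.most_common())           # [('earth', 9), ('sun', 1)]
print("prediction:", continuations.most_common(1)[0][0])  # prediction: earth
```

Real LLMs are, of course, vastly more sophisticated than a frequency count, but the training objective pulls in the same direction: towards whatever the majority of the data says.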
As it stands right now, and it’s highly contentious whether this will actually change, an LLM on its own lacks the intrinsic curiosity to challenge an established paradigm. It can very powerfully elaborate on existing hypotheses and find solutions to the challenges those hypotheses present. But actually going against the established consensus, such as the geocentric model, requires a kind of creative thinking that we could call deviant thinking.
That type of thinking seems to be on the decline right now. Adam Mastroianni has written an excellent post illustrating, with plenty of examples, how that appears to be the current trajectory. He analyses several trends, from people’s willingness to act in criminal ways to the homogenisation of brand identities and art.
Deviant thinking is, in this context, the capacity to think against established norms. “You start out following the rules, then you never stop, then you forget that it’s possible to break the rules in the first place. Most rule-breaking is bad, but some of it is necessary. We seem to have lost both kinds at the same time,” he writes.
He also attributes a decline in scientific progress to a decline in deviant thinking: “Science requires deviant thinking. So it’s no wonder that, as we see a decline in deviance everywhere else, we’re also seeing a decline in the rate of scientific progress.”
Copernicus was a deviant thinker, at least with regard to the established theological and scientific consensus of his time in the West. To be able to look at the data and say, “Hold on a minute, perhaps the Earth is not the centre of the universe,” and to have the guts to make that public, with the consequences it could entail, even death, required someone willing to think deviantly.
The decline in that type of thinking could be related to a decline in critical thinking. To think deviantly in an effective way, one must first think critically. In “You Can Always Look It Up—Or Can You?”, an essay published in American Educator in the spring of 2001, the American educator E. D. Hirsch Jr. argued that, because of search engines and the internet, we were losing the capacity to think critically. And that was before AI models were even on the table.
What Hirsch was essentially saying is that it takes knowledge to gain knowledge and to make sense of it. He criticised educational models built solely around acquiring skills, on the premise that factual information could always be looked up. “Yes, the Internet has placed a wealth of information at our fingertips. But to be able to use that information—to absorb it, to add to our knowledge—we must already possess a storehouse of knowledge. That is the paradox disclosed by cognitive research.”
He argues that what enables lifelong learning, reading comprehension, critical thinking, and intellectual flexibility is broad, cumulative background knowledge, beginning early in childhood. Without such a foundation, neither “skills” nor access to the internet can substitute for learning and cognition.
A recent MIT study hints at what most people can intuitively perceive: using LLMs impairs our thinking capacity. Researchers used an EEG to record writers’ brain activity across 32 regions and found that those using ChatGPT showed the lowest brain engagement compared with those using traditional search engines or no tools at all.
E. D. Hirsch warned that teaching skills alone was not enough to develop critical thinking, but LLM chatbots now appear to be impairing even those skills. According to the MIT study, participants using ChatGPT “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
It is not surprising, then, that deviant thinking is on the decline. Not only are we losing the capacity to accumulate factual knowledge, and with it the capacity to make sense of new information, but we are also losing the thinking skills that were supposed to make up for that loss.
Perhaps we are not losing that capacity so much as offloading it onto machines. We first delegated the storage of knowledge, and now we are delegating the thinking itself. But by delegating those processes, we lose the capacity to think critically, let alone deviantly, which makes us more conformist towards the general narrative, more complacent with power.
It is tempting to believe that this was not the goal all along in developing this technology. Now that the hype about how LLMs are going to change the world and revolutionise every industry seems to have passed its peak, and we are sobering up a little, we are seeing that the impact on the productive economy is relatively small.
So far, the actual use cases for generative AI models are quite niche compared to the expectations. Granted, in some industries they are a game-changing tool, but another MIT study showed that 95% of companies were considering rolling back their generative AI pilots because they had found zero return. There are a few areas, however, in which these models excel: surveillance, targeting, content reproduction, and algorithmic manipulation. They are a perfect tool for increasing control and conformity.
However, that’s not the main point I am trying to make here. Rather, it is that generative AI will not give us anything genuinely new, only more of the same: bigger, faster, more productive. Not only because the technology itself is not fit for it, but because it is making us more homogeneous (“fitter, happier, more productive,” as Radiohead sang) and less capable of thinking deviantly. I’m not sure whether that’s a good thing or a bad thing, but I definitely think it is a more boring thing.

