Could an AI Have Suggested that the Earth Is Not at the Centre of the Universe?

When Copernicus presented his heliocentric model, in which the Earth was not the centre of the universe, there was strong pushback against it. Not only did the Church refuse to accept it because it posed theological problems, but other astronomers also rejected it, arguing that geocentrism better explained some phenomena.

This was despite the fact that the heliocentric model had been widely discussed by other cultures, such as the ancient Greeks and the Islamic world, and that there was empirical data challenging geocentrism. What it took for the West to begin this paradigm shift wasn’t the existence of data, but someone willing to think differently, to ask a question that went against the established consensus.

In theory, an LLM could have arrived at that conclusion if fed all the necessary information. These AI models excel at analysing data and recognising patterns. Based on that, they can generate predictive hypotheses and even run simulations. However, it could only have done so when asked the appropriate question, when prompted to do it.

Because LLMs are trained on huge amounts of text and optimized to predict what text is likely to come next, they inherit the distribution of beliefs in the training data. If most of the sources say geocentrism is correct, a model trained only on those texts would strongly favour geocentrism too. The way the models are trained actively rewards agreeing with the majority in the data, not inventing radically new theories to explain it. Most LLMs are further tuned to be helpful and safe—according to whatever that means for the developer—often being nudged to respect expert consensus.
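To make the mechanics concrete, here is a deliberately toy sketch in Python: a count-based next-word predictor, not a real LLM, trained on an invented four-sentence corpus in which the geocentric claim dominates. Real training objectives are far more sophisticated, but analogous in one respect: whatever view dominates the data ends up with the higher probability.

```python
from collections import Counter

# Invented toy corpus: three "documents" assert geocentrism, one heliocentrism.
corpus = [
    "the centre of the universe is the earth",
    "the centre of the universe is the earth",
    "the centre of the universe is the earth",
    "the centre of the universe is the sun",
]

context = "the centre of the universe is the"

# Count which word follows the context, i.e. a maximum-likelihood
# estimate of P(next word | context) under next-token prediction.
continuations = Counter(
    sentence.split()[-1] for sentence in corpus if sentence.startswith(context)
)
total = sum(continuations.values())
for token, count in continuations.most_common():
    print(f"P({token!r} | context) = {count / total:.2f}")
# P('earth' | context) = 0.75
# P('sun' | context) = 0.25
```

A model optimised this way favours the majority belief by construction; nothing in the objective rewards proposing a better theory of why the texts disagree.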

As it stands right now—and it’s highly contentious whether this will actually change—an LLM on its own lacks the intrinsic curiosity to challenge an established paradigm. It can very powerfully elaborate on previous hypotheses and find solutions to the current challenges that those hypotheses present. But actually going against the established consensus, such as the geocentric model, requires a type of creative thinking that we could call deviant thinking.

It seems that, right now, that type of thinking is on the decline. Adam Mastroianni has written an excellent post illustrating, with plenty of examples, how that seems to be the current trajectory. He analyses several trends, from people’s willingness to act in criminal ways to the homogenisation of brand identities and art.

Deviant thinking is, in this context, the capacity to think against established norms. “You start out following the rules, then you never stop, then you forget that it’s possible to break the rules in the first place. Most rule-breaking is bad, but some of it is necessary. We seem to have lost both kinds at the same time,” he writes.

He also attributes a decline in scientific progress to a decline in deviant thinking: “Science requires deviant thinking. So it’s no wonder that, as we see a decline in deviance everywhere else, we’re also seeing a decline in the rate of scientific progress.”

Copernicus was a deviant thinker, at least in regard to the established theological and scientific consensus of his time in the West. To be able to look at the data and say, “Hold on a minute, perhaps the Earth is not the centre of the universe,” and to have the guts to bring that to the public, with the consequences that it would entail—death even—required someone willing to think deviantly.

The decline in that type of thinking could be related to a decline in critical thinking. To think deviantly in an effective way, one must first think critically. The American educator E.D. Hirsch Jr. pointed out in an essay published in the spring of 2001 in American Educator, titled “You Can Always Look It Up—Or Can You?”, that, because of search engines and the internet, we were losing the capacity to think critically. That was even before AI models were on the table.

What Hirsch was essentially saying is that it takes knowledge to gain knowledge and to make sense of that knowledge. He criticised educational models based solely on acquiring skills because factual data could always be found. “Yes, the Internet has placed a wealth of information at our fingertips. But to be able to use that information—to absorb it, to add to our knowledge—we must already possess a storehouse of knowledge. That is the paradox disclosed by cognitive research.”

He argues that what enables lifelong learning, reading comprehension, critical thinking, and intellectual flexibility is broad, cumulative background knowledge, beginning early in childhood. Without such a foundation, neither “skills” nor access to the internet can substitute for learning and cognition.

A recent MIT study hints at what most people can intuitively perceive: using LLM models impairs our thinking capacity. Researchers used EEG to record writers’ brain activity across 32 regions and found that those using ChatGPT showed the lowest brain engagement compared with those using traditional search engines or nothing at all.

E.D. Hirsch warned that teaching only skills was not enough to develop critical thinking, but now LLM chatbots are impairing even those processes. According to the MIT study, those using ChatGPT “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

It is not surprising, then, that deviant thinking is on the decline. Not only are we losing the capacity to accumulate factual knowledge, which also implies the capacity to make sense of new information, but we are also losing the capacity to use the thinking skills that were supposed to make up for the loss of factual knowledge.

Perhaps we are not losing that capacity, but rather offloading it onto machines. We first delegated the ability to store knowledge and now we are delegating the thinking processes. But by delegating those, we are losing the capacity to think critically, let alone deviantly, which means that we become more conformist with the general narrative, more complacent with power.

It is hard not to wonder whether this was the goal all along in developing this technology. Now that the hype about how AI models are going to change the world and revolutionise every industry seems to have passed somewhat, and we are sobering up a little, we are seeing that the impact on the productive economy is relatively small.

The actual use cases for generative AI models so far are quite niche compared to the expectations. Granted, there are some industries in which they are a game-changing tool, but another MIT study showed that 95% of companies were considering rolling back generative AI pilots because they found zero return. There are a few areas, however, in which they excel: surveillance, targeting, content reproduction, and algorithmic manipulation. They are a perfect tool for increasing control and conformity.

However, that’s not the main point I am trying to make here. Rather, it is that generative AI will not give us anything really new, only more of the same. Bigger, faster, more productive. Not only because the technology itself is not fit for it, but because it is making us more homogeneous—“fitter, happier, more productive,” as Radiohead sang—less capable of thinking deviantly. I’m not sure if that’s a good or a bad thing, but I definitely think it is a more boring thing.


36 comments

  1. Carolinian

    I’d suggest that the prevalence or absence of “deviant” thinking is cyclic and boils down to “necessity is the mother of invention.” Those who are comfortable and secure have no need to deviate. Those who are outside the social consensus are much freer to think for themselves or at least differently.

And so it’s no surprise that minority groups like Jews or blacks have often had an outsized influence on the culture that has marginalized them. And conversely, once accepted within that culture, the creative thinking takes a dive.

Biology says that inbred populations lacking genetic diversity are in danger of dying out. Exactly. All hail the deviants.

    1. Henry Moon Pie

      That’s a very interesting thesis. Does that mean that it’s suffering (e.g. as a member of an “outside” group) that produces art? I think of Fahrenheit 451‘s vacuous Julie Christie character, staring at a screen, incapable of deciding what she wanted for dinner without prompting from that screen. It’s Fukuyama’s end of history taken to another level: the end of thinking, which is what Curro is telling us about here.

      Is having an outsized influence on the surrounding culture typical of every marginalized group? American Muslims? European Roma? Are there other factors?

  2. alrhundi

I think we have to consider our definition of “new”. We can feed a model all the data we have on physics/chemistry/biology and it can give us innovations nobody has yet considered, by relating data in ways that have yet to be explored. It’s using existing information but it’s still new concepts.

As far as LLMs like ChatGPT being used for report writing, it makes sense that they will homogenize outputs due to a lack of creative thinking, but is that a data problem? Do humans actually have the ability to come up with “new” concepts, or do we just have a more decentralized ability to gather data in different ways, which can then be used creatively?

    1. Acacia

      It’s using existing information but it’s still new concepts.

      I thought the main point of Curro’s argument is that we’re simply not going to get new concepts from LLMs. It is obvious that humans come up with new concepts. This is what drives science forward, and it is what intellectual historians study. This matter has been studied in depth. You can read, for example, Hans Blumenberg’s The Genesis of the Copernican World.

      Parenthetically, the term “A.I.” has become a very elastic and therefore questionable term. In the early days, John McCarthy, a.k.a. the “father of A.I.” said that garbage collection in LISP was “A.I.”. Today, nobody would ever say such a thing. It’s simply a set of known algorithms with known behavior and performance, e.g., reference counting vs. mark and sweep vs. generation scavenging. Computer scientists are not working on it much. It’s just engineering now, nothing sophisticated any more. Likewise, simulation languages are nothing new (e.g., see Simula-67, Smalltalk, etc.), and software developers have been building simulations for decades now. It may be easier to build a simulation using current tech, but that doesn’t change the underlying issue that Curro is raising in this article.

“A.I.” has become a vague term, and I would submit that what it really signifies now is a faith, e.g., a faith that LLMs are going to magically lead to a better world, or superintelligence, the singularity, etc., or whatever will keep stock prices steadily ascending.

      1. Henry Moon Pie

It makes me think of Thiel and his oft-repeated lament about his disappointment in the pace of “progress.” Similarly, the story about Altman answering that he’ll make money by deferring to the AI to figure it out indicates a childish (maybe boyish) churlishness because the world doesn’t work the way you think it should. Gates, with his Frankenplants and robot bees, reveals the same trait.

        But isn’t that “deviant thinking,” “out-of-the-box thinking,” etc.? What could be more “deviant” than refusing to accept the realities of the cosmos? And wasn’t Copernicus doing just the opposite? He had to overcome human hubris (“We’re the center of EVERYTHING!”) to see and then proclaim a cosmic reality.

Maybe what’s missing is thinking in line with the universe, the kind of thinking that’s becoming rarer because our contact with that universe is itself becoming rarer. Copernicus didn’t stare at a screen all day with its simulation of reality. He was out looking at the stars at night. Legendarily, Newton had an apple fall on his head. More and more, we live in a simulation of reality that is just one or two steps away from becoming all-encompassing to the point of completely obscuring the actual reality in which we exist.

        1. Adam1

OMG! I’ve recently been spending some time hanging out on Reddit and I’ve become bewildered by the number of people who refuse to disturb their mental reality.

It’s amazing to me the number of posts that are basically the same… mostly younger people, but not exclusively; I’ve seen it from people in their 40s and 50s too… they have this friend or colleague who seems to be paying a lot of attention to them recently. There is often subtle touching involved and lots of other subtle signals. They want the world of strangers on Reddit to confirm that this other person is attracted to them!?!?!

          God forbid they just ask the person. They are so worried that it will disturb their relationship to talk to the person about what they are experiencing. They are basically afraid to challenge the mental image they have created around their life with this other person even though it could all be a figment of their fantasies (or confirm that the other person is attracted to them – god forbid they caught that fish).

        2. ChrisFromGA

          I think that’s the likely outcome of turning over all the “thinking” to algorithms … we lose our connection to the real world that follows the laws of physics and biology. As more and more folks fall into simulations and “fake” realities, the AI will feed on itself in a recursive loop and produce more and more gibberish.

We’ve already seen this happen in Ukraine, where the Zelensky regime has taken “Baghdad Bob” to a new level with AI. If enough people believe that “Pokrovsk holds” when in fact there are Russian troops controlling every block, do those people who fall for the AI fakes even care that they’re living in an alternate reality? And what happens when that alternative reality gets fed back into the AI? I’ve already observed Grok spitting out blatant anti-Russian propaganda.

          Objective reasoning may be dead.

  3. ilsm

It can be like Google, with a lot of extraneous data.

I saw a comment on a blog (?) that 5 of the 6 battleships damaged on Dec 7, 1941 at Pearl Harbor were at the battle of Leyte.

I asked Google AI. It listed the 5 battleships, added a comment on the extent of repairs, and mixed the battle at Surigao with Leyte in its answer.

Having some odd interest (I am an Air Force vet), I know there were 3 separate naval engagements around the Leyte invasion: Halsey taking the only two carrier TFs north, Surigao, and Samar, where the Japanese battleships almost got to shell the invasion forces…. See Ian W. Toll’s outstanding trilogy on the Pacific War!

All I wanted was “yes”. But I was impressed that it treated Surigao as a subset of the Leyte naval battle.

If I were not so lazy I could easily have wiki’ed the Surigao sea battle for the ships engaged; I know wiki lists the ships recovered after Pearl Harbor.

    Remember Sunday is Dec 7.

    1. redleg

Three battleships were permanently lost in the attack. Arizona was lost due to a magazine explosion after being struck by a bomb near B turret. Oklahoma capsized after being struck by at least 9 (!) torpedoes. Utah was a training ship and is still resting in the harbor mud after taking several torpedoes and capsizing. Each of these ships has a memorial around Ford Island, but base access is required to visit the Utah. The Oklahoma Memorial can be visited from the USS Missouri’s parking lot/bus stop.
IIRC California and West Virginia were sunk, Nevada would have sunk if she hadn’t been beached, Tennessee suffered bomb damage but was trapped behind the other sunken battleships, and Pennsylvania was damaged by bombs in drydock. All of these were salvaged and returned to service, with West Virginia scoring the first-salvo hit on Yamashiro’s bridge at Surigao Strait in October 1944.

      The Battle of Surigao Strait is described in detail by Tony Tully in his excellent book, which includes Japanese sources.

  4. vao

    Having a conformist populace, with homogenous ways of (not) thinking, that has lost the capacity for deviant reflection conjures up a disturbing parallel with genetics.

    Populations with a homogenous genotype and a drastically reduced set of mutations are not only at risk of degeneracy, but also much more susceptible to catastrophic collapse because the lack of genetic variability fatally reduces the possibility to adapt to new environmental circumstances.

In other words, at the same time that colossal challenges are upon us (climate change, resource depletion, spreading conflicts…), we are setting in motion a machinery to petrify thought according to past mainstream schemes and atrophy the creativity and “thinking outside the box” that we need to address all those issues.

  5. ChrisPacific

    It’s a good question, not in the sense that it’s difficult to answer (no, obviously) but because the answer highlights the limitations of the tools. Obviously at the time this would have been a fringe view, like today’s Flat Earthers. Similarly, if you could hypothetically chat with an AI that was trained up in German society during the Third Reich, it would probably talk earnestly to you about Untermenschen and the threat posed by the Jews.

    I have been able to lead AIs through a chain of reasoning to demonstrate that they were wrong about things like this, and gotten them to admit it. But it’s a futile exercise, because they don’t have memory or the ability to directly take feedback on board (they will claim they do, but they’re just mimicking what a human would say in a similar situation). Without retraining the model and tuning it, they will respond exactly the same way to the next person and learn nothing.
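A minimal sketch of why such corrections don’t stick (the `fake_chat` function below is an invented stand-in for any LLM API): at inference time the model’s weights are frozen, so each call is a pure function of the message list it is handed, and the only “memory” is whatever history the caller sends back in.

```python
def fake_chat(messages):
    # Invented stand-in for an LLM API call. A real model samples from
    # frozen weights, so the same input yields the same behaviour.
    return f"reply derived from {len(messages)} message(s)"

question = {"role": "user", "content": "Is the Earth the centre of the universe?"}

session = [question]
first = fake_chat(session)

# The user pushes back; the correction exists only in this message list.
session += [{"role": "assistant", "content": first},
            {"role": "user", "content": "Your reasoning is wrong, because..."}]
second = fake_chat(session)  # may concede, but only within this conversation

# A brand-new session starts from a fresh message list: the concession is gone.
fresh = fake_chat([question])
assert fresh == first  # nothing was learned; the weights never changed
```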

    The mitigation is to be cautious about what we ask of them and take any critical reasoning or analysis with a large grain of salt. The providers tell us to do just this, but in tiny fine print. Meanwhile, their marketing material strongly (though implicitly) encourages us to do the exact opposite, because delivering on their outlandish growth projections requires it.

  6. hazelbee

    A recent MIT study hints at what most people can intuitively perceive: using LLM models impairs our thinking capacity.

That is a very selective reading of the MIT study. You are suggesting a false binary – that using LLM models always impairs our thinking capacity.

If you read through to the discussion and conclusion, the paper itself finds that Brain-to-LLM usage can lead to enhanced cognitive performance when users have first developed their thinking skills.

    That is, when participants used their brain first and then transitioned to AI assistance, they maintained robust neural engagement – contradicting the blanket claim that “LLM models impair our thinking capacity.”

If you are going to quote from the article, let’s have the rest of the quote:

    In contrast, Brain-to-LLM participants could leverage tools more strategically, resulting in stronger performance and more cohesive neural signatures.

    It is not about the use of LLM, it is about HOW and WHEN we use them. Critical thinking will be alive and well with these models to assist us, it will just be unevenly distributed. As it is now.

    The Conversation has another look at the nuance in that MIT study: here

On divergent thinking
I am making the assumption that when you say “deviant thinking” you mean “divergent”?
There are ample research results that show the benefits of LLMs in creative industries, creativity, and the creation of divergent ideas.
see: Divergent Creativity in Humans and LLMs
and: Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration

    Human Creativity in the Age of LLMs

… introduced a useful metaphor—steroids, sneakers, and coach—to describe the spectrum of AI’s role in human-AI collaboration. Our findings suggest that co-creative systems must be carefully designed to be coach-like to prevent unintended consequences, such as stifling human creativity, even after AI assistance is removed.

    Timing matters. Sequence of model use matters. Design of these systems matters. There will be great winners and many losers here.

And on the specific topic of finding new discoveries:

    Mining Math Conjectures from LLMs: A Pruning Approach

This is a methodological, systematic approach whose results “indicate that LLMs are capable of producing original conjectures that, while not groundbreaking, are either plausible or falsifiable via counterexamples”.

    and from Nature:
    Mathematical discoveries from program search with large language models – from the abstract: “This shows that it is possible to make discoveries for established open problems using LLMs”

    There are plenty of other recent examples of top class mathematicians using LLM as assistants to speed up and amplify their research.

The MIT NANDA project quotes 95% as “considering rolling back generative AI pilots because they found zero return”. Or, to put it another way: for an early-stage technology, 5% are already in production showing millions in ROI? That is not at all surprising for innovation projects.

Lastly, remember that the research linked above, and the articles on it, are a year or more behind the current state of the art.

    1. Yves Smith

      This is Making Shit Up, big time. The sections you cite are academically-worded desperate attempts to defend the MERE IDEA that it MIGHT BE POSSIBLE SOMEHOW SOMEDAY for AI not to reduce critical thinking. This is at best wishful speculation in the face of evidence.

  7. ibaien

    so much praise for “deviant thinking” without acknowledging that all the deviance happening is currently on the fringe right, whether trump and co. in the grift sector or silicon valley bros in the tech sector. deviance isn’t a virtue, it’s just realizing that the laws of man are entirely made up and choosing not to play along. there’s almost no deviant thought on the left because they were all nice boys and girls from loving homes who wanted to get good grades and be praised. petit bourgeois through and through.

    1. cfraenkel

That’s a different sense of the word. There are plenty of antecedents to the current nuttiness & grift; just think back to the original robber barons & Boss Tweed and the like. Nothing new there…. The sense meant in the article is something that wouldn’t be found anywhere in the training corpus, because no one has thought of it yet… that’s what you’ll never get from an LLM.

  8. NN Cassandra

    Given the propensity of GPTs to generate what is called hallucinations, I think it could. They certainly aren’t strictly constrained by what is in their training data.

    1. ambrit

There is a world of difference between “hallucination” and “intuition.” My feeling here is that the basic set of rules depends on how well the “deviance” manages to deal with the phenomenal world.

  9. AG

As I have repeatedly stressed, an educational ideology that evaluates on grades, and on that basis furthers or obstructs progress, eventually feeding into “careers”, will never produce human beings who write papers out of pride in actually accomplishing their own vision of a certain subject, rather than merely to acquire the symbols – grades – that help them succeed in a purely administrative sense. Children need to develop passion, love, obsession for intellectual endeavours. Usually every child has that until schools destroy it. One major cause is the competitive ideology to which children are submitted and which is enshrined in grading systems.

Considering the points expanded on by the post, there is way too little pushback now from teachers, the educational institutions, and so forth. I am not quite sure why.

I spoke to a friend working in schoolbook publishing management and he doesn’t share my serious criticism of the AI wave. That reaction makes no sense to me.

    1. hk

One thing that can’t be repeated often enough is that not only is “grading” a task perfectly suited for AI (transforming a set of “answers” based on a rubric of “right answers” to a quasi-numerical scale), but making grading more amenable to “AI”-type processes is something that education bureaucrats insist on. (This is based on my personal experience at a flagship state college, even though it’s not a “good” one. Basically, they insisted that grading has to be clear and predictable, i.e. you set the expectations of what gets what grade clearly in advance and teach to the scale you established. This got me completely alienated from the whole undergrad teaching enterprise.) But this is true of every bureaucracy: everything goes through some set of formulas, you are assigned to a box based on how your “data” fits them, and so forth. So the AI “utopia” is basically a bureaucratic hell.

      1. AG

Your point is of course what I was alluding to.

I am not a fan of applying Franz Kafka to aesthetic analyses of today’s world – it turns out banal too often – but I think Kundera (?) wrote an essay on Kafka, saying that the “administrative error is the last form of poetry in our time.” That essay is of course old by today’s standards.

        But there is an underlying basic human need to break these mechanisms, which is why Kafka or Orwell have been quoted very exhaustively over so many decades again and again. People understand them and their political lessons.


        p.s. Shout-out to Matt and Walter – TALK ABOUT A KAFKA short story or novel! There is even one fragment titled “Amerika”… (although I never had the feeling that it actually was about the US on a single page.) But there should be enough substance in one of the pieces for a twisted exchange ;-P

  10. HH

The sentiment in the commentariat that the AI steam hammer can’t beat today’s John Henry painter, novelist, or composer is entirely understandable, but we need to consider how defensible the fortress of creativity is against the AIs. Like the endless discussions over the soul, creativity gets the mysticism treatment. A reductionist model would describe it as an iterative trial and selection process: the creative person identifies a series of possible brush strokes, phrases, or notes, chooses those that are most promising, and arranges them in progressively more pleasing patterns. Whether this is a conscious or unconscious process is irrelevant; the iterative loop is the same. Thus, the painful question is: why can’t an AI do that? It can generate options; it can make selections; and it can evaluate different arrangements. How long before the AIs are producing superior art?
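For concreteness, here is a toy rendering of that loop in Python. The scoring function is invented (distance to an arbitrary numeric target), and choosing it is exactly the contested step: the loop only runs once someone has defined what “pleasing” means.

```python
import random

def creative_search(score, mutate, seed, rounds=200, pool=20):
    """Iterative trial and selection: propose variations, keep the best."""
    best = seed
    for _ in range(rounds):
        candidates = [mutate(best) for _ in range(pool)]  # generate options
        best = max(candidates + [best], key=score)        # select the most "pleasing"
    return best

target = 42.0  # arbitrary stand-in for an aesthetic ideal
result = creative_search(
    score=lambda x: -abs(x - target),           # closer to the ideal scores higher
    mutate=lambda x: x + random.gauss(0, 1.0),  # small random variation
    seed=0.0,
)
print(round(result, 2))  # converges toward 42.0 over the rounds
```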

    1. Cian

Because they don’t work in a creative way. They do pastiche of what has come before, based upon what is most common in their training set.

In other words, they can churn out variations of superhero movies, but they won’t generate anything novel. If you want a world of endless sequels, then they are perfect.

      1. ChrisFromGA

        Exactly! Take a mundane example from the music world, the rock band Kiss. It is hard to remember, but in 1973 they were deviants who challenged the conventional thinking that rock music had to be “serious.” Doing things like wearing black and silver makeup and spitting blood and fire onstage was something for circuses. Love them or hate them, they were novel in the 70’s. Until the Dynasty album, at least.

        Of course, 50-some-odd years later, AI could easily produce music that is in the style of Kiss, perhaps with a twist here or there. But it can never come up with a new style of music that we’ve never heard before, as Kiss did. Or to draw a less schlocky example, jazz. Where did that come from?

        I am reminded of some of the debates I heard in college about neuro-biology … Skinner comes to mind. If the brain can be truly reduced to a machine, then yes, eventually some artificial intelligence could come up with truly novel things. If not, there is a role for the mystical, and many great artists have acknowledged that their inspiration comes from a spiritual place and they are merely the medium.

  11. Michał N

    🔭 Copernicus, Kepler vs. AI + Vision
    Copernicus (heliocentrism): He relied on mathematical reasoning and reinterpretation of existing astronomical tables. His “deviant thinking” was to challenge consensus, not just crunch numbers.

    Kepler (elliptical orbits): He combined Tycho Brahe’s meticulous observations with his own mathematical insight, realizing that planetary motion wasn’t circular but elliptical. This required both data and creative leaps.

    AI + Vision (hypothetical): If you gave a modern AI access to telescopes, sensors, and the ability to process vast datasets in real time, it could detect anomalies (like retrograde motion) and propose models that fit better than geocentrism. In terms of raw computational power and pattern recognition, yes—AI would outpace Copernicus or Kepler.

    ⚖️ Where AI Falls Short
    Creativity vs. Conformity: AI excels at finding patterns in data, but it tends to reinforce consensus unless explicitly prompted to explore alternatives. Copernicus had the courage to contradict authority—something AI doesn’t “want” to do.

    Risk & Context: Copernicus risked persecution by the Church. Kepler wrestled with theological implications. AI doesn’t face social or existential risks, so it lacks the human drive to defend or fight for a paradigm shift.

    Vision ≠ Interpretation: Even with telescopes, AI would need someone to ask the right questions. Kepler didn’t just see data—he interpreted it through imagination and philosophy.

    🚀 Modern Parallel
    Today, astronomers use AI-assisted telescopes to detect exoplanets, gravitational waves, and cosmic structures. These systems can spot signals humans might miss. But the interpretation—deciding what those signals mean for our understanding of the universe—still requires human creativity and willingness to challenge norms.

    ✨ Conclusion
If AI had access to telescopes in Copernicus’ time, it could have mathematically demonstrated heliocentrism faster and more convincingly. But the competitive edge of Copernicus and Kepler wasn’t just data analysis—it was deviant, courageous thinking. AI + Vision might surpass them in speed and accuracy, but without human-like curiosity and defiance, it wouldn’t have sparked the same revolutionary shift.

    1. Piotr Berman

The LLMs that I know search for statements related to the question and stitch together a narrative or computer code, but not all circulating statements are true, and not all circulating code fragments are correct or efficient. And the narrative or code may be inconsistent. Sometimes you get a good answer after challenges like “but this is false” or “you cannot have both A and B”.

But this is all within concepts and methods already known, so AI will not invent something really new. An AI in Kepler’s position would have found no concept of the motion of “celestial bodies” except the composition of uniform circular movements; the ellipse was known, but not in this context.

  12. Ernie Brill

AI will NEVER replace genuine creativity. It can’t feel. It can’t improvise. It can’t envision.
That’s three strikes in any league, even the bush league.

  13. Raymond Carter

    AI is a perfect tool for increasing control and conformity, yes, but all that is required to defeat it is to stop looking at your phone and computer all the time.

    Read a book, take a walk, meditate, swim, play sports, cook, live life, hang out with friends. Problem solved.

  14. Mareko

Thank you for the post, which feels very serendipitous, given that one of my favourite blogs is currently telling the story of the development of heliocentrism: analog-antiquarian.net, “chronicles of worldly wonders” by Jimmy Maher. He discusses precisely the enormous mental leaps that were required to move from a singular Ptolemaic universe of harmony to one in which planets spin in elliptical orbits around innumerable suns. From my reading, it appears to provide a quite emphatic answer to your headline.
I’m sure I probably discovered the site through NC originally, and I’ve often thought that it would appeal to many NC readers, so I hope my recommendation isn’t amiss. The writer’s range of reference is so wide that he can’t possibly be an expert in all his subjects, but he is an awfully good storyteller.

  15. Victor Sciamarelli

The concentration of power is a fundamental problem with AI, which is to ask: who controls it, and how do we monitor the people who control these so-called super-intelligent machines?
    Copernicus was not merely up against religion and consensus but intuition and common sense. It was difficult for people to accept intuitively that the earth was moving and spinning.
    In contrast, I think asking AI contemporary questions is valid such as, should the Fed raise or lower interest rates, does cutting taxes for the wealthy increase economic growth, are tariffs beneficial, should Ukraine join NATO, and if AI determines the Trump boat strikes violate US as well as international law, should the military obey the orders or the AI determination?
    As in Copernicus’ day, questions can be raised about which powerful people want answers that don’t interfere with their interests.
    Thus, the potential problem is whoever controls AI controls the people, as creativity and innovation take a back seat, while AI promotes the status quo ante.

  16. HH

    It seems to have escaped the notice of many that the Ptolemaic system itself was the product of human creativity, which established a completely false explanation of planetary phenomena as an advance over earlier notions of wandering stars. Copernicus was not motivated by the spirit of rebellion; he was seeking the truth, and this required displacing the system that did not correspond to reality. AIs will pursue scientific inquiry using reason; rebelliousness will not be necessary.

    1. Victor Sciamarelli

I think it’s a dangerous assumption that AI, something potentially very powerful, unregulated, and beyond democratic control, will “pursue scientific inquiry using reason.” In the wrong hands, AI could just as easily become the enemy of freedom and democracy.

  17. Es s Ce Tera

    Jordan Peterson and his disciples tend to rail against deviants and deviancy. Yet here is a good conversation which would seem to make the case that deviants are necessary and integral to the functioning of society.

