Are We Offloading Critical Thinking to AI Chatbots?

Yves here. Worries about the impact of the new information/analytical crutch, AI, on user ability should hardly be a surprise. It’s not hard to find examples of how new cognitive aids wound up degrading the skill level of many if not most adopters. It used to be common for people to be able to memorize meaningful amounts of spoken words. A more modern example: when I started out on Wall Street, the associates did financial analysis on green ledger paper and pulled data from corporate reports and SEC filings. The class behind me, which did comparatively little scrivener’s work by virtue of simply printing out canned (and often inaccurate) worksheets from Compuserv, had a markedly lower understanding of corporate finance.

Concerns over AI are more acute. The article below describes how AI has impaired critical thinking, to the degree that users routinely accept AI results rather than subject them to a few sanity checks. Readers here have quoted AI output that is simply wrong without realizing it (such as an incorrect definition of a fundamental legal concept, fiduciary duty), or, like IM Doc, are swimming in AI errors yet find that most colleagues can’t be bothered to check. It seems that the chatbot format is effective in lulling users into acceptance.

The article below describes a wide range of negative outcomes, from reduced coder productivity to even changes in brain activity.

By Ramin Skibba (@raminskibba), an astrophysicist turned science writer and freelance journalist. He has written for WIRED, The Atlantic, Slate, Scientific American, and Nature, among other publications. Originally published at Undark

In January, researchers at Microsoft and Carnegie Mellon University posted a study online on how artificial intelligence tools like ChatGPT affect critical thinking. They wanted to know how knowledge workers — people who rely on their intellectual skills for their jobs — were interacting with the tools. Through detailed surveys, their findings suggest that the more confidence those workers placed in generative AI, the less they themselves relied on critical thinking.

As one of the test subjects noted in the study, “I use AI to save time and don’t have much room to ponder over the result.”

The Microsoft paper is part of a nascent but growing body of research: Over the past two years, as more people have experimented with generative AI tools like ChatGPT at work and at school, cognitive scientists and computer scientists — many of them employed by the very companies that make these AI tools, as well as independent academics — have tried to tease out the effects of these products on how humans think.

Research from major tech companies on their own products often involves promoting them in some way. And indeed, some of the new studies emphasize new opportunities and use cases for generative AI tools. But the research also points to significant potential drawbacks, including hindering the development of skills and a general overreliance on the tools. Researchers also suggest that users are putting too much trust in AI chatbots, which often provide inaccurate information. With such findings coming from the tech industry itself, some experts say, it may signal that major Silicon Valley companies are seriously considering potential adverse effects of their own AI on human cognition, at a time when there’s little government regulation.

“I think across all the papers we’ve been looking at, it does show that there’s less effortful cognitive processes,” said Briana Vecchione, a technical researcher at Data & Society, a nonprofit research organization in New York. Vecchione has been studying people’s interactions with chatbots like ChatGPT and Claude, the latter made by the company Anthropic, and has observed a range of concerns among her study’s participants, including dependence and overreliance. Vecchione notes that some people take chatbot output at face value, without critically considering the text the algorithms produce. In some fields, the error risks could have significant consequences, experts say — for instance, if those chatbots are used in medicine or health contexts.

Every technological development naturally comes with both benefits and risks, from word processors to rocket launchers to the internet. But experts like Vecchione and Viktor Kewenig, a cognitive neuroscientist at Microsoft Research Cambridge in the United Kingdom, say that the advent of the technology that girds today’s AI products — large language models, or LLMs — could become something different. Unlike other modern computer-based inventions, such as automation and robotics inside factories, internet search engines, and GPS-powered maps on devices in our pockets, AI chatbots often sound like a thinking person, even if they’re not.

As such, the tools could present new, unforeseen challenges. Compared to older technologies, AI chatbots “are different in that they are a thinking partner to a certain extent, where you’re not just offloading some memory, like memory about dates, to Google,” said Kewenig, who’s not involved in the Microsoft study but collaborates with some of its co-authors. “You are in fact offloading many other critical faculties as well, such as critical thinking.”


Large language models are powerful, or appear powerful, because of the vast information on which they’re based. Such models are trained on colossal amounts of digital data — which may have involved violating copyrights — and in response to a user’s prompt, they’re able to generate new material, unlike older AI products like Siri or Alexa, which simply regurgitate what’s already published online.

As a result, some people may be more likely to trust the chatbot’s output, Kewenig said: “Anthropomorphizing might sometimes be tricky, or dangerous even. You might think the model has a certain thinking process that it actually doesn’t.”

AI chatbots have been observed to occasionally produce flawed outputs, such as recommending that people eat rocks and put glue on pizza. Such inaccurate and absurd AI outputs have become widely known as hallucinations, and they arise because the LLMs powering the chatbots are trained on a broad array of websites and digital content. Because of the models’ complexity and the reams of data fed into them, they have significant hallucination rates: 33 percent in the case of OpenAI’s o3 model and higher in its successor, according to a technical report the company released in April.

In the Microsoft study, which was published in proceedings of the Conference on Human Factors in Computing Systems in April, the authors characterized critical thinking with a widely used framework known as Bloom’s taxonomy, which distinguishes types of cognitive activities from simpler to more complex ones, including knowledge, comprehension, application, analysis, synthesis, and evaluation. In general, the researchers found that using chatbots tends to change the nature of the effort these workers invest in critical thinking: it shifts from information gathering to information verification, from problem-solving to incorporating the AI’s output, and from other types of higher-level thinking to merely stewarding the AI, steering the chatbot with prompts and assessing whether the response is sufficient for their work.

The researchers surveyed 319 knowledge workers in the U.K., U.S., Canada, and other countries in a range of occupations, from computer scientists and mathematicians to jobs related to design and business. The participants were first introduced to concepts and examples of critical thinking in the context of AI use, such as “checking the tone of generated emails, verifying the accuracy of code snippets, and assessing potential biases in data insights.” Then, the participants responded to a list of multiple-choice and free-response questions, providing 936 examples of work-related AI usage, mostly involving generating ideas and finding information, while assessing their own critical thinking.

According to the paper, the connections to critical thinking were nuanced. The paper noted, for instance, that higher confidence in generative AI is associated with less critical thinking, but that among respondents with more self-confidence in their own abilities, there was an increase in critical thinking.

Vecchione and other independent experts say that this study and others like it are an important step toward understanding potential impacts of using AI chatbots. Vecchione’s assessment is that the Microsoft paper does seem to show that generative AI use is associated with less effortful cognitive processes. “One thing that I think that is interesting about knowledge workers in particular is the fact that there are these corporate demands to produce,” she added. “And so sometimes, you could understand how people would forego more critical engagement just because they might have a deadline.”

Microsoft declined Undark’s interview requests, via the public relations firm it works with, but Lev Tankelevitch, a senior researcher with Microsoft Research and a study co-author, did respond with a statement, which noted in part that the research “found that when people view a task as low-stakes, they may not review AI outputs as critically.” He added that, “All the research underway to understand AI’s impact on cognition is essential to helping us design tools that promote critical thinking.”

Other new research outside Microsoft presents related concerns and risks. For example, in March, an IBM study, which has not yet been peer reviewed, initially surveyed 216 knowledge workers at a large international technology company in 2023, followed by a second survey the next year with 107 similarly recruited participants. These surveys revealed increased job-related AI usage — 35 percent, compared to 25 percent in the first survey — as well as emerging concerns among some participants about trust, both in the chatbots themselves and in co-workers who use them. “I found a lot of people talking about using these generative AI systems as assistants, or interns,” said Michelle Brachman, a researcher in human-centered AI at IBM and lead author of the study. She gleaned other insights as well while interacting with the respondents. “A lot of people did say they were worried about their ability to maintain their skills, because there’s a risk you end up relying on these systems.”

People need to critically evaluate how they interact with AI systems and put “appropriate trust” in them, she added, but they don’t always do that.

And some research suggests that chatbot users may misjudge the usefulness of AI tools. Researchers at the nonprofit Model Evaluation & Threat Research recently published a preprint in which they conducted a small randomized controlled trial of software developers who completed work tasks with and without AI tools. Before getting started, the coders predicted that AI use would speed up their work by 24 percent, on average. But those productivity gains were not realized; instead, their completion time increased by 19 percent. The researchers declined Undark’s interview requests. In their paper, they attributed that slowdown to multiple factors, including low AI reliability, the complexity of the tasks, and overoptimism about AI usefulness, even among people who had spent many hours using the tools.

Of the findings, Alex Hanna, a sociologist, research director at Distributed AI Research Institute, and co-author of “The AI Con,” said: “It’s very funny and a little sad.”


In addition to looking into knowledge workers, much of the current AI-related research focuses on students. And if the connection between AI use and diminished critical thinking holds up, some of these studies appear to confirm early concerns regarding the effects of the technology on education. In a 2024 Pew survey, for instance, 35 percent of U.S. high school teachers said that AI in education does more harm than good.

In April, researchers at Anthropic released an education report analyzing one million anonymized university student conversations with its chatbot Claude. Based on their study of those conversations, the researchers found that the chatbot was primarily used for higher-order cognitive tasks, like “creating” and “analyzing.” The report also briefly notes concerns about critical thinking, cheating, and academic integrity. (Anthropic declined Undark’s interview requests.)

Then in June, MIT research scientist Nataliya Kosmyna and her colleagues released a paper, which hasn’t yet gone through peer review, studying the brain patterns of 54 college students and other young adults in the greater Boston area as they wrote an essay.

The MIT team noticed significant differences in the participants’ brain patterns — in areas that are not associated with intelligence, Kosmyna emphasized. Participants who only used LLMs to help with their task had lower memory recall; their essays had more homogeneity within each topic; and more than 15 percent also reported feeling they had no or only partial ownership of the essays they produced, while 83 percent had trouble quoting from the essays they had written just minutes earlier.

“It does paint a rather dire picture,” said Kosmyna, lead author of the study and a visiting research faculty member at Google.

The MIT findings appear to be consistent with a paper published in December, which involved 117 university students whose second language was English and who performed writing and revising tasks and responded to questions. The researchers found signs of what they described as “metacognitive laziness” among learners in the group using ChatGPT 4. That means some appeared to become dependent on that AI assistance, offloading some of their higher-level thinking, such as goal-setting and self-evaluation, to the AI tools, said Yizhou Fan, the lead author.

The problem is that some learners, and some educators as well, don’t really distinguish between learning and performance, as it’s usually the latter that is judged for high or low marks, said Dragan Gašević, a computer scientist and professor at Monash University in Melbourne, Australia, and a colleague of Fan’s. “Generative AI helps us enhance our performance,” in a way like doping, he said. “While learning itself requires much deeper engagement and experiencing hurdles.”

All this research literature comes with limitations. Many of the studies have fairly small sample sizes and focus on very specific tasks, and the participants might not be representative of the broader population, as they’re typically selected by age, education level, or, in the case of Kosmyna’s research, from within a narrow geographic area. Another limitation is the short time span of the studies. Expanding the scope could fill in some gaps, Vecchione said: “I’d be curious to see across different demographics over longer periods of time.”

Furthermore, critical thinking and cognitive processes are notoriously complex, and research methods like EEGs and self-reported surveys can’t necessarily capture all of the relevant nuances.

Some of these studies have other caveats as well. The potential cognitive impacts are less pronounced among people with more experience with generative AI and more prior experience in the task for which they want assistance. The Microsoft study spotted such a trend, for example, but with weaker statistical significance than the negative effects on critical thinking.

Despite the limitations, the studies are still cause for concern, Vecchione said. “It’s so preliminary, but I’m not surprised by these findings,” she added. “They’re reflective of what we’ve been seeing empirically.”


Companies often hype their products while trying to sell them, and critics say the AI industry is no different. The Microsoft research, for instance, has a particular spin: The authors suggest that it’s helpful that generative AI tools could “decrease knowledge workers’ cognitive load by automating a significant portion of their tasks,” because it could free them up to do other types of tasks at work.

Critics have noted that AI companies have excessively promoted their technology since its inception, and that continues: A new study published by design scholars documents how companies including Google, Apple, Meta, Microsoft, and Adobe “impose AI use in both personal and professional contexts.”

Some researchers, including Kosmyna at MIT, argue that AI companies have also aggressively pushed the use of LLMs in educational contexts. Indeed, at an event in March, Leah Belsky, OpenAI’s VP of education, said the company wants to “enable every student and teacher globally to access AI” and advocates for “AI-native universities.” The California State University system, the University of Maryland, and other schools have already begun incorporating generative AI into students’ school experiences, such as by making ChatGPT easily accessible, and Duke University recently introduced a DukeGPT platform. Google and xAI have begun promoting their AI services to students as well.

All this hype and promotion likely stems from the desire for larger-scale adoption of AI, while OpenAI and many other entities investing in AI remain unprofitable, said Hanna. AI investors and analysts have begun speculating that the industry could be in the midst of a bubble. At the same time, Hanna argues, the hype is useful for business managers who want to engage in massive layoffs, even though the LLMs aren’t actually replacing workers.

Hanna believes the preliminary research about generative AI and critical thinking, such as the work from Microsoft, is worth taking seriously. “Many people, despite knowing how it all works, still get very taken with the technology and attribute to it a lot more than what it actually is, either imputing some notion of intelligence or understanding,” she said. Some people might benefit if they have in-depth AI literacy and really know what’s inside the black box, she suggests. However, she added, “that’s not most people, and that’s not what’s being advertised by companies. They benefit from having this veneer of magicalness.”


40 comments

  1. Louis Fyne

    the biggest under-reported aspect is the (alleged by social media) rampant cheating in colleges using homework software (then toss in cheating using smartwatches versus writing things on the palm); heck, Google is offering a free trial of Gemini to .edu accounts, which includes a homework helper.

    And comically, many professors are oblivious to the concept that imaging software is advanced enough to do homework via a smartphone camera.

    It’s expensive to hire even retired teachers as proctors. But the future has to be paper.

    1. Afro

      A confounding factor here is that there has been a lot of pedagogy theory advocating that a larger fraction of grades be assigned to homework and attendance, and a correspondingly lower fraction of grades be assigned to exams. I am not sure what the origin of this belief is.

      When I was an undergrad, final exams were routinely worth 50+% of the grade, and that included a 100% option; that seems to be gone now, or is at least strongly discouraged. I had students complain in the spring because the final in my course was worth ~35%; one student told me, “as a business major, that seems way too high”. I didn’t actually get what he meant, and to be frank I didn’t care.

      I was personally against the trend prior to the advent of widely-used LLMs, and I am more against it now. I think that the purpose of homework should be to facilitate learning the material, not to acquire grades.

      1. Samuel Conner

        My interpretation of the inclusion of attendance and homework performance in the final grade is that it is intended to incentivize activities that the instructor hopes will promote learning. One could argue that this is a kind of hand-holding that ought not to be necessary for young adults but, given human nature and the temptations and distractions of youth, I think it could be helpful for many students.

  2. MartyH

    Reducing complicated, ambiguous, and culturally charged topics to a paragraph may seem useful but it will always reflect the biases in the selection of training materials and weighting algorithms. LLM answers do not contribute to informed consent. You have to do the work to be informed IMHO.

  3. Adam1

    We should be terrified at how much damage AI potentially can bring in this realm. I have firsthand observed over the past 6 years how downgrading or removing a skill impacts people.

    My middle son is a senior this year and he’s a boy scout. I’ve been an adult leader with his patrol; there are about 10 boys in it, all about his age, and they all joined the troop around the same time… so late in their 5th grade year.

    I had known my son’s handwriting was poor, and upon seeing his scout buddies’ handwriting I knew it was no worse than the rest of them. I even remember when I was young how it was common for my male peers to have bad handwriting. However, by the time we were 18 most of us had at least legible writing skills.

    With the exception of maybe 1 or 2 of my son’s scout buddy group, none of them have anything remotely approaching what would pass for consistently legible handwriting. And sadly, more than a few even struggle to read their own writing when asked to decipher what they have written. By the joking standards of my youth they all should be doctors (haha).

    The common thread is that none of these boys have been required to consistently write things since probably before 5th grade. For them, nearly all non-verbal communication is typed.

    I should also point out that this all revolves around block letter writing. Cursive characters might as well be Chinese characters to them. Most of these boys have had exposure to cursive and can sort of read it, but none of them can remotely write in cursive – although some of them have such poor handwriting skills their chicken scratch can sometimes be confused with poor cursive writing (a tad of sarcasm there).

    You can’t develop and refine a skill that you don’t use! I don’t see why this doesn’t apply to general critical thinking. Yes, some people will always have a natural gift for the skill, but even those people benefit from using it routinely.

  4. NorD94

    saw this the other day, how AI “learns” to make text

    How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart – Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent
    https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans

    Inside the lucrative, surreal, and disturbing world of AI trainers
    https://www.businessinsider.com/ai-training-jobs-data-annotators-labelers-outlier-scale-meta-xai-2025-9

  5. leaf

    “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them”
    Maybe Herbert was on to something.
    School exams need to return to paper or, if time permits, one-on-one interviews.

  6. Ventzu

    TBH, this dumbing down outcome was entirely predictable. And this is still early days. Imagine once cohorts of young people come out of school, reliant on AI. What happens to rational thinking, creativity and innovation? Layer on top energy and water consumption . . .

    The fact that our “leaders” are exhorting everyone to embrace AI demonstrates both their lack of intellect and their greed for short-term growth and profits. Lemmings running off a cliff, with unfortunately the rest of us dragged along with them.

  7. Afro

    This happens with every technological revolution. For example, Plato argued against writing because it degraded memory; I think that Plato was probably correct, but ultimately writing is still, I think, to our collective advantage.

    There were arguments against the printing press. When I was in school there were restrictions on calculator use. Nobody knows calligraphy anymore because we have word processors. Filmmakers no longer try to get everything right the first time because they can “fix it in post”, and matte paintings are now long gone. A more recent example is that due to cell phones, people no longer remember telephone numbers, and that with Google Maps many people have an inferior sense of direction. Likewise, most of us never learned how to ride horses, never learned to drive stick, and in the future will never learn to drive at all.

    Technological revolutions like the above, I think, always win. I cannot think of a case where people pushed back. Is there such a case?

    1. bryan

      It’s not a technological revolution; it only feels like one. It’s surveillance software that doubles as a parlor trick.

    2. AG

      Possibly there is a core misunderstanding, insofar as what we today mean by the “printing press” revolution or “automation” is only the outcome of decades or centuries of fights and of pushing back. So what we ended up with was always a process and a result of serious struggle.

      Before the textile industry changed the way it did we had bloody uprisings, we had Luddites, etc. All of those actions flowed into how it turned out.

      And also maybe consider that the “fallout” of such inventions and their specific implementation happens with a huge delay. Perhaps one could make an argument regarding how US de-industrialisation happened.
      The introduction of certain tropes of automation, which reached back even before WWII (management control of workers in mass production), eventually helped shape the process of financialisation of the 1970s that led to the downfall of the US industrial empire, which we are reading about every single day now, 100 years later.

      Also, what about software production, open source, and enforcing the use of certain products tied to IP? Is it possibly still too early to express a first serious verdict on the consequences of what happened since the PC “revolution”?
      How would computerisation in the world look had those inventions been used in a different way under different circumstances? The Soviet internet, the Chilean experiment – planned economy in the computer age? And so on. Is it not possible that such utopian paths were blocked by the owners of the invention?

      A German sociological phrase says “Technik ist werturteilsfrei” – technologies are free of a moral verdict. Which simply means that, as such, technologies can be used for evil and for good. Their fate is not preconceived.

      So one thing is probably sure: the future is not written. And that goes for technologies too, i.e., an invention’s use can turn out either way.

      (Unless we engage in those physical hypotheses where quantum mechanics offers a model that says the future is determined… but a serious discussion of that is above my paygrade unfortunately. Others here may chime in ;-)

      1. Acacia

        “Technik ist werturteilsfrei” – technologies are free of a moral verdict.

        I always wonder about this claim.

        After all, land mines, cluster munitions, napalm, incendiary bombs, nuclear weapons, massive ordnance air blast bombs (MOAB), the neutron bomb, etc. — these are all technologies designed to maim or kill very large numbers of people.

        Sind sie eigentlich werturteilsfrei? (Are they really free of a moral verdict?)

        1. AG

          It’s applied to Grundlagenforschung. Non-applied, elementary research?
          So the ability to trigger nuclear fission and then control it in the form of a chain reaction may be used in a nuclear plant and also in a bomb. To use a cliché example.
          Explosives can help blow up mountains for mining, or kill.
          And so on.
          My sociology studies are quite a bit in the past.
          I don’t know what would happen if I were to restart studying the field and question certain standard views today. But I wouldn’t want to just intuitively discard the notion. Those sociologists did make up their minds before they would come up with such points. (Blumenberg)

          1. Acacia

            Yes, but these are just generalities — i.e., explosives for mining, etc. — aren’t they?

            There’s a lot of sophisticated engineering involved in the production of weapons — engineering that is fundamentally about maximizing harm or the number of dead. Is that engineering not concerned with “technology”? Is “technology” only the most general idea, i.e., nuclear fission? Many would disagree with the latter position, I think.

            On the subject of Technik, here’s another angle…

            As you know, Martin Heidegger was purged from the university for his wartime support for the Nazi party, but he continued to give public lectures, e.g., at the Club of Bremen. During this period, right after the war, he became concerned with technology and wrote “Die Frage nach der Technik” in the late 1940s.

            Originally, it was a lecture given at the Club of Bremen, and there is an interesting difference in the text:

            Agriculture is now a motorized food industry; in its essence the same thing as the manufacture of corpses in gas chambers, the same thing as blockades and the reduction of a region to hunger, the same as the manufacture of hydrogen bombs.

            The latter part of that passage, the comparison to the manufacture of corpses and to hydrogen bombs, was omitted from the published version. He said something similar in “The Danger” [Die Gefahr]. The deeper argument concerns what he calls Gestell, i.e., our relationship with the natural world, seeing it as “standing reserve”, as an instrument to other ends, not as an end in itself, etc., but in any case, I take it this is an analysis which does not view technology as “werturteilsfrei”.

            1. AG

              I am not sure I agree with Heidegger on this conflation. But then I am not a Heidegger reader. I do not know if he seriously studied this field. But this is certainly cause to look it up. Even if I might come to a different conclusion for myself. Or maybe not. Who knows with that guy Heidegger. A maze. Thanks for the quote.

              fwiw David Noble in his “FORCES OF PRODUCTION”:

              “(…)
              The forces of production are visibly making history today, as the second Industrial Revolution unfolds before us. Once again the machines of industry have taken center stage in the historical drama, as the drive for ever more automatic processes becomes a virtual stampede. But, as this study indicates, such machines are never themselves the decisive forces of production, only their reflection. At every point, these technological developments are mediated by social power and domination, by irrational fantasies of omnipotence, by legitimating notions of progress, and by the contradictions rooted in the technological projects themselves and the social relations of production.

              If, as historians Elizabeth Fox Genovese and Eugene D. Genovese once wrote, "history is the story of who rides whom and how," then the history of technology is no exception. Technological determinism, the view that machines make history rather than people, is not correct; it is only a cryptic, mystifying, escapist, and pacifying explanation of a reality perhaps too forbidding (and familiar) to confront directly. If the social changes now upon us seem necessary, it is because they follow not from any disembodied technological logic but from a social logic to which we all conform.
              (…)”.

              How far that carries for the idea of “werturteilsfrei” is a separate question of course. I would have very much liked to ask Noble, but unfortunately the man died too early and too hated (not unlike Heidegger, even though they came from different spheres).

              p.s. To ride the nuclear bomb horse once more – the necessities of fertilizer production led to heavy water. And we owe it to “FIGHT CLUB” that we all now seem to know that explosives can be made from human fat found in the garbage cans of plastic surgery clinics that perform liposuction.

              1. Acacia

                Thank you for sharing the citation to David Noble. I agree with the critique of technological determinism, and the history of cinema strikes me as a good example, i.e., would we say that the art of cinema is primarily determined by the underlying technology (e.g., the development of film, improvements to emulsion that make deep focus possible, then digital, CGFX, etc.)? Actually, I would say “no”.

                This also reminds me of Wolfgang Schivelbusch’s book about the history of railway technology, e.g., from the foreword:

                One feature of modernity as it crystallized in the nineteenth century was a radical foregrounding of machinery and of mechanical apparatus within everyday life. The railroad represented the visible presence of modern technology as such. Within the technology lay also forms of social production and their relations. Thus the physical experience of technology mediated consciousness of the emerging social order; it gave a form to a revolutionary rupture with past forms of experience, of social order, of human relation. The products of the new technology produced, as Marx remarked, their own subject; they produced capacities appropriate to their own use.

                Perhaps this is getting closer to what gives me pause about the idea that technology “itself” is neutral, as it seems no longer really independent of human consciousness. And that’s part of the issue with technological determinism as Noble describes it: pointing to the technology as the primary agent of change readily becomes a mystification of the actual forces at play.

                I guess I’ll have to think about all this further.

                1. AG

                  Appreciate your recommendation of Schivelbusch!
                  I studied a bit of Warburg many years ago. No Schivelbusch.

                  I remember Alexander Kluge mentioning that in the early days of train travel, passengers often fell asleep while looking out of the window, overwhelmed by the speed of the landscape passing by…

                  The conjunction you are pointing to between the Marx note, the potential of technology to create a “reality of its own”, and the Noble thought sounds important and indeed needs exploration.

  8. AG

    fwiw

    from German TELEPOLIS altern. news site

    Drama at universities: 80 percent of students no longer understand non-fiction texts
    https://archive.is/siK1n

    from German MULTIPOLAR altern. news site

    Society of Assaults

    The French philosopher Éric Sadin has studied the project of digitalization intensively and warned of its social consequences in numerous books. He explicitly argues that all actions and movements are being monitored and measured with the aim of making the world’s functioning dependent on digital programs. Society is thus entering a “regime of conformity.” Multipolar introduces the author, who is still relatively unknown in Germany, and his work.
    https://archive.is/aOHXK

  9. taunger

    thanks for this. I’m having a fight over AI in my workplace, and I added this to the internal information I’m using to advocate for removing AI from processes.

  10. Offtrail

    I don’t use AI chatbots, but my reliance on Google Maps has dumbed down my awareness of where I’m driving and what’s around me. I can see how the same effect could apply to chatbots, with much greater consequences.

  11. lyman alpha blob

    RE: “Every technological development naturally comes with both benefits and risks, from word processors to rocket launchers to the internet.”

    I’ll just note that society has decided that it’s not a really good idea to hand out rocket launchers to everybody for free. Perhaps that should apply to other technological developments as well.

  12. paul

    I find it strange that it is focused on activities that are usually enjoyable: discovery, writing (even reading), drawing, playing (even finding new) music, when technology has usually promised to unburden us from the mundane.

    It seems to make the mundane unusable and creativity pointless.

    Maybe that’s the point.

  13. XXYY

    Not that I disagree with any of the criticisms here, but I will give this anecdote.

    An engineer friend of mine and I, who worked at the same company, would often pop into each other’s offices when we were struggling with a technical problem. In many or most cases, the answer became clear to the originator of the problem, just as a result of speaking out loud and having an interaction with the other person.

    After a while, we began to joke that one or the other of us could be replaced by a cocker spaniel, who would just sit in our office and listen attentively to our thoughts and what we had to say until a breakthrough on the problem came along. This seemed like it would be a very productive way of moving a lot of things forward. After a while, our shorthand for this would be to stop by and say “I need a cocker spaniel for a minute.”

    It crossed my mind when I read the headline for this post that that might actually be a viable use, perhaps the only one I have heard of, for artificial intelligence. You could bounce ideas off of it, and it would reply with things that may or may not make sense and may or may not be true, but this process might stimulate your own thinking in a helpful way.

    If there’s ever a prize or award for coming up with a useful AI application, I hereby claim it.

    1. ilsm

      Had a video conversation with some of my fraternity classmates, mostly engineers; we all graduated in the early 1970s.

      We all agreed that no one today has a feel for logarithms. With the “scientific calculator” and later PCs, the “kids” no longer know what a slide rule is or what logs are used for. Other than smoothing graphs (?).

      One civil engineer related the tale of a co-worker who, not so long ago, went out to a review/consult on a soils risk survey. He “amused” everyone by using his circular slide rule and slide rule approximations along with long experience.

      AI is further dumbing down…..

    2. AG

      I believe it is intended by now to be used in this way in screenwriting.
      However I cannot tell if people using it do so with the desired effect.
      The point you are making is very true in that profession.

      In fact, expressing narrative constructions aloud is part of the purification process to make the complexity of the idea fit the format constraints (90–120 min length for a feature film, e.g.).
      Which makes the medium so much different from novels.

      This process is the inherent intent behind “pitching” an idea. If you pitch an idea 100 times to 100 different people in 100 phone calls or conversations you will eventually find the “right” formula. Which is not to be mixed up with “a script”.

      But you eventually end up with a formula fit to sell.

      However the reaction to what one expresses is rather important.

      I am not sure if AI will have enough data of its own to eventually be that rather smart cocker spaniel (aren’t those known to be rather dumb specimens…)

      Anyway. The plan there too is to create a co-author, fwiw, to merely use as a discussion partner.
      Not because that partner is supposed to be an equal creator (that is a different form of teamwork) but because he or she is an objective listener, albeit with the experience and understanding of a peer who can intervene eventually.

    3. AG

      p.s. for now I have heard using AI this way is rather hard for writers. You have to phrase your questions to the machine in a very detailed and reduced way. Not unlike what has always been the case in programming. The scope of understanding is extremely limited. The smaller the narrative “units” you break your questions down to, the more helpful the answers can be. But for the moment it appears that work often may be too hard to make it worthwhile. On the other hand, new programs pop up constantly.

  14. XXYY

    Large language models are powerful, or appear powerful, because of the vast information on which they’re based. Such models are trained on colossal amounts of digital data — which may have involved violating copyrights — and in response to a user’s prompt, they’re able to generate new material, unlike older AI products like Siri or Alexa, which simply regurgitate what’s already published online.

    This quote shows a major and almost universal misconception about LLM artificial intelligence. LLMs actually do nothing but “regurgitate what’s already published online.” They do not “generate new material”, as the author suggests here. They do kind of rearrange what’s already published online in a way that suggests novelty, but the fact is they’re not going to say anything that someone else hasn’t said before, and in fact their goal is to use complicated statistical algorithms to make their output sound as much as possible like what has been said before. Indeed, if LLMs made things up out of whole cloth, what would be the point of training them?

    LLMs are like sixth grade students who copy stuff out of Wikipedia for their own homework assignments, but rearrange the material enough that it sounds new and will not be flagged as a literal copy (workers in the field and critics refer to LLMs as stochastic parrots for this reason). So far this seems to be a successful approach from the LLM’s standpoint.

    1. hazelbee

      A simple thought experiment negates what you are saying. I am trying to be polite.

      Take a story in the news today, any story; it really doesn’t matter which. It will be something that any current LLM has not been trained on.

      Copy and paste the story, ask for different perspectives on it based on, say, Edward de Bono’s thinking hats, or good vs evil, or a Socratic interpretation.

      You will get something novel that has never been written before.

      The same thought experiment applies to stories in the training data. New combinations based on that training are created all the time.

      Now… some of the issues come from the training bias of models to generate text when they don’t know. Why? Because that is what they are trained to do: to generate.

      Your comment appears with authority, but claiming they don’t generate new material, claiming they just “copy”, reveals a fundamental misunderstanding of how they work.

      1. ilsm

        Why do you think a “simple thought experiment” is relevant? Some here could handle “complex.”

        Ask your preferred bot about a news article on Gaza starvation, or the US encouraging Hamas to meet in Qatar only to be bombed by the IDF, and whether it is good strategy or evil incarnate. Paste the novel answer?

        Tell us the bot you use?

      2. Acacia

        “Remember this study about how LLM generated research ideas were rated to be more novel than expert-written ones? We find a large fraction of such LLM generated proposals (≥ 24%) to be skillfully plagiarized, bypassing inbuilt plagiarism checks and unsuspecting experts.”

        https://x.com/danish037/status/1894428793194123541

        …and these were only the samples that could be identified for plagiarism, i.e., likely there are more.

      3. Deschain

        This is an issue that demands a bit of nuance.

        I like to think of LLMs as being able to produce anything which is a linear combination of existing human knowledge. LLMs can in fact produce things that haven’t been done before. They cannot, however, produce things that couldn’t have been done without the assistance of an LLM. So the new things are only new because no human being ever bothered to do them, like, for instance, a death metal version of ‘All You Need Is Love’ sung in Korean. An LLM could produce that, but so could a human. An LLM cannot tell you how to travel faster than light.

  15. stefan

    I should have thought that by definition “critical thinking” means “thinking for oneself”, the key goal of higher education and the good life. To think what we are doing.

    One thing that drives me nuts is the IDF use of so-called “Lavender” AI to automate and accelerate targeting during the bombing of Gaza. This has resulted in massive numbers of civilian casualties, and no one has to be personally responsible.

    Spiritual crisis cannot begin to describe such effects of AI.

  16. The Rev Kev

    There was a sentence that bothered me near the beginning of this post:

    ‘I use AI to save time and don’t have much room to ponder over the result.’

    If the guy saved time, then couldn’t he use it to think more deeply about the results? Or does he lose the feel for what a correct solution should be versus a wrong one?

  17. TiPi

    We are regularly offered techno-hype and claimed solutions which will supposedly resolve earlier man-made problems.
    Most of those offering techno-fixes from within the political and oligarchic elite are mere snake oil salesmen.
    They are primarily driven by self-interest, often nothing more than opportunistic capitalistic economic growth and status.

    Using AI LLMs, what ought to be a technological and research tool, is rapidly morphing into a dependency culture, with as many negatives emerging as with highly manipulated social media.
    I’ve regularly found AI-generated material is factually erroneous, often because of what is omitted. The text is often anodyne and the GIGO principle applies, as does the tyranny of the algorithm.

    All tech has a downside.
    Trouble is this often only manifests as a delayed outcome of the law of unintended consequences.
    Humans have rarely been able to anticipate or counteract these impacts.

    But genies have an unerring ability to escape the bottle, and then we’re stuck with Little Boys and Fat Men. The applications of nuclear fission since 1938 have probably had more of a shaping influence on the planet, and especially geopolitics, than any other technology, and very few of those impacts can remotely be classified as positive.

    Technology does not necessarily generate progress, nor is its application ever neutral.
    The consequences of AI’s energy demands alone are alarming.
    And the applications of computing power that AI represents ought to be a matter of huge concern to all of us – as in Acacia’s and stefan’s comments above.

  18. AG

    Moon of Alabama just yesterday

    A.I. Valuations Reach La La Land
    The Artificial Intelligence mania has officially reached la la land.

    https://www.moonofalabama.org/2025/09/ai-valuations-reach-la-la-land-1.html#more

    “(…)
    Oracle, OpenAI Sign Massive $300 Billion Cloud Computing Deal (archived)
    https://archive.ph/nQ5vQ#selection-729.0-729.215
    – Wall Street Journal

    The majority of new revenue revealed by Oracle will come from OpenAI deal, sources say

    OpenAI signed a contract with Oracle to purchase $300 billion in computing power over roughly five years, people familiar with the matter said, a massive commitment that far outstrips the startup’s current revenue.

    The Oracle contract will require 4.5 gigawatts of power capacity, roughly comparable to the electricity produced by more than two Hoover Dams or the amount consumed by about four million homes.

    Oracle shares surged by as much as 43% on Wednesday after the cloud company revealed it added $317 billion in future contract revenue during its latest quarter, which ended Aug. 31.
    (…)”

  19. JMH

    Are we offloading critical thinking to chat-bots? Impossible. Chat-bots cannot think critically. Chat-bots cannot think at all. Chat-bots can concoct a plausible response from whatever has been fed into them. I do not speak from experience with chat-bots. I suppose they have their uses. They can make connections at computer speed. They might pull together threads that would take a person a long time to accomplish. A chess program can do that. A Large Language Model is not a brain. To have a thinking machine you need to go back to about 1940 and Isaac Asimov’s robot stories. His robots had positronic brains. How did he know that? He waved his hand over his typewriter and a positronic brain appeared. Many science fiction writers have performed similar feats. They also invented faster-than-light travel, and with Ursula Le Guin’s Ansible you have instantaneous interstellar communications. That was a really useful story device. How do all those things work? I don’t know and neither did the authors. They did it because it was useful to a good story. Now it might be wonderful if chat-bots could do critical thinking, but if they could, does that not make humans redundant? There are many stories in which that becomes a grim reality. Skynet? The Terminator? The Berserker stories? For myself, I want nothing to do with chat-bots and any other manifestation of so-called AI, but I am old. I prefer to read books. I do a good deal of writing with pen and paper. The computer is a useful typewriter and filing cabinet. The Internet is like having an infinite encyclopedia. I watched a movie this morning that was the Naked Capitalism movie of the week in late August. I filed it away in my “films” folder. Whatever critical thinking is within my ability, I shall do for myself. I see no alternative.

