AI-Induced Cultural Stagnation Is No Longer Speculation − It’s Already Happening

Yves here. Forgive us for being a bit heavy today on AI-related posts, but a bit of a breather on Trump whipsaws allows us to catch up on some important topics. This one confirms concerns that AI is being used more and more to replace original creative work, and that it is harming what passes for culture in the West.

Note that this development is taking place alongside a more general erosion of cultural values, as younger adults read books little if at all, and as the teaching of the classics has been degraded for being largely the output of white men. But studying the humanities has also been under assault for at least two decades for being perceived as unhelpful to productivity. For instance, from Time Magazine after Larry Summers was forced out as Harvard President:

And humanities professors had long simmered about Summers’ perceived prejudice against the softer sciences — he had reportedly told a former humanities dean that economists were known to be smarter than sociologists, so they should be paid accordingly.

And it’s not as if mercenary career guidance has paid off that well. Even though humanities grads typically earn less than ones in the hard sciences, their employment rates are similar, and in some comparisons marginally better than those of other majors, including the much-touted “business”. By contrast, how well has “learn to code” worked out?

And this rejection of culture is having broader effects. IM Doc wrote:

Do you know how hard it is to teach students to be humanistic physicians when they have never spent a minute in any of the Classics? It is impossible. I muddle through the best I can. What is also very noticeable is there is almost universal unfamiliarity with stories from The Old and New Testament. The whole thing is really scary for my kids when I really think about it.

Even more troubling, IM Doc pointed to an article that confirmed something we flagged in a video last week: students who were raised overmuch on screens cannot process information well, or even at all. From the opening of Why University Students Can’t Read Anymore:

Functional illiteracy was once a social diagnosis, not an academic one. It referred to those who could technically read but could not follow an argument, sustain attention, or extract meaning from a text. It was never a term one expected to hear applied to universities. And yet it has begun to surface with increasing regularity in conversations among faculty themselves. Literature professors now admit—quietly in offices, more openly in essays—that many students cannot manage the kind of reading their disciplines presuppose. They can recognise words; they cannot inhabit a text.

Short America. Seriously. We are way past our sell-by date.

By Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University. Originally published at The Conversation

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.
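
In code terms, the loop amounts to something like the sketch below, using open-source text-to-image and image-captioning models as stand-ins; the specific systems and settings the researchers used may well differ:

    # A minimal sketch of the image -> caption -> image loop, with open-source
    # models as stand-ins; the study's actual systems and settings may differ.
    from diffusers import StableDiffusionPipeline
    from transformers import pipeline

    text_to_image = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    image_to_text = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

    prompt = "A prime minister pores over strategy documents ahead of a fragile peace deal"
    for step in range(20):
        image = text_to_image(prompt).images[0]              # text -> image
        prompt = image_to_text(image)[0]["generated_text"]   # image -> text
        print(step, prompt)                                  # captions drift toward the generic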

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The Familiar Is the Default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. But note that the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

Pretty … boring. Chris McLoughlin/Moment via Getty Images

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural Stagnation or Acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This Is No Moral Panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from norms. Without such incentives, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

AI’s outputs are familiar because they revert to average displays of human creativity. Bulgac/iStock via Getty Images

Lost in Translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives repeated translations between text and images, the authors show that generative systems process meaning with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.

The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.


11 comments

  1. Jonathan Holland Becnel

    Yeah no thanks, Yves.

    Bunch of malarkey from DO NOTHING PMC Types who love to give up.

    Short the Economic System

    NOT AMERICA

    GO LONG AMERICAN 🇺🇸 PEOPLE

    #AmericanRevolution2

  2. Acacia

    Two observations.

    First, I notice that the word “slop” never appears in this article, though that is really the subject of Elgammal’s discussion. This word “slop” has taken hold for reasons that may be worth considering, e.g., what is the aesthetic of slop, actually, and why do many people find it uniquely nauseating?

    Second, while I agree with the gist of this article, there is a problem with the suggestion offered near the end:

    If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.

    Here, it is worth pointing out that AI apps already include a variety of settings for deviating from “statistically average outputs” by introducing more randomness, e.g., parameters such as Temperature, Top-P, Top-K, etc.

    By default, these parameters are set to produce more “predictable” and palatable output. The defaults are the result of the usual mix of design and fiddling that you find in software applications. You can adjust these parameters and thereby tell the app to take more risks and diverge from the mundane results that Elgammal deplores. But in so doing — and this is the problem — the extreme hallucinations and likelihood of getting outright grotesque images will increase. Certain parameters are usually set in default ranges for a reason.
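
    To make that concrete, here is a toy numerical sketch of how temperature and top-p reshape a model’s next-token distribution before sampling; the logits and parameter values are made up purely for illustration:

    # Toy illustration of temperature and top-p (nucleus) filtering.
    # The logits and parameter values here are made up for illustration.
    import numpy as np

    logits = np.array([4.0, 3.5, 1.0, 0.5, -1.0])       # scores for 5 candidate tokens

    def sample_probs(logits, temperature=1.0, top_p=1.0):
        scaled = logits / temperature                    # low T sharpens, high T flattens
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]                  # most likely first
        csum = np.cumsum(probs[order])
        keep = (csum - probs[order]) < top_p             # smallest set covering top_p mass
        mask = np.zeros_like(probs, dtype=bool)
        mask[order[keep]] = True
        probs = np.where(mask, probs, 0.0)
        return probs / probs.sum()

    print(sample_probs(logits, temperature=0.7, top_p=0.9))   # mass piles onto the "expected" tokens
    print(sample_probs(logits, temperature=1.5, top_p=1.0))   # flatter, riskier distribution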

    Moreover, where will the “rewards for deviation and support for less common and less mainstream forms of expression” come from? These are human values, which the apps themselves cannot evaluate, as they don’t have any consciousness, and never will.

    Traditionally, innovation in the arts has often come from the artistic avant-garde (by which I include the critics who have championed it, e.g., a figure like Clement Greenberg), and later, various forms of subculture operating on the fringes of the industry. The practitioners in these spheres of cultural production are doing what they do because they already have some kind of vision that deviates from mainstream culture.

    Insofar as generative A.I. apps are mainly serving as tools for dilettantes who cannot be bothered to develop their own powers of artistic expression, I fail to see how A.I. apps are going to make any meaningful difference here.

    So-called A.I. apps are already increasing the volume of slop that we encounter, and this will only increase. In this regard, Elgammal is very likely correct about cultural stagnation. Why learn any traditional means of artistic expression when you can ask an AI app to create images for you?

    1. Yves Smith Post author

      The Conversation publishes articles by academics and tries to make them layperson-friendly. The author might well have used “slop” and had it edited out.

  3. ambrit

    Forgive me for thinking out loud, but somehow I get the feeling that this process mirrors the production of fractals. Endless recursion produces infinitely repeating patterns.
    See: https://en.wikipedia.org/wiki/Fractal
    What this shows plainly is that we are still nowhere near understanding the phenomenon of “consciousness.”
    Stay safe. Go grey.

    1. Acacia

      FWIW, the Mandelbrot fractal is actually a very simple equation using points on the complex plane, that appears to generate patterns of infinite “depth”, I gather because the complex plane includes imaginary numbers. Generative AI uses very different math that is based upon statistics. The problem is that the algorithms are statistically weighted to produce “expected” results for a given prompt, and that leads to convergence towards mundane, boring results.
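
      For reference, the Mandelbrot iteration really is that simple; a minimal sketch follows (the sample points are just illustrative):

      # The Mandelbrot map: z -> z*z + c, iterated from z = 0 for a point c
      # on the complex plane; c belongs to the set if the orbit stays bounded.
      def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
          z = 0
          for _ in range(max_iter):
              z = z * z + c
              if abs(z) > 2:         # escaped: c is not in the set
                  return False
          return True

      print(in_mandelbrot(0 + 1j))   # True: the orbit cycles and stays bounded
      print(in_mandelbrot(1 + 1j))   # False: the orbit escapes within a few steps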

      1. ambrit

        Thanks for the information. (Maths have never been my strong point.) I wonder if an “AI” could come up with the theory of fractals in the first place? (My money is on “not.”)
        This also brings up an important point you raised, the reliance on statistics for the production of ‘AI’ outputs. That would produce the self-limiting outputs seen in the “real world” examples available.
        It makes one yearn for an “Ineffable AI.”
        Stay safe.

  4. ocypode

    New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

    This is a dreadful figure. Maybe I should be less online. At this point opening a webpage is like flipping a coin as to whether a human was involved in it, and I don’t particularly care for reading AI slop.

    Interesting discussion about the image generation cycle though: a positive feedback loop that produces bland slop. I guess it’s a kind of entropy in that sense: all prompts will eventually devolve into room-temperature blandness (or maybe become pure noise?)

  5. Ignacio

    One thing which is remarkably human (or animal if you wish) is that our perceptions aren’t homogeneous. We all have a point of view and a context. Then, our memories are selective, and we obviously do not run on statistical averages of previous perceptions. LLMs are nothing of the kind. They lack context, point of view, and selectivity, so it is unsurprising that they end up with the same “answers” or homogeneous results over and over again. Creativity? Don’t make me laugh!

    1. ambrit

      I’d imagine that, as with all language, learning and mastery are based upon repeated exposure. Thus, continual exposure to AI Slop would suggest that a person’s “learning” can be stunted. Even Terran humans can suffer from the Garbage In Garbage Out trap.
      It all sounds like a sinister WEF program; “You will think nothing original and you will be happy.”
      Stay safe.

  6. southern appalachian

    Thanks for the post. I don’t know; for the tech people I know, one of the big reasons they went into tech in the first place was that they couldn’t stand the ambiguity of the humanities.

    I kind of rely on Illich for my understanding when considering tools; seems handy to me. Does seem to put me at odds at times.

