Yves here. Forgive us for being a bit heavy today on AI-related posts, but a bit of a breather on Trump whipsaws allows us to catch up on some important topics. This one confirms concerns about the way that AI is being used more and more to replace original creative works and is harming what passes for culture in the West.
Note that this development is taking place alongside a more general erosion of cultural values, as a result of younger adults reading books little if at all, as well as the teaching of the classics being downgraded on the grounds that they are largely the output of white men. But the study of the humanities has also been under assault for at least two decades for being perceived as unhelpful to productivity. For instance, from Time Magazine after Larry Summers was forced out as Harvard President:
And humanities professors had long simmered about Summers’ perceived prejudice against the softer sciences — he had reportedly told a former humanities dean that economists were known to be smarter than sociologists, so they should be paid accordingly.
And it’s not as if mercenary career guidance has paid off that well. Even though humanities grads typically earn less than ones in the hard sciences, their employment rates are similar, and in some comparisons marginally better than those of other majors, including the much-touted “business”. By contrast, how well has “learn to code” worked out?
And this rejection of culture is having broader effects. IM Doc wrote:
Do you know how hard it is to teach students to be humanistic physicians when they have never spent a minute in any of the Classics? It is impossible. I muddle through the best I can. What is also very noticeable is there is almost universal unfamiliarity with stories from The Old and New Testament. The whole thing is really scary for my kids when I really think about it.
Even more troubling, IM Doc pointed to an article that confirmed something we flagged in a video last week, that students who were raised overmuch on screens cannot process information well, or even at all. From the opening of Why University Students Can’t Read Anymore:
Functional illiteracy was once a social diagnosis, not an academic one. It referred to those who could technically read but could not follow an argument, sustain attention, or extract meaning from a text. It was never a term one expected to hear applied to universities. And yet it has begun to surface with increasing regularity in conversations among faculty themselves. Literature professors now admit—quietly in offices, more openly in essays—that many students cannot manage the kind of reading their disciplines presuppose. They can recognise words; they cannot inhabit a text.
Short America. Seriously. We are way past our sell-by date.
By Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University. Originally published at The Conversation
Generative AI was trained on centuries of art and writing produced by humans.
But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.
A new study points to some answers.
In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.
The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.
Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.
The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.
For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.
After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.
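To make the loop concrete, here is a minimal Python sketch of the structure the researchers describe. The text_to_image and image_to_text callables are placeholders for whatever generation and captioning models one wires together; this is an illustration of the setup, not the authors’ actual code.

```python
from typing import Callable, List

def caption_loop(
    start_prompt: str,
    text_to_image: Callable[[str], object],   # placeholder for a text-to-image model
    image_to_text: Callable[[object], str],   # placeholder for an image captioner
    iterations: int = 20,
) -> List[str]:
    """Feed a prompt to a text-to-image model, caption the result, and repeat.

    Returns every caption produced, so the drift away from the starting
    prompt can be inspected step by step.
    """
    captions = [start_prompt]
    prompt = start_prompt
    for _ in range(iterations):
        image = text_to_image(prompt)   # text -> image
        prompt = image_to_text(image)   # image -> text
        captions.append(prompt)
    return captions
```

The study’s finding, in these terms, is that the list of captions converges toward generic descriptions no matter what start_prompt was.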

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.
The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.
The Familiar Is the Default
This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. Yet the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.
But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.
The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.
Cultural Stagnation or Acceleration?
For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.
Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.
What has been missing from this debate is empirical evidence showing where homogenization actually begins.
The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.
This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.
Retraining would amplify this effect. But it is not its source.
This Is No Moral Panic
Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.
But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”
The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the repeated text-to-image-to-text conversions.
This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.
They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.
In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.
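As one hedged illustration of what such an incentive can look like, the sketch below (in Python/PyTorch) adds an entropy bonus computed from a style classifier’s predictions, so outputs that are hard to assign to any familiar category score higher. This is a generic example of the idea, not the author’s exact formulation.

```python
import torch
import torch.nn.functional as F

def novelty_bonus(style_logits: torch.Tensor) -> torch.Tensor:
    """Reward outputs whose predicted style is ambiguous (high entropy),
    i.e. hard to assign confidently to any known category."""
    log_probs = F.log_softmax(style_logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)  # high when no single class dominates
    return entropy.mean()

# A generator would then be trained to maximize
#   task_objective + lambda * novelty_bonus(style_classifier(output))
# so that plain familiarity is no longer the winning strategy.
```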
This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in Translation
Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.
In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.
But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.
The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”
If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. That could mean rewards for deviation and support for less common, less mainstream forms of expression.
The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.
Cultural stagnation is no longer speculation. It’s already happening.


Yeah no thanks, Yves.
Bunch of malarkey from DO NOTHING PMC Types who love to give up.
Short the Economic System
NOT AMERICA
GO LONG AMERICAN 🇺🇸 PEOPLE
#AmericanRevolution2
Two observations.
First, I notice that the word “slop” never appears in this article, though that is really the subject of Elgammal’s discussion. This word “slop” has taken hold for reasons that may be worth considering, e.g., what is the aesthetic of slop, actually, and why do many people find it uniquely nauseating?
Second, while I agree with the gist of this article, there is a problem with the suggestion offered near the end:
Here, it is worth pointing out that AI apps already include a variety of settings for deviating from “statistically average outputs” by introducing more randomness, e.g., parameters such as Temperature, Top-P, Top-K, etc.
By default, these parameters are set to produce more “predictable” and palatable output. The defaults are the result of the usual mix of design and fiddling that you find in software applications. You can adjust these parameters and thereby tell the app to take more risks and diverge from the mundane results that Elgammal deplores. But in so doing — and this is the problem — the likelihood of extreme hallucinations and outright grotesque images will increase. Certain parameters are set in their default ranges for a reason.
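For readers curious about what those knobs do mechanically, here is a rough numpy sketch of temperature scaling plus top-p (nucleus) filtering over a model’s next-token distribution. The function and its numbers are illustrative assumptions, not any vendor’s implementation.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0, top_p: float = 1.0) -> int:
    """Illustrative temperature + top-p (nucleus) sampling over raw logits."""
    # Temperature < 1 sharpens the distribution (safer, more predictable output);
    # temperature > 1 flattens it (riskier, more surprising output).
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # Top-p keeps only the smallest set of tokens whose cumulative probability
    # reaches p, discarding the long tail of unlikely (often incoherent) tokens.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cumulative, top_p)) + 1]
    kept_probs = probs[keep] / probs[keep].sum()

    return int(np.random.choice(keep, p=kept_probs))
```

Lowering temperature and top_p concentrates sampling on the likeliest tokens; raising them widens the range of outputs but, as noted above, also raises the odds of incoherent or grotesque results.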
Moreover, where will the “rewards for deviation and support for less common and less mainstream forms of expression” come from? These are human values, which the apps themselves cannot evaluate, as they don’t have any consciousness, and never will.
Traditionally, innovation in the arts has often come from the artistic avant-garde (by which I include the critics who have championed it, e.g., a figure like Clement Greenberg), and later, various forms of subculture operating on the fringes of the industry. The practitioners in these spheres of cultural production are doing what they do because they already have some kind of vision that deviates from mainstream culture.
Insofar as generative A.I. apps are mainly serving as tools for dilettantes who cannot be bothered to develop their own powers of artistic expression, I fail to see how A.I. apps are going to make any meaningful difference here.
So-called A.I. apps are already increasing the volume of slop that we encounter, and this will only increase. In this regard, Elgammal is very likely correct about cultural stagnation. Why learn any traditional means of artistic expression when you can ask an AI app to create images for you?
The Conversation publishes articles by academics and tries to do layperson-friendly takes on their work. The author might well have used “slop” and had that edited out.
I believe that the people most eager to embrace AI as a means of creative expression are not merely dilettantes. They are more or less consciously hostile to the fundamentals of art and artistry. Their persistent AI boosterism is not just motivated by a desire to elevate their own slop creations; they are trying to redefine what constitutes art and creative expression, to frame out the kinds of idiosyncrasy and sometimes hyper-specific skill sets, not to mention socio-political or critical consciousness, that have been fundamental to avant-gardism and innovation.
It is not a coincidence that AI boosters are all too frequently also those hot-take merchants forever prepared with the stunning insight that Akshually, Shakespeare is just Marvel for people who wore tights, or Nobody really reads classic literature, they just want to virtue signal.
These are people who wish to codify and impose their own philistinism and creative deficiencies – it is a kind of fascistic takeover of the arts, and it is powered by a plutocracy that flaunts its own vulgarity (Bezos in Venice, the conceptual ‘art’ commissioned or bought by Silicon Valley execs). Didn’t Sam Bankman-Fried boast about having never read a work of fiction in his life?
I don’t wish to romanticise previous generations of plutocrats, but look at the museums, concert halls and libraries dedicated to the philanthropic gifts of early 20th century industrialists. Or the collections of people like JP Getty (though I once read Aldous Huxley describing a tour of JPG’s treasures as like being ‘inside the mind of an idiot’, such was the, er, eclecticism of his tastes!). Back to the present, and even blue-chip cultural institutions like the Met are chronically under-resourced, in dire need of funds, and forced to make drastic cuts to their staffing and schedules.
Without wishing to attribute everything to the psychopathologies of the rich, I believe that, despite their vulgar displays of aggressive cultural ignorance, they feel on some level the crudeness of their own philistinism. They perceive, and resent, their own inability to appreciate or even much understand the cultural treasures their own shallow conservatism is meant to fetishise. All that bloviating about the classics, and they have neither the imagination nor the sensitivity to gain even a dim idea of what makes such work so great and enduring. These are people taught by society to think of themselves as akin to superior beings to whom nothing can or should be denied, and yet here you have the vast history of human creative accomplishment, from music to painting to poetry, and they simply cannot comprehend or appreciate it. So, like vengeful toddlers, since they can’t have it, they create ways to ensure nobody can. The revenge they take against their own dreary stupidity and cultural sterility is visited on all of us, since they already live in a desert of their own making.
This, “traditionally, innovation in the arts has often come from the artistic avant-garde (by which I include the critics who have championed it, e.g., a figure like Clement Greenberg), and later, various forms of subculture operating on the fringes of the industry,” sounds to me like the innovations you refer to are “modern art” related (as in, the 1890s forward) in the West, when many artist manifestos were written and counter-culture began being celebrated as an end-goal in itself.
“Traditionally” should at least refer to the practices of art reaching back in the West at least as far as when artists’ individual names began being recorded (arguably the start of what we think of as “art history”). But personally I prefer to think of “art history” reaching back to the earliest practices of mark-making, sound and language used by cultures ‘becoming human.’
I think it’s safer to say that “innovation in the arts” is as linked to cultural and technological shifts as any social practice — in modernism, see how pre-mixed pigments in tubes helped shape the color palettes of Gauguin and Van Gogh, with as much of their innovation likely coming from the application of available new tech as from their concept of deviance. Van Gogh famously could not understand why people saw his paintings as outlandish (see his letters to his brother Theo).
Also, in support of the author’s point that “details get lost in the description,” I often think that the celebrated paintings of the Impressionists saturate our media but are rarely “seen” for their detail. For example, most people I’ve looked at these paintings with (often in a museum) revel in their beautiful palette but fail to notice the frequent inclusion of smokestacks and factories in the background, often spewing out the coal smoke which was ever-present in Victorian-era France. A political inclusion speaking to their times and concepts of “landscape.”
Soon after, modernists would be shaped by the war to end all wars, and much of the Avant Garde was explicitly political, often anti-capitalist or anti-fascist, though sometimes political in nihilistic or even pro-fascist ways. It was Greenberg and his peers who acted as a force turning artists away from the overt political content of their WPA predecessors (an aspect not lost on the CIA, which would promote abstract expressionists abroad during the Cold War).
Forgive me for thinking out loud, but somehow I get the feeling that this process mirrors the production of fractals. Endless recursion produces infinitely repeating patterns.
See: https://en.wikipedia.org/wiki/Fractal
What this shows plainly is that we are still nowhere near understanding the phenomenon of “consciousness.”
Stay safe. Go grey.
FWIW, the Mandelbrot fractal is actually produced by a very simple equation using points on the complex plane, one that appears to generate patterns of infinite “depth”, I gather because the complex plane includes imaginary numbers. Generative AI uses very different math that is based upon statistics. The problem is that the algorithms are statistically weighted to produce “expected” results for a given prompt, and that leads to convergence towards mundane, boring results.
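For what it’s worth, the rule really is tiny; here is a minimal sketch of the standard escape-time test:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """c is (approximately) in the Mandelbrot set if z -> z*z + c,
    starting from z = 0, stays bounded after max_iter iterations."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:   # once |z| exceeds 2, the orbit escapes to infinity
            return False
    return True

# e.g. in_mandelbrot(0) is True; in_mandelbrot(1) is False
```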
Thanks for the information. (Maths have never been my strong point.) I wonder if an “AI” could come up with the theory of fractals in the first place? (My money is on “not.”)
This also brings up an important point you raised, the reliance on statistics for the production of ‘AI’ outputs. That would produce the self-limiting outputs seen in the “real world” examples available.
It makes one yearn for an “Ineffable AI.”
Stay safe.
Yes, my money also on “not”.
Artificial conventional wisdom, if not synthetic confabulation.
There’s an old Asimov short story about a world in which people learn their vocations through rapid brain stimulation technology, without all the effort and hassle of old-style learning/training.
There is a small subset of people who are disqualified from this process. They look like social outcasts, but the story ends with one such person being introduced to a classroom where the old-style learning still takes place. It’s how technological progress still takes place, and the only way it can take place.
I wonder how long, if ever, it will take for “AI for everyone” to be regulated as the menace that it is.
I am seeing a lot of video ads for Grammarly AI-based writing assistance (for context, these are mixed in with adverts for “high-yield” bonds to fund a young, small petroleum drilling company). The thought occurs (but perhaps it’s just hopium) that these ads are symptoms that the product is not selling well on its own merits.
Thanks for that reminder. Asimov’s story “Profession” is very relevant. From 1957, even, only a year after McCarthy’s infamous proposal for a “2 month, 10-man study of artificial intelligence” to yield “a significant advance” (and how did that work out? lol).
Regarding how “AI for everyone” will play out, I’d like to think that with the volume of negatives we will know sooner than later, but my guess is that unfortunately it’s going to be more like watching a high speed dragster “El Foldo” in extreme slow motion.
As for Grammarly, those ads are certainly smug and annoying, but I don’t think they are symptoms of a sinking ship. Apparently, it’s a company with 700 million in annual revenue and they claim 40 million users. :(
> I found that novelty requires designing AI systems with incentives to deviate from the norms. Without it, systems optimize for familiarity because familiarity is what they have learned best.
With Price’s evolutionary equation, it’s near-tautological that simple variation gets selected out unless conditions change. Evolution doesn’t care about billions of failures, the bodies get recycled back into the green, but every AI token costs energy and money. Evolution is a Kelly Bet with every option covered; AI is like Wall Street and thinks the infinitesimals don’t count. Who selects the selection criteria?
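For reference, the Price equation being invoked can be written as

$$\Delta\bar{z} = \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}} + \frac{\operatorname{E}\left(w_i\,\Delta z_i\right)}{\bar{w}},$$

where $z_i$ is a trait value, $w_i$ is fitness, the covariance term captures selection and the expectation term captures transmission; variation that does not covary with fitness contributes nothing to directional change, which is the near-tautology gestured at above.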
This is a dreadful figure. Maybe I should be less online. At this point opening a webpage is like flipping a coin as to whether a human was involved in it, and I don’t particularly care for reading AI slop.
Interesting discussion about the image generation cycle though: a positive feedback loop that produces bland slop. I guess it’s a kind of entropy in that sense: all prompts will eventually devolve into room-temperature blandness (or maybe become pure noise?)
One thing which is remarkably human (or animal, if you wish) is that our perceptions aren’t homogeneous. We all have a point of view and a context. Then, our memories are selective and we do not obviously run with statistical averages of previous perceptions. LLMs are nothing of the kind. They lack context, point of view and selectivity, so it is unsurprising if they end up with the same “answers” or homogeneous results all over again. Creativity? Don’t make me laugh!
I’d imagine that, as with all language, learning and mastery are based upon repeated exposure. Thus, continual exposure to AI Slop would suggest that a person’s “learning” can be stunted. Even Terran humans can suffer from the Garbage In Garbage Out trap.
It all sounds like a sinister WEF program; “You will think nothing original and you will be happy.”
Stay safe.
Imagine, ambrit. The same word changes meaning with time, usage, region… Capture that in LLMs!
Thanks for the post. I don’t know; for the tech people I know, one of the big reasons they went into tech in the first place was that they couldn’t stand the ambiguity of the humanities.
I kind of rely on Illich for my understanding when considering tools; seems handy to me. Does seem to put me at odds at times.
I’ve been thinking about and exploring this at work – the notion of culture.
pre-llm / gen ai –
in any business:
you can do work on your own.
you can collaborate with one or more others. -> in person, video call, via text message etc.
I can argue that a big part of the continued culture of a business is from those interactions between people in the business. That is, the manifested lived version of culture rather than what is written down as aspirational in some handbook somewhere…
now throw ai / llm into the mix
you can do work entirely on your own (own headspace)
you can “collaborate” with a machine.
you can collaborate with one or more others. -> in person, video call, via text message etc.
so…
does it then logically follow that your business culture is now shaped by both human to human communication AND human to machine? and the proportions matter? the model matters?
e.g. what does a business shaped by grok look like vs one shaped by Anthropic , gemini or openai?
Will businesses be intentional about this shaping of culture based on ai comms rather than accidental?
How open should this be and is it inside or outside the law? at the extreme this can be a dystopian style brainwashing baked into the system prompt of the models you use for your business. But any business willing to do that is likely already working in a certain way to promote that type of culture perhaps.
How does one interpret any document you find in your business? Was it written entirely by humans or with ai help? does it matter? how? what types of traceability, audit, provenance should there be when ai is used to author or part author, say, long lived policy documents?
I can’t help but think of the bland images that M$ has supplied with its products since the ’90s. This is not new. The Narrative has been imposed for some time now.
What is barreling down the pike is sameness. And this sameness amounts to the evil of banality.
But perhaps the really bad must occur in order for the really good to happen.
The paradoxical logic of war and peace: wrapped up in every victory is a future defeat.
Tao te Ching #30 (Le Guin rendition)
It’s not just America. Branko Milanovic in China —
https://branko2f7.substack.com/p/note-on-new-technologies-in-china
‘There is a third, most important, element of new technologies that is more apparent in China since it has advanced more on that path than in (say) New York. It is in many cases total and thoughtless dependence on information provided by smart phones to the extent of ignoring any other common-sensical and rather obvious “real-world” information.
‘One’s brain and common-sense seem to have been abandoned in favor of what the small brightly-colored screen tells us. In part, this is the product of an extraordinary segmented life-style we lead. Not once has happened to me to ask (I remember two almost hilarious scenes in London and Houston) where such and such place was and to be met with total bafflement– when the place I was looking for was literally next door. The life that many people lead is so narrow: it involves going to one’s place of work (or even staying home for remote work), driving back home, driving to the mall, ordering goods on Amazon, and entirely ignoring everything else around. (Often, it involves driving to the restaurant, parking in the underground garage, and then driving back home: an evening of fun.) It fundamentally destroys all city life which consists precisely in knowing other people and places that are around us. Because of ubiquitousness of gadgets and because of problems of communication, that aspect has hypertrophied in Beijing.’
Long time in the making even before AI.
My unprovable hypothesis is that the 1-2 punch of SSRIs and income inequality wiped out two generations of artistic talent.
E.g., in pop music: historically, even “blue collar” folks played instruments and transmitted that love of music to their kids (Beatles, Bruce Springsteen, etc.). In the last 30 years that seems to have been wiped out, replaced by the likes of the upper-middle-class-bred Taylor Swift.
As for SSRIs…would Kurt Cobain have been Kurt Cobain if he was hooked up to SSRIs starting at age 14? Presumably music was an outlet for him to process his life. Same with Tupac, Dolly Parton, etc.
Now generative AI is sepsis on a corpse that already has AIDS and brain rot.