Brain rot explains a lot about LLMs, long covid, bad economics and polisci, or maybe it’s the other way around.
LLMs as Brain Rot Inducement Machines
Yesterday, Conor reposted a Conversation piece about OpenAI slipping shopping into ChatGPT, which seems like an effort to induce brain rot in its customer base:
AI’s responses create what researchers call an “advice illusion.” When ChatGPT suggests three hotels, you don’t see them as ads. They feel like recommendations from a knowledgeable friend. But you don’t know whether those hotels paid for placement or whether better options exist that ChatGPT didn’t show you.
Traditional advertising is something most people have learned to recognize and dismiss. But AI recommendations feel objective even when they’re not. With one-tap purchasing, the entire process happens so smoothly that you might not pause to compare options.
OpenAI isn’t alone in this race. In the same month, Google announced its competing protocol, AP2. Microsoft, Amazon and Meta are building similar systems. Whoever wins will be in position to control how billions of people buy things, potentially capturing a percentage of trillions of dollars in annual transactions.
Let’s all get high on Vibes
The Atlantic covered Meta’s latest atrocity, Vibes, an all-AI social network.
Note: When thinking about anything done by Meta in this era, never forget that Zuckerberg blew at least $46.5 billion on his disastrous Metaverse project (but also skim this Bloomberg puff piece, which claims that “analysts think that the pivot from Facebook to Meta is finally coming to fruition as the company heavily leans into its Orion augmented reality glasses.”)
Time will tell if Orion catches on; I’m skeptical of augmented reality products at this time. Now let’s hear about Vibes:
Vibes [is] a new social network nested within the Meta AI app—except it’s devoid of any actual people. This is a place where users can create an account and ask the company’s large language model to illustrate their ideas. The resulting videos are then presented, seemingly at random, to others in a TikTok-style feed. (OpenAI’s more recent Sora 2 app is very similar.) The images are sleek and ultra-processed—a realer-than-real aesthetic that has become the house style of most generative-AI art. Each video, on its own, is a digital curio, the value of which drops to zero after the initial view. In aggregate, they take on an overwhelming, almost narcotic effect. They are contextless, stupefying, and, most important, never-ending. Each successive clip is both effortlessly consumable and wholly unsatisfying.
I toggle over to a separate tab to see a post from President Donald Trump on his personal social network. It’s an AI video, posted on the day of the “No Kings” protests: The president, wearing a crown, fires up a fighter jet painted with the words King Trump. He hovers the plane over Times Square, at which point he dumps what appears to be liquid feces onto protesters crowding the streets below. The song “Danger Zone,” by Kenny Loggins, plays.
I switch tabs. On X, the official White House account has posted an AI image of Trump and Vice President J. D. Vance wearing crowns. A MAGA influencer has fallen for an AI-generated Turning Point USA Super Bowl halftime-show poster that lists “measles” among the performers and special guests. I encounter more AI videos. One features a man in a kitchen putting the Pokémon character Pikachu in a sous-vide machine. Another is a perfectly rendered fake ’90s toy commercial for a “Jeffrey Epstein’s Island” play set. These videos had the distinctive Sora 2 watermark, which people have also started to digitally add to real videos to troll viewers.
Here’s the Trump video which has to be seen to understand our brain rot moment:
Why Is Trump Trolling?
The increasingly pro-Trump John Michael Greer has a theory:
Trump has realized that it’s far more effective to mock his enemies than to argue with them, and has teams of meme artists going at it hammer and tongs. His goal, I think, is to crack the facade of mandatory niceness that so many people on the left cultivate so assiduously, knowing that once it breaks and everything they’ve been repressing comes spilling out, they’ll alienate voters the way Biden did with his famous Reichstag speech.
The reaction to Kirk’s murder is a good example of what he’s trying to goad them to do, and they’re falling into his trap with embarrassing ease.
— Nat Wilson Turner (@natwilsonturner) October 23, 2025
Now we’ll grapple with the question of whether the stress of the obvious AI bubble is causing brain rot among tech execs.
AI Number Go Down
The first thing you need to understand about the stakes at play is the amount of money OpenAI is raising and spending. That’s why I called OpenAI a “money pit” in June.

Then in September, The Information reported on the obligations OpenAI is looking to take on:
Revenue growth from ChatGPT is accelerating at a more rapid rate than the company projected half a year ago. The bad news? The computing costs to develop artificial intelligence that powers the chatbot, and other data center-related expenses, will rise even faster.
As a result, OpenAI projected its cash burn this year through 2029 will rise even higher than previously thought, to a total of $115 billion. That’s about $80 billion higher than the company previously expected.
The unprecedented projected cash burn, which would add to the roughly $2 billion it burned in the past two years, helps explain why the company is raising more capital than any private company in history. CEO Sam Altman has previously told employees that their company might be the “most capital intensive” startup of all time.
…
the company has projected $13 billion in total revenue this year, up three and a half times from last year and about $300 million higher than its earlier projections for 2025, the new data show. Its revenue projection for 2030 rose about 15% to roughly $200 billion compared to the prior projection.

ChatGPT is a large driver of the company’s increased projections. OpenAI expects the chatbot to generate nearly $70 billion in additional revenue over the next six years, compared to earlier projections. Millions of people and thousands of businesses pay subscription fees to use ChatGPT.
OpenAI projected nearly $10 billion in revenue from ChatGPT this year, an increase of roughly $2 billion from projections earlier this year, and nearly $90 billion in revenue from the chatbot in 2030, a roughly 40% increase from the earlier projections.
OpenAI’s projections also raised expectations for how it will generate revenue from users who don’t pay for ChatGPT. It’s unclear how OpenAI plans to make money off that portion of its user base, although it could include shopping-related services or some form of advertising. The data show the company expects to generate around $110 billion in revenue between 2026 and 2030 from such services.
OpenAI has been promising massive revenue growth and Gary Marcus doesn’t think they’ll deliver:
OpenAI can’t make its ambitious revenue numbers — 13x-ing current revenue in five years — unless revenue from paying big business customers keeps going up and up and up and up and up.
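To put Marcus’s point in perspective, here is a quick back-of-the-envelope sketch in Python. It uses the roughly $13 billion 2025 and $200 billion 2030 figures quoted above from The Information; the calculation itself is mine, not Marcus’s or the company’s:

```python
# Implied compound annual growth rate (CAGR) for OpenAI's projections,
# using the figures quoted above from The Information:
# ~$13B total revenue in 2025 -> ~$200B projected for 2030.
revenue_2025 = 13e9   # dollars
revenue_2030 = 200e9  # dollars
years = 5

cagr = (revenue_2030 / revenue_2025) ** (1 / years) - 1
print(f"Implied growth: {revenue_2030 / revenue_2025:.1f}x over {years} years")
print(f"Implied CAGR: {cagr:.1%} per year")
# Implied growth: 15.4x over 5 years
# Implied CAGR: 72.8% per year
```

Sustaining roughly 73% compound growth for five straight years, on a base already measured in tens of billions of dollars, is the bet those projections encode, and that curve-reading is exactly what Marcus is questioning.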
And he shares a number of charts and tweets to illustrate his 5 reasons.
First the rosy projections:
— Nat Wilson Turner (@natwilsonturner) October 22, 2025
Then reports of declining use of generative AI in the American workforce, which explains why OpenAI is pursuing advertising, porn, personal shopping, and consumer video generation.
Hartley, Jolevski, Melo, and Moore have been tracking GenAI use for Americans at work since December 2024.
They find that GenAI use fell to 36.7% of survey respondents in September 2025 from 45.6% in June.
I wonder if OpenAI, Google or Anthropic are seeing a similar decline… https://t.co/hjnss5u6vj
— Erik Brynjolfsson (@erikbryn) October 22, 2025
There’s no better way to understand the awfulness of OpenAI’s consumer video generation app Sora than watching this video titled “Sora Proves the AI Bubble Is Going to Burst So Hard” by Adam Conover.
Partial transcript:
Sora 2 is one of the weirdest and in many ways worst apps to ever make it into the app store. And I’ve tried Pimple Popper Light.
So, first of all, Sora gives you the option of letting anyone make an AI deep fake video of you if you take the little step of letting them steal your face.
And almost in order to demonstrate why no one should ever do this, Sam Altman himself kindly donated his own likeness for us to have fun with. Which means that when you open Sora, almost one out of every three videos you see is Sam being physically, emotionally, and sometimes almost sexually humiliated by his own users.
Conover also discusses this Washington Post piece about the ghoulishness of the app:
Ilyasah Shabazz didn’t want to look at the AI-generated videos of her father, Malcolm X. The seemingly realistic clips — made by OpenAI’s new video-maker Sora 2 — show the legendary civil rights activist making crude jokes, wrestling with the Rev. Martin Luther King Jr. and talking about defecating on himself.
Sora’s speed and uncanny realism have helped rocket the app to the top of the download charts, and videos reanimating the dead have been among its most viral clips. Sora-produced videos of Michael Jackson, Elvis Presley and Amy Winehouse have flooded social media platforms, with many viewers saying they struggle to tell whether the videos are real or fake.
Some clips played for laughs, such as a video of “Mr. Rogers’ Neighborhood” host Fred Rogers writing a rap song with hip-hop artist Tupac Shakur. Others have leaned into darker themes. One video showed police body-camera footage of Whitney Houston looking intoxicated. In some clips, King makes monkey noises during his “I Have a Dream” speech, basketball player Kobe Bryant flies aboard a helicopter mirroring the crash that killed him and his daughter in 2020, and John F. Kennedy makes a joke about the recent killing of right-wing influencer Charlie Kirk.
And a TechCrunch piece headlined “ChatGPT’s mobile app is seeing slowing download growth and daily use, analysis shows”:
ChatGPT’s mobile app growth may have hit its peak, according to a new analysis of download trends and daily active users provided by the third-party app intelligence firm Apptopia. Its estimates indicate that new user growth, measured by percentage changes in new global downloads, slowed after April.
The firm looked at the global daily active user (DAU) growth and found that the numbers have begun to even out over the past month or so.
— Nat Wilson Turner (@natwilsonturner) October 22, 2025
Although October is only half over, the firm says it’s on pace to be down 8.1% in terms of a month-over-month percentage change in global downloads.
To be clear, this is a look at download growth, not total downloads. In terms of sheer number of new installs, ChatGPT’s mobile app is still doing well, with millions of downloads per day.
If and when the whole generative AI edifice comes crashing down, it will be in no small part because too many otherwise smart investors read too many graphs—from “LLM scaling” to usage statistics—too naively, assuming that things that had been going up for a while would continue to go up at the same pace, indefinitely.
This is, of course, a version of the trillion-pound baby fallacy I have mentioned before.
Unfortunately, my son has fallen off of the 7.5-trillion-pound pace… https://t.co/LSKvwPVMDi pic.twitter.com/CtazoRgl6M
— Christian Keil (@pronounced_kyle) August 2, 2024
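For the uninitiated, the fallacy is trivial to reproduce: fit a growth rate to an early window and extrapolate it forever. Here is a minimal sketch in Python; the infant-weight numbers are illustrative assumptions of mine, not taken from the tweet:

```python
# The trillion-pound-baby fallacy in miniature: take something genuinely
# growing fast over a short window, fit exponential growth to that window,
# and extrapolate as if the rate were permanent.
# Illustrative assumption: a newborn roughly doubles in weight by month 5.
w0, w1 = 7.5, 15.0                     # pounds at month 0 and month 5
monthly_factor = (w1 / w0) ** (1 / 5)  # implied per-month growth factor

def naive_projection(months: int) -> float:
    """Extrapolate the early growth rate indefinitely."""
    return w0 * monthly_factor ** months

for months in (5, 24, 120, 200):
    print(f"month {months:>3}: {naive_projection(months):,.0f} lbs")
# month   5: 15 lbs             (matches the fitted window)
# month  24: ~209 lbs           (already absurd for a two-year-old)
# month 120: ~126 million lbs   (a ten-year-old outweighing an ocean liner)
# month 200: ~8.2 trillion lbs  (the trillion-pound teenager)
```

Swap pounds for downloads, tokens, or data-center capex and the chart reads the same way.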
LLMs Getting Brain Rot Themselves?!?
A group of researchers from Texas A&M University, the University of Texas at Austin, and Purdue University have released a new study called “LLMs Can Get ‘Brain Rot’!”:
We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversely controlled datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions.
Contrary to the control group, continual pre-training of 4 LLMs on the junk dataset causes non-trivial declines (Hedges’ g>0.3) on reasoning, long-context understanding, safety, and inflating “dark traits” (e.g., psychopathy, narcissism). The gradual mixtures of junk and control datasets also yield dose-response cognition decay: for example, under M1, ARC-Challenge with Chain Of Thoughts drops 74.9 → 57.2 and RULER-CWE 84.4 → 52.3 as junk ratio rises from 0% to 100%.
Error forensics reveal several key insights:
- Thought-skipping as the primary lesion: models increasingly truncate or skip reasoning chains, explaining most of the error growth.
- Partial but incomplete healing: scaling instruction tuning and clean data pre-training improve the declined cognition yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch.
- Popularity as a better indicator: the popularity, a non-semantic metric, of a tweet is a better indicator of the Brain Rot effect than the length in M1.

Together, the results provide significant, multi-perspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pretraining as a training-time safety problem and motivating routine “cognitive health checks” for deployed LLMs.
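For readers unfamiliar with the effect-size metric the paper leans on, here is a minimal sketch of Hedges’ g in Python. The formula is the standard one; the `hedges_g` helper and the benchmark scores below are my own made-up illustration, not numbers from the study:

```python
import math

def hedges_g(x: list[float], y: list[float]) -> float:
    """Hedges' g: Cohen's d with a small-sample bias correction.
    The paper reports g > 0.3 for junk-trained models, i.e. a
    non-trivial standardized gap between conditions."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Unbiased sample variances, then the pooled standard deviation
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    s_pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (mx - my) / s_pooled          # Cohen's d
    j = 1 - 3 / (4 * (nx + ny) - 9)   # small-sample correction factor
    return d * j

# Hypothetical benchmark scores: control-trained vs junk-trained runs
control = [75, 68, 80, 71, 77]
junk = [70, 64, 73, 69, 66]
print(f"Hedges' g = {hedges_g(control, junk):.2f}")
# Hedges' g = 1.25 (well above the ~0.3 floor the paper treats as non-trivial)
```

By Cohen’s conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), even the paper’s g > 0.3 floor sits between a small and a medium effect, so the declines it reports are not rounding noise.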
So that doesn’t seem good for the industry’s financial prospects, or anything else.
And no discussion of brain rot and LLMs is complete without Boris Johnson discussing his ChatGPT use with Saudi media outlet Al Arabiya.
“I love ChatGPT,” the blond-mopped Brexiteer told Al Arabiya English earlier this week.
Famous for making stuff up and going on flights of fancy, Johnson served as prime minister from July 2019 until September 2022, when he was ousted after misleading colleagues over a scandal involving his government’s deputy chief whip, the party disciplinarian. OpenAI’s ChatGPT is also prone to making statements that turn out not to be entirely true.
Adopting a strange affected accent in his TV interview this week, Johnson said: “I love AI, do you use AI? Absolutely, I use ChatGPT. I love ChatGPT. I love it. ChatGPT is fantastic.”
When pressed on what he used the large language model (LLM) for, the former MP confessed he was “writing various books.”
He also said he liked to “just ask questions,” mainly, it would seem, because he wanted to hear the robot say how clever his questions were.
“‘You agree, your brain, you’re excellent. You have such insight.’ I love it,” Johnson told the interviewer.
Asked whether Johnson tells the truth, ChatGPT was a little circumspect.
“It’s complicated. Short answer: sometimes yes, but there’s quite a lot of evidence he often does not tell the full truth, or is misleading, or makes mistakes. Whether those are deliberate lies or exaggerations or misunderstandings is often in dispute,” it said.
Side note: this is what happens when you visit Al Arabiya using a VPN:
Those Saudis really don't like VPNs pic.twitter.com/B75mAD7I7S
— Nat Wilson Turner (@natwilsonturner) October 22, 2025
Long Covid Also Causes Brain Rot
Rolling Stone is writing about the impact of long covid in U.S. schools, which reminds me I need to do more coverage of Mr. MAHA himself, Robert F. Kennedy, Jr.:
Over the next three years, Lia would develop a perplexing, debilitating, and persistent set of symptoms. Extreme fatigue that turned mornings into catatonic nightmares. Brain fog that made memories slippery. Incessant vomiting. It all kept her out of class, stuck in the nurse’s office — or out of school entirely. The straight-A’s evaporated.
Lia’s story is one I’ve heard from dozens of families over two years investigating the impact of long Covid on kids. It’s a slow-moving spiral: first their health, then their grades, then their future. And as the country throttles past the pandemic’s fifth anniversary, those for whom Covid looms very present are feeling increasingly forgotten, subject to pervasive skepticism and a kind of cultural fatigue when it comes to their illness.
…
But since the earliest days of the second Trump administration, the dollars that could help those suffering from the illness have quietly faded away. In March, cuts to the National Institutes of Health (NIH) disappeared the nearly $2 billion invested in the RECOVER (Researching Covid to Enhance Recovery) initiative, hamstringing research that might have yielded diagnostic tests or better treatments (though after protest from advocates, some research grants have been restored). The same week, the administration shuttered the Department of Health and Human Services’ Office of Long Covid Research and Practice.
…
In April, the Department of Government Efficiency (DOGE) gutted the NIH’s Vaccine Research Center — the entity whose work laid the foundation for the Moderna Covid shot. In May, health officials installed by Trump overrode career scientists at the Food and Drug Administration to limit approvals of new Covid vaccines. Around the same time, HHS Secretary Robert F. Kennedy Jr. disavowed the vaccine for healthy kids and pregnant women. (“We’re now one step closer to realizing President Trump’s promise to make America healthy again,” Kennedy said in a video announcing the policy.)

In the weeks that followed, the secretary also removed every member of the Centers for Disease Control and Prevention (CDC) vaccine advisory committee, replacing them with a merry band of loyalists and skeptics. That committee has since walked back recommendations for the Covid shot. And since the committee’s suggestions only move forward with the approval of the CDC director, RFK Jr. fired Susan Monarez, whom he’d appointed to that role 29 days earlier, for her refusal to “commit in advance to approving every [committee] recommendation regardless of the scientific evidence,” as Monarez later testified at a Senate hearing.
And this X.com exchange is illustrative of the continuing brain rot around the discussion of covid:
Let’s not rewrite history. It was a bad faith critique bc everyone was masked at those protests
I was there in 2020. Can’t protest anymore since I was disabled by covid and became homebound by long covid in 2024. But I know that’s very politically inconvenient for everyone pic.twitter.com/YL5LdurG7t
— Julia Marie (@julia_doubleday) October 22, 2025
When Billionaires Buy Brain Rot
And my favorite recent interview, with Aaron Good talking to BetBett Media, included a fascinating discussion of the reasons behind the dismal quality of the dismal science in Western academia in 2025:
It’s really going to take us this long to say ‘grass is green’ in political science because of the methodological straitjacket they put themselves in.
Another way to understand why political science is so bad is to look at economics because that discipline is also such bullshit from top to bottom.
Like Milton Friedman saying no, there’s no such thing as rent, and then we look at our economy and we are just beset by rentier billionaires.

Yet the economics discipline kind of proceeds as though Friedman was correct and there’s no such thing as rent and says ‘don’t think about these questions.’
It’s way worse than Adam Smith because Adam Smith actually understood that the rents of feudalism were what were damaging the economy and it was true.
It kept the economy in a form of stasis because of these rentier landlords.

But the problem is, with capitalism, the people at the top are basically the most effective rent seekers, monopolists, and the people who control the way finance is regulated and so on.
So since political economy is ultimately what determines the power structure of a civilization, political science and economics are going to be the two most bullshit disciplines in the West because they have to be.

You cannot have people really working to illuminate politics or economics, or both together as political economy, because those things are the most important in terms of maintaining illusions and sort of bullshit orthodoxies so that the whole criminal enterprise can keep running.
Yeah. H.L. Mencken of all people wrote, I think, the best essay on this, called “The Dismal Science,” where he does a political economic analysis of academia and says, you know, people with power and wealth, they don’t give a f**k about chemistry or astronomy or what have you, but they absolutely do care about economics.
He doesn’t (explicitly say) political science, but you can (apply Mencken’s analysis).
Any science that deals with the source of (oligarchical) power is going to be one that they really want to influence. And so I think that’s really the reason why political science is apologetics for capitalist democracy and economics is apologetics for neoliberal capitalism.
Brain Rot and the Bullshit Jobs Apocalypse
This video from Cy Canterel is also on theme.
Partial transcript:
Silicon Valley is in the middle of pulling off one of the most elegant cons in its history, convincing the world that building smart machines requires destroying human intelligence and jobs at industrial scale. It’s the inevitable result of mythological thinking so intoxicating that admitting it’s wrong would mean acknowledging hundreds of billions of dollars were invested in the wrong direction. If it continues, they will literally eat our future from the inside out.
…
Companies are committing over $300 billion to AI infrastructure in 2025 alone.
…
What if the entire architectural approach is not just inefficient but actively counterproductive? What if the pursuit of AGI through brute-force scaling has created systems that are simultaneously incredibly expensive and also remarkably stupid?

Companies are developing research that makes their infrastructure obsolete while being unable to pivot because it would kill their investments. They must sell what they’ve built, and the only way is through mass job elimination. The employment apocalypse is already happening. The tech sector lost 130,000 jobs in 2024. And in January 2025, we saw the lowest professional job openings since 2013, with 40% of white-collar job seekers failing to get an interview at all. But here’s the twist. Many AI-targeted jobs are what anthropologist David Graeber called bullshit jobs. Roles like administrative coordinators, compliance officers, and middle management that exist mainly to deal with complex hierarchy and bureaucratic processes.
These roles are often light on meaning and heavy on misery. In theory, eliminating this kind of work could free humans for care work, creativity, and community building and other roles that more directly support everyone’s flourishing. But there’s a catch. Because even these jobs distribute resources, eliminating jobs without changing the predatory and meaningless economic framework creates mass unemployment, not liberation.
In the US, cities like Nashville, Houston, and Dallas might face economic collapse. This is because these areas employ massive numbers of folks in customer service, finance operations, and insurance. They could see simultaneous unemployment spikes, property value crashes, shrinking tax bases, and secondary waves of unemployment from hospitality and service work that’s no longer supported by the economy. Unlike previous disruptions, which happened gradually and created new opportunities elsewhere (in other sectors or other countries), AI automation threatens to eliminate jobs faster than new ones can be created anywhere. Displaced workers face permanent exclusion from the economy, creating an hourglass economy with no middle.
That’s all for today from me; I’ll be back for Monday’s Coffee Break.
I find what people call AI to be really interesting. There is an underlying assumption that this is the next big tech thing, but no one can tell you why. People compare it to the internet, but it was really easy to see early on why the internet would become a thing. The early apps (email, messaging, simple shopping, information search) all made sense. When we get to LLMs, other than wild, unsupported claims, no proponent can tell me why I need it.
Having had to deal with the investment side of the Dot-Com era at a large asset manager, it was really easy to understand the bet we were making. That doesn’t mean we got it right. But professional investors like me, and the chattering class on Wall Street and CNBC, could at least tell you in about 25 words why we were doing what we were doing and what we expected/hoped would result. Also, and quite importantly, we could play the portfolio strategy bet. In other words, we could buy a basket of companies of different market caps not being sure which would bust, which would be okay, and who the homer might be. And, really importantly, this was primarily an equity game, not credit or debt.
This looks really different. It is staggeringly concentrated and capital intensive. External credit is central to the whole thing. And, no one can tell me why or how they’ll ever make money. To put this much capital at risk just doesn’t make sense — unless they assume there is an intrinsic bailout baked in(?) which could come from DoD.
Has anyone in the commentariat seen an articulation of how this capital ever makes a return?
The rough idea seems to be that “A.I.” is going to make many tasks “more efficient”. I.e., headcount can be reduced, coders will be able to produce more code, graphic artists will be able to create more graphic art, etc., etc. Of course a lot of this code, graphic art, etc. will be workslop, but since when have corporate managers really given a toss about quality? Puke it and pack it.
Similarly, Google is saying that it can justify building moar data centers to run AI apps because AI is going to help us to make everything so much more efficient that even with more of these data centers there will be a net savings.
Idk who believes any of this, but that’s the claim I’ve been seeing on how capital will see ROI.
That is a very relevant question. How will AI end up paying for itself enough that it can start generating a profit? Nothing I have read has suggested that this is ever going to happen. Are people out there already shorting it? Can we expect to see The Big Short 2.0?
I’m screaming in my head about water (especially!!) and energy resources being stretched in order to enable people to make fake videos.
These companies get more tax breaks as people get rising energy bills.
And again…the water. Basic, common sense survival needs.
Next steps in the rot:
-Convert ChatGPT into your only financial advisor and let it put all your savings in crypto world. Allow extreme leverage.
-Give ChatGPT the car keys, credit card keys and codes and make it your personal buyer in charge.
-Follow ChatGPT-made recipes for your haute cuisine.
-Make ChatGPT your lawyer, advisor, friend, lover, pet…
-Make ChatGPT your own personal Jesus.
-etc.
Don’t tell me there aren’t infinite possibilities for AI business. Full happiness and competitiveness are around the corner with AI.
“Sammy Altman released Sora 2, a TikTok knock-off chock-a-block with AI slop.”
Erik Brynjolfsson has a way with words. Thanks for the link, Nat.
Sorry. That was Adam Conover from the video you shared “Sora Proves the AI Bubble Is Going to Burst So Hard”. I copypasted the wrong name in my previous comment.
Conover’s video is a must-watch. It is very well researched. For its genre (angry dude with huge microphone declaiming to webcam) it’s extraordinary.
https://www.youtube.com/watch?v=55Z4cg5Fyu4
I think that Nat’s early-article mention of the AI companies’ ventures into adult content tells you everything you need to know about where they see their path to profitability. At least in the U.S. market, technology adoption in the adult entertainment space has been a reliable indicator of where the rest of the culture would wind up, and I think it’s due to the surreptitious nature of the way their products are consumed. Accepting that the bar for titillation is lower than for entertainment, viewing audiences acclimated from film to video to digital, and subliminal acquiescence to those new formats made it easier to accept traditional media following suit. Same thing with the pivot to mobile and short-form video. Using the niche space of adult content, they can condition people to SoraSlop and then sell it to the streamers as a way to zero their overhead.