Brain Rot: LLMs, Long Covid Edition

Brain rot explains a lot about LLMs, long covid, bad economics and polisci, or maybe it’s the other way around.

LLMs as Brain Rot Inducement Machines

Yesterday, Conor reposted a Conversation piece about OpenAI slipping shopping into ChatGPT, which seems like an effort to induce brain rot in its customer base:

AI’s responses create what researchers call an “advice illusion.” When ChatGPT suggests three hotels, you don’t see them as ads. They feel like recommendations from a knowledgeable friend. But you don’t know whether those hotels paid for placement or whether better options exist that ChatGPT didn’t show you.

Traditional advertising is something most people have learned to recognize and dismiss. But AI recommendations feel objective even when they’re not. With one-tap purchasing, the entire process happens so smoothly that you might not pause to compare options.

OpenAI isn’t alone in this race. In the same month, Google announced its competing protocol, AP2. Microsoft, Amazon and Meta are building similar systems. Whoever wins will be in position to control how billions of people buy things, potentially capturing a percentage of trillions of dollars in annual transactions.

Let’s all get high on Vibes

The Atlantic covered Meta’s latest atrocity, Vibes, an all-AI social network.

Note: When thinking about anything done by Meta in this era, never forget that Zuckerberg blew at least $46.5 billion on his disastrous Metaverse project (for the opposing view, also skim this Bloomberg puff piece, which claims that “analysts think that the pivot from Facebook to Meta is finally coming to fruition as the company heavily leans into its Orion augmented reality glasses.”)

Time will tell if Orion catches on; I’m skeptical of augmented reality products at this point. Now let’s hear about Vibes:

Vibes, a new social network nested within the Meta AI app—except it’s devoid of any actual people. This is a place where users can create an account and ask the company’s large language model to illustrate their ideas. The resulting videos are then presented, seemingly at random, to others in a TikTok-style feed. (OpenAI’s more recent Sora 2 app is very similar.) The images are sleek and ultra-processed—a realer-than-real aesthetic that has become the house style of most generative-AI art. Each video, on its own, is a digital curio, the value of which drops to zero after the initial view. In aggregate, they take on an overwhelming, almost narcotic effect. They are contextless, stupefying, and, most important, never-ending. Each successive clip is both effortlessly consumable and wholly unsatisfying.

I toggle over to a separate tab to see a post from President Donald Trump on his personal social network. It’s an AI video, posted on the day of the “No Kings” protests: The president, wearing a crown, fires up a fighter jet painted with the words King Trump. He hovers the plane over Times Square, at which point he dumps what appears to be liquid feces onto protesters crowding the streets below. The song “Danger Zone,” by Kenny Loggins, plays.

I switch tabs. On X, the official White House account has posted an AI image of Trump and Vice President J. D. Vance wearing crowns. A MAGA influencer has fallen for an AI-generated Turning Point USA Super Bowl halftime-show poster that lists “measles” among the performers and special guests. I encounter more AI videos. One features a man in a kitchen putting the Pokémon character Pikachu in a sous-vide machine. Another is a perfectly rendered fake ’90s toy commercial for a “Jeffrey Epstein’s Island” play set. These videos had the distinctive Sora 2 watermark, which people have also started to digitally add to real videos to troll viewers.

Here’s the Trump video which has to be seen to understand our brain rot moment:

Why Is Trump Trolling?

The increasingly pro-Trump John Michael Greer has a theory:

Trump has realized that it’s far more effective to mock his enemies than to argue with them, and has teams of meme artists going at it hammer and tongs. His goal, I think, is to crack the facade of mandatory niceness that so many people on the left cultivate so assiduously, knowing that once it breaks and everything they’ve been repressing comes spilling out, they’ll alienate voters the way Biden did with his famous Reichstag speech

The reaction to Kirk’s murder is a good example of what he’s trying to goad them to do, and they’re falling into his trap with embarrassing ease.

Now we’ll grapple with the question of whether the stress of the obvious AI bubble is causing brain rot among tech execs.

AI Number Go Down

The first thing to understand about the stakes at play is the amount of money OpenAI is raising and spending. That’s why I called OpenAI a “money pit” in June.

Then The Information reported in September about the obligations OpenAI is looking to take on:

Revenue growth from ChatGPT is accelerating at a more rapid rate than the company projected half a year ago. The bad news? The computing costs to develop artificial intelligence that powers the chatbot, and other data center-related expenses, will rise even faster.

As a result, OpenAI projected its cash burn this year through 2029 will rise even higher than previously thought, to a total of $115 billion. That’s about $80 billion higher than the company previously expected.

The unprecedented projected cash burn, which would add to the roughly $2 billion it burned in the past two years, helps explain why the company is raising more capital than any private company in history. CEO Sam Altman has previously told employees that their company might be the “most capital intensive” startup of all time.

the company has projected $13 billion in total revenue this year, up three and a half times from last year and about $300 million higher than its earlier projections for 2025, the new data show. Its revenue projection for 2030 rose about 15% to roughly $200 billion compared to the prior projection.

ChatGPT is a large driver of the company’s increased projections. OpenAI expects the chatbot to generate nearly $70 billion in additional revenue over the next six years, compared to earlier projections. Millions of people and thousands of businesses pay subscription fees to use ChatGPT.

OpenAI projected nearly $10 billion in revenue from ChatGPT this year, an increase of roughly $2 billion from projections earlier this year, and nearly $90 billion in revenue from the chatbot in 2030, a roughly 40% increase from the earlier projections.

OpenAI’s projections also raised expectations for how it will generate revenue from users who don’t pay for ChatGPT. It’s unclear how OpenAI plans to make money off that portion of its user base, although it could include shopping-related services or some form of advertising. The data show the company expects to generate around $110 billion in revenue between 2026 and 2030 from such services.

OpenAI has been promising massive revenue growth and Gary Marcus doesn’t think they’ll deliver:

OpenAI can’t make its ambitious revenue numbers — 13x-ing current revenue in five years — unless revenue from paying big business customers keeps going up and up and up and up and up.

And he shares a number of charts and tweets to illustrate his 5 reasons.

First, the rosy projections:

Then reports of declining use of generative AI in the American workforce, which explains why OpenAI is pursuing advertising, porn, personal shopping, and consumer video generation.

There’s no better way to understand the awfulness of OpenAI’s consumer video generation app Sora than watching this video titled “Sora Proves the AI Bubble Is Going to Burst So Hard” by Adam Conover.

Partial transcript:

Sora 2 is one of the weirdest and in many ways worst apps to ever make it into the app store. And I’ve tried Pimple Popper Light.

So, first of all, Sora gives you the option of letting anyone make an AI deep fake video of you if you take the little step of letting them steal your face.

And almost in order to demonstrate why no one should ever do this, Sam Altman himself kindly donated his own likeness for us to have fun with. Which means that when you open Sora, almost one out of every three videos you see is Sam being physically, emotionally, and sometimes almost sexually humiliated by his own users.

Conover also discusses this Washington Post piece about the ghoulishness of the app:

Ilyasah Shabazz didn’t want to look at the AI-generated videos of her father, Malcolm X. The seemingly realistic clips — made by OpenAI’s new video-maker Sora 2 — show the legendary civil rights activist making crude jokes, wrestling with the Rev. Martin Luther King Jr. and talking about defecating on himself.

Sora’s speed and uncanny realism has helped rocket the app to the top of the download charts, and videos reanimating the dead have been among its most viral clips. Sora-produced videos of Michael Jackson, Elvis Presley and Amy Winehouse have flooded social media platforms, with many viewers saying they struggle to tell whether the videos are real or fake.

Some clips played for laughs, such as a video of “Mr. Rogers’ Neighborhood” host Fred Rogers writing a rap song with hip-hop artist Tupac Shakur. Others have leaned into darker themes. One video showed police body-camera footage of Whitney Houston looking intoxicated. In some clips, King makes monkey noises during his “I Have a Dream” speech, basketball player Kobe Bryant flies aboard a helicopter mirroring the crash that killed him and his daughter in 2020, and John F. Kennedy makes a joke about the recent killing of right-wing influencer Charlie Kirk.

And a TechCrunch piece headlined: ChatGPT’s mobile app is seeing slowing download growth and daily use, analysis shows:

ChatGPT’s mobile app growth may have hit its peak, according to a new analysis of download trends and daily active users provided by the third-party app intelligence firm Apptopia. Its estimates indicate that new user growth, measured by percentage changes in new global downloads, slowed after April.

The firm looked at the global daily active user (DAU) growth and found that the numbers have begun to even out over the past month or so.

Although October is only half over, the firm says it’s on pace to be down 8.1% in terms of a month-over-month percentage change in global downloads.

To be clear, this is a look at download growth, not total downloads. In terms of sheer number of new installs, ChatGPT’s mobile app is still doing well, with millions of downloads per day.

Gary Marcus added:

If and when the whole generative AI edifice comes crashing down, it will be in no small part because too many otherwise smart investors read too many graphs—from “LLM scaling” to usage statistics—too naively, assuming that things that had been going up for a while would continue to go up at the same pace, indefinitely.

This is, of course, a version of the trillion pound baby fallacy I have mentioned before.

LLMs Getting Brain Rot Themselves?!?

A group of researchers from Texas A&M University, University of Texas at Austin, and Purdue University have released a new study called “LLMs Can Get ‘Brain Rot’!”:

We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversely controlled datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions.

Contrary to the control group, continual pre-training of 4 LLMs on the junk dataset causes non-trivial declines (Hedges’ g>0.3) on reasoning, long-context understanding, safety, and inflating “dark traits” (e.g., psychopathy, narcissism). The gradual mixtures of junk and control datasets also yield dose-response cognition decay: for example, under M1, ARC-Challenge with Chain Of Thoughts drops 74.9 → 57.2 and RULER-CWE 84.4 → 52.3 as junk ratio rises from 0% to 100%.

Error forensics reveal several key insights:

  • Thought-skipping as the primary lesion: models increasingly truncate or skip reasoning chains, explaining most of the error growth.
  • Partial but incomplete healing: scaling instruction tuning and clean data pre-training improve the declined cognition yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch.
  • Popularity as a better indicator: the popularity, a non-semantic metric, of a tweet is a better indicator of the Brain Rot effect than the length in M1.
  • Together, the results provide significant, multi-perspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pretraining as a training-time safety problem and motivating routine “cognitive health checks” for deployed LLMs.

So that doesn’t seem good for the industry’s financial prospects, or anything else.
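To make the “Hedges’ g > 0.3” figure in that abstract a little less abstract: Hedges’ g is just the difference in mean benchmark scores between the two training conditions, divided by their pooled standard deviation, with a small-sample correction. A minimal sketch, with made-up numbers rather than the paper’s data:

```python
import math

def hedges_g(control, junk):
    """Bias-corrected standardized mean difference (Hedges' g) between two score samples."""
    n1, n2 = len(control), len(junk)
    m1, m2 = sum(control) / n1, sum(junk) / n2
    v1 = sum((x - m1) ** 2 for x in control) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in junk) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample correction factor
    return d * j

# Hypothetical benchmark accuracy runs for control-trained vs. junk-trained models
# (illustrative numbers only -- not the paper's data)
control_scores = [74.1, 75.3, 74.9, 75.6]
junk_scores = [57.8, 56.4, 57.2, 58.1]
print(round(hedges_g(control_scores, junk_scores), 2))  # well above the 0.3 threshold the paper flags
```

By the usual convention anything much above 0.8 counts as a large effect, so the drops the authors report are not subtle.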

And no discussion of brain rot and LLMs is complete without Boris Johnson discussing his ChatGPT use with Saudi media outlet Al Arabiya.

Quotes via The Register:

“I love ChatGPT,” the blond-mopped Brexiteer told Al Arabiya English earlier this week.

Famous for making stuff up and going on flights of fancy, Johnson served as prime minister from July 2019 until September 2022, when he was ousted after misleading colleagues over a scandal involving his government’s deputy chief whip, the party disciplinarian. OpenAI’s ChatGPT is also prone to making statements that turn out not to be entirely true.

Adopting a strange affected accent in his TV interview this week, Johnson said: “I love AI, do you use AI? Absolutely, I use ChatGPT. I love ChatGPT. I love it. ChatGPT is fantastic.”

When pressed on what he used the large language model (LLM) for, the former MP confessed he was “writing various books.”

He also said he liked to “just ask questions,” mainly, it would seem, because he wanted to hear the robot say how clever his questions were.

“‘You agree, your brain, you’re excellent. You have such insight.’ I love it,” Johnson told the interviewer.

Asked whether Johnson tells the truth, ChatGPT was a little circumspect.

“It’s complicated. Short answer: sometimes yes, but there’s quite a lot of evidence he often does not tell the full truth, or is misleading, or makes mistakes. Whether those are deliberate lies or exaggerations or misunderstandings is often in dispute,” it said.

Side note: this is what happens when you visit Al Arabiya using a VPN:

Long Covid Also Causes Brain Rot

Rolling Stone is writing about the impact of long covid in U.S. schools, which reminds me I need to do more coverage of Mr. MAHA himself, Robert F. Kennedy, Jr.:

Over the next three years, Lia would develop a perplexing, debilitating, and persistent set of symptoms. Extreme fatigue that turned mornings into catatonic nightmares. Brain fog that made memories slippery. Incessant vomiting. It all kept her out of class, stuck in the nurse’s office — or out of school entirely. The straight-A’s evaporated.

Lia’s story is one I’ve heard from dozens of families over two years investigating the impact of long Covid on kids. It’s a slow-moving spiral: first their health, then their grades, then their future. And as the country throttles past the pandemic’s fifth anniversary, those for whom Covid looms very present are feeling increasingly forgotten, subject to pervasive skepticism and a kind of cultural fatigue when it comes to their illness.

But since the earliest days of the second Trump administration, the dollars that could help those suffering from the illness have quietly faded away. In March, cuts to the National Institutes of Health (NIH) disappeared the nearly $2 billion invested in the RECOVER (Researching Covid to Enhance Recovery) initiative, hamstringing research that might have yielded diagnostic tests or better treatments (though after protest from advocates, some research grants have been restored). The same week, the administration shuttered the Department of Health and Human Services’ Office of Long Covid Research and Practice.

In April, the Department of Government Efficiency (DOGE) gutted the NIH’s Vaccine Research Center — the entity whose work laid the foundation for the Moderna Covid shot. In May, health officials installed by Trump overrode career scientists at the Food and Drug Administration to limit approvals of new Covid vaccines. Around the same time, HHS Secretary Robert F. Kennedy Jr. disavowed the vaccine for healthy kids and pregnant women. (“We’re now one step closer to realizing President Trump’s promise to make America healthy again,” Kennedy said in a video announcing the policy.)

In the weeks that followed, the secretary also removed every member of the Centers for Disease Control and Prevention (CDC) vaccine advisory committee, replacing them with a merry band of loyalists and skeptics. That committee has since walked back recommendations for the Covid shot. And since the committee’s suggestions only move forward with the approval of the CDC director, RFK Jr. fired Susan Monarez, whom he’d appointed to that role 29 days earlier, for her refusal to “commit in advance to approving every [committee] recommendation regardless of the scientific evidence,” as Monarez later testified at a Senate hearing.

And this X.com exchange is illustrative of the continuing brain rot around the discussion of covid:

When Billionaires Buy Brain Rot

And my favorite recent interview, with Aaron Good talking to BetBett Media, included a fascinating discussion about the reasons behind the dismal quality of the dismal science in Western academia in 2025:

It’s really going to take us this long to say ‘grass is green’ in political science because of the methodological straitjacket they put themselves in.

Another way to understand why political science is so bad is to look at economics because that discipline is also such bullshit from top to bottom.

Like Milton Friedman saying no, there’s no such thing as rent, and then we look at our economy and we are just beset by rentier billionaires.

Yet the economics discipline kind of proceeds as though Friedman was correct and there’s no such thing as rent and says ‘don’t think about these questions.’

It’s way worse than Adam Smith because Adam Smith actually understood that the rents of feudalism were what were damaging the economy and it was true.

It kept the economy in a form of stasis because of these rentier landlords.

But the problem is with capitalism, the people at the top are basically those who are most effective rent seekers, monopolists, and the people who control the way finance is regulated and so on.

So since political economy is ultimately what determines the power structure of a civilization, politics, political science and economics are going to be the two most bullshit disciplines in the west because they have to be.

You cannot have people really working to illuminate politics or economics both or political economy because those things are the most important in terms of maintaining illusions and sort of bullshit orthodoxies so that the whole criminal enterprise can keep running.

Yeah. H.L. Mencken of all people wrote, I think, the best essay on this called “The Dismal Science” where he does a political economic analysis of academia and says, you know, people with power and wealth, they don’t give a f**k about chemistry or astronomy or what have you, but they absolutely do care about economics.

He doesn’t (explicitly say) political science, but you can (apply Mencken’s analysis).

Any science that deals with the source of (oligarchical) power is going to be one that they really want to influence. And so I think that’s really the reason why political science is apologetics for uh capitalist democracy and economics is apologetics for neoliberal capitalism.

Brain Rot and the Bullshit Jobs Apocalypse

This video from Cy Canterel is on theme also.

Partial transcript:

Silicon Valley is in the middle of pulling off one of the most elegant cons in its history, convincing the world that building smart machines requires destroying human intelligence and jobs at industrial scale. It’s the inevitable result of mythological thinking so intoxicating that admitting it’s wrong would mean acknowledging hundreds of billions of dollars were invested in the wrong direction. If it continues, they will literally eat our future from the inside out.


Companies are committing over $300 billion to AI infrastructure in 2025 alone.

What if the entire architectural approach is not just inefficient but actively counterproductive? What if the pursuit of AGI through brute force scaling has created systems that are simultaneously incredibly expensive and also remarkably stupid?

Companies are developing research that makes their infrastructure obsolete while being unable to pivot because it would kill their investments. They must sell what they’ve built, and the only way is through mass job elimination. The employment apocalypse is already happening. The tech sector lost 130,000 jobs in 2024. And in January 2025, we saw the lowest number of professional job openings since 2013, with 40% of white collar job seekers failing to get an interview at all. But here’s the twist. Many AI-targeted jobs are what anthropologist David Graeber called bullshit jobs. Roles like administrative coordinators, compliance officers, and middle management that exist mainly to deal with complex hierarchy and bureaucratic processes.

These roles are often light on meaning and heavy on misery. In theory, eliminating this kind of work could free humans for care work, creativity, and community building and other roles that more directly support everyone’s flourishing. But there’s a catch. Because even these jobs distribute resources, eliminating jobs without changing the predatory and meaningless economic framework creates mass unemployment, not liberation.

In the US, cities like Nashville, Houston, and Dallas might face economic collapse. This is because these areas employ massive numbers of folks in customer service, finance operations, and insurance. They could see simultaneous unemployment spikes, property value crashes, shrinking tax bases, and secondary waves of unemployment from hospitality and service work that’s no longer supported by the economy. Unlike previous disruptions that happened gradually and created new opportunities elsewhere, like other countries, AI automation threatens to eliminate jobs faster than new ones can be created anywhere. Displaced workers face permanent exclusion from the economy, creating an hourglass economy with no middle.

That’s all for today from me, I’ll be back Friday morning.


58 comments

  1. Mikerw0

    I find what people call AI to be really interesting. There is an underlying assumption that this is the next big tech thing, but no one can tell you why. People compare it to the internet, but it was really easy to see early on why the internet would become a thing. The early apps, email, messaging, simple shopping, information search, all made sense. When we get to LLMs, other than wildly made, unsupported claims, no proponent can tell me why I need it.

    Having had to deal with the investment side of the Dot-Com era at a large asset manager, it was really easy to understand the bet we were making. That doesn’t mean we got it right. But professional investors like me, and the chattering class on Wall Street and CNBC, could at least tell you in about 25 words why we were doing what we were doing and what we expected/hoped would result. Also, and quite importantly, we could play the portfolio strategy bet. In other words, we could buy a basket of companies of different market caps not being sure which would bust, which would be okay, and who the homer might be. And, really importantly, this was primarily an equity game, not credit or debt.

    This looks really different. It is staggeringly concentrated and capital intensive. External credit is central to the whole thing. And, no one can tell me why or how they’ll ever make money. To put this much capital at risk just doesn’t make sense — unless they assume there is an intrinsic bailout baked in(?) which could come from DoD.

    Has anyone in the commentariat seen an articulation of how this capital ever makes a return?

    1. Acacia

      The rough idea seems to be that “A.I.” is going to make many tasks “more efficient”. I.e., headcount can be reduced, coders will be able to produce more code, graphic artists will be able to create more graphic art, etc., etc. Of course a lot of this code, graphic art, etc. will be workslop, but since when have corporate managers really given a toss about quality? Puke it and pack it.

      Similarly, Google is saying that it can justify building moar data centers to run AI apps because AI is going to help us to make everything so much more efficient that even with more of these data centers there will be a net savings.

      Idk who believes any of this, but that’s the claim I’ve been seeing on how capital will see ROI.

    2. The Rev Kev

      That is a very relevant question that. How will AI end up paying for itself enough so that it can start generating a profit? Nothing I have read has suggested that this is ever going to happen. Are people out there already shorting it? Can we expect to see The Big Short 2.0?

      1. Windall

        I think it likely that the people shorting AI are already there or will be there soon, but I suspect that anyone who does so is in the insider trading group.

        If you or I shorted AI through a popular broker I don’t think we’ll get paid anytime soon.

      2. ChrisFromGA

        Karl Denninger made this point in one of his recent ticker forum pieces:

        AI guys are depreciating (and counting for capital requirements) their gear with a 6 to 10 year economic service life when there’s never been an emerging technology that has more than two for the compute and storage elements, and three or so for switching/routing elements

        https://market-ticker.org/akcs-www?post=254238#discuss

        My interpretation is that all these companies buying Nvidia chips are betting on a rapidly depreciating asset, one that will be technologically obsolete in two years’ time. What happens when a new competitor comes along and undercuts them with better and cheaper tech?

      3. Ed S.

        I’m not sure if it will pay for itself, but I think that there are a few insidious ways in which the various AI’s may monetize:

        1) Blackmail through advertising – presume that ChatGPT supersedes Google Search for most individuals as the way to find information; then ChatGPT can charge to embed a specific recommendation as part of (or the entire) answer. Google uses ads (and payment for ad placement); ChatGPT can do the same. As an example: you ask ChatGPT for the three best steakhouses in Las Vegas. It can provide a reply (in a very friendly way) not necessarily of the three best but of the three that are willing to pay the most to be cited as “the best”. At least with Google search, you know when you’re getting an ad and can look at other links. And going one step further, if the idea of an AI agent takes hold, then rather than asking for the three best you could ask ChatGPT to make a reservation at the best steakhouse on a certain day. And if you don’t pay, you don’t get mentioned. At all.

        2) Staff replacement / augmentation – the AI’s will “dumb down” many jobs to the point where the employee may be wholly reliant on AI for even the most rudimentary tasks (while not AI related, consider what Waze has done for road map reading skills for individuals under the age of 40). If it’s widely adopted, we could end up with Dr. Lexus 20 years from now. And what will a company do? They will not be able to find anyone who can work without relying on AI. And if the dream of no employees comes true, what is to stop the AI provider from charging 99% of what it would cost to employ a person? What is it that Yves says, “If your business relies on a platform you don’t have a business?”. If your organization is reliant on AI, do you really have an organization?

        3) Personal skills erosion – I use AI for a number of tasks and it’s a great help. For example, using an AI to produce a verbatim transcript of an on-line meeting beats trying to listen and take notes; then using AI to summarize or pull out select themes from the transcript in a minute (literally!) is a huge help. If I wanted, I could probably ask the AI to provide a recommendation(s) based on the meeting. Now can I do these three tasks – you bet, but I’ve been doing it for 30+ years of my work career. But if I use these tools for the next five years, will I ever be able to go back or will my skills erode? And if they erode, will I be able to perform without AI? More likely – because of AI people starting a career today never develop those skills.

        Finally, and not a monetization opportunity: what is complete control of all knowledge and communications worth to the PTB? A trillion dollars? Two?

    3. Mikel

      I’m screaming in my head about water (especially!!) and energy resources being stretched in order to enable people to make fake videos.
      These companies get more tax breaks as people get rising energy bills.
      And again…the water. Basic, common sense survival needs.

      1. Nat Wilson Turner Post author

        I really do think there’s at a minimum a faction of the oligarchy who are in favor of mass human die offs. It is at least a pragmatic view in some senses even if unbelievably evil.

      2. Andrew

        I ascribe to the conspiracy theory that the videos, advertising, “art,” and social media slop we’re being encouraged to blame for the resource hogging is a mask for what this infrastructure is really being built for: massive domestic surveillance and military applications.

          1. t

            And it will be slop domestic surveillance, but who cares about cracking a few extra heads. We have enough robot dogs for the job!

          2. Mel

            Hmmph. The Act of Surveillance in the Age of Automatic Image Generation. What will surveillance be like when ipsi custodes can see any surveillance result they want on demand?

            ChatGPT, write an essay entitled “The Act of Surveillance in the Age of Automatic Image Generation“, about 20 pages, in the style of Walter Benjamin.

    4. matt

      general AI is supposed to make people work faster. kind of like excel, it’s a tool people can use to quickly sort through data. except it’s not excel, it sucks. but it’s meant to be a sort of subscription service for work improvements.
      you can also have bespoke AI tailored to a specific process. in the class i just got out of my professor was talking about chemical plants using ML for controls systems to better deal with the massive amounts of data coming out of the plant. but bespoke and general AI are two different things and I do not think they should be conflated. being trained on all of reddit vs a selective dataset are not the same.

      1. Nat Wilson Turner Post author

        there’s actually quite a bit of promise in bespoke LLMs using very carefully curated data sets.

        1. raspberry jam

          Hard agree from someone who works in the field!

          I cannot participate much in the frenzy around the general use/public chatbot LLM implementations because it is so wildly different than what is going on in the hyper-specialized private space. If the public stuff was all I’d seen I would be just as rabidly anti-“AI” as most here.

          1. Pat

            That actually scares me more as the people who can afford those bespoke LLMs are for the most part pretty darn evil and controlling. Sure they might be less in our face about their psychopathic tendencies than the usual headline grabbing millionaires and billionaires but that doesn’t mean they have space in their plans for the general good.

            Just as I don’t believe in banning books or history but in examining them, putting them in context and illuminating the problems, my baseline would normally be one of seeking to mitigate or limit the damage from some technological advancement while keeping what good can be found. There are exceptions though, and I truly believe humanity would be best served by all AI being infected with a super virus, and eating itself until nothing remains but blithering code which renders it utterly unresponsive and uncorrectable. Mostly because there will always be some genius who is sure their AI can help them rule the world with little or no concern for the well being of anyone else.

    5. Duke of Prunes

      I’ve been working with some “Agentic AI” tools. These are AI agents that use LLMs to accomplish tasks using “tools”. It’s the next big thing!

      Here’s the thing. Every tool has a tollbooth. Instead of just renting the compute and some foundational services like databases, each and every step in the workflow incurs a charge.

      I think the plan is similar to Uber. Get companies hooked on the subsidized services; then, after they lay off staff and are completely dependent on these tools, they will jack up the rates and sweet lucre will flow.

    6. ChrisPacific

      The killer app as far as I’m concerned is human-computer interface. You can now interact with an agent using natural language and it will understand you moderately well and actually do what you ask most of the time. That’s a big step forward. It’s also a step forward in productivity apps like code development, where it functions as a suggestion engine, but a much more effective one than previous generations.

      It comes with some pretty severe built-in limitations that aren’t widely understood, and can kill any benefit realization or even make it counterproductive if you don’t manage them carefully. Despite what the hype merchants are selling, it’s NOT a replacement for competence. You need to critically evaluate its suggestions, and that actually requires a high degree of proficiency to do well, since when it’s wrong it’s often wrong in subtle and damaging ways that are hard to spot.

      The classic situation where it can help in development settings is the obscure technical problem where you have to combine technical troubleshooting with scouring masses of documentation and discussion for clues – and after you spend days or weeks on that, the answer turns out to be quite simple. AI helpers can cut those days or weeks down to hours. That’s a major win for developers, but if you apply its suggestions in ignorance then there’s a good chance you are creating an even bigger problem for someone else to solve down the road.

      So it’s a force multiplier for productivity in areas where you are otherwise already expert, but it doesn’t replace expertise itself. Organizations that buy the hype and lay off all their staff will end up getting burned. In the coming years, I suspect contract roles for fixing codebases partly or entirely corrupted by AI will be the new Y2K.

    7. IEL

      How much of the LLM output is straight up propaganda via fake social media posts and comments? There are plenty of deep pocketed people who might see that use case as a good investment. Trump’s shit-bombing and Cuomo’s disastrous ad are obvious fakes; the sneakier deepfakes are the real threat.

      1. Nat Wilson Turner Post author

        I’m thinking surveillance and mind control are the real killer apps of LLM AI and might explain some of the over investment.

  2. Ignacio

    Next steps in the rot:
    -Convert ChatGPT into your only financial advisor and let it put all your savings in crypto world. Allow extreme leverage.
    -Give ChatGPT the car keys, credit card keys and codes and make it your personal buyer in charge.
    -Follow ChatGPT-made recipes for your haute cuisine.
    -Make of ChatGPT your lawyer, advisor, friend, lover, pet…
    -Make of ChatGPT your own, personal Jesus.
    -etc.
    Don’t tell me there aren’t infinite possibilities for AI business. Full happiness and competitiveness are around the corner with AI.

  3. .Tom

    “Sammy Altman released Sora 2, a TikTok knock-off chock-a-block with AI slop.”

    Erik Brynjolfsson has a way with words. Thanks for the link, Nat.

    1. .Tom

      Sorry. That was Adam Conover from the video you shared “Sora Proves the AI Bubble Is Going to Burst So Hard”. I copypasted the wrong name in my previous comment.

      Conover’s video is a must watch. It is very well researched. For its genre (angry dude with huge microphone declaiming to webcam) it’s extraordinary.

      https://www.youtube.com/watch?v=55Z4cg5Fyu4

  4. Cardiac

    I think that Nat’s early-article mention of the AI companies’ ventures into adult content tells you everything you need to know about where they see their path to profitability. At least in the U.S. market, technology adoption in the adult entertainment space has been a reliable indicator of where the rest of the culture would wind up, and I think it’s due to the surreptitious nature of the way their products are consumed. Accepting that the bar for titillation is lower than for entertainment, viewing audiences acclimated from film to video to digital, and subliminal acquiescence to those new formats made it easier to accept traditional media following suit. Same thing with the pivot to mobile and short-form video. Using the niche space of adult content, they can condition people to SoraSlop and then sell it to the streamers as a way to zero their overhead.

      1. Nat Wilson Turner Post author

        That will be the frontlines of the war, but never fear, AIPAC’s biggest donor is the billionaire behind OnlyFans, so I’m sure he’ll keep his cut when it’s time to crush the human talent.

        1. vao

          Such as getting fees for letting AI tools train with online and interactive OnlyFans videos — fees which will, of course, not be repaid to the OnlyFans human content creators…

    1. ciroc

      Am I the only one hoping that AI-generated porn will dismantle the exploitative porn industry and liberate sex workers?

      1. ambrit

        The real problem here is the mindset induced in the more “amenable” population of porn viewers. For most porn, women, or children, are treated as objects to be manipulated and abused for the stimulation and satisfaction of the viewer. Carry that mind set over to the “real” world and we end up with predators of every stripe and hue.
        A simple way to look at it is to realize that porn does not talk back, but ‘real’ people do. How you handle the “talking back” is the measure of your maturity.
        I have a sneaking suspicion that “AI” porn will be standard exploitation theatre. The “objectification” of women et al will still be a main result.
        Stay safe.

        1. hk

          Yes, wrt “talking back.”

          I think this is a problem with AI more generally, or, perhaps even “customer is always right” mindset that tends to prevail in the West in all manner of settings.

  5. thoughtfulperson

    I watched the Cy Canterel video and found the discussion of small scale AI quite interesting. Hierarchical models (vs. LLMs?), using far fewer resources (can operate on a phone or a ’95 computer) yet more successful in doing puzzles such as Sudoku or mazes.

    Seems like a similar pattern in many things, large centralized systems easily controlled and monetized vs small scale community or even individual size…

  6. Richard Childers

    In the late 1970s my older brother returned to San Francisco from MIT for Christmas break and he brought two friends.

    One of these friends introduced me to two books on creativity which probably changed my life.

    One of these books introduced, to the process of solving problems, the idea that there were three different categories of problems and three different categories of problem-solvers and that it was important to align the two.

    According to the authors, problems could be categorized as numeric … visual … or linguistic.

    Also according to the authors, individuals tended to be strongest in only one of these three domains and so it was recommended that one pay attention to the individual personality traits of the participants as well.

    Evidence supporting this analysis can be found in stereotypes relating to disciplines that require one specific trait in abundance – visual artists, computer programmers, lawyers, for instance.

    Through our own observation we have identified a fourth domain of human creativity; that is, kinaesthetic, or movement. We include dancers and sculptors in this category.

    Consider the problem of tying a shoelace. The problem does not easily surrender to mathematical analysis. Nor can it be adequately captured through verbal description or a series of still images. It can only be taught by doing. That is kinaesthetic.

    What does this have to do with AI, you ask.

    It is my argument that Large Language Models, by definition, are optimized for linguistic problems; and so it is to that realm they should be applied.

    Expecting an LLM to develop a new alloy worth trillions of dollars is as likely as expecting a lawyer to develop a new alloy worth trillions of dollars; just sayin’.

    We need to stop calling it ‘artificial intelligence’ until we have a population of investors that knows what ‘artifice’ means and what ‘artifacts’ are.

    I prefer ‘machine intelligence’, ‘neural network’, or the older ‘expert system’ – comparing these different products of human intelligence to one another is counterproductive and lumping them together under the rubric ‘artificial intelligence’ is also counterproductive, in my humble opinion.

    My credentials: https://redwoodhodling.com/Exhibits/

    1. Nat Wilson Turner Post author

      great points, especially liked “Expecting an LLM to develop a new alloy worth trillions of dollars is as likely as expecting a lawyer to develop a new alloy worth trillions of dollars; just sayin’.”

  7. Siloman

    Re efficiency:
    Does anyone understand the difference between efficient and effective? Can something be both simultaneously? The sense I get is that it is assumed that if something is more efficient it is automatically more effective. It appears in practice – think human services, for example – that they are mutually exclusive. You can have either – one at the expense of the other – but not both.

  8. ChrisPacific

    I can’t stand most AI videos right now. They all look too much like a bad acid trip still (or at least, what I imagine a bad acid trip would be like). I avoid them when possible.

    Images suffer from the same problems to a degree, but it’s more controllable. I dabbled in AI image generation for a while but I got tired of it – it was a lot of hard work, mostly trial and error and hard to do systematically, and with the added problem of a disconnect in media (using words to describe image metadata). More than once I found myself thinking that developing some digital art skills would be a better and more productive use of my time.

    1. Nat Wilson Turner Post author

      Take it from a veteran: AI is more like a bad Robitussin DM trip (don’t ever ever take that stuff in the doses required to get a psychedelic effect, I had one of the worst nights of my life on that shit.)

  9. hazelbee

    Why is the “LLM can get brain rot” paper result surprising?

    I don’t think it is at all. I think this is a sensationalist take on the paper.

    if you read the method and results – they define two types of data from twitter/x, splitting posts by engagement and by semantic quality.

    for engagement – M1 – highly shared and liked posts are “junk”, less popular are the control
    for semantic quality – M2 – looks at the superficial or sensational nature of posts –

    Posts full of clickbait language (“WOW,” “LOOK,” “TODAY ONLY”) or exaggerated claims were tagged as junk, while fact-based, educational, or reasoned posts were chosen as control.

    so… guess what. An LLM trained on posts that contain a high diet of exclamation marks and capital letters shows brain rot – exactly as humans do.

    Is that really a surprise?

    Train a model on the highly shared, short form, popular, sensationalist, dopamine producing content from twitter, and… you get a poorly reasoning model that doesn’t do well in reasoning tasks!

    So take the opposite – if we take highly curated, high quality information of high semantic quality, do we get a high functioning reasoning model? One that outperforms someone fed a steady diet of twitter, instagram, and tiktok?

    Tis a mystery! I’m shocked, shocked at the result!

    The paper gives a hypothesis, a method for models to get worse. It does not say that is currently happening.

    That just means careful curation of the training data is required – as already said above by both Nat and raspberry jam in their comments.

    1. Nat Wilson Turner Post author

      the problem is all of the big AI companies are going for data at scale, all the data they can get no matter how it is generated. the more AI slop they make the more gets back into their feeds.

    2. tegnost

      That just means careful curation of the training data is required

      And exactly which unbiased curator do you have in mind?
      This is why I postulated earlier that opinion pieces/books with ridiculous explanations of official events serve to train the algorithmic prediction model with orthodox opinion.
      I have zero faith in the quality of said curators beyond free access to everything and make up your own mind. Curators are intended to guide orthodox thinking. Why should anyone know that there were side effects to the covid vaccine? Why should anyone know ukraine is a lost cause from day one feb 23 2022? Shouldn’t you just ask your digital assistant to tell you what to think?
      And who needs to read when you have an all seeing eye to guide you in proper behavior (h/t larry ellison when we’re watching you all the time you won’t misbehave so…not so different from the system set up today).
      Dystopia is here.
      History is also here, and the mf’er’s always go too far

      Ozymandias

      Percy Bysshe Shelley (1792–1822)
      I met a traveller from an antique land
      Who said: “Two vast and trunkless legs of stone
      Stand in the desert . . . Near them, on the sand,
      Half sunk, a shattered visage lies, whose frown,
      And wrinkled lip, and sneer of cold command,
      Tell that its sculptor well those passions read
      Which yet survive, stamped on these lifeless things,
      The hand that mocked them, and the heart that fed:
      And on the pedestal these words appear:
      ‘My name is Ozymandias, king of kings:
      Look on my works, ye Mighty, and despair!’
      Nothing beside remains. Round the decay
      Of that colossal wreck, boundless and bare
      The lone and level sands stretch far away.”

      1. Nat Wilson Turner Post author

        I don’t think you understood what I meant by curation. I’m talking about say a factory manager using LLMs to merge and organize hundreds of local data sets related to the factory operations, selected training manuals, etc.

        These LLMs are not being used for the general public, the people and organizations using them are also the ones carefully selecting the data.

        In my own work I’ve experimented with, say, taking my own database of 400+ music history podcasts I’ve done and nothing else and searching for and sorting quotes from the transcripts. I’m making AI slop but it’s all from my own work, so it’s pretty good.

        1. tegnost

          I appreciate your response as I was addressing hazelbee, but a localized llm is just an algorithm and those are commonly used.
          In the end it’s still larry ellison et. al. attempting to take over the world (I wish that was hyperbole) and you’ll have a self driving car and a robot dog that talks and makes you feel good, but you’ll be punished.
          Gotta keep the plebes in line.

          1. Nat Wilson Turner Post author

            sorry I’m experimenting with commenting from the back end for efficiency and can’t see all the threads in context.

      2. hazelbee

        Caveat – I am a curious learner on this rather than practitioner.

        Who said anything about unbiased?

        Pre-training data is already curated – deduplication, removal of toxic or harmful content, or PII, etc. It is “biased” – in that it is a cleansed, filtered, manipulated set of data. Whoever writes the algorithms to do that is writing in the bias.

        What the paper shows is that if you deliberately bias towards the kind of sensationalist slop from twitter in pre-training, then the model gets worse at long form and reasoning tasks, and that fine tuning or instruction tuning afterwards doesn’t rectify that.

        It’s an empirical result to back up the hypothesis.

        What Nat is referring to when he talks of taking his own database of 400 music history podcasts is different. Nat, I don’t actually know how you’ve implemented that, but it could be done by fine tuning a model, or it could be done using RAG – retrieval augmented generation – where you search for and include snippets as part of the context for a query. The point is that it is all done after the large, compute-intensive pretraining.
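        For the curious, here’s roughly what the RAG route looks like mechanically — a toy sketch with a made-up corpus and a stand-in embed() function (the random vectors won’t give meaningful matches; this only shows the plumbing, and it’s not how Nat actually built his): snippets get indexed once, the query is embedded at ask time, and the nearest snippets are pasted into the prompt rather than trained into the model’s weights.

        ```python
        import numpy as np

        def embed(text: str) -> np.ndarray:
            # Stand-in for a real embedding model (e.g. a sentence-transformer).
            # A deterministic pseudo-random unit vector: demonstrates the plumbing only;
            # the similarities it produces are NOT semantically meaningful.
            rng = np.random.default_rng(sum(text.encode()) % (2**32))
            v = rng.standard_normal(64)
            return v / np.linalg.norm(v)

        # One-time indexing of a curated corpus (hypothetical transcript snippets)
        corpus = [
            "Episode 12: the Stax house band and the Memphis sound.",
            "Episode 87: how the Wrecking Crew shaped 1960s pop sessions.",
            "Episode 203: sampling culture and early hip-hop production.",
        ]
        index = np.stack([embed(doc) for doc in corpus])

        def retrieve(query: str, k: int = 2) -> list:
            # Cosine similarity on unit vectors is just a dot product.
            scores = index @ embed(query)
            return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

        # Retrieved snippets are prepended to the prompt sent to the LLM -- no retraining involved.
        question = "Who played on the classic Memphis soul records?"
        context = "\n".join(retrieve(question))
        print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
        ```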

  10. RW

    “effortlessly consumable and wholly unsatisfying” – is it only me that sees the similarity with ultra-processed “food”?

    the blender in action

    1. ambrit

      Speaking for some of us who remember the free wheeling days of the 1960s and 1970s, I’d modify that to say: the blander inaction.
      Stay safe.

  11. Anna Sz

    I have the impression that even people skeptical about so-called “AI” miss the point when trying to guess why this thing is being hyped so much, pushed on us so hard. I think that it’s not just about another investment scam like with crypto, it’s certainly not about prospective AGI (the Big Tech moguls and their investors know very well that nobody is building any AGI), and it’s not just about prospective savings for companies when they fire some workers and force the rest to work 5 times harder on the pretext that now they have “assistance” from AI. I think that the most important purpose of the AI hype is to limit, or even eliminate, access to information for most of us.

    I’m afraid that within a few years Google (and other big search engines) will simply turn off organic search, so we will only get the “AI mode”. And if there are any alternative search engines operating by then, they will be blocked by major browsers or operating systems. Meanwhile, journalism and academia, or what is left of them, are being undermined by these tools even now. So by the time we lose organic web search, there won’t be much access to knowledge left elsewhere, like in the media or in university libraries. Nobody will read publications like Naked Capitalism anymore, because we won’t even be able to find them. And we won’t even know we should try to find them, because when we try to google answers to questions like “why is renting an apartment so expensive” or “why is the unemployment rate so high”, we will be getting AI answers based on neoliberal propaganda about the invisible hand of the market or something like that. We were heading in this direction for some time, but with Fake Intelligence, primarily chatbots, it will all get accelerated.

    And I think all these investors who feed the hype with billions of dollars have this ultimate goal in mind — total control over access to information. It’s worth the expense, regardless of when the “AI” finally starts making any concrete material profit at all. When people have no access to information, there will be no danger of protest, no prospect of any kind of mass support for possible changes in the system. Even now, after so many years of neoliberal propaganda and oppression, people still rebel, still stand up sometimes for human rights, workers’ rights, etc. But when the rich cut us off from the sources of information for good, we’ll stop bothering them with our little cries of discontent. And then they will really be able to do whatever they want without any obstacles or criticism.

    1. Nat Wilson Turner Post author

      Wow, that’s bleak but entirely plausible, and a complement to the surveillance and mind control applications as well.

  12. Safety First

    Late to the party, but.

    I cannot speak for the entirety of “tech world”, of course. However, a couple of things about the current LLM madness do stand out, from an insider’s point of view.

    One, I think the first time I heard of the concept of teaching computers how to code, so we could fire all of the expensive programmers, was back around 2016. And I am sure it had been around before then. At the time, mind, it was independent of what is now termed “AI research” (LLMs), rather, it was sort of like – can you load a bunch of templates and functions into a piece of software, so when you asked it to produce a “customer database” or a “website frontend”, it would auto-generate the generic solution, and then you could tweak it. Yes, sounds like the LLM-vibe-coding stuff, but the first discussions that I heard weren’t of a probabilistic model, but of literally having…pre-fabricated components that would just automatically form complete software packages. Like pre-fabricated rooms for those 1960s era Soviet buildings, those things were like stacking a bunch of cubes together and dropping a roof on top.

    The problem, of course, is that this is not remotely possible. Even if you want to build your own little piece of software that says “hello” to you in the morning, you’re going to tweak whatever code templates are out there to a greater or lesser extent. And for something like a customer database for a large retailer or something, just no. Too much technical and business customization, and while yes, coders will use common libraries of functions so they wouldn’t have to reinvent every wheel by themselves, a bunch of code will have been written from scratch. Often in such an incomprehensible manner that if you happen to fire the guy who had written it, there was almost no way to support it without rewriting whole sections.

    So this, today, it’s like we’re back in 2016, but with a lot more bells, whistles, and vacuous technobabble. But here is the main point – the reason, from the start, was that tech executives realized long ago their biggest cost center was their employees. How do we fire the most expensive employees? Shifting a bunch of stuff to India was one way, of course, but that only gets you so far.

    Which is why I firmly believe that tech firms would, today, rather drive this particular jalopy into a brick wall than admit that LLMs and vibe coding are sheer and absolute idiocy. In other words, a bunch of these tech CEOs and such aren’t just pushing AI because of FOMO, or because they are idiots – many of them are – but also, because “it is very difficult to get a man to understand something if his salary depends on him not understanding it”.

    Two. This is a bit of a sidetrack, but I distinctly remember the very late 2010s, just before COVID. The real “cutting edge” in Computer Science was proclaimed to be crypto, and so every, and I mean every CS department in every significant university around the world set about pushing out papers on how bitcoin technologies – formally, distributed databases using blockchains for integrity checks – could be applied to…any conceivable application. Oftentimes without even understanding the underlying technology, as in, hey, what if we wrote a college course registration program that worked exactly like Bitcoin (why?! for what conceivable reason?!!).

    And then the trend died when it was realized that there were very few good applications for the technical side of things – I’m not talking about crypto the currency, I mean the tech itself.

    Of course, the difference with LLMs is that important people put up >$100 billion in capex to fund it already. Returns on investments must be chased until the bubble not just pops, but outright explodes. But I am hopeful that at some point, after a really deep blow-up, we will “rediscover” that coding without AI is a thing, and that LLMs should have been an interesting technological diversion on a much longer path to developing anything remotely resembling AI, but not much else. Mind, I might not actually live long enough to see said rediscovery, but mayhap the Buddhists were right about the whole reincarnation thing.
