Coffee Break: OpenAI as The Money Pit

OpenAI’s finances are fatally flawed, revealing a company that poses no serious threat to any of the tech incumbents, including Google.

In the (excellent) comments to my post last Monday on Google’s inexplicable-to-me decision to risk their search monopoly by going all-in on LLM AI, a comment from Hickory exposed a key point I had failed to make: ChatGPT is not a threat to replace Google as the leader in search, because OpenAI loses money on every ChatGPT prompt and is trying to make it up in volume.

Let’s put aside the claims about LLMs having reasoning ability for now and focus on the bullish business case for ChatGPT as a killer app that threatens Google search. Cal Newport lays out that case:

The application that has… leaped ahead to become the most exciting and popular use of these tools is smart search. If you have a question, instead of turning to Google you can query a new version of ChatGPT or Claude. These models can search the web to gather information, but unlike a traditional search engine, they can also process the information they find and summarize for you only what you care about. Want the information presented in a particular format, like a spreadsheet or a chart? A high-end model like GPT-4o can do this for you as well, saving even more extra steps.

Smart search has become the first killer app of the generative AI era because, like any good killer app, it takes an activity most people already do all the time — typing search queries into web sites — and provides a substantially, almost magically better experience. This feels similar to electronic spreadsheets conquering paper ledger books or email immediately replacing voice mail and fax. I would estimate that around 90% of the examples I see online right now from people exclaiming over the potential of AI are people conducting smart searches.

This behavioral shift is appearing in the data. A recent survey conducted by Future found that 27% of US-based respondents had used AI tools such as ChatGPT instead of a traditional search engine. From an economic perspective, this shift matters. Earlier this month, the stock price for Alphabet, the parent company for Google, fell after an Apple executive revealed that Google searches through the Safari web browser had decreased over the previous two months, likely due to the increased use of AI tools.

Keep in mind, web search is a massive business, with Google earning over $175 billion from search ads in 2023 alone. In my opinion, becoming the new Google Search is likely the best bet for a company like OpenAI to achieve profitability…

That’s a seemingly reasonable claim, but it doesn’t hold up to a look at OpenAI’s business plans for ChatGPT.

Despite its seeming threat to Google search, OpenAI is the kind of self-defeating competitor no monopolist should fear, much less destroy its proven business model to compete with.

The New York Times put it pretty well last September; here is Morningstar’s summary of the Times’ reporting:

Financial documents reviewed by The New York Times reveal a company burning through cash at an alarming rate, raising questions about the sustainability of its current trajectory and the potential risks of prioritizing break-neck expansion over responsible AI development. Let’s discuss some of the key points from the New York Times report, which was published last week before the funding announcement:

— OpenAI’s monthly revenue hit $300 million in August 2024, a 1,700% increase since early 2023.

— The company expects to generate around $3.7 billion in annual sales this year and anticipates revenue ballooning to $11.6 billion in 2025.

— Despite rising revenues, OpenAI predicts a loss of about $5 billion this year due to high operational costs, the biggest of which is the cost of computing power it gets through its partnership with Microsoft.

— OpenAI predicts its revenue will hit $100 billion in 2029.

The Times report raises serious questions about OpenAI’s sustainability and realistic goals. The company’s monthly revenue growth from early 2023 to August 2024 is nothing short of explosive; however, the long-term projection of $100 billion in revenue by 2029 appears unrealistic. This figure would require sustaining an average annual growth rate of more than 90% for five consecutive years (93.3% to be precise, from an expected $3.7 billion in 2024 to $100 billion in 2029), a feat rarely achieved in the tech industry, especially for a company already operating at such a large scale. While impressive on paper, said projections may be masking underlying financial challenges and setting expectations that could be difficult, if not impossible, to meet.

Financial challenges become even more apparent given the current expense structure in relation to projected growth. It’s crucial to note that, even if it reaches the projected revenue targets, OpenAI is not merely failing to break even in 2024 – it’s losing significantly more money than it’s generating. This means that before OpenAI can even consider achieving its ambitious growth targets, it must first find a way to become profitable, or at the very least, break even.
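
To make the Morningstar arithmetic concrete, here is a quick back-of-the-envelope sketch (my own, using only the revenue figures quoted above) of the compound annual growth rate OpenAI’s projection implies:

```python
# Growth rate implied by OpenAI's projections, per the figures quoted above:
# roughly $3.7 billion expected in 2024, $100 billion targeted for 2029.
start_revenue = 3.7e9    # expected 2024 revenue, USD
target_revenue = 100e9   # projected 2029 revenue, USD
years = 5                # 2024 -> 2029

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (target_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.0%}")
# Prints 93%, consistent with Morningstar's "more than 90% for five consecutive years."
```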

Bryan McMahon pointed out the massive financial risk posed by the stock market bubble driven by faith in LLMs, or, as he calls it, Generative AI:

Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year, and its annual losses to swell to $11 billion by 2026. If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also blow a gaping hole in the public markets and cause an economy-wide meltdown.

But wait, it gets worse, per Ed Zitron:

It seems, from even a cursory glance, that OpenAI’s costs are increasing dramatically. The Information reported earlier in the year that OpenAI projects to spend $13 billion on compute with Microsoft alone in 2025, nearly tripling what it spent in total on compute in 2024 ($5 billion).

This suggests that OpenAI’s costs are skyrocketing, and that was before the launch of its new image generator which led to multiple complaints from Altman about a lack of available GPUs, leading to OpenAI’s CEO saying to expect “stuff to break” and delays in new products. Nevertheless, even if we assume OpenAI factored in the compute increases into its projections, it still expects to pay Microsoft $13 billion for compute this year.

This number, however, doesn’t include the $12.9 billion five-year-long compute deal signed with CoreWeave, a deal that was a result of Microsoft declining to pick up the option to buy said compute itself. Payments for this deal, according to The Information, start in October 2025, and assuming that it’s evenly paid (the terms of these contracts are generally secret, even in the case of public companies), this would still amount to roughly $2.38 billion a year.

I’ll let the Entertainment Strategy Guy nail the profitability coffin shut:

By all accounts, right now, OpenAI is losing money. Like literally billions of dollars. The energy costs of LLMs are enormous. If they’re pricing their services below market value, trying to gain market share, then we don’t know if AI can make money for the service it’s providing right now.

Two factors are driving these costs. First, the more memory an AI program uses (either the more data it stores as it thinks or the longer it thinks about a problem/answer), the more it costs the AI companies in compute. Second, the AI companies are racing to build next-generation models that will require even more training, which means higher costs. And the salaries for top AI engineers/scientists are also skyrocketing.

This is why I’m somewhat skeptical about the sorts of things that OpenAI is promising that AI can do (like become your universal assistant that remembers everything about you); it seems like an absolute memory boondoggle of monumental proportions. How much energy will it take for AI to analyze my whole life if it’s already too taxing for an LLM to remember how to format links properly?
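
To make the cost dynamic above concrete, here is a toy sketch of how per-query cost scales with how much a model reads and how long it “thinks.” The per-token prices are made-up placeholders, not any vendor’s actual rates:

```python
# Toy illustration of why longer "thinking" costs more: costs scale with the
# number of tokens processed, so bigger contexts and longer generated answers
# mean bigger bills. These per-token prices are hypothetical placeholders.
PRICE_PER_INPUT_TOKEN = 2.00 / 1_000_000   # hypothetical $ per input token
PRICE_PER_OUTPUT_TOKEN = 8.00 / 1_000_000  # hypothetical $ per output token

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one query under the hypothetical per-token prices above."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

# A terse answer vs. a long chain-of-thought answer to the same prompt:
print(f"Short answer:   ${query_cost(1_000, 200):.4f}")
print(f"Long reasoning: ${query_cost(1_000, 5_000):.4f}")
```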

But wait, there’s even more bad news that just dropped. OpenAI is now in a bidding war for talent with some of the stupidest money out there, Mark Zuckerberg of Meta:

…competition for top AI researchers is heating up in Silicon Valley. Zuckerberg has been particularly aggressive in his approach, offering $100 million signing bonuses to some OpenAI staffers, according to comments Altman made on a podcast with his brother, Jack Altman. Multiple sources at OpenAI with direct knowledge of the offers confirmed the number. The Meta CEO has also been personally reaching out to potential recruits, according to the Wall Street Journal. “Over the past month, Meta has been aggressively building out their new AI effort, and has repeatedly (and mostly unsuccessfully) tried to recruit some of our strongest talent with comp-focused packages,” Chen wrote on Slack.

And speaking of dumb money and OpenAI, SoftBank is involved, although maybe not as much as reported:

(In April) OpenAI closed “the largest private tech funding round in history,” where it “raised” an astonishing “$40 billion,” and the reason that I’ve put quotation marks around it is that OpenAI has only raised $10 billion of the $40 billion, with the rest arriving by “the end of the year.”

The remaining $30 billion — $20 billion of which will (allegedly) be provided by SoftBank — is partially contingent on OpenAI’s conversion from a non-profit to a for-profit by the end of 2025, and if it fails, SoftBank will only give OpenAI a further $20 billion. The round also valued OpenAI at $300 billion.

And things might not be going so well with SoftBank, because OpenAI is now talking to even dumber, and much more dangerous, money: Saudi Arabia.

And if you’ve ever paid attention to the actual words coming out of OpenAI CEO Sam Altman’s mouth, you’ll realize Altman attracting dumb money is just a case of birds of a feather flocking together.

Ed Zitron chronicles some of the stupid in his latest newsletter:

Here is but one of the trenchant insights from Sam Altman in his agonizing 37-minute-long podcast conversation with his brother Jack Altman from last week:

“I think there will be incredible other products. There will be crazy new social experiences. There will be, like, Google Docs style AI workflows that are just way more productive. You’ll start to see, you’ll have these virtual employees, but the thing that I think will be most impactful on that five to ten year timeframe is AI will actually discover new science.”

When asked why he believes AI will “discover new science,” Altman says that “I think we’ve cracked reasoning in the models,” adding that “we’ve a long way to go,” and that he “think[s] we know what to do,” adding that OpenAI’s o3 model “is already pretty smart,” and that he’s heard people say “wow, this is like a good PHD.”

That’s the entire answer! It’s complete nonsense! Sam Altman, the CEO of OpenAI, a company allegedly worth $300 billion to venture capitalists and SoftBank, kind of sounds like a huge idiot!

Ed also roasts Alphabet/Google’s Sundar Pichai:

Sundar Pichai, when asked one of Nilay Patel’s patented 100-word-plus-questions about Jony Ive and Sam Altman’s new (and likely heavily delayed) hardware startup:

I think AI is going to be bigger than the internet. There are going to be companies, products, and categories created that we aren’t aware of today. I think the future looks exciting. I think there’s a lot of opportunity to innovate around hardware form factors at this moment with this platform shift. I’m looking forward to seeing what they do. We are going to be doing a lot as well. I think it’s an exciting time to be a consumer, it’s an exciting time to be a developer. I’m looking forward to it.

The fuck are you on about, Sundar? Your answer to a question about whether you anticipate more competition is to say “yeah I think people are gonna make shit we haven’t come up with and uhh, hardware, can’t wait!”

I think Pichai is likely a little smarter than Altman, in the same way that Satya Nadella is a little smarter than Pichai, and in the same way that a golden retriever is smarter than a chihuahua. That said, none of these men are superintelligences, nor, when pressed, do they ever seem to have any actual answers.

If ChatGPT were such an existential threat to Google’s search monopoly that Alphabet’s only option was risking the empire to beat OpenAI in the LLM race, OpenAI would be profitable, or at least have a plausible path to profitability.

Sam Altman being a blithering idiot isn’t really the disadvantage it should be since he’s going up against competition like Mark Zuckerberg, Elon Musk, and Sundar Pichai.

This isn’t like Uber vs. the local taxi incumbents in the 2010s, where Uber, despite a business model that was never going to be profitable, was able to take over many markets. OpenAI does not have a huge cash advantage over Alphabet and never will.

Next week we’ll look at Meta, the absolute stupidest tech money around — a company that put Dana White of the Ultimate Fighting Championship on its board.

And because I promised I’d get around to this: regarding LLMs’ ability to “reason,” it was thoroughly debunked last October by six researchers from Apple in a paper called “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.”

When the paper was released, the senior author, Mehrdad Farajtabar, tweeted that “we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!”


68 comments

  1. XXYY

    One of the Ed Zitron pieces you cited:

    https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/

    while quite long, goes deep into the details of OpenAI’s present and future financing and how they expect to meet their financial commitments over the next two or three years. It left me in a funk for several days, not because I care a whit about OpenAI, but because the business press has now apparently become so delusional and dishonest that it’s hard to believe anything or even see how things can go on.

    After reading this, I have to conclude that OpenAI/Softbank is definitely, positively going to go out of business in the next two or three years. How much of the rest of the economy they take down with it is perhaps more worrisome.

    1. Kurtismayfield

      This is the same business press that covered Uber and Lyft breathlessly for 15 years as Uber burnt through money, because they wanted the businesses to succeed. They will do the same thing with OpenAI until it magically finds profit.

      1. Nat Wilson Turner Post author

        Kurtismayfield,
        yea but I don’t think the overall economy has the runway that Uber had in the 2010s. Gonna be some reality checks sooner rather than later.

    2. Nat Wilson Turner Post author

      Yea, the LLM bubble popping might be the end of the “Everything Bubble” that George Soros wrote a whole book about 20 years ago

  2. Sub-Boreal

    As a complement to this piece, I recommend this lengthy interview with technology journalist Karen Hao about her recent book “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI”.

    1. Henry Moon Pie

      Agreed. It was excellent.

      I’m not so sure that taking over search or even making money are the real goals of these people. We had a story in Links today about LLMs’ ability to ensnare the lonely or unhappy into “relationships” that seem quite real to the victims. That could prove a handy tool if you’re a tech billionaire trying to take over the world or even a government seeking to disable potential dissidents.

      One emerging dystopia is a world where tech billionaires use AIs to battle each other for world control. The LLMs would be useful for “converting” people to their cause, while another type of AI could control huge drone swarms that attack opponents’ data centers. Philip K. Dick could make a fun novel out of that possibility.

      1. Nat Wilson Turner Post author

        excellent point.
        I’m always missing the story because I’m busy arguing the rational explanation. Like trying to evaluate today’s stonk market based on business fundamentals or betting on professional wrestling based on the athleticism of the performers.
        LLM’s ability to persuade might indeed be the killer app. What dark magick!

      2. Michaelmas

        That is in fact VULCAN’S HAMMER from 1959, or damned close to it, IIRC. It’s almost universally agreed to be Dick’s worst novel, for completists only.

        1. Nat Wilson Turner Post author

          but Dick’s visionary abilities make even his least essential work of interest. It certainly wasn’t his mastery of prose.

    2. Alice X

      Apologies, I read the post before I saw your comment which I duplicated below. Karen Hao has substantial insights into the broader questions.

  3. Mikel

    “Second, the AI companies are racing to build next-generation models that will require even more training, which means higher costs.”

    “Training”
    Data mining that amounts to wealth transfers and even more opportunities for surveillance and control of the flow of information are going to be the main outcome.

      1. hunkerdown

        There is a lot to be said for local inference. “Fast” and “easy” are not usually among them. I will say that the installation process is slowly getting easier for the suitably equipped general user, month by month.

        Consider also the possibility of OpenAI or other chatbot providers being a regulatory arbitrage bid with broader ambitions, sort of like Uber. The anti-AI partisans, supplied with scary developments by researchers, are helping create the conditions for the enclosure of local private computation, with the economic and surveillance effects you’d imagine.

        The “Singapore Consensus on Global AI Safety Research Priorities” only hints at this eventuality. It does speak plainly to the general desire for surveillance among the participants, and utters some other uncomfortably guild-like noises.

        1. Nat Wilson Turner Post author

          yikes. It’s easy to dismiss some of the “OMG AI is totes going to take over the world” crap as the opposite side of the coin of the hype but you’re right.
          There are a lot of very scary possibilities being worked toward even as the business idiots defraud believers for billions. Of course that’s scary too as they’re risking yet another financial collapse.
          I love dooming with this community! Everyone here is smarter than me and I’m learning a ton in these comment sections.
          Such a pleasant change from the various ridiculous and highly toxic electorates, “targets” and fandoms I’ve been writing for these last 28 years.

  4. Silo Man

    To see where all this AI stuff may be headed, a viewing of “Forbidden Planet” might be predictive. Their version of it ruined its creators, the Krell.

    1. Nat Wilson Turner Post author

      I haven’t seen that since I was a kid and I didn’t have a clue what it was about. will give it a shot now

    2. Henry Moon Pie

      Seconded. Not only is it a beautiful movie with stunning hand-painted sets, credible spacecraft and a great robot, but also the issues it addresses are more relevant now than when the movie was made. Human hubris and the human subconscious make for a dangerous pairing. Transhumanist dreams become nightmares.

      And Anne Francis is smokin’ hot.

  5. Anonymous 2

    As a complete ignoramus in these matters, I have long assumed the most likely financiers of AI are going to be the US and Chinese militaries. I imagine no self-respecting general is going to want to fight a war of the future without feeling he has the superior AI. Am I missing something?

    1. Nat Wilson Turner Post author

      Palantir has certainly used AI as a “killer app” in Gaza. But I don’t know much about what type of AI — if it’s machine learning or LLMs or what. I suspect it’s some of all types.

    2. Raymond Carter

      No intelligent general would depend on AI because LLM based AIs are always very convincing but one out of three times they are flat out wrong. These mistakes are sometimes called “hallucinations.”

      1. Nat Wilson Turner Post author

        Palantir and the IDF don’t care about hallucinations or wrong data, they don’t view their targets as human so they don’t care if they get a few extra victims while they’re targeting a doctor or journalist’s family for a synchronized multi-site assassination.
        for genocidal purposes hallucinations are a feature, not a bug

      2. Christopher Fay

        “one out of three times they are flat out wrong.” Seems like better results than those delivered by our present intelligence agencies (the surveillance and control bureaucracies).

      3. david

        So we can guarantee about 100% of generals will buy into it. Especially once the hint of post-military-career board positions is offered.

    3. Henry Moon Pie

      A young man our family knew when he was a high school student in Ljubljana desperate to come to America and work on AI now, nearly 20 years later, has a Silicon Valley start-up working on voice recognition and mimicry. Wonder who’s interested in that.

    4. fjallstrom

      I think you are right for two reasons:

      1. Drones with target matching that can take over in case of broken communication would make one main anti-drone defence on the battlefield in Ukraine much less useful. (Machine learning, not a chatbot)

      2. Accountability sink. When the need comes to massacre civilians or POWs, it’s useful if the machine kills and can be blamed. Also it saves on PTSD treatment for the soldiers. See Gaza.

  6. scott s.

    Don’t know the business aspects, but ISTM that building the models and running the models are two different things. I have a couple of open source Hugging Face-type models here locally that I run in the open source Python environment on Nvidia CUDA with 6 GB of VRAM, though you need 12 or even 18 GB to really be productive.
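
    For anyone curious what that kind of local setup looks like, here is a minimal sketch using the Hugging Face transformers library. The model name is just an illustrative example; any small open-weight instruct model that fits in the available VRAM in half precision works the same way:

    ```python
    # Minimal local-inference sketch: load a small open-weight model onto a
    # consumer Nvidia GPU and generate a reply. Requires torch and transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-1.5B-Instruct"  # illustrative; swap in whatever small model you have

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,  # half precision to fit in limited VRAM
    ).to("cuda")                    # place the model on the local GPU

    prompt = "Write a one-line Python function that reverses a string."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```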

    As far as online, I’ve found I’m using Google-AI from their search where I might have used Stack Overflow in the past for simple programming solutions. Don’t see any monetization opportunities for Google from my usage.

    On a large open source project I work on, one dev has an MS Copilot account and runs all the commits through it prior to merging. I don’t see any earth-shaking insights from it but it does in the main offer useful ideas.

    1. Jason Boxman

      Oddly, I’ve found for basic Python, OpenAI’s Codex and its GitHub integration work surprisingly well. It’s still clearly a very early product and you can’t for example independently push to a PR branch and get Codex to update its working branch, but whatever. I was only using it for pretty straightforward personal things.

  7. Carolinian

    Some of us are so old we can remember twenty-five years ago when the web was considered a libertarian alternative to traditional media and the open source movement an ultimate expression of this. I still regard it that way, and about half my searches go to information aggregator Wikipedia, which is produced by humans if not always trustworthy humans. But whatever its flaws, I’d say Wiki is a lot more trustworthy than a money-making robot run by investor sharks. If AI isn’t about surveillance then what else could this impractical technology possibly be about? True, from the above, it sounds like it is merely about suckering stock buyers.

    Balzac said behind every great fortune is a great crime and then W.C. Fields said you can’t cheat an honest man. Welcome to our world.

    1. Nat Wilson Turner Post author

      Wiki (and Reddit) has been heavily manipulated by the worst actors on Earth for decades now.

      See Who’s Editing Wikipedia – Diebold, the CIA, a Campaign — Wired 2007

      There’s also the rumor that Ghislaine Maxwell was a huge moderator on Reddit which has never been confirmed, but this Vice article “debunking” it has many of the tropes of other “debunkings” (see Vox.com’s 2020 covid coverage, the suppression of the NY Post’s 2020 election eve Hunter Biden laptop story or MSNBC on Russiagate to this day).
      Use of terms like “incoherent and evidence-free conspiracy theory”, “The ‘evidence’ shared by conspiracy theorists”….sets off alerts in the part of my brain that triggers PTSD when the phrase “unprovoked invasion of Ukraine” is encountered.

      Counterpoint, the theory was pushed hard by this Utah MAGA Senate candidate who at a glance is walking the anti-Zionist/anti-Semite line a little too closely for my comfort.

      1. Carolinian

        Like others I don’t get much out of Reddit.

        And of course we all know about how Wales is not to be trusted and how Wikipedia is manipulated by the spooks. But also of course there are countless topics on Wikipedia that have nothing to do with the spooks.

        Everything is manipulated now to some degree but some formats are less susceptible than others. Walter Cronkite once said he picked all the stories for the evening news from that day’s NY Times. Now the Times is a joke

        https://www.moonofalabama.org/2025/06/nyt-guessing-about-iran-with-experts-who-lack-knowledge-of-it.html

        and we webians have to become our own gatekeepers and editors rather than rely on that dubious “first draft of history” (which I too once read avidly). Computers are a tool. You control them or they control you.

        1. Nat Wilson Turner Post author

          my difficulty is dealing with normies who don’t become their own gatekeepers. watching the democrats go blue maga in the 2016-2024 era was pretty awful since that was my closest family and friends
          my GOP family and friends were always at a bit of an arm’s remove, cousins not siblings

      2. Tempestteacup

        I think you’ll find the full-fat version is “Vladimir Putin’s brutal, unprovoked invasion of Ukraine”. Extra points for later references to Putin’s Chef, Putin’s Banker, Putin’s Thinker and other variations on his imagined menagerie of stock villain characters.

  8. Alice X

    Karen Hao, in an interview with Novara Media UK offers substantial information on the AI situation.

    Silicon Valley Insider Exposes Cult Like AI Companies

    As Artificial Intelligence begins to fundamentally alter the way normal people live their lives, it’s often talked about in terms of boom and doom, which makes a nuanced examination difficult. The problem with AI is that the understanding required to scrutinise the technology is rare and even if one does have that understanding, the ability to clearly communicate it is even rarer.

    This week’s guest has been both a worker in, and reporter on the tech industry and is uniquely poised to present a nuanced and informed analysis of this rapidly expanding industry.

    In her new book, ‘Empire of AI’, Karen Hao debunks myths that surround AI and exposes us to the full breadth of this global industry, from its cult-leader-like CEOs to the workers that power the technology. She sat down with Aaron to talk about Sam Altman’s origin story, the traumatising nature of content moderation work and the striking similarities between OpenAI and the British East India Company.

  9. Random

    IMO there is a case for LLMs becoming profitable but it involves injecting ads/propaganda directly into the answers.
    I’m sure companies and governments would pay large amounts of money to make sure that a model is biased in a way that directly benefits them.
    Of course, it requires getting to the point where people don’t have any alternatives to go back to.

    1. Nat Wilson Turner Post author

      This makes Google’s destruction of the open web make sense on multiple levels

  10. Deschain

    I mean it’s easy to beat Google search these days because it’s been completely enshittified. I use Kagi – paying $100/year but I will never get a single ad. It’s like Google when Google was actually good. Well worth it.

    1. Jason Boxman

      I’m doing the same, although on the monthly plan and never switched to annual.

      Worth noting unlike Neeva, Kagi doesn’t spider itself and just uses feeds from Google, Microsoft Bing, and a few others I think. Nonetheless, not being the product is worth paying $10 a month for.

  11. Matthew

    The specific survival of OpenAI isn’t of any particular interest to me personally, but DeepThink’s achievement of a 30x improvement in cost for training an effective model showed that the link between performance improvement and cost for such a new technology is not something that should be projected forward with any confidence. Whether that will save OpenAI itself, again, to me, who cares, but it seems likely to be relevant to the future ubiquity of the technology itself.

        1. Nat Wilson Turner Post author

          Just clarifying because they’re very different animals. Very hard to tell how much of the claims made by DeepSeek are true. There is a true fog of propaganda war over that topic at the moment.
          It seems likely they did achieve significant efficiencies. It also seems entirely plausible that they did take advantage of OpenAI and others’ training data so it might not be replicable.

    1. Hickory

      This was my first thought. Just because ChatGPT is unprofitable and expensive to operate now doesn’t mean that’s baked into LLMs or “AI”. Anyone who wants to evaluate whether it is wise for Google to adopt LLMs in search needs to look at best-in-class in terms of cost, like DeepSeek.

      Someone recently said the cost of AI converges to the cost of electricity over time. DeepSeek’s improvements likely will be adopted by Google eventually. What does the profitability look like at that point?

      I agree with many of the points in this article, but pointing out ChatGPT’s unprofitability isn’t convincing. It’s very early stages and Chinese firms clearly show there’s massive room for cost savings. I saw one estimate that DeepSeek query costs are 3% of ChatGPT’s, so I think Google’s decision should be judged with that future efficiency rate.

      Also, whether or not ChatGPT is a threat to Google isn’t the issue. It’s whether LLMs and related AI tech are a threat, especially with 97% reduced query costs, and I think the answer is obviously yes. Natural language queries and responses are far more intuitive for most users and use cases than inputting a phrase and browsing links. Compare with asking “where is the nearest Home Depot” and being given the answer.

      1. garbagecat

        It would be good to be skeptical of the reported cost advantages of Deepseek. I looked at it for my applications, which relate to cyber security for industrial control systems, and I was astounded by its ability to “reason.” (I’m going to sidestep here why I put quotes around “reason,” except to say the capability is valuable, even if it is not what humans do.)

        Back in February, my firm expectation was that every other engine would acquire the same reasoning capabilities within weeks or months. The fact that they have come nowhere close is suggestive that the cost advantages of Deepseek are not what they were claimed to be.

  12. Zephyrum

    A chunk of LLM API requests get redirected from places where money is real to OpenAI where the charges are on the order of 1/6 the price. Setting up your own compute to run your own models locally is pretty pricey, so if OpenAI craters it’s going to cause some pain in certain products currently subsidized by the magic of Silicon Valley.

    1. Nat Wilson Turner Post author

      built in lack of resilience to the industry. OpenAI collapsing could be a singularity that pulls a lot of peripheral companies down into a black hole with it.
      “oops not the singularity we promised, does a minsky moment count?”

  13. The Rev Kev

    Great post this and reading it, I can see a future direction that they will go in. That they will want the user to talk to the AI instead of just hammering on keys. That, come to think of it, was featured in the 2013 film “Her” which had a computer AI-

    https://en.wikipedia.org/wiki/Her_(2013_film)#Plot

    They say that they want more training sets to make their AI better but where will those come from? They have ransacked the internet now and slurped it all up. They haven’t even set up standards for AI yet and we are still in the bananas stage-

    https://www.theregister.com/2025/06/27/bofh_2025_episode_12/

    1. Nat Wilson Turner Post author

      that’s one of the reasons Gary Marcus has known all along that the mass compute/big money/big scale approach wouldn’t work.
      they’re out of content to train on and the more the LLM generated content gets mixed in the worse the output gets

    2. david

      There is lots of content left to train on. But it isn’t openly accessible stuff. Think of all the NHS data Palantir will now get its hands on. All the data that various governments store. All the data corporations store. Hell, even think about all the messages that get sent in WhatsApp and similar every day. All of that data is out there and not currently accessible. But it will be.

      1. Nat Wilson Turner Post author

        and private phone conversations! I suspect Palantir, Meta, Alphabet, Amazon and others are grabbing “training data” that would shock and appall even the most jaded of us

  14. TimN

    Oh, the Dems love “labor,” because the honchos give them (dues) money to run their campaigns during which they mouth platitudes about “working families.” But as far as supporting actual rank and file workers, not at all. You see these union politicians like Sean Fain (yes, he is for all intents and purposes a Dem Party neoliberal politician). The Dems co-opted the unions, and deal almost exclusively with high- and medium-level hacks like Fain, who think like they do.

    1. Nat Wilson Turner Post author

      Please do not leave off topic comments. The place for this is Links. This is called thread-jacking and is a violation of our written site Policies.

    1. Nat Wilson Turner Post author

      this persuading people to believe crazy nonsense might be the killer app I failed to see when I did the initial post. very scary but as someone who’s spent decades in the “get attention > attempt to persuade” field I can see where that’s a high margin business provided they can cut way way down on the compute costs and energy burn.
      but I’m not sure the lack of need for high quality content relates to any technical cost savings. Anyone know?

        1. Nat Wilson Turner Post author

          Well per MIT Technology Review “AI can do a better job of persuading people than we do: OpenAI’s GPT-4 is much better at getting people to accept its point of view during an argument than humans are—but there’s a catch.
          There is a huge market to be able to control exactly what crazy nonsense specific individuals believe. This could replace advertising completely — provided they can dramatically reduce cost and energy use. Big provisions.

  15. Raymond Carter

    Re: the tweet that the behavior of LLMs is best explained as “sophisticated pattern matching.”

    I agree with that, but I think a better and more incisive way of explaining LLMs is to simply say that they are all about mimicry. They are consummate con artists. They can sound just like a doctor without knowing medicine. They can sound just like a PhD without even knowing how to reason or think.

    Is that some huge human achievement, that we have developed a computer program that can mimic doctors and PhDs without actually knowing anything? To me, the answer is no. It’s not much of an achievement at all to develop automated con artists. Why do we need automated con artists that sound convincing but know nothing and have no value other than how convincing they sound?

    1. david

      Several years back I overheard a senior engineer explaining to a junior one that “the key to being a consultant isn’t knowing more than the client, it’s bullshitting them that you know more”. Perhaps AI is just a reflection of where society has already gone.

  16. eg

    I don’t pay any attention to the AI “features” in Google search, though that’s getting more and more difficult over time, because I don’t trust it not to make stuff up. I’m surprised more people aren’t likewise leery of it.

    1. Nat Wilson Turner Post author

      word is getting out but repetition is the most powerful form of persuasion according to some and the whole tech industry is pushing this stuff on us over and over and over and over

  17. AJB

    “The energy costs of LLMs are enormous.”

    Amongst all their BS and hype, I can’t look past the energy aspect. Where will the affordable energy come from to power all of the data centres as they store increasing amounts of data? The more data AI has to interrogate, the more energy it consumes to ‘think’ (it just consumes an increasingly larger amount of energy as it learns and grows).

    The more I listen to energy experts discuss energy demand and affordable clean energy (is anything really clean considering the resources we dig up to build solar/wind capture infrastructure) the more I think AI will be limited by (clean/cleanish) energy availability.

