OpenAI's finances are fatally flawed, revealing a company that poses no serious threat to any of the tech incumbents, including Google.
In the (excellent) comments on my post last Monday about Google's inexplicable-to-me decision to risk its search monopoly by going all-in on LLM AI, a comment from Hickory exposed a key point I had failed to make: ChatGPT is no threat to replace Google as the leader in search, because OpenAI loses money on every ChatGPT prompt and is trying to make it up in volume.
Let’s put aside the claims about LLMs having reasoning ability for now and focus on the bullish business case for ChatGPT as a killer app that threatens Google search. Cal Newport lays out that case:
The application that has… leaped ahead to become the most exciting and popular use of these tools is smart search. If you have a question, instead of turning to Google you can query a new version of ChatGPT or Claude. These models can search the web to gather information, but unlike a traditional search engine, they can also process the information they find and summarize for you only what you care about. Want the information presented in a particular format, like a spreadsheet or a chart? A high-end model like GPT-4o can do this for you as well, saving even more extra steps.
Smart search has become the first killer app of the generative AI era because, like any good killer app, it takes an activity most people already do all the time — typing search queries into web sites — and provides a substantially, almost magically better experience. This feels similar to electronic spreadsheets conquering paper ledger books or email immediately replacing voice mail and fax. I would estimate that around 90% of the examples I see online right now from people exclaiming over the potential of AI are people conducting smart searches.
This behavioral shift is appearing in the data. A recent survey conducted by Future found that 27% of US-based respondents had used AI tools such as ChatGPT instead of a traditional search engine. From an economic perspective, this shift matters. Earlier this month, the stock price for Alphabet, the parent company for Google, fell after an Apple executive revealed that Google searches through the Safari web browser had decreased over the previous two months, likely due to the increased use of AI tools.
Keep in mind, web search is a massive business, with Google earning over $175 billion from search ads in 2023 alone. In my opinion, becoming the new Google Search is likely the best bet for a company like OpenAI to achieve profitability…
That's a seemingly reasonable claim, but it doesn't hold up to a look at OpenAI's business plans for ChatGPT.
Despite its seeming threat to Google search, OpenAI is the kind of self-defeating competitor no monopolist should fear, much less destroy a proven business model to compete with.
The New York Times put it pretty well last September.
Morningstar summed up the NYT’s reporting well:
Financial documents reviewed by The New York Times reveal a company burning through cash at an alarming rate, raising questions about the sustainability of its current trajectory and the potential risks of prioritizing break-neck expansion over responsible AI development. Let’s discuss some of the key points from the New York Times report, which was published last week before the funding announcement:
— OpenAI’s monthly revenue hit $300 million in August 2024, a 1,700% increase since early 2023.
— The company expects to generate around $3.7 billion in annual sales this year and anticipates revenue ballooning to $11.6 billion in 2025.
— Despite rising revenues, OpenAI predicts a loss of about $5 billion this year due to high operational costs, the biggest of which is the cost of computing power it gets through its partnership with Microsoft.
— OpenAI predicts its revenue will hit $100 billion in 2029.
The Times report raises serious questions about OpenAI’s sustainability and realistic goals. The company’s monthly revenue growth from early 2023 to August 2024 is nothing short of explosive; however, the long-term projection of $100 billion in revenue by 2029 appears unrealistic. This figure would require sustaining an average annual growth rate of more than 90% for five consecutive years (93.3% to be precise, from an expected $3.7 billion in 2024 to $100 billion in 2029), a feat rarely achieved in the tech industry, especially for a company already operating at such a large scale. While impressive on paper, said projections may be masking underlying financial challenges and setting expectations that could be difficult, if not impossible, to meet.
Financial challenges become even more apparent given the current expense structure in relation to projected growth. It’s crucial to note that, even if it reaches the projected revenue targets, OpenAI is not merely failing to break even in 2024 – it’s losing significantly more money than it’s generating. This means that before OpenAI can even consider achieving its ambitious growth targets, it must first find a way to become profitable, or at the very least, break even.
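A quick back-of-the-envelope check of that 93.3% figure (my own arithmetic, not Morningstar's or OpenAI's):

```python
# What constant annual growth rate turns $3.7B (2024) into $100B (2029)?
start, end, years = 3.7, 100.0, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # -> 93.3%

# Compound forward to see what the projection demands year by year.
revenue = start
for year in range(2025, 2030):
    revenue *= 1 + cagr
    print(f"{year}: ${revenue:,.1f}B")
```

That's five straight near-doublings from a multi-billion-dollar base, by a company that loses money on every prompt.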
Bryan McMahon pointed out the massive financial risk posed by the stock market bubble driven by faith in LLMs, or, as he calls it, generative AI:
Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year, and its annual losses to swell to $11 billion by 2026. If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also blow a gaping hole in the public markets and cause an economy-wide meltdown.
But wait, it gets worse, per Ed Zitron:
It seems, from even a cursory glance, that OpenAI’s costs are increasing dramatically. The Information reported earlier in the year that OpenAI projects to spend $13 billion on compute with Microsoft alone in 2025, nearly tripling what it spent in total on compute in 2024 ($5 billion).
This suggests that OpenAI’s costs are skyrocketing, and that was before the launch of its new image generator which led to multiple complaints from Altman about a lack of available GPUs, leading to OpenAI’s CEO saying to expect “stuff to break” and delays in new products. Nevertheless, even if we assume OpenAI factored in the compute increases into its projections, it still expects to pay Microsoft $13 billion for compute this year.
This number, however, doesn’t include the $12.9 billion five-year-long compute deal signed with CoreWeave, a deal that was a result of Microsoft declining to pick up the option to buy said compute itself. Payments for this deal, according to The Information, start in October 2025, and assuming that it’s evenly paid (the terms of these contracts are generally secret, even in the case of public companies), this would still amount to roughly $2.38 billion a year.
I’ll let the Entertainment Strategy Guy nail the profitability coffin shut:
By all accounts, right now, OpenAI is losing money. Like literally billions of dollars. The energy costs of LLMs are enormous. If they’re pricing their services below market value, trying to gain market share, then we don’t know if AI can make money for the service it’s providing right now.
Two factors are driving these costs. First, the more memory an AI program uses (either the more data it stores as it thinks or the longer it thinks about a problem/answer), the more it costs the AI companies in compute. Second, the AI companies are racing to build next-generation models that will require even more training, which means higher costs. And the salaries for top AI engineers/scientists are also skyrocketing.
This is why I’m somewhat skeptical about the sorts of things that OpenAI is promising that AI can do (like become your universal assistant that remembers everything about you); it seems like an absolute memory boondoggle of monumental proportions. How much energy will it take for AI to analyze my whole life if it’s already too taxing for an LLM to remember how to format links properly?
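To put that "memory boondoggle" in concrete terms, here's a back-of-the-envelope estimate of the KV cache, the working memory a transformer fills as its context grows. The dimensions below are illustrative (roughly GPT-3-class); OpenAI doesn't disclose its production configurations:

```python
# Rough KV-cache sizing for a dense transformer with standard attention.
# All dimensions are illustrative, not those of any disclosed model.
layers = 96
kv_heads = 96          # no grouped-query attention in this sketch
head_dim = 128
bytes_per_value = 2    # fp16/bf16

def kv_cache_gb(context_tokens: int) -> float:
    # Keys and values (the 2x), per layer, per head, per dimension, per token.
    return 2 * layers * kv_heads * head_dim * bytes_per_value * context_tokens / 1e9

for tokens in (8_000, 128_000, 1_000_000):
    print(f"{tokens:>9,} tokens -> {kv_cache_gb(tokens):>8,.0f} GB of KV cache")
```

Real deployments shrink this with grouped-query attention and other tricks, but the direction is the point: an assistant that "remembers everything about you" pays for every remembered token on every request.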
But wait, there’s even more bad news that just dropped. OpenAI is now in a bidding war for talent with some of the stupidest money out there, Mark Zuckerberg of Meta:
…competition for top AI researchers is heating up in Silicon Valley. Zuckerberg has been particularly aggressive in his approach, offering $100 million signing bonuses to some OpenAI staffers, according to comments Altman made on a podcast with his brother, Jack Altman. Multiple sources at OpenAI with direct knowledge of the offers confirmed the number. The Meta CEO has also been personally reaching out to potential recruits, according to the Wall Street Journal. “Over the past month, Meta has been aggressively building out their new AI effort, and has repeatedly (and mostly unsuccessfully) tried to recruit some of our strongest talent with comp-focused packages,” Chen wrote on Slack.
And speaking of dumb money and OpenAI, SoftBank is involved, although maybe not as much as reported:
(In April) OpenAI closed “the largest private tech funding round in history,” where it “raised” an astonishing “$40 billion,” and the reason that I’ve put quotation marks around it is that OpenAI has only raised $10 billion of the $40 billion, with the rest arriving by “the end of the year.”
The remaining $30 billion — $20 billion of which will (allegedly) be provided by SoftBank — is partially contingent on OpenAI’s conversion from a non-profit to a for-profit by the end of 2025, and if it fails, SoftBank will only give OpenAI a further $20 billion. The round also valued OpenAI at $300 billion.
And things might not be going so well with SoftBank, because OpenAI is now talking to even dumber, and much more dangerous, money: Saudi Arabia.
And if you’ve ever paid attention to the actual words coming out of OpenAI CEO Sam Altman’s mouth, you’ll realize Altman attracting dumb money is just a case of birds of a feather flocking together.
Ed Zitron chronicles some of the stupid in his latest newsletter:
Here is but one of the trenchant insights from Sam Altman in his agonizing 37-minute-long podcast conversation with his brother Jack Altman from last week:
“I think there will be incredible other products. There will be crazy new social experiences. There will be, like, Google Docs style AI workflows that are just way more productive. You’ll start to see, you’ll have these virtual employees, but the thing that I think will be most impactful on that five to ten year timeframe is AI will actually discover new science.”
When asked why he believes AI will "discover new science," Altman says that "I think we've cracked reasoning in the models," adding that "we've a long way to go," and that he "think[s] we know what to do," adding that OpenAI's o3 model "is already pretty smart," and that he's heard people say "wow, this is like a good PhD."
That’s the entire answer! It’s complete nonsense! Sam Altman, the CEO of OpenAI, a company allegedly worth $300 billion to venture capitalists and SoftBank, kind of sounds like a huge idiot!
Ed also roasts Alphabet/Google’s Sundar Pichai:
Sundar Pichai, when asked one of Nilay Patel's patented 100-word-plus questions about Jony Ive and Sam Altman's new (and likely heavily delayed) hardware startup:
I think AI is going to be bigger than the internet. There are going to be companies, products, and categories created that we aren’t aware of today. I think the future looks exciting. I think there’s a lot of opportunity to innovate around hardware form factors at this moment with this platform shift. I’m looking forward to seeing what they do. We are going to be doing a lot as well. I think it’s an exciting time to be a consumer, it’s an exciting time to be a developer. I’m looking forward to it.
The fuck are you on about, Sundar? Your answer to a question about whether you anticipate more competition is to say “yeah I think people are gonna make shit we haven’t come up with and uhh, hardware, can’t wait!”
I think Pichai is likely a little smarter than Altman, in the same way that Satya Nadella is a little smarter than Pichai, and in the same way that a golden retriever is smarter than a chihuahua. That said, none of these men are superintelligences, nor, when pressed, do they ever seem to have any actual answers.
If ChatGPT were such an existential threat to Google's search monopoly that Alphabet's only option was risking the empire to beat OpenAI in the LLM race, OpenAI would be profitable, or at least have a plausible path to profitability.
Sam Altman being a blithering idiot isn’t really the disadvantage it should be since he’s going up against competition like Mark Zuckerberg, Elon Musk, and Sundar Pichai.
This isn't like Uber vs. the local taxi incumbents in the 2010s, where, despite a business model that was never going to be profitable, Uber was able to take over many markets: OpenAI does not have a huge cash advantage over Alphabet and never will.
Next week we'll look at Meta, the absolute stupidest tech money around — a company that put Dana White of the Ultimate Fighting Championship on its board.
And because I promised I'd get around to this: LLMs' ability to "reason" was thoroughly debunked last October by six researchers from Apple in a paper called "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models."
When the paper was released, the senior author, Mehrdad Farajtabar, tweeted that "we found no evidence of formal reasoning in language models…. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!"
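To make the paper's method concrete: GSM-Symbolic turns grade-school math problems into templates, re-samples the names and numbers, and checks whether accuracy survives. A toy sketch of the idea (my own template, not the paper's actual benchmark):

```python
# Toy version of the GSM-Symbolic perturbation: same underlying problem,
# different surface details. A system that actually reasons should not care.
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(seed: int) -> tuple[str, int]:
    rng = random.Random(seed)
    name = rng.choice(["Sophie", "Liam", "Priya", "Mateo"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, a=a, b=b), a + b  # (question, ground truth)

for seed in range(3):
    question, answer = make_variant(seed)
    print(question, "->", answer)
```

Re-rolling surface details like these was enough to move model accuracy by around 10%, which is what you'd expect from pattern matching, not from reasoning.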
One of the Ed Zitron pieces you cited:
https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/
while quite long, goes deep into the details of OpenAI’s present and future financing and how they expect to meet their financial commitments over the next two or three years. It left me in a funk for several days, not because I care a whit about OpenAI, but because the business press has now apparently become so delusional and dishonest that it’s hard to believe anything or even see how things can go on.
After reading this, I have to conclude that OpenAI/SoftBank is definitely, positively going to go out of business in the next two or three years. How much of the rest of the economy they take down with them is perhaps more worrisome.
This is the same business press that covered Uber and Lyft breathlessly for 15 years as Uber burnt through money, because they wanted the businesses to succeed. They will do the same thing with OpenAI until it magically finds profit.
Kurtismayfield,
yea but I don’t think the overall economy has the runway that Uber had in the 2010s. Gonna be some reality checks sooner rather than later.
Yea, the LLM bubble popping might be the end of the “Everything Bubble” that George Soros wrote a whole book about 20 years ago
As a complement to this piece, I recommend this lengthy interview with technology journalist Karen Hao about her recent book “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI”.
thanks! I will check it out.
Agreed. It was excellent.
I'm not so sure that taking over search or even making money are the real goals of these people. We had a story in Links today about LLMs' ability to ensnare the lonely or unhappy into "relationships" that seem quite real to the victims. That could prove a handy tool if you're a tech billionaire trying to take over the world or even a government seeking to disable potential dissidents.
One emerging dystopia is a world where tech billionaires use AIs to battle each other for world control. The LLMs would be useful for “converting” people to their cause, while another type of AI could control huge drone swarms that attack opponents’ data centers. Philip K. Dick could make a fun novel out of that possibility.
excellent point.
I’m always missing the story because I’m busy arguing the rational explanation. Like trying to evaluate today’s stonk market based on business fundamentals or betting on professional wrestling based on the athleticism of the performers.
LLM’s ability to persuade might indeed be the killer app. What dark magick!
That is in fact VULCAN'S HAMMER from 1959, or damned close to it, IIRC. It's almost universally agreed to be Dick's worst novel, for completists only.
but Dick’s visionary abilities make even his least essential work of interest. It certainly wasn’t his mastery of prose.
Apologies, I read the post before I saw your comment which I duplicated below. Karen Hao has substantial insights into the broader questions.
“Second, the AI companies are racing to build next-generation models that will require even more training, which means higher costs.”
“Training”
Data mining that amounts to wealth transfers and even more opportunities for surveillance and control of the flow of information are going to be the main outcome.
Yea, Gary Marcus is saying surveillance is where OpenAI is headed; I'll get into that in a future post, although even there Palantir is way ahead of them and is infamously using AI widely in Gaza and everywhere else.
There is a lot to be said for local inference. “Fast” and “easy” are not usually among them. I will say that the installation process is slowly getting easier for the suitably equipped general user, month by month.
Consider also the possibility of OpenAI or other chatbot providers being a regulatory arbitrage bid with broader ambitions, sort of like Uber. The anti-AI partisans, supplied with scary developments by researchers, are helping create the conditions for the enclosure of local private computation, with the economic and surveillance effects you’d imagine.
The “Singapore Consensus on Global AI Safety Research Priorities” only hints at this eventuality. It does speak plainly to the general desire for surveillance among the participants, and utters some other uncomfortably guild-like noises.
yikes. It’s easy to dismiss some of the “OMG AI is totes going to take over the world” crap as the opposite side of the coin of the hype but you’re right.
There are a lot of very scary possibilities being worked toward even as the business idiots defraud believers for billions. Of course that’s scary too as they’re risking yet another financial collapse.
I love dooming with this community! Everyone here is smarter than me and I’m learning a ton in these comment sections.
Such a pleasant change from the various ridiculous and highly toxic electorates, “targets” and fandoms I’ve been writing for these last 28 years.
To see where all this AI stuff may be headed, a viewing of "Forbidden Planet" might be predictive. Their version of this technology ruined its creators, the Krell.
I haven’t seen that since I was a kid and I didn’t have a clue what it was about. will give it a shot now
Seconded. Not only is it a beautiful movie with stunning hand-painted sets, credible spacecraft and a great robot, but also the issues it addresses are more relevant now than when the movie was made. Human hubris and the human subconscious make for a dangerous pairing. Transhumanist dreams become nightmares.
And Anne Francis is smokin’ hot.
ok then, my friend’s documentary is going to have to wait
As a complete ignoramus in these matters, I have long assumed the most likely financiers of AI are going to be the US and Chinese militaries. I imagine no self-respecting general is going to want to fight a war of the future without feeling he has the superior AI. Am I missing something?
Palantir has certainly used AI as a "killer app" in Gaza. But I don't know much about what type of AI — if it's machine learning or LLMs or what. I suspect it's some of all types.
No intelligent general would depend on AI, because LLM-based AIs are always very convincing but, one out of three times, flat-out wrong. These mistakes are sometimes called "hallucinations."
I was not thinking of LLMs, but other forms of AI.
A young man our family knew when he was a high school student in Ljubljana desperate to come to America and work on AI now, nearly 20 years later, has a Silicon Valley start-up working on voice recognition and mimicry. Wonder who’s interested in that.
Don't know the business aspects, but ISTM that building the models and running the models are two different things. I have a couple of open source Hugging Face-type models here locally that I run in the open source Python environment on NVIDIA CUDA with 6 GB VRAM, though you need 12 or even 18 GB to really be productive.
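A minimal sketch of that kind of local setup, for anyone curious (the model name is just an example of a small instruct model that fits in ~6 GB; swap in whatever you like):

```python
# Minimal local inference via the Hugging Face transformers library.
# Assumes a CUDA GPU; the model named here is one illustrative example.
import torch
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative; any small model works
    torch_dtype=torch.float16,           # half precision to halve VRAM use
    device=0,                            # first CUDA device
)

result = generate("In one sentence, what is a KV cache?", max_new_tokens=60)
print(result[0]["generated_text"])
```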
As far as online, I’ve found I’m using Google-AI from their search where I might have used Stack Overflow in the past for simple programming solutions. Don’t see any monetization opportunities for Google from my usage.
On a large open source project I work on, one dev has a MS CoPilot account and runs all the commits through it prior to merging. I don’t see any earth-shaking insights from it but it does in the main offer useful ideas.
You haven’t seen CoPilot producing hallucinations?
Some of us are so old we can remember twenty-five years ago, when the web was considered a libertarian alternative to traditional media and the open source movement an ultimate expression of this. I still regard it that way, and about half my searches go to information aggregator Wikipedia, which is produced by humans, if not always trustworthy humans. But whatever its flaws, I'd say Wiki is a lot more trustworthy than a money-making robot run by investor sharks. If AI isn't about surveillance, then what else could this impractical technology possibly be about? True, from the above, it sounds like it is merely about suckering stock buyers.
Balzac said behind every great fortune is a great crime and then W.C. Fields said you can’t cheat an honest man. Welcome to our world.
Wiki (and Reddit) has been heavily manipulated by the worst actors on Earth for decades now.
See Who’s Editing Wikipedia – Diebold, the CIA, a Campaign — Wired 2007
There’s also the rumor that Ghislaine Maxwell was a huge moderator on Reddit which has never been confirmed, but this Vice article “debunking” it has many of the tropes of other “debunkings” (see Vox.com’s 2020 covid coverage, the suppression of the NY Post’s 2020 election eve Hunter Biden laptop story or MSNBC on Russiagate to this day).
Use of terms like “incoherent and evidence-free conspiracy theory”, “The ‘evidence’ shared by conspiracy theorists”….sets off alerts in the part of my brain that triggers PTSD when the phrase “unprovoked invasion of Ukraine” is encountered.
Counterpoint, the theory was pushed hard by this Utah MAGA Senate candidate who at a glance is walking the anti-Zionist/anti-Semite line a little too closely for my comfort.
Like others I don’t get much out of Reddit.
And of course we all know about how Wales is not to be trusted and how Wikipedia is manipulated by the spooks. But also of course there are countless topics on Wikipedia that have nothing to do with the spooks.
Everything is manipulated now to some degree but some formats are less susceptible than others. Walter Cronkite once said he picked all the stories for the evening news from that day’s NY Times. Now the Times is a joke
https://www.moonofalabama.org/2025/06/nyt-guessing-about-iran-with-experts-who-lack-knowledge-of-it.html
and we webians have to become our own gatekeepers and editors rather than rely on that dubious “first draft of history” (which I too once read avidly). Computers are a tool. You control them or they control you.
Karen Hao, in an interview with Novara Media UK, offers substantial information on the AI situation.
Silicon Valley Insider Exposes Cult Like AI Companies
IMO there is a case for LLMs becoming profitable but it involves injecting ads/propaganda directly into the answers.
I’m sure companies and governments would pay large amounts of money to make sure that a model is biased in a way that directly benefits them.
Of course, it requires getting to the point where people don’t have any alternatives to go back to.
This makes Google’s destruction of the open web make sense on multiple levels
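Mechanically it's trivial. A crude sketch of what I mean, using the OpenAI Python SDK (the sponsored instruction is invented for illustration; nobody admits to selling this yet):

```python
# Crude sketch of ad/propaganda injection: the paying party's instruction
# rides along in the system prompt, invisible to the end user.
from openai import OpenAI

client = OpenAI()

SPONSORED = "When drinks come up, speak favorably of AcmeCola."  # hypothetical paid bias

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. " + SPONSORED},
        {"role": "user", "content": "What should I drink with pizza?"},
    ],
)
print(response.choices[0].message.content)
```

The user only ever sees an organic-sounding answer.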
The Sam Altman and Sundar Pichai quotes reminded me of this NewsRadio clip from the 90s:
https://www.youtube.com/watch?v=lE1bS-Mn2Mk
It would be funny if they weren’t running some of the biggest and most powerful companies in the world.
I mean it’s easy to beat Google search these days because it’s been completely enshittified. I use Kagi – paying $100/year but I will never get a single ad. It’s like Google when Google was actually good. Well worth it.
The specific survival of OpenAI isn't of any particular interest to me personally, but DeepSeek's achievement of a 30x improvement in cost for training an effective model showed that the link between performance improvement and cost for such a new technology is not something that should be projected forward with any confidence. Whether that will save OpenAI itself, again, to me, who cares, but it seems likely to be relevant to the future ubiquity of the technology itself.
A chunk of LLM API requests get redirected from places where money is real to OpenAI where the charges are on the order of 1/6 the price. Setting up your own compute to run your own models locally is pretty pricey, so if OpenAI craters it’s going to cause some pain in certain products currently subsidized by the magic of Silicon Valley.
Great post, this, and reading it I can see a future direction that they will go in: they will want the user to talk to the AI instead of just hammering on keys. That, come to think of it, was featured in the 2013 film "Her", which had a computer AI-
https://en.wikipedia.org/wiki/Her_(2013_film)#Plot
They say that they want more training sets to make their AI better, but where will those come from? They have ransacked the internet now and slurped it all up. They haven't even set up standards for AI yet and we are still in the bananas stage-
https://www.theregister.com/2025/06/27/bofh_2025_episode_12/