Yves here. This high-level recap explains how the seemingly indefensible idea of an AI bailout is being packaged to make it look less appalling. Note that fresh news stories underscore the notion that this would be yet another looting of the public purse on behalf of the well-heeled and reckless, even in the face of pushback. One is that, as we flag in Links, the Financial Times is warning that investors are starting to back away from AI-connected debt…particularly when the borrowers could just as easily fund these investments from cash on hand. Another is signs of political rebellion over rising electricity costs, which are being fobbed off heavily on retail consumers when AI power hogs are the big driver. In Georgia, Democrats won not just one but two seats on the state power commission board, the first Democratic success in a statewide constitutional position since 2006. Both upstarts campaigned on affordability. An account in Georgia Reporter noted, “A PowerLines/Ipsos poll found that 3 in 4 Americans are concerned about rising utility bills.”
By Kurt Cobb, a freelance writer and communications consultant who writes frequently about energy and environment. His work has appeared in The Christian Science Monitor, Resilience, Le Monde Diplomatique, TalkMarkets, Investing.com, Business Insider and many other venues. Originally published at OilPrice
- Despite denials from Washington and AI leaders, industry executives are already discussing government “backstops” and indirect support.
- OpenAI faces massive spending commitments far beyond its revenues, raising doubts about long-term financial viability.
- Subsidized data centers and rising energy costs reveal how public resources are already propping up the AI boom – and hint at a broader bailout to come.
There’s an old adage in Washington: Don’t believe anything until it is officially denied. Now that the Trump administration’s so-called artificial intelligence (AI) czar, David Sacks, has gone on record stating that “[t]here will be no federal bailout for AI,” we can begin speculating about what form that bailout might take.
It turns out that the chief financial officer of AI behemoth OpenAI has already put forth an idea regarding the form of such a bailout. Sarah Friar told The Wall Street Journal in a recorded interview that the industry would need federal guarantees in order to make the necessary investments to ensure American leadership in AI development and deployment. Friar later “clarified” her comments in a LinkedIn post after the pushback from Sacks, saying that she had “muddied” her point by using the word “backstop” and that she really meant that AI leadership will require “government playing their part.” That sounds like the government should still do more or less what she said in the Wall Street Journal interview.
Now, maybe you are wondering why the hottest industry on the planet, flush with hundreds of billions of dollars from investors, needs a federal bailout. It’s revealing that AI expert and commentator Gary Marcus predicted 10 months ago that the AI industry would go seeking a government bailout to make up for overspending, bad business decisions, and huge future commitments that the industry is unlikely to be able to meet. For example, in a recent podcast hosted by an outside investor in OpenAI, the company’s CEO, Sam Altman, got tetchy when asked how a company with only $13 billion in annual revenues that is running losses will somehow fulfill $1.4 trillion in spending commitments over the next few years. Altman did NOT actually answer the question.
So what possible justification could the AI industry dream up for government subsidies, loan guarantees or other handouts? For years, one of the best ways to get Washington’s attention has been to say the equivalent of “China bad. Must beat China.” So that’s what Altman is telling reporters. But that doesn’t explain why OpenAI, rather than other companies, should be the target of federal largesse. In what appears to be damage control, Altman wrote on his X account that OpenAI is not asking for direct federal assistance and then later outlined how the government can give it indirect assistance by building a lot of data centers of its own (that can then presumably be leased to the AI industry so the industry doesn’t have to make the investment itself).
Maybe I’m wrong, and what we are seeing is NOT the preliminary jockeying by the AI industry and the U.S. government regarding what sort of subsidy or bailout will be provided to the industry. Lest you think that the industry has so far moved forward without government handouts, the AP noted that subsidies are offered by more than 30 state governments to attract data centers. Not everyone is happy with having data centers in their communities. And, those data centers have also sent electricity rates skyward as consumers and data centers compete for electricity and utilities seek additional funds to build the capacity necessary to power those data centers. Effectively, current electricity customers are subsidizing the AI data center build-out by paying for new generating capacity and lines to feed energy to those data centers.
The larger problem with AI is that it appears to have several limitations in its current form that will prevent it from taking over much of the work already done by humans and preclude it from being incorporated into critical systems (because it makes too many mistakes). All the grandiose claims made by AI boosters are dispatched with actual facts in this very long piece by AI critic Ed Zitron.
I am increasingly thinking of AI as a boondoggle. A boondoggle, according to Dictionary.com, is “a wasteful and worthless project undertaken for political, corporate, or personal gain.” So far, the AI industry mostly fits this definition. But there is a more expansive definition which I borrow from Dmitri Orlov, author of Reinventing Collapse: A contemporary boondoggle must not only be wasteful, it should, if possible, also create additional problems that can only be addressed by yet more boondoggles—such as the need for vast new electric generation capacity that will be unnecessary if AI turns out to be far less useful than advertised. AI boosters say that AI is going to have a big impact on society. I couldn’t agree more, except not quite in the way these boosters think.


That’s the problem when flimflam men interface with doggedly incurious politicians:
From the Guardian:
*This article is more than 9 months old*
* When enthusiasm for the ‘breakthrough’ in highly inefficient chatbot technology was at its peak.
I think it’s important to put aside whether the AI technology is real and focus on whether these AI company valuations make any sense, and whether it’s even possible to generate sufficient revenue and profits.
The current situation with AI has little in common with the 2008 financial crisis, which involved fraud, and much in common with the 2000 Dotcom bubble and crash, which was driven by sky-high valuations that then collapsed.
Moreover, there was no federal bailout for the Dotcom crash; many companies went bankrupt because they deserved to go bankrupt. Companies like Cisco and Oracle saw their stock drop 80% in value, but they survived, though it took roughly 20 years for their stock to recover its value.
Nonetheless, there is a problem. The top ten companies within the S&P 500 account for roughly 40% of its market value, even higher than during the Dotcom bubble. And when the bubble burst in 2000, the Nasdaq lost 78% of its value.
The AI bubble will burst; the question is not if but when, and it will likely drag down the stock market too. Undoubtedly, pension funds and ordinary workers with 401k plans believe they are diversified and relatively safe because they invested in an S&P index fund, but they may be damaged when the bubble pops.
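The commenter’s concentration worry can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, assuming the ~40% top-ten weight cited above; the drawdown percentages are purely hypothetical scenario inputs, not forecasts:

```python
# Back-of-the-envelope: how much an S&P 500 index fund falls if the
# top-10 (AI-heavy) names sell off harder than the rest of the index.
# The ~40% top-ten weight is from the comment; drawdowns are hypothetical.

def index_drawdown(top10_weight, top10_drop, rest_drop):
    """Weighted loss for an index split into top-10 names and the remainder."""
    return top10_weight * top10_drop + (1 - top10_weight) * rest_drop

# Scenario: top-10 fall 50%, the other 490 names fall 15%.
loss = index_drawdown(top10_weight=0.40, top10_drop=0.50, rest_drop=0.15)
print(f"Index loss: {loss:.0%}")  # 0.40*0.50 + 0.60*0.15 = 29%
```

The point of the exercise: an “index fund” holder is not insulated from a concentrated bubble — with 40% of the index in ten names, those names alone contribute half the loss in this scenario.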
AI technology might prove transformative, as the internet did, but it doesn’t appear that way now. Still, there is no reason for a government bailout. Why is a company like Amazon so heavily invested in AI? Plus, there are hundreds of startups with valuations over $1 billion that produce little revenue or profit.
The government will indeed have a role when the bubble bursts, but it should manage the situation with solid ideas to protect the overall economy and workers’ pensions, and not worry about the fate of individuals like Bezos and Altman.
China’s AI strategy has won.
[1] The hyperscalers’ model means their customers still need specialized foundation models working on those customers’ proprietary data, which OpenAI/Palantir/etc. send in ‘forward deployment experts’ or whatever to set up. It’s more effective for those customers to use open-source Chinese LLMs, which they will then own, and much cheaper. See –
Nothing is given: China’s open-source AI tsunami
https://asiatimes.com/2025/11/nothing-is-given-chinas-open-source-ai-tsunami/
‘…in many Western quarters, skepticism …ended with usage. As per media reports, Cursor’s engineers now rely on Chinese open models to power their code-generation agents. Cognition’s frontier-sized SWE-1.5 was quietly built on a Chinese base model. Airbnb, once expected to lean toward OpenAI, runs its customer service bots on Alibaba’s Qwen …has stated that some of his companies moved multiple workloads from Anthropic and OpenAI to Moonshot’s Kimi, calling it “way more performant and a ton cheaper.”
Paper – Small LMs Are the Future of Agentic AI – 2025
https://arxiv.org/abs/2506.02153
[2] The US AI strategy envisioned massive datacenters and rents, till in 2030 AI is built out to the ‘Edge’ — the Edge being independent, locally governed AI systems deployed in autonomous vehicles, robots, and critical infrastructure. In 2025 China is already there at the Edge. See –
Shopping for a robot? China’s new robot store in photos
https://apnews.com/photo-gallery/china-beijing-humanoid-robot-store-c9fb9f2880084b2cd6c5eda638d019fa
The Geography of PRC Robotics: Charting the Chinese Robotic Future
https://oodaloop.com/analysis/disruptive-technology/charting-chinas-robotic-future-regional-hubs-and-strategic-investments/
I was not interested in the technology per se or who won. Europe makes commercial airliners and so does the US. China will soon make its own, as well. How AI evolves I can’t guess.
The problem is that AI investment and hype have been massive, and, like the Dotcom bubble, it’s the stock market valuations that will collapse, ending the investment and borrowing. The article was concerned with a government bailout.
OpenAI, IIRC, is valued at $500 billion, and it signed a 5-year cloud computing deal with Oracle. OpenAI will pay $300 billion, $60 billion/year, yet it has revenues of roughly $13 billion. How long can this continue?
Lastly, money for data centers is one thing, but servers and GPUs have a life span of 3-5 years, and the whole thing, excluding the building, might require replacement within 10-15 years. Where does that money come from if there is so little revenue and profit?
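The cash-flow mismatch in the two paragraphs above can be sketched in a few lines. The $300B/5-year deal size and ~$13B revenue are the figures from the comment; the hardware-refresh share and lifespan are illustrative assumptions, not reported numbers:

```python
# Rough annual funding gap implied by the figures in the comment above.
# Deal size and revenue come from the comment; the hardware-refresh
# assumptions (share of spend on GPUs/servers, 4-year life) are illustrative.

oracle_deal_total = 300e9      # $300B cloud deal over the contract (from the comment)
deal_years = 5
annual_revenue = 13e9          # ~$13B/yr revenue (from the comment)

annual_commitment = oracle_deal_total / deal_years   # $60B/yr
gap = annual_commitment - annual_revenue             # shortfall even at a 100% margin

# GPUs and servers wear out: if, say, 60% of a data-center build is hardware
# with a ~4-year useful life, that spend recurs rather than being one-off.
hardware_share, hardware_life_years = 0.60, 4        # assumed values
recurring_hw = annual_commitment * hardware_share / hardware_life_years

print(f"Annual commitment:  ${annual_commitment/1e9:.0f}B")
print(f"Gap vs revenue:     ${gap/1e9:.0f}B per year")
print(f"Recurring hardware replacement on top: ~${recurring_hw/1e9:.0f}B/yr")
```

Even treating every revenue dollar as free cash, the sketch leaves a gap of tens of billions per year before counting the recurring hardware refresh — which is the commenter’s point.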
You said — I quote you — it’s necessary to focus on whether these AI company valuations make any sense, and whether it’s possible to generate sufficient revenue and profits.
I answered you. They don’t make sense and it’s not possible to generate sufficient revenue, and I told you why not: they’re an inferior, vastly more expensive model of AI, and this is now apparent.
This seems to indicate that the average lifespan of crypto GPUs is 2-3 years.
https://techifield.com/average-gpu-lifespan-mining/
My guess is that LLM GPUs will be similar.
Barclays is out today with a sell rating on Oracle debt; they will run out of free cash flow next year with present spending plans. The Fed will have to buy their debt eventually.
Yet I have met with at least 2 large companies (Fortune 500) who will not consider using a product that embeds a Chinese LLM – even an open-source version running on the vendor’s hardware safely inside the US.
Large companies are very herd-like, so if there are 2, there are most likely many more.
Maybe things will shake out over time if/when the Chinese LLMs are “proven”, and these companies’ competitors start winning with their lower AI spend, but also don’t count out the govt from putting their thumb on the scale for the home team (especially if they end up “back stopping” them).
I agree with you. China, for example, is out front with electric vehicles and companies like Huawei. Suddenly the US decided Huawei was a national security threat and put massive tariffs on China’s electric vehicles.
Meanwhile, technology innovation can be unpredictable. I can recall Blackberry cell phones as the must-have for professionals. Then came the iPhone, and few people remember Blackberry.
Biden restricted Nvidia from selling chips to China, Trump made it more restrictive, China banned Nvidia completely, Trump relaxed tariffs, now I’m not sure what the situation is between the US and China.
Who knows, maybe India will soon be competitive in AI technology.
National security will raise its head, and the fear of China will rule out China-owned entities. However, if it is true that the models coming out of China are more efficient, we likely will see some of the current LLM hyperscamers coming out with their own versions of the Chinese models.
Then the hyperscamers will need that bailout, which it won’t be called. Maybe an investment? Maybe the US gov’t, for natsec reasons (classified of course), will throw in a couple trillion.
“if you build it they will come”…..
That works on baseball movies!
If you build a vast network of data centers around NVDA for Sam Altman, you risk someone like China, or some smart people in Silicon Valley, coming up with cheaper, lower-energy, more usable extensions of Google search.
The risks, aside from the lack of business cases, are Moore’s law and thinking competition.
Terrific summary. I recommend reading Matt Stoller’s most recent piece on economy by serial bubbles. The politicians have too much invested in AI.
https://www.thebignewsletter.com/p/monopoly-round-up-last-weeks-elections
This Cobb guy is a great writer. Very readable.
Does this Cobb guy appear behind the paywall in Matt Stoller’s piece? Or maybe I didn’t read closely enough the part before the paywall?
Cobb wrote the OilPrice piece that Yves reposted here which linked to the Stoller piece.
Stoller at his best. He argues that the US’s industrial policy consists of blowing bubbles to make rich people richer.
“Since the 1980s, America has done national economic development through financial bubbles, which is to say, by allowing Wall Street rather than democratic institutions to organize where we allocate capital. This form of statecraft is a result of a theory put out by the founder of private equity, former Nixon Treasury Secretary William Simon, who invented the idea of a “capital shortage” in the 1970s. According to Simon, American governance didn’t guarantee sufficient returns to capital, so there wasn’t enough investment. His framework involved cutting capital gains taxes, deregulation to juice returns, and government guarantees and bailouts to reduce risk for investors. And his vision is still dominant.
…But whether generative AI is useful is besides the point. AI data center build outs are now American industrial policy.”
The tulip bloom is fading fast.
Thanks for this article, which also sent me on a side trip to Dmitri Orlov, whose sense of humor never ceases to make me laugh.
My favorite Orlov post, from 2019:
https://cluborlov.wordpress.com/2019/08/27/resurrecting-the-american-economy/
Boondoggle indeed. At least with Juicero, one got a glass of juice in the end, albeit an expensive one. I don’t believe anyone has discovered a way to procure sustenance from edgelord memes.
Eating an orange is much better for you than drinking the juice squeezed from the same orange…
I’m forwarding this concise post widely to AI-enamoured colleagues. Also the Stoller piece (h/t Mikerw0). (They do not have the staying power to plow through Ed Zitron’s 18,500-word evisceration.)
The constant refrain from advocates in my circle is that “AI is a tool.” The implication being that the user has control over it and “we use our powers for community benefit.” This epistemological silo ignores the structural overrides of individual control built into the system that seem to guarantee more consolidation of wealth, erosion of intellectual capacity for creativity and critical thinking, and devastating environmental effects.
I’m at a loss, though, to respond when challenged for alternatives. Best I’ve come up with so far is “Read a fucking book. Take a walk in the woods. Reanimate the art of conversation.” I’m open to suggestions (note: this is crowd-sourcing, not homework!)
…learn to play an instrument. A horn if you want to be part of a group; or the piano (keyboard) if you want to sing a favorite tune. The piano will teach you all about western music–rhythm, harmony, and melody. And you have the rest of your life to learn and enjoy it.
We couldn’t afford piano in my childhood home (though I wanted to learn) so I had to settle for guitar. I still find it soothing to play just for myself.
Just wondering if anyone else has read “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares. Both are heavyweights in the AI space. They illustrate the potential independence of an AI “baby” and provide one example of how one of these “creatures” would not only wipe out mankind but go far far beyond that.
I have not, but know that Eliezer Yudkowsky is a crank. He is a high school dropout who is good at tricking tech bros into thinking he is really deep and getting their money for poorly executed thought experiments. He is basically enacting his fanfiction prequel script to The Matrix, claiming that humanity will build god and must control god. The way to control god? Oh, just give money to Eliezer Yudkowsky of course, and he will solve it.
This criti-hype has been very useful for the AI bubble blowers, as they can posit a debate space between “AI is very powerful and very useful” and “AI is very powerful and very dangerous” while ignoring “AI is mostly statistical garbage”.
For those keeping track, Eliezer Yudkowsky and his ‘rationalists’ (as they call themselves) were crucial in getting the Effective Altruism movement going (remember crypto scammer Sam Bankman-Fried?). The teaching was that young college students are encouraged to go into fields where they make boatloads of money so they can “do the most good” by donating to Effective Altruism causes. Like stopping the robot apocalypse by donating to Eliezer Yudkowsky.
It is interesting if you are interested in weird people, but their teachings should not be taken at face value.
If you listen to the AI podcasts, one thing you will hear over and over is that the US needs to “beat China” and AI is the “new Manhattan Project”. They are framing this as an issue of national security, so of course they want the government to fund it. But is it an issue of national security? Or just national pride and bragging rights?
What is the difference between first place and second place? The US was the first to develop the atomic bomb, then the hydrogen bomb, but others followed, neutralizing the first-mover advantage. The Russians were the first to put a satellite into orbit, but the US followed. The US won the race to the moon, which boosted the national psyche, but others decided it was not worth it to follow. The Russians were the first to develop a Covid vaccine — beating “Operation Warp Speed” — and the initial US reaction was to denounce those reckless Russians for rushing an experimental vaccine, but that criticism was memory-holed when the US finished in second place a couple months later. DARPA developed the internet (we won! we won!), but now the whole world uses it, neutralizing the advantage. The Russians were the first to develop hypersonic weapons, but they have not used them to attack US aircraft carriers, so does it really matter who won that race?
So again, what does “beating China” and winning “first place” in the AI race really mean? What national security advantage will the US have, and for how long, if a US company is the first to develop AGI or ASI? Do they really think the Chinese (and others) will never get there? Remember: A lot of the key developers at OpenAI and the Mag7 companies are Chinese.
So yeah, it’s a swindle. But I think these tech oligarchs will get away with it. The AI infrastructure buildout is the only thing powering the US economy these days, so they have tremendous political leverage.