AI: Is it Really Different this Time?

Yves here. Wolf Richter makes an important point below, which yours truly has neglected in discussing the AI bubble: that there is leverage, not among securities-purchasers (aside from normal margin lending) but at the level of the companies funding the AI expenditures themselves.

Again, we don’t yet have evidence of leverage-on-leverage, which is what produced the catastrophic 2008 meltdown. That crisis is still not widely enough understood as a derivatives crisis rather than a mortgage crisis. Credit default swaps created exposures equal to 4 to 6 times the real-economy value of the riskiest tranches of subprime real estate debt. These exposures then wound up concentrated at systemically important, overleveraged financial institutions like AIG, Citibank, and quite a few Eurobanks.

But there is a distressing level of investment round-tripping, with lousy transparency. In the runup to the Great Crash, there were trusts of trusts and trusts of trusts of trusts, with investors borrowing to invest in those vehicles. Again, what we have is not as crass as that, but to the extent that the circular investing has the effect of exaggerating the amount of actual hard dollars and obscuring where they sit, borrowings by any involved company could effectively be made against overstated equity. This extract from a recent BBC story, along with Wolf’s account below, gives a sense of the opacity:

AI-related enterprises have accounted for 80% of the stunning gains in the American stock market this year – and Gartner estimates global spending on AI will likely reach a whopping $1.5tn (£1.1tn) before 2025 is out.

OpenAI, which brought AI into the consumer mainstream with ChatGPT in 2022, is at the centre of the tangled web of deals drawing scrutiny.
For example – last month, it entered into a $100bn deal with chipmaker Nvidia, which is itself the most valuable publicly traded company in the world.

It expands an existing investment Nvidia already had in Mr Altman’s company – with expectations that OpenAI will build data centres powered with Nvidia’s advanced chips.

Then on Monday, OpenAI announced plans to purchase billions of dollars worth of equipment for developing AI from Nvidia rival AMD, in a deal that could make it one of AMD’s largest shareholders.

Remember this is a private company, albeit one recently valued at a half-trillion dollars.
Then there’s tech giant Microsoft, which is heavily invested, and cloud computing behemoth Oracle has a $300bn deal with OpenAI, too.

OpenAI’s Stargate project in Abilene, Texas, funded with the help of Oracle and Japanese conglomerate SoftBank and announced at the White House during President Donald Trump’s first week in office, grows ever larger every few months.

And as for Nvidia, it has a stake in AI startup CoreWeave – which supplies OpenAI with some of its massive infrastructure needs.

And as these increasingly complex financing arrangements get more and more common, the experts here in Silicon Valley say they may be clouding perceptions on AI demand.
Some people aren’t mincing their words about it either, calling the deals “circular financing” or even “vendor financing” – where a company invests in or lends to its own customers so they can continue making purchases.

However, AI-related loans defaulting or having to be restructured would produce an additional downdraft when the party ends. The US could be in store for a protracted period of low/no growth and zombification of lenders, à la Japan after its real estate and stock market bubbles imploded.

By Wolf Richter, editor at Wolf Street. Originally published at Wolf Street

What is amusing is just how much talk there has been about the AI investment bubble, and what it will do or not do to the markets and the economy when it implodes or doesn’t implode: That it’s almost like at the peak of the Dotcom Bubble. That it’s much worse than at the peak of the Dotcom Bubble. That it’s nothing like the Dotcom Bubble because this time it’s different. That even if it’s like the Dotcom Bubble and then turns into the Dotcom Bust, or worse, it’s still worth it because AI will be around and change the world, just like the Internet is still around and changed the world, even if those first investors got wiped out, or whatever.

There are many voices that loudly point this out, and point out just how risky it is to bet on hocus-pocus money, or that explain in detail why this isn’t risky at all, why this is not anything like the Dotcom Bubble, why this time it’s different – the four most dangerous words in investing.

The talk fills the spectrum, and these are people with enough stature to be quoted in the media: Jamie Dimon, Jeff Bezos, the Bank of England, Goldman Sachs analysts, IMF Managing Director Kristalina Georgieva…

The focus is on the big-tech-big-startup circularity of hocus-pocus deals between Nvidia, OpenAI, AMD, along with Amazon, Microsoft, Alphabet, Meta, Tesla, Oracle, and many others, including SoftBank, of course.

OpenAI now has an official “valuation” — based on its secondary stock offering — of $500 billion though it’s bleeding increasingly huge amounts of cash. And there are lots of players in between and around them. They all toss around announcements of AI hocus-pocus deals between them.

OpenAI has announced deals totaling $1 trillion with a small number of tech companies, at the top of which are Nvidia ($500 billion), Oracle ($300 billion), and AMD ($270 billion). Each of these announcements causes the stocks of these companies to spike massively – the direct and immediate effects of hocus-pocus money.

OpenAI obviously doesn’t have $1 trillion; it’s burning prodigious amounts of cash. And so it’s trying to rake in investment commitments from the same companies that it would buy equipment from, and engineer creative deals that cause these stock prices to spike, and so the hocus-pocus money announcements keep circulating.

OpenAI’s idea of building data centers with Nvidia GPUs that would require 10 gigawatts (GW) of power is just mind-boggling. The biggest nuclear powerplant in the US, Plant Vogtle in Georgia, with four reactors, including two that came online in 2023 and 2024, has a generating capacity of about 4.5 GW. All nuclear powerplants in the US combined have a generating capacity of 97 GW.

But It’s Real Money Too. A Lot of Real Money.

Big Tech is letting its huge piles of cash spill out into the economy to build this vast empire of technology that requires data centers that would consume huge amounts of electricity to let AI do its thing.

And these “hyperscalers” are leveraging that money flow with borrowing, by issuing large amounts of bonds.

And private credit has jumped into the mania to provide further leverage, lending large amounts to data-center startup “neocloud” companies that plan to build data centers and rent out the computing power; those loans are backed with collateral, namely the AI GPUs. No one knows what a three-year-old used GPU, superseded by new GPUs, will be worth three years from now, when the lenders might want to collect on their defaulted loan, but that’s the collateral.

The data centers are getting built. The costs of the equipment in them – revenues for companies that provide this equipment and related services – dwarf the costs of the building. And stocks of companies that supply this equipment and the services have been surging.

The bottleneck is power, and funds are flowing into that, but it takes a long time to build powerplants and transmission infrastructure.

Is It Really Different This Time?

So there is this large-scale industrial aspect of the AI investment bubble. That was also the case in the Dotcom Bubble. The telecom infrastructure needed to be built out at great cost. Fiberoptics made the internet what it is today. Those fibers needed to be drawn and turned into cables, and the cables needed to be laid across the world, and the servers, routers, and other equipment needed to be installed, and services were invented and provided, and businesses and households needed to be connected, and it was all real, and it was all very costly, requiring huge investments, but progress was slow and revenues lagging, and then these overhyped stocks just imploded under that weight, along with the stocks that were the pioneers of ecommerce, internet advertising, streaming, and whatnot.

The Nasdaq, where much of it was concentrated, plunged by 78% over a period of two-and-a-half years, investors lost huge amounts of money, many got wiped out, thousands of companies and their stocks vanished or were bought for scrap when that investment bubble crashed. And a year into the crash, it triggered a recession in the US – and a mini-depression in Silicon Valley and San Francisco where much of this had played out.

Yet the internet thrived. Amazon barely survived and then thrived in that new environment. But Amazon was one of the exceptions.

In this mania of hype, hocus-pocus deals, and huge amounts of real money fortified by leverage – all of which caused stock prices to explode – markets become edgy. Everyone is talking about it, everyone sees it, they’re all edgy, regardless of their narrative – whether a big selloff is inevitable with deep consequences on the US economy, or whether this time it’s different and the mania can go on and isn’t even halfway done.

Whatever the narrative, it says risk in all-caps. Anything can prod these stock prices at their precarious levels to suddenly U-turn, and if the selloff goes on long enough, the investment bubble would come to a halt, and the hocus-pocus deals would be just that, and the whole construct would come apart. But AI would still be around doing its thing, just like the Internet.


20 comments

  1. JB

    I left my current line of work shortly after ChatGPT entered the picture – and have returned to it again this year – and yeah, in terms of tech and coding, AI is a significant productivity multiplier.

    It has not come anywhere close to realizing its full potential yet, as right now its usage is tied up in legal concerns and significant caution regarding ownership of code etc. – greatly limiting its use – but the productivity boost is real, and coders will need to become proficient with these tools quickly, as part of the job.

    AI on the level of human intelligence we have not. AI capable of doubling/tripling/quadrupling the efficiency/productivity of a lot of expensive tech workers, we have. And it’s not yet fully integrated into people’s work, due to legal caution/care.

    Coders are increasingly going to become ‘AI shepherds’ by the end of the decade. It doesn’t take away the critical thinking and problem solving from the role – among the key skills of coders – it takes away a lot of the rote/boring crap from the job, and still needs humans to review and polish up the final output.

  2. Ignacio

    What is in reality the main asset of OpenAI? I would say it is GPT-4.5, DALL-E and maybe other AI models which I don’t know. What is the value of these assets? Something that can drop to 0 in an instant? The model somehow resembles Uber, whose main value would be the cab-sharing app, but worse, because there are a few others of the same kind. With the schemes mentioned here backed by such vaporous assets… don’t know how to end the phrase.

  3. bertl

    I have a friend who has discovered that, as a consumer, he can use AI to write 20 versions of a legal letter in seconds each beautifully calibrated to achieve the desired effect, and he can “take on large organisations like HMRC and the NHS which would otherwise drain you financially and emotionally, and would otherwise drown you with the paperwork involved”.

    “Suddenly the large organisation you are fighting does not hold all the cards, able to wear you down by attrition.”

    Perhaps this is the reason why God invented AI: to help the little people to fight back against big organisations, and God’s little joke was to make it so tempting that big organisations are financing it to such a degree that they will become small organisations or just vanish from the face of the earth.

  4. mgr

    What will be the effect of a full on trade war between the US and China which seems to be starting in earnest? China is clamping down on its rare earth export licensing upon which these AI fever dreams depend.

    According to Hua Bin: “The most complicated part of rare earth production is in the processing and refining stage, where China controls over 90% global market share. In the military-critical heavy rare earth segment, China’s control is complete at over 99%… Bottom line is semiconductors cannot be built without rare earths at yield, at scale, or at 3 nm, with today’s technology.”

    1. Michaelmas

      China is clamping down on its rare earth export licensing upon which these AI fever dreams depend.

      Not only that. Energy is central to any AI buildout. China’s electricity production is 9,456 terawatt-hours (TWh), 31.6 percent of the global total, while the US produces less than half of that, at 4,494 TWh and 15 percent.

      Equally significantly, at every stage it’s the US, under Trump or Biden, that’s initiated these tariff and sanction hostilities and absolutely avoidably cut itself off from necessary resources. So the US is a Moron Empire up against this —

      Western executives who visit China are coming back terrified:
      Robotics has catapulted Beijing into a dominant position in many industries

      https://www.telegraph.co.uk/business/2025/10/12/why-western-executives-visit-china-coming-back-terrified/

  5. bertl

    Oh, as a serious collector of recorded music, I find AI helpful for discographical searches and sourcing rare recordings, mainly bootlegged live opera. Other than that, which can be easily checked, I’m pretty sure it is more dangerous than useful when scaled up for larger tasks, and any creature with the intelligence to increase its intelligence is an obvious danger to humankind, although it might help the planet to recover from the age of Man; and for the next intelligent living creature, it might prove to be a very helpful God, enabling them to be much better than we are.

  6. raspberry jam

    Every time I read about the eye-watering OpenAI valuations and the incestuous web of deals between them and so many other companies I think about how I, as a person who works in the AI field on a product that uses LLMs, barely use any of the OpenAI models (my product is model-agnostic and I switch between different models depending on task and the GPT class are rarely good for my needs). Right now a lot of the consumer and enterprise tools that leverage LLMs don’t have that type of model flexibility built in, but they will eventually, because the frontier model capabilities and specialties are constantly changing. To me the web of deals looks like OpenAI is trying to financially lock in vendors and partners while they are perceived to be ahead.

    1. Michaelmas

      raspberry jam: To me the web of deals looks like OpenAI is trying to financially lock in vendors and partners while they are perceived to be ahead.

      Sure. Here’s an interview with Alex Bouzari, CEO of DataDirect Networks, who has a level-headed take on the probable direction of travel —

      https://www.datacenterdynamics.com/en/analysis/ddn-ceo-alex-bouzari-on-surviving-the-age-of-ai/

      Obviously, like Jensen Huang with his leather jacket, Bouzari has his own visual gimmick, but he’s been successful for decades in the supercomputer infrastructure realm.

      raspberry jam: Right now a lot of the consumer and enterprise tools that leverage LLMs don’t have that type of model flexibility built in, but they will.

      Yup. Also, if you think about it, LLMs are arguably the clumsiest, most brute force approach to AI imaginable. But while this AI wave may not deliver a fraction of what’s claimed, it’s going to enable subsequent waves that do. Of course, a lot of people may not like the results.

      1. raspberry jam

        Very interesting link, thank you for sharing!

        I think the next few years are going to be about a handful of industries trying to replicate the current generation of ‘killer use cases’, for example VFX and animation plugins/application layers on top of or inside existing professional editing software that connect to LLM APIs or other hyper-specialized LLM interfaces specific to the need (think of an Adobe Animate plugin that connects to an LLM specially trained on puppet rigs to automate scene rigging from natural language prompts). This would be an industry-specific repeat of what is currently going on with the coding assistants. You can see the nascent forms of this: for example, ToonBoom has a plugin to handle masking, and in Japan there are image/text genAI subscriptions with their own models specific to anime/manga that include a lot of very specialized tooling only relevant for industry people who have to churn out a ton of content on tight deadlines, and that also include a lot of the recent RAG advances present in the coding assistants but not yet fully percolated out to the commercial chat bot interfaces.

        The simulation/non-LLM approach is an interesting angle of pursuit long term. There is definitely something there, and there is also a lot of foundational work to be done to get it into wide adoption…

        1. Michaelmas

          raspberry jam: the next few years are going to be about … industries trying to replicate the current generation of ‘killer use cases’, for example VFX and animation plugins/application layers …inside existing professional editing software that connect to LLM APIs or other hyper-specialized LLM interfaces.

          All that will be visible and fine. The angle on the world I have — the parochial view I get — is from hearing a little about what’s going on in deep tech VC. From that angle, it seems a very profound impact will come from specialized LLM architectures using proprietary data to create new biogenetic tech and synthetic materials. Like this, Vant AI —

          https://www.vant.ai/neo-1
          https://www.biospace.com/vantai-enters-collaboration-with-bristol-myers-squibb-to-accelerate-molecular-glue-drug-discovery-through-artificial-intelligence

          Or what Steve Crossan, who set up AlphaFold at DeepMind, is doing at Dayhoff Labs —

          https://www.dayhofflabs.com/faq
          https://www.dayhofflabs.com/

          “‘What I cannot build, I do not understand.’ We’re building artificial biology to solve synthesis and compute. Life is chemistry that computes… Dayhoff Labs is committed to revolutionizing the fields of chemistry and biochemistry through the development of advanced AI models for ab initio synthetic biology.”

          Like that.

  7. ilsm

    Who is “underwriting” the “bonds”?

    It is modestly interesting to hear that OpenAI is building a “trillion bucks” in data centers. Sensible investors should know: what are the operating and support costs that will be paid to run these centers, and what is the upgrade/investment cycle, i.e. the payback period for the initial run of equipment?

    DoD history says that if I spend X to deploy a system, it will cost 2X over 20 years to use the system – and that’s not in wartime.

    Revenue/cost stream… in operation?

    OpenAI had $4.3 billion in revenue in 1st half 2025.

    1. Ellery O'Farrell

      No, I haven’t read Zitron’s article (having read many others, I’m sure he’s right).

      But I do have a story about circular money from long, long ago, when I was in Citi’s CFO’s office tasked with supervising securitization of its assets. The Philippines office, or an affiliate — I forget — wanted to securitize its loans to local businesses. Great idea!

      Until the accountants learned that they kept their delinquency/default numbers low by lending the businesses the amounts they owed but hadn’t paid: their delinquencies. Of course, only up to the amount the office determined they could actually pay.

      We pulled the deal. Would that happen now?

  8. XXYY

    But AI would still be around doing its thing, just like the Internet.

    “The Internet” is a collective term for a basket of technologies used to move information around the world at high speed and with high reliability. As most people know, it was originally a DARPA program intended to allow military command and control during a nuclear war, and was expanded for civilian use once many of the fundamentals were established.

    It’s easy to see that high-reliability, high-speed data networks are going to have some utility to society. Just like the freeway system built up in the post-World War II era had civilian utility, even though its original motivation for both the Nazis and the US was to allow rapid transport of military materiel and troops. Freeways and data networks have rather obvious capabilities, and it doesn’t take great imagination to see the value.

    LLM AI, on the other hand, has the ability to quickly crank out great streams of symbols that are mathematically similar to other bodies of symbols that they were trained on. In a society that is already overflowing with text, images, and audio, this is like finding a faster way to bring coal to Newcastle (or ice to the Eskimos?). The fact that these streams are defective about half the time in various unpredictable ways makes “AI” even more useless.

    So continually whining about how AI infrastructure will be useful merely because freeways and data networks turned out to be useful is 100% hopium in my book. It could just as easily turn out to be useless and dangerous as were self-driving cars and credit default swaps. Every new technology has to be analyzed on its own merits.

  9. ciroc

    The collapse of the metaverse bubble was not dramatic enough to leave a lasting impression. Similarly, even if the AI bubble were to burst, its impact would likely be limited to major investors and the companies involved. As with the metaverse, AI itself won’t disappear once the bubble bursts. However, it will likely become a niche product for a select group of tech enthusiasts.

  10. fjallstrom

    My attempts at looking in the crystal ball are mostly focused on the tech sector. I think that:
    * The companies providing LLMs, OpenAI and their ilk, will crash. They have no profits and are dependent on ever more fresh investment.
    * The oligopolistic giants like Microsoft and Alphabet will take a loss and recoup by gouging enterprise customers in other areas (Microsoft has already been hiking prices on licenses). They are likely to take over ownership of what is left of the companies providing LLMs. Note that this increases tech-rents on corporations in general (I think individuals are more price sensitive and have an easier time shifting to, say, LibreOffice).
    * The companies that are selling shovels in this gold rush, like Nvidia, could in theory be fine as companies providing goods and services, though their stocks may still crash down to a lower level. That is, provided they didn’t buy into the hype and overinvest the profits based on lines going up. Which I wouldn’t bet against, considering the size of the hype and structural incompetence.

    The massive expansion of data centres may afterwards be repurposed as surveillance analysis tools in service of the government. Who cares if they are only probabilistically right and if they hallucinate, as long as the raids and the drones are only directed at non-wealthy people. The main purpose of a pre-crime unit is fear.

