This is Naked Capitalism fundraising week. 1321 donors have already invested in our efforts to combat corruption and predatory conduct, particularly in the financial realm. Please join us and participate via our donation page, which shows how to give via check, credit card, debit card, PayPal, Clover, or Wise. Read about why we’re doing this fundraiser, what we’ve accomplished in the last year, and our current goal, Karōshi prevention.
This humble blogger has never been keen on discussing the stock market, since it’s driven overmuch by hopium and manipulation, as in share buybacks. However, growth in the US now largely depends on the capital expenditures of a very small number of companies making ginormous wagers on AI…when they have very little in the way of corresponding revenues. These wagers are nearly all on large language models, which, as we have pointed out repeatedly through links in Links and sometimes in reader commentary, do not deliver reliable results. Even worse, per a recent MIT study, 95% of corporate AI pilot programs are failing. Yet virtually no one seems willing to stand back from the intensely-hyped story of inevitability and bright shining uplands.
For those who want a dose of sobriety, along with lots of contra-narrative details, please go straight to Ed Zitron’s site. His latest post describes at length (among many other things) how the evidence of LLMs getting better is thin at best, and includes many tart observations, such as:
Where we sit today is a time of immense tension. Mark Zuckerberg says we’re in a bubble, Sam Altman says we’re in a bubble, Alibaba Chairman and billionaire Joe Tsai says we’re in a bubble, Apollo says we’re in a bubble, nobody is making money and nobody knows why they’re actually doing this anymore, just that they must do it immediately.
And they have yet to make the case that generative AI warranted any of these expenditures.
Recall that Alan Greenspan deemed dot-com valuations to represent “irrational exuberance” at the end of 1996, yet the bubble didn’t start deflating until March 2000. And as often happens with manias, it had a three-month blowout phase right before its demise.
At least with the Internet frenzy, there were colorful justifications for business models which no way, no how would ever generate a profit. They were being valued on “eyeballs”. Perhaps with AI, there is a valuation justification somewhere that actually pencils out. But as Zitron and others have pointed out, the big spenders are generating paltry revenues, let alone profit, and have yet to make a case as to how and why that will change.
Rather than debate the possibility that AI will make a massive turnaround in terms of cash income to its big backers, let us remind readers what will be in store if and when the party ends. The big wild card is that stock market plunges, unless they were fueled to a marked degree by borrowings, do not produce financial crises, as in the Asian crisis or the near and actual bank failures of September 2008. Damage to banks (unless addressed very forcefully and credibly) can lead to bank runs, which can then cause payment systems and financial markets to seize up.
However, the dot-bomb era came at the end of a decade-plus of solid US growth and political stability. There was not a big private debt binge (private debt frenzies produce financial crises). So the dot-com crash provides an almost classic story of what happens when there is a massive loss of paper wealth, but not much harm to lenders. The result is major deflationary pressure, as in depressed growth. Greenspan went overboard in fighting that, with an unprecedented protracted period of negative real yields, which stoked leveraged speculation in derivatives and housing that helped tee up the global financial crisis (see ECONNED for a detailed discussion).
But now, we have the striking contrast between investors’ touching faith in all things AI, versus gold breaching $4,000, a flashing alarm of distrust in financial assets and the once almighty dollar. We’ll return to the idea that a crisis might kick off in non-AI plays and then precipitate an AI unwind.
First, a conventional view of what might be in store, from the Financial Times in IMF and BoE warn AI boom risks ‘abrupt’ stock market correction:
Global stock markets are at risk of a sudden correction as the artificial intelligence boom pushes valuations towards dotcom bubble levels, both the IMF and Bank of England have warned.
Kristalina Georgieva, IMF managing director, said on Wednesday that bullish market sentiment about “the productivity-enhancing potential of AI” could “turn abruptly”, hitting the world economy.
She was speaking hours after the BoE body overseeing financial stability risks also drew parallels with the 2000 crash that followed the dotcom boom, warning of the risk of a “sudden correction” in global financial markets.
“Today’s valuations are heading towards levels we saw during the bullishness about the internet 25 years ago,” Georgieva said in a speech delivered ahead of the IMF’s annual meetings next week…
In similar language, the BoE’s Financial Policy Committee warned that “the risk of a sharp market correction has increased” in the record of its latest meeting on Wednesday.
It said that the cyclically adjusted price-to-earnings ratio for US shares, a closely watched measure of valuations, had come close to the levels of 25 years ago — “comparable with the peak of the dotcom bubble”.
The article also took note of the usual “This time it’s different” mantra, troublingly from a Fed official:
US Federal Reserve officials have played down the prospect of a damaging market correction. Mary Daly, the head of the San Francisco Fed, said this week that an AI bubble was not a threat to financial stability.
“Research and economics call it more like a good bubble, where you’re getting a ton of investment,” she told Axios. “Even if the investors don’t get all the returns that the early enthusiasts think when they invest, it doesn’t leave us with nothing. It leaves us with something productive.”
One of the classic signs of a market peak is that the remaining bears have thrown in the towel. The pink paper’s comment section on this article contained scarcely a negative word about AI and lots of contempt for government officials. Perhaps no one posting there is old enough to remember that the Bank of England was solid and fact-based in its warnings in the runup to the 2008 crisis.
Or perhaps they might take Jamie Dimon more seriously. From BBC:
There is a higher risk of a serious fall in US stocks than is currently being reflected in the market, the head of JP Morgan has told the BBC.
Jamie Dimon, who leads America’s largest bank, said he was “far more worried than others” about a serious market correction, which he said could come in the next six months to two years.
Admittedly Dimon is more concerned about the totality of risks than AI per se. But the flip side is he might not find it in JP Morgan’s commercial interest to rain on the AI parade:
However, on the broader economic picture, he felt there were increased risks US stock markets were overheated.
“I am far more worried about that than others,” he said….
There were a “lot of things out there” creating an atmosphere of uncertainty, he added, pointing to risk factors like the geopolitical environment, fiscal spending and the remilitarisation of the world…
Much of the rapid growth in the stock market in recent years has been driven by investment in AI.
On Wednesday, the Bank of England drew a comparison with the dotcom boom (and subsequent bust) of the late 1990s – and warned that the value of AI tech companies “appear stretched” with a rising risk of a “sharp correction”.
“The way I look at it is AI is real, AI in total will pay off,” he said.
“Just like cars in total paid off, and TVs in total paid off, but most people involved in them didn’t do well.”
He added some of the money being invested in AI would “probably be lost”.
What is not sufficiently acknowledged is the degree to which what appears to be growth in the US is dependent on AI. We linked to a report yesterday that ex data centers, US growth in the first six months of 2025 was only 0.1%. And data center expansion is nearly entirely AI driven.
Another recent story in the Financial Times describes how America is now one big bet on AI:
The hundreds of billions of dollars companies are investing in AI now account for an astonishing 40 per cent share of US GDP growth this year. And some analysts believe that estimate doesn’t fully capture the AI spend, so the real share could be even higher.
AI companies have accounted for 80 per cent of the gains in US stocks so far in 2025. That is helping to fund and drive US growth, as the AI-driven stock market draws in money from all over the world, and feeds a boom in consumer spending by the rich.
Since the wealthiest 10 per cent of the population own 85 per cent of US stocks, they enjoy the largest wealth effect when they go up. Little wonder then that the latest data shows America’s consumer economy rests largely on spending by the wealthy. The top 10 per cent of earners account for half of consumer spending, the highest share on record since the data begins.
But without all the excitement around AI, the US economy might be stalling out, given the multiple threats.
No nation has seen an immigration boom-bust cycle near the scale of the one roiling America….
This labour force squeeze alone will reduce America’s growth potential by more than a fifth, Goldman Sachs analysis suggests…..
Likewise, government deficits and debt are increasing faster in the US than in other developed markets. At around 100 per cent of GDP, US government debt is near its second world war peak and on its current trajectory, that burden could keep rising. Unless, of course, AI saves the day…
Global markets appear to be counting on the happy scenario…
The main reason AI is regarded as a magic fix for so many different threats is that it is expected to deliver a significant boost to productivity growth, especially in the US…
The one discordant note in this “buy America, no matter what” narrative is the dollar. But many analysts explain its recent decline as the result of foreign investors hedging their exposure to more normal levels, after being overly exposed to a very expensive currency.
Foreigners poured a record $290bn into US stocks in the second quarter and now own about 30 per cent of the market — the highest share in post-second world war history. Europeans and Canadians have been boycotting American goods but continue buying US stocks in bulk — especially the tech giants…
What that suggests is that AI better deliver for the US, or its economy and markets will lose the one leg they are now standing on.
A recent VoxEU analysis found that there was a flight from the dollar after Liberation Day, but the dollar has since resumed its safe-haven status, with Treasury buying on edgy news. In keeping, in September, Reuters reported that foreign holdings of Treasuries reached an all-time peak in July.
In other words, the implicit base case is a replay of sorts of the dot-com crash: the stock market plunge harms the economy via a sharp falloff in capital expenditures, which had depended on the mania scenario continuing, and via the effect of the loss of stock market wealth on spending. As mentioned above, that could produce a bigger downdraft than in the early 2000s due to the much larger role of spending by the rich in propping up demand.
But yours truly is concerned about debt bombs in addition to an AI bust. In recent years, we have had far too many blow-ups that came seemingly out of the blue: Archegos and total return swaps. Silicon Valley Bank et al being way too dependent on super-sized deposits and also being dopes and loading up on long-dated Treasuries when rates were low. The latter was at risk of becoming a more serious financial crisis; the authorities ginned up a broad-based bailout mechanism. Now we have the unexpected bankruptcy of auto parts supplier First Brands leaving investors in credit funds who’d had an appetite for First Brands’ debt nervous about the caliber of due diligence on other loans in their portfolios. Some detail from Bloomberg:
Since First Brands Group filed for bankruptcy with over $10 billion of liabilities, the market has been focused on blows to its broadly syndicated investors and trade finance providers. Some of the debt has plunged to around 36 cents on the dollar… But the company benefited from another set of lenders that are now asking to be paid back: Private credit. These firms gave First Brands its last infusion of cash before its collapse, an unraveling that capped weeks of investor concern about the company’s use of opaque, off-balance-sheet financing… Sagard agreed to arrange a new $250 million facility for the company in April… Others were brought in, including Strategic Value Partners, which became the largest lender on the deal… The largest holder of the loan, listed as Bryam Ridge LLC with the same address as SVP’s headquarters, holds $100 million of the debt… Private credit firms pitch themselves on the fact they can provide fast funding from only a handful of sources… Private lenders also have limited options to cash out or sell investments when things go south… First Brands’ private credit deal was designed to boost up its balance sheet for acquisitions until the company pitched a refinancing of its leveraged loans… In July, Jefferies Financial Group Inc. was tapped to market a $6.2 billion refinancing for First Brands in the public markets. But the deal fizzled after investors asked for further diligence… If that deal had been successful, the private credit loan would have been paid off… Private credit lenders say they’re owed about $276 million in total… They’ll have to wait with around 80 other creditors to get paid back.”
We have been warning about private credit funds for some time. Like private equity, they amount to blind pools. Investors make capital commitments to the fund manager, who is typically part of a private equity complex. They are limited partners and thus have no say in what the fund manager actually does.
Another source of opacity, and of concern about leverage on leverage, is private equity itself, where fund managers have been borrowing at the fund level via so-called subscription lines of credit as well as against the companies themselves. The companies are often subjected to higher levels of operating leverage by what amount to sale-leasebacks of their real estate and heavy use of supplier credit.
And there are plenty of things that could put overly-levered entities into distress, above all continued deterioration of the economy due to tariff-induced price increases kicking in as the jobs outlook is also faltering, as well as direct disruption due to Trump policies, such as the blowback if Trump’s use of emergency authority to impose tariffs is found to be illegal by the Supreme Court. That would wreck Trump’s budget even before getting to exposure to having to make tariff refunds (an expert deems that not likely and regardless years away from being finally adjudicated, but law firms are nevertheless rounding up clients now, so there would be significant uncertainty about how that would play out).
In other words, debt wobbles could be the trigger for a stock market reset and interact with it. That happened in 1987. Experts expected Japan to be where a crash would occur. But the US stock market had taken a big run up in 1987, fueled significantly by leveraged buyouts. The two triggers for the 1987 crash, per the Brady Commission report, were a proposal by the Treasury Department to put a surtax on interest from highly leveraged transactions, and wobbliness in the Treasury market, due significantly to Japan adjusting its policies as part of the Louvre Accord intervention (my copy of the Brady Commission report is in storage; George Soros discussed the issue but admitted to being unclear on exactly what transpired). Unbeknownst to many, the Treasury market actually seized up after that meltdown; I was in Japan when the Fed called the Bank of Japan and told it to start buying Treasuries. The BoJ called the Japanese city banks, like my then employer Sumitomo Bank, and told them to swing into action.
In other words, we could see the effects of debt wobbles turn out to be the detonator for a stock market plunge, as happened in 1987. And as the linked Soros account reminds us, no one saw US stocks as all that exposed then. But we have a new variable now, that of the dependence of the dollar on the health of US capital markets. So an AI unwind, whether somewhat a function of debt market contagion or simply falling apart due to its own excesses, has the potential to be the Big One in terms of setting off a highly disruptive dollar plunge. So stay tuned.


IMF and BoE warn AI boom risks ‘abrupt’ stock market correction:
Good to hear from Kristalina who appears to have no problem with extending more debt to serial defaulter Argentina to the tune of $42B which is 34% of the IMF’s loans outstanding. Adding in $15B to the even bigger deadbeat Ukraine is chump change. Geez…all these people who went to the School of Failing Upwards! Where is that located anyway?
Where does the IMF get the money to be loaning out to deadbeats like Ukraine and Argentina? I presume if they take losses, US and European taxpayers are somehow on the hook?
Are the IMF’s Special Drawing Rights created out of thin air like fiat money in general or is there something behind them?
My friend was in the cabinet of Grigorieva’s predecessor at UNESCO and stayed on for a while before leaving herself. She says that Grigorieva’s mind is as laudable as her morals.
Thank you for the detailed overview of the debt and risk aspects of the AI bubble.
Let me offer a suggestion for what a significant amount of the data center buildout is actually going to be used for in the short and medium term in the US even if the consumer genAI push fails (as it likely will): domestic surveillance. The current administration is funding and building out a massive domestic repression apparatus. A lot of that work is going to be subcontracted out to entities like Palantir and others that do not require keeping all their workloads in the federal data centers. The federal data centers are already strapped on GPU capacity and if the surveillance is nationwide then there will need to be many regional and state-level data/processing facilities especially if this is going to be relying on live cellular/wifi data (locational and otherwise) that requires at-time processing as opposed to after the fact. The local/regional sites will handle the immediate processing needs and replicate it back to the federal sites for long term storage and data mining. It can never really be enough if they’re really going to try to track everyone, everywhere, all the time.
But as I understand it, these data centers are custom built for Nvidia GPUs, and there are few workloads suitable for them, and definitely not general purpose compute tasks.
So some of this capacity might be absorbed for other purposes, but it won’t be a drop-in replacement in many instances and will require retooling the data center site for general purpose compute systems.
But as I understand it, these data centers are custom built for Nvidia GPUs, and there are few workloads suitable for them, and definitely not general purpose compute tasks.
Your understanding is not correct.
Yeah… a very high level overview of how corporate end users are actually using LLMs that are deployed on their own GPUs looks something like this:
1. GPU capacity is contracted through a cloud provider or purchased off the shelf and deployed in a data center
2. an open source or custom-built LLM is deployed into something like Ollama on the GPU hardware, made accessible through Kubernetes APIs (which provide some pod lifecycle management short of real HA/DR), and network routing is then set up to the Kubernetes cluster/Ollama endpoints/model access API
3. the customer accesses the LLM over the network and, if desired, builds an application that calls into the LLM as needed (a rough sketch of such a call follows this list)
4. if customer wants to distill/fine tune their own model, they do that before completing steps 2-3 above
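To make step 3 concrete, here is a minimal sketch (in Python, assuming the requests library) of an application calling a self-hosted Ollama endpoint; the URL and model name are placeholders for whatever the cluster actually exposes.

```python
# Minimal sketch: an application calling a self-hosted LLM served by Ollama.
# The URL and model name are placeholders; in a real deployment the URL would
# point at the Kubernetes service or ingress fronting the Ollama pods.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def ask_model(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the Ollama generate endpoint and return the text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_model("Summarize our returns policy in two sentences."))
```

The point is that once the model is served, the “AI application” is just another HTTP client; nothing in the calling code cares which GPUs sit underneath.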
If you look at the Ollama docs you’ll see it supports Nvidia, AMD and some consumer-level GPUs or GPU accelerations like Apple’s.
The data centers themselves are not all buying the GPUs directly (some are, not all of them). One company I work with is doing AI-specific data centers with specialized thermal management chain and power management because of all the special GPU-specific requirements. But the actual GPUs are purchased one level down from those who build the data center (the hyperscalers).
Surveillance is only a small part of what they’re after (and let’s be honest, they’ve already achieved it). The real prize is complete information dominance: total influence over all forms of media, automated coercion and opinion herding, massive and effective propaganda campaigns, the dissolution of any shared consensus reality. That this is possible is already amply demonstrated by the mental health effects on users of AI companionship apps.
It wouldn’t surprise me if the ostensible weapons budget of the DoD and black budgets is no longer spent on weapons but on the simulation of weapons and capability – and that now includes AI. A mixture of Baudrillard and 1984. The sheeple will eat the AI reality. We were always at war with East Asia etc.
There will be no actual wars because the gerontocrats of East Asia and Airstrip One don’t want to risk their position, they just need an enemy to justify the social and economic repression at home.
There are quite a number of Philip K Dick stories and novels that do variations on that theme.
They’re going to need a lot of ground troops (or a lot of very capable robots) to keep all those distributed data centers secure and connected.
A few well-targeted EMPs will do a lot of work that no one can really defend against.
Surveillance and “persuasion” — the real utility of LLMs seems to be sucking in the vulnerable and scrambling their brains.
This was my first thought when I heard about the presidential national security memo Trump signed recently.
raspberry jam, thank you for saying the quiet part out loud. Reading Zitron’s analysis, all I could think is, well, if private markets have no ability for all this capacity to generate revenue, I imagine that the government might be interested in all the computing power for their surveillance schemes.
I suspect DHS, DOJ and DOW will be the scant last hope that some AI ventures might not collapse for lack of revenue.
But, where will those surveillors come up with over $300 billion a year. A lot even for the printing presses.
That’s less than $1000 per American.
If the fantasy is to replace as much of Government with AI as possible, then the argument is about how big the savings will be.
Protests and resistance work, in some cases. The people of Caledonia, WI resisted a Microsoft data center being constructed on farm land and Microsoft backed off:
https://jsonline.com/story/money/business/2025/10/08/microsoft-pulls-plans-for-data-center-in-caledonia-wisconsin/86580822007/
It actually helps that Microsoft’s president (not CEO), Brad Smith, is from Appleton, WI and actually cares about his home state. If only other corporate heads were as ethical. Not saying much, but for someone at that level, Smith is not a bad guy. Hopefully nobody learns about his decency.
This post I scraped off of substack shows the desperation factor to find use cases:
This problem was solved a decade ago by tools like Selenium and Browserstack. QA departments have been using these for a long time to automate functional website testing.
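For reference, the kind of deterministic functional test the commenter is describing is only a few lines of Selenium; the URL and element IDs below are placeholders for a real application under test.

```python
# Sketch of a conventional (non-LLM) functional UI test with Selenium.
# The URL and element locators are placeholders for a real application under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome and its driver are available locally
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # Deterministic assertion: either the dashboard loads or the test fails.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```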
And even if the so-called UI control benchmarks are really better for Gemini’s Computer use solution, vs. existing well-tested tools, this is merely an incremental performance enhancement, not anything revolutionary or game-changing.
If I were a QA manager, I’d certainly not pay good money for this. The PMC is going to have to work harder.
Sounds like basically a brute force solution to interacting with UIs. We see something similar with APIs. As if throwing a LLM at something and letting it brute force API calls is a genius idea. Non-determinism. What’s not to like?
And a malicious API response can cause so much mischief. Who knew that your LLM actually had rm -rf built-in?
I hate to tell you, but a significant part of my job is working with these companies who want to integrate AI into their developer tooling, and helping them define the appropriate application layer around an LLM for their use case. The vast majority of the use cases are test generation/automation and issue resolution. Selenium can’t be automated to the point where you can point it at a code repository, plug it into CI/CD, and have it create new tests and resolve the issues that arise in those tests with each merge to the original code base. For now they’re saying they will leave a human in the loop to provide oversight of the new systems. So the Gemini model isn’t a sign of desperation; it’s a sign that LLMs are going to continue to get more hyper-niche specific in their use cases, and developer tooling is one of the primary valid ones at this stage of the game.
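As a rough, vendor-agnostic illustration of the kind of pipeline step being described (not any particular product), the sketch below grabs the diff from a merge and asks an LLM endpoint to draft a candidate test for a human to review; the endpoint URL, model name, and output path are all placeholders.

```python
# Hypothetical CI step: ask an LLM to draft a test for the code that just changed.
# The endpoint, model name and output path are placeholders; the draft is written
# to a file for human review, not merged automatically.
import subprocess
import requests

LLM_URL = "http://llm.internal:11434/api/generate"  # placeholder endpoint

def changed_code(base: str = "origin/main") -> str:
    """Return the diff between the merge base and the current checkout."""
    result = subprocess.run(
        ["git", "diff", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def draft_test(diff: str) -> str:
    """Ask the model for a pytest covering the changed behavior."""
    prompt = (
        "Write a pytest test covering the behavior changed in this diff. "
        "Return only Python code.\n\n" + diff
    )
    resp = requests.post(
        LLM_URL,
        json={"model": "codellama", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    with open("tests/test_generated_candidate.py", "w") as fh:
        fh.write(draft_test(changed_code()))
    print("Draft test written; a human still has to review it before merge.")
```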
QA managers are not the ones signing off on this – it is the c-suite keen to automate away huge swathes of QA and support. Of course the individual contributors are pushing back. They don’t sign the contracts.
Somebody still has to provide input to these AI systems, so keeping a human around is more of a necessity than they think. QA departments have been on the edge of extinction in most corporate environments for a while now, what with the “Full stack” developer model that started at Amazon and Google.
That the PMC wants to automate them completely out of existence is no surprise. I pity the fool who gets the assignment to fix all the mistakes and bugs from these AI systems.
I am working with an AI “vertical SaaS” company in a regulated financial market. It has developed a very niche workflow platform, with the vision of the current market of 50,000 salespeople shrinking to 10,000, because AI will replace a huge swathe of administration: note taking, form filling, product searching and proposal generation.
Our AI-native customer management platform is selling like hot cakes. We’ve gone from negotiating sales to ten-man Mom-and-Pop shops in month 1, to 100-man “Mittelstand” players in month 2, to 1,000-man bulge bracket players in month 3.
A bust in the AI platform space would hurt because the survivors will stop adding new capacity and raise the price of existing capacity but we can work with any AI platform so we are not in mortal peril if some die.
This sort of AI application will survive. There is no doubt that industry plans to remove the pen-pushing and keep only the best sales and advisory performers.
I did this with perl scripts 25 years ago!
The collapse of the AI bubble would signify America’s loss in the global competition to develop AI technology. This would result in a decline in America’s prestige. Therefore, I believe Washington will prop up the bubble through policy.
Please tell me how that works. So we can also prop up buggy whips through policy?
OpenAI may receive subsidies, and nationalization is also possible.
The US already has a massive budget deficit. Subsidies require Congressional approval. If a bubble has imploded, that will mean the media will be all over the fact that AI was significantly an empty bag. Do you remember all of the dot-com hangover press of the “How dumb were we to believe the hype” sort?
Yet it allowed the Dot.com/ludicrous RE/MBS risk due to laws after offshoring dynamics based on bad maths and domination via it globally. BTW things are heaps different now, past is not indicative.
Everything is a coin toss now …
Hope you are settled in now and having some enjoyment lass …
The whole AI race thing is such a comical farce. I’m not sure what we’re racing China to accomplish, and why we must win? I assume the national security implications, better killing bots, better surveillance capitalism. That stuff is mostly a sideshow to LLMs though, which is what all the froth is about as near as I can tell. But then everything is “AI” now.
The grifting squillionaires aren’t going down without a fight and continue to catapult the propaganda. Just ran across this one from the BBC – ChatGPT image snares suspect in deadly Pacific Palisades fire
If you just saw the headline, you might think ChatGPT tracked down a criminal. But of course what actually happened was the suspect was apprehended and authorities later found a ChatGPT image on his phone.
These clowns are really trying to make it seem like there’s nothing “AI” can’t do, and unfortunately too many people are falling for it.
The one element that could keep this bubble afloat longer than it should is the chronic US budget deficit (~6% of GDP currently), in contrast to the dot-com era. Many commentators think that one of the major catalysts that finally burst the bubble 25 years ago was the Clinton surplus.
“Even worse, per a recent MIT study, 95% of the pilots at companies are failing. Yet virtually no one seems willing to stand back from the intensely-hyped story of inevitability and bright shining uplands.”
Yes, it will be a brave entity that first stands up. But that entity will come out far better financially than the multitudes of investors piling in to dump their investments after the bubble springs its first pinhole leak.
Bloomberg has been covering the topic of inflated valuations and ‘what does it actually do’ quite evenhandedly. They’re getting quite interested in all the ‘related party’ transactions between the AI companies, data center companies, chip makers, etc., which, in my view, is the part that will drag down many others when the wheels come off. Good summary and tables from earlier this week (should be an unlocked article for a few days):
https://www.bloomberg.com/news/features/2025-10-07/openai-s-nvidia-amd-deals-boost-1-trillion-ai-boom-with-circular-deals?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc2MDA0MzkxNiwiZXhwIjoxNzYwNjQ4NzE2LCJhcnRpY2xlSWQiOiJUM1MyOEFHUEZIT1cwMCIsImJjb25uZWN0SWQiOiIwNENDQUQ3OUQzRUI0NDUxOUQ2RDE2ODJFMkQxRTgyQiJ9.KUmTWKXudZPr9Kscu6SsLjl7WuOfrah0RcRzDWUFcxM
To the extent that many of the AI companies are privately held and privately funded (i.e., not publicly traded, but invested in by venture capital, hedge funds, or rich people), the vaporization of much of that capital when it does crash and burn may not be missed by most people. It’s the inflated valuations of publicly held companies pinning their hopes on AI as a ‘solution’ that causes me the worries.
thank you for the unlocked link!
that chart caught my eye too! it’s a glorified circle jerk
Possibly true, but also not a counterargument. The dot com bubble didn’t leave us with nothing, but there was still a crash.
Instead of a crash, how about AI stonks drop 20%, followed by a slow grind downwards over a period of 12-24 months, while everybody stands around saying “yeah, prices were a bit inflated…” all nodding in unison?
Reminds me Acacia of the old Greenspan Congressional hearing [collective of Fk wits seeking personal mammon] when he told them dead stop about the realities of fiat. It was Gov at – day one – that funded any asset formation and so then its about what assets does the nation want or need for the future. Heads Sxploded[tm] Vaht did you say … Gov shapes all things market and social = deviant heresy against the Natural Order bequeathed by the Creator thingy.
So now A.I. is another boondoggle like Dot.com and as jobs don’t pay, 401Ks dive pile on like subprime RE MBS, PE [rolls eyes], its all about yield today and never mind the bust ….
Funny thing is I am working on an old Queenslander at the moment; the husband is an ex mechanical engineer who got into finance back in the day and is doing well. Cough …. now he is engaged in servicing clients with VaR-level AI to ascertain risk for them. Mostly U.S./U.K. people.
The absurd thingy is they all say to me how much they appreciate my attention to detail and on the other hand, I see them being just the opposite because expectations of money … lifestyle, status, individual freedoms …
Do we NEED such a collapse to waken the world to Trump and the necessity of massive economic reorganization? Yes, many things can and will go wrong, but one is tempted to speculate that a collapse might take the political wind out of Trump’s sails.
There’s another important echo in the AI story from the Dot-com era. I only say this as I was a senior investment professional at one of the world’s largest asset managers back then.
There was a large, incestuous intertwining of financials that grew increasingly concentrated. Microsoft, Intel, etc. were essentially operating VC firms inside their other operations. They were taking stakes in private companies in the space — a la OpenAI — to finance the companies’ purchases of their products. But what really mattered was that they were marking the value of the companies up and running the unrealized capital gains through the P&L. This had the effect of making it look as though operating income was still growing vigorously, to the point that in 1999 something like 80% of operating income growth was unrealized capital gains (from memory).
The impact of all this was that in order to continue to please Wall Street, they started distorting other fundamental business decisions, which we are seeing repeated as they lay off staff to offset the AI spend.
Separately, I am really scared by the reporting of the acceleration of private credit and infrastructure lending into the sector. I can only assume that they assume that when things crash they will be bailed out, again.
Fair value through Profit & Loss is a feature (bug?) of IASB accounting. If you hold minority investments in companies and you cannot justify them as strategic and hold them at historic value, you are required to revalue them and book it through P&L.
There are some additional criteria that can permit you to take the fair value gains directly into Other Comprehensive Income, i.e. without distorting earnings per share, but where’s the fun and long-term incentive plan payout in that?
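To make the mechanics concrete, here is a toy illustration, with invented numbers, of how the two treatments land in reported earnings:

```python
# Toy illustration (invented numbers) of fair value through P&L vs. through OCI.
# A minority stake marked up from 100 to 150 produces 50 of unrealized gain.
operating_income = 40          # income from actually selling products
unrealized_gain = 150 - 100    # markup on a minority stake; nothing was sold

# Fair value through P&L: the paper gain lands in reported earnings.
earnings_fvtpl = operating_income + unrealized_gain    # 90, most of it paper

# Fair value through OCI: earnings show operations only; the gain sits in equity.
earnings_fvoci = operating_income                      # 40
other_comprehensive_income = unrealized_gain           # 50, parked outside EPS

print(f"Through P&L: earnings = {earnings_fvtpl} (over half is an unrealized markup)")
print(f"Through OCI: earnings = {earnings_fvoci}, OCI = {other_comprehensive_income}")
```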
When the old canal and railway manias went bust, at least there was a lot of useful infrastructure left behind on top of which real economic activity was subsequently built. Likewise the fibre networks left behind by the dot-com bust.
Can anyone with knowledge of the sector say the same of the current AI mania? Or is what I am hearing about the short useful life of the silicon utilized in the systems correct — something in the order of 3-5 years?
Any idea how failed AI promises might reverberate through power companies and water infrastructure?
A Good Bubble
That will not age well.