Satyajit Das: AI – Artificial Intelligence or Absolute Insanity?

Yves here. Satyajit Das provides a high-level but nevertheless very effective indictment of the scam known as AI. I hope readers will circulate this post widely.

By Satyajit Das, a former banker and author of numerous technical works on derivatives and several general titles: Traders, Guns & Money: Knowns and Unknowns in the Dazzling World of Derivatives (2006 and 2010), Extreme Money: The Masters of the Universe and the Cult of Risk (2011) and A Banquet of Consequence – Reloaded (2016 and 2021). His latest book is on ecotourism – Wild Quests: Journeys into Ecotourism and the Future for Animals (2024). This is an expanded version of a piece first published on 4 November 2025 in the New Indian Express print edition.

AI is tracing the familiar, weary boom-and-bust trajectory identified in 1837 by Lord Overstone: quiescence, improvement, confidence, prosperity, excitement, overtrading, convulsion, pressure, stagnation, and distress.

There are three primary concerns.

First, there are doubts about the technology. Building on earlier technologies such as neural networks, rule-based expert systems, big data, pattern recognition and machine learning algorithms, GenAI (generative AI), the newest iteration, uses LLMs (large language models) trained on massive data sets to create text and imagery. The holy grail is the ‘singularity’, a hypothetical point where machines surpass human intelligence. It would, in Silicon Valley speak, lead to ‘the merge’, when humans and machines come together, potentially transforming creativity and technology.

LLMs require enormous quantities of data. Existing firms in online search, sales and social media platforms can exploit their own data troves. This is frequently supplemented by aggressive and unauthorised scraping of online data, some of it confidential, leading to litigation around access, compensation and privacy. In practice, most AI models must rely on incomplete data which is difficult to clean to ensure accuracy.

Despite massive scaling up of computing power, GenAI consistently fails at relatively simple factual tasks due to errors, biases and misinformation in the datasets used. AI models are adept at interpolating between points within their training data but poor at extrapolating beyond it. Like any rote-learner, they struggle with novel problems. Their ability to act autonomously within dynamic environments remains questionable. Cognitive scientists argue that simply scaling up LLMs, which are sophisticated pattern-matchers built to autocomplete rather than proper and robust world models, will disappoint. Claimed progress is difficult to measure as benchmarks are vague and inconclusive.
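
The interpolation versus extrapolation point can be illustrated with a toy curve-fitting example, a stand-in for the far more complex LLM case: a model fitted to data behaves tolerably between the points it has seen and fails badly outside them. A minimal Python sketch (the function, polynomial degree and ranges are arbitrary illustrative choices, not anything from the article):

```python
# Toy illustration of interpolation vs extrapolation (not an LLM, just a
# fitted model): accurate inside the training range, wildly off outside it.
import numpy as np

x = np.linspace(0, 10, 20)          # "training data" range
y = np.sin(x)
coeffs = np.polyfit(x, y, deg=9)    # fit a 9th-degree polynomial

inside = np.polyval(coeffs, 5.0)    # interpolation: within the data
outside = np.polyval(coeffs, 15.0)  # extrapolation: beyond the data
print(f"sin(5.0) = {np.sin(5.0):.3f}, model says {inside:.3f}")     # close
print(f"sin(15.0) = {np.sin(15.0):.3f}, model says {outside:.3f}")  # far off
```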

Cheerleaders miss that LLMs do not reason but are probabilistic prediction engines. A system which trawls existing data, even assuming that is correct, cannot create anything new. Once existing data sources are devoured, scaling produces diminishing returns. Rather than fully generalisable intelligence, generative models are regurgitation engines struggling with truth, hallucinations and reasoning.
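
The ‘probabilistic prediction engine’ claim is easiest to see in a deliberately crude sketch. The toy bigram autocompleter below (illustrative only, and orders of magnitude simpler than any real LLM) can only ever emit words it has already seen, recombined in proportions learned from its training text:

```python
# Toy "autocomplete": predicts each next word from frequencies observed in
# the training corpus. It can recombine, but never invent, its vocabulary.
import random
from collections import Counter, defaultdict

corpus = "the market can stay irrational longer than you can stay solvent".split()

# Count which word follows which in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, length=8):
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:                     # nothing in the data to continue from
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the market can stay solvent": pure recombination
```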

AI models can take over certain labour-intensive tasks like data driven research, journalism and writing, travel planning, computer coding, certain medical diagnostics, testing and routine administrative tasks like handling standard customer service queries. Its loftier aims may prove elusive. Predictions of medical breakthroughs have disappointed, although pre-OpenAI machine learning models, pattern recognition engines and classifiers, in use for years, continue to be useful.

For the moment, GenAI, an ill-defined marketing rather than technical term, remains a costly parlour trick for some low-level applications, making memes and allowing scammers to deceive and defraud – the “unfathomable in pursuit of the indefinable”.

Second, financial returns may prove elusive. Capital expenditure on AI is expected to total $5-7 trillion by 2030. AI startup valuations based on the latest rounds of funding total $2.3 trillion, up from $1.69 trillion in 2024 and $469 billion in 2020. But AI’s capacity to generate cash and returns on that investment remains questionable.

Revenues would have to grow over 20 times from the current $15-20 billion per annum just to cover current annual investment in land, buildings, rapidly depreciating chips, and power and water operating expenses. Revenues totalling more than $1 trillion may be required to earn an adequate return. Microsoft’s Windows and Office, among the world’s most used software, generate less than $100 billion in commercial and consumer revenue. Around 5 percent of ChatGPT’s 800 million users currently pay for the service. Microsoft’s CEO drew the ire of true believers when he argued that AI had yet to produce a profitable killer application to match the impact of email or Excel.
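
The gap is stark when laid out as arithmetic. A back-of-envelope check using the article’s figures (the ~$400 billion annual spend is an assumed figure, implied by “over 20 times” current revenue, not a number stated above):

```python
# Back-of-envelope check of the revenue arithmetic above, using the article's
# own figures (rough, in $bn; annual spend of ~$400bn is an implied assumption).
current_revenue_range = (15, 20)   # current annual GenAI revenue, per the article
annual_spend = 400                 # assumed, implied by "over 20 times" revenue
adequate_return_revenue = 1_000    # revenue said to be needed for adequate return

for rev in current_revenue_range:
    print(f"From ${rev}bn: {annual_spend / rev:.0f}x growth to cover spend, "
          f"{adequate_return_revenue / rev:.0f}x for an adequate return")
# From $15bn: 27x to cover spend, 67x for an adequate return
# From $20bn: 20x to cover spend, 50x for an adequate return
```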

The hope is that AI will be paid for out of higher productivity and corporate profits. But 95 percent of corporate GenAI pilot projects have failed to deliver measurable revenue gains. After cutting hundreds of jobs and replacing them with AI, many firms were subsequently forced to reemploy staff when the technology proved deficient. Corporate interest is already showing signs of plateauing.

Monetisation of AI faces other uncertainties. Several Chinese firms, such as DeepSeek, Moonshot as well as Bytedance and Alibaba, have developed cheaper models which cast doubts about the capital investment intensive approach of Western firms. China’s favoured open-source design would also undermine the revenues of firms which have invested heavily in proprietary technology. Required electricity and water supplies may prove to be constraints.

In the meantime, AI firms remain cash-burning furnaces. In the first half of 2025, OpenAI, owner of ChatGPT, generated $4.3 billion in revenue but spent $2 billion on sales and marketing and nearly $2.5 billion on stock-based compensation, posting an operating loss of $7.8 billion.
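
Those figures imply a cost base far beyond sales, marketing and staff compensation. A quick reconstruction from the numbers quoted above (the residual is an inferred figure, presumably dominated by compute and R&D, not a reported one):

```python
# Reconstructing OpenAI's H1 2025 arithmetic from the figures quoted above
# (in $bn). The "other costs" residual is inferred, not a reported number.
revenue = 4.3
sales_marketing = 2.0
stock_comp = 2.5
operating_loss = 7.8

total_costs = revenue + operating_loss          # loss = total costs - revenue
other_costs = total_costs - sales_marketing - stock_comp
print(f"Total costs: ${total_costs:.1f}bn")                               # ~$12.1bn
print(f"Implied other costs (compute, R&D etc.): ${other_costs:.1f}bn")   # ~$7.6bn
```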

Third, there are financial circularities of the kind seen during the dot-com boom. CoreWeave, an equipment rental business trying to cash in on the AI boom, purchases graphics processors in demand for AI applications and rents them to users. Nvidia is an investor in the company, and the bulk of its revenues comes from a few customers. There are concerns about CoreWeave’s accounting practices, especially the assumed rate of depreciation of the chips, and its significant borrowings.

In 2025, Nvidia, the backbone of the boom, agreed to invest $100 billion in OpenAI, which in turn bought a similar dollar value of GPUs from it. OpenAI proposed to invest in chipmakers AMD and Broadcom. There are side arrangements with Microsoft. Figure 1 sets out some of the complex interrelationships.

Figure 1: AI Firm Inter-relationships and Cross-Investments

This intricate web of linkages creates risks. It complicates ownership and creates conflicts of interest. It is not clear how any of these commitments will work or be funded if they proceed. OpenAI’s ability to finance these investments depends on continued access to new money from investors because it does not currently have the resources to meet many of these long-term obligations.

These transactions distort financial performance. The firm selling capital goods reports sales and profits while its funding of the sale is treated as an investment. The buyer depreciates the cost over several years. Given that Nvidia upgrades its chip architecture regularly, depreciation periods of five years or longer seem optimistic. The result is that dubious earnings boost share prices in a dizzying financial merry-go-round.
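
The sensitivity to the depreciation assumption is simple arithmetic. A sketch with a hypothetical $10 billion chip purchase (the amount and useful lives are illustrative choices, not any firm’s actual figures):

```python
# Straight-line depreciation of a hypothetical $10bn GPU purchase under
# different assumed useful lives. Longer lives defer expense and flatter
# current earnings, even if the chips are economically obsolete sooner.
def annual_depreciation(cost_bn: float, useful_life_years: int) -> float:
    """Straight-line: the same charge hits earnings in each year of the life."""
    return cost_bn / useful_life_years

cost = 10.0  # hypothetical purchase, in $bn
for life in (2, 3, 5):
    print(f"{life}-year life: ${annual_depreciation(cost, life):.1f}bn expense per year")
# 2-year life: $5.0bn/yr vs 5-year life: $2.0bn/yr of reported expense
```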

The AI bubble, with its growing gap between expectations, investment and revenue potential, eerily resembles the 1990s. But it is much larger: investment may be 17 times that of the 2000 dot-com bubble and four times the 2008 sub-prime housing bubble.

AI’s acolytes deny any excess and argue that this time is different because the boom is financed by equity capital. In fact, a large proportion is funded by debt, with the amount tied to AI totalling around $1.2 trillion, 14 percent of all investment-grade debt.

The funding pattern is intriguing. Hyperscalers, firms such as Microsoft, Meta, Alphabet and Oracle that build and operate large data centres providing on-demand cloud computing, storage and networking services, are providing much of the funding alongside venture capital investors. These firms are currently spending around 60 percent of operating (not free) cash flow on capital expenditure, the vast majority of it to support AI projects. This is supplemented by borrowing, relying on their credit standing, to finance their investments. Increasingly, a significant proportion of the funding is being provided by private credit, with expected volumes as high as $800 billion over the next two years and $5.5 trillion through to 2035. Given the high-return, high-risk appetites of these lenders, the level of financial discipline applied to these loans remains uncertain.

In effect, these large firms are now acting as financiers, borrowing money which is on-lent or invested in AI start-ups with unclear prospects. This exposure is troubling. Investor and lender assumptions that their exposure is to a strong firm are undermined when that firm is heavily invested in speculative AI ventures. Microsoft’s share of OpenAI’s losses is significant: over $4 billion in the latest quarter, representing around 12 percent of its pre-tax earnings.

Oracle’s experience is salutary. Its shares rose 25 percent when it announced a transaction to provide cloud computing facilities to OpenAI. The data centres do not currently exist and will have to be constructed. The transaction requires Oracle, which is already significantly leveraged, to borrow funds to build these centres, meaning that the firm is taking on significant exposure to OpenAI. By December 2025, investor concern was palpable. Given Oracle’s net debt of over $100 billion, which will need to increase substantially to finance the data centres, the cost of insuring against an Oracle default rose sharply, which will presumably flow through into the value of existing debt and the cost of future borrowing. A downgrade from its current BBB rating, low investment grade, is possible, potentially to non-investment or junk grade. Its share price has fallen back to around its level before the announcement of the OpenAI transaction. While Microsoft, Meta and Amazon have stronger balance sheets, the risks are not dissimilar.

The impact of the AI boom on the wider economy is material. AI companies account for 75-80 percent of US stock returns and earnings growth and 90 percent of capital expenditure growth. AI-related spending has added around a full percentage point to 2025 US growth, roughly 40 percent of the total. Any retrenchment would affect the wider economy. It would also result in financial instability because of the direct and indirect exposure of banks and financial institutions to the AI sector. It is not inconceivable that some tech firms may require bailouts, such as that engineered for Intel, alongside familiar support for financiers, who will plead that without assistance the economy will collapse.

Investors have convinced themselves that the greater risk is underinvesting, not overinvesting. Amazon founder Jeff Bezos hails it as a “good kind of bubble”, arguing that the money spent will bring long-term returns and deliver gigantic benefits to society, the tech-bro’s persistent bromide. Investors should be cautious. In the 1990s telecoms and fibre-optic cable bubble, investors drastically overestimated the capacity required. The percentage of lit or used fibre-optic capacity today, much of it installed during the dot-com boom, is around 50 percent, and global average network utilisation is 26 percent.

Investors believe that they have minimised risk by avoiding direct exposure to AI firms, investing instead in firms like Nvidia which provide the ‘picks and shovels’ of the revolution. The case of Cisco, for which the investment case during the halcyon days of the 1990s was similar, provides an interesting benchmark. It briefly became the world’s most valuable company on the largely correct assumption that its routers and other products would be crucial to the Internet. While the company’s financial performance has been generally steady, investors in Cisco lost out as its share price plummeted in 2000, only regaining its peak after 25 years.

When the dot-com boom ended, Microsoft, Apple, Oracle and Amazon fell 65, 80, 88 and 94 percent respectively, taking 16, 5, 14 and 7 years to recover their 2000 peaks. The economy slowed, requiring government support and what were then historically low interest rates to sustain economic activity, which set off the housing boom that culminated in the 2008 crisis.

Consensual Tolkien-esque hallucinations notwithstanding, it would be surprising if the ending is different this time.

© Satyajit Das 2025 All Rights Reserved

 


18 comments

  1. ambrit

    “…financiers, who will plead that without assistance the economy will collapse.”
    The other point to this claim is that, for an increasing percentage of the population, the economy has already collapsed.
    I really wonder if America can endure another round of financial bail-outs. The sheer scale of the malinvestment involved in this AI Bubble may make the governmental rescue of the financiers impossible this time. Is it possible to reach a point where the “socialization of the losses” leads to the demise of the society involved?
    We may finally discover the answer to the perennial question; “What came first, the goose or the golden egg?” via reverse financial engineering.
    Stay safe. Keep clipping those coupons.

    1. Arthur Williams

      Apparently the riddle has been solved. Some particular kind of protein, I believe, was found in eggs that only exists in the chicken. Therefore the chicken had to come first in order to lay that egg, lol.

    2. Kurtismayfield

      I don’t know how either party politically survives a bailout of these foolish investments. All the little people get is cuts and demands to pay more, and then you toss billions at companies that wasted all those resources? The feedback will be highly negative.

  2. fjallstrom

    I have been following the stock market valuation of Nvidia and Oracle to get a hint of when this mad bubble will burst as one makes the chips and the other the data centres (I have no money riding on those ponies, and this is not financial advice).

    Nvidia is currently down 10% from what looks like a peak in late October – early November and Oracle is down 40% from what looks like a peak in late September.

    Considering the amount of billions that need to be continuously shovelled into this bubble, I figure that when the line goes down enough the fear of losing will overtake the greed of “buying the dip” and the whole thing will come crashing down. But I could be wrong; I usually err on the side of assuming people should be able to see a bubble for what it is.

    1. rob

      I have been watching Nvidia too, just to look, after so many people had drunk the AI kool-aid. My overwhelming cynicism prevents me from believing the hype.
      Last April, when DeepSeek came out and Nvidia dropped to $94, it seemed like reality was about to set in… but then it climbed back up past the $188 mark… a quick doubling of money… damn.
      Gold too… silver too.
      Too bad I didn’t know the future, back then.
      The market really can remain irrational longer than you can remain solvent.

  3. Arthur Williams

    The first crack in the Tower of Number Go Up may have appeared. Oracle posted its quarterly results and not only did it bigly miss the expected earnings, it spent far more than anyone had forecast. Oracle downplayed it by pointing to the 68 billion dollar increase in its future receivables due to the signing of large contracts. It said Stargate was progressing as expected, which is nonsense: 2 of the 11 buildings which need to be built and operational by June(!) have been built. OpenAI still doesn’t have the money to pay Oracle (or anyone else for that matter) and Oracle doesn’t have the 40 billion to buy all the GPUs they need to provide.
    I expect we’ll soon see a lot more urgent messaging about how America is losing the AI race and that the government is going to have to loosen the purse strings lest the foreign menace wins and enslaves the losers. I think there’s a good chance OpenAI dies this year when it hits the wall of its debt. Or maybe Microsoft yawns and swallows Altman and friends before returning to its slumber. As Yves et al are fond of saying, pass the popcorn. The lights are dimming and the show will soon start.

  4. Ignacio

    Gigantic benefits (per Bezos, hehe) from data-driven research, journalism(!) and writing, travel planning, computer coding, certain medical diagnostics and some routine administrative tasks? Will I still have to prepare the spaghetti al pesto, drink water instead of data, and pooh-pooh the usual? Scam Altman said the other day that chatbots are now “essential” to raise children. To me, a clear indication of the bubbly state of AI.

    A problem barely touched here is the use of resources (electricity, water, raw materials), not to mention the externalities of climate change. Yet, according to some studies, AI would (somehow) help curb carbon emissions by more than enough to offset the emissions from running AI itself. The article linked is an exercise in wishful thinking authored by a partner of a consulting company based in London. It is not a scientific article and cannot even be considered data-driven research. It is an exercise in magical thinking in which, somehow, AI takes charge of everything and problems are solved.

  5. AG

    a short text, machine-translated from German:

    Ones and zeros
    Can Large Language Models develop consciousness?

    By Daniel H. Rapoport
    https://archive.is/R5MS8

    “(…)
    To assess whether LLMs can develop consciousness, we would need a rough understanding of what consciousness is. Unfortunately, no one possesses this understanding yet, not even a rough one. No psychologist, no philosopher, no pastor, and no biologist could say for sure what consciousness really is.
    We know only—purely phenomenologically, from a kind of self-evidence—that consciousness has something to do with inner experience. But we have also learned—both from simple optical illusions and from the enigmatic world of quantum mechanics—that the world is often quite different from how it appears to us. The self-evidence of the capacity for experience need not be as true and fundamental as it seems to us. Even if consciousness were something like an optical illusion, we would still like to know how it manages to generate an “inner experience.” If only to assess whether the potential to produce such illusions also lies dormant in LLMs (Low Life Mechanisms).
    (…)”

  6. Michael Fiorillo

    Given the ongoing enshittification of the Internet, how can LLMs not have an ever-decreasing signal-to-noise ratio? I have no tech knowledge whatsoever, but it just seems to follow from the models at work.

  7. ISL

    Patrick Boyle did a more comprehensive review a month ago:

    https://www.youtube.com/watch?v=NbL7yZCF-6Q

    It is very thoroughly researched, covering the power issues as well, not just stock prices.

    I wonder if AI helped write this paper: Even Das should know that Deepseek is FREE not cheaper

    “Several Chinese firms, such as DeepSeek, Moonshot as well as Bytedance and Alibaba, have developed cheaper models which cast doubts about the capital investment intensive approach of Western firms.”

    There is no competing with free (!!!!); there is potential competition with cheaper. Sounds like one of those “hallucinations.”

    1. Jerren

      I took ‘cheaper’ to be in reference to DeepSeek’s inference costs, which are reportedly a fraction of ChatGPT’s.

  8. XXYY

    Good, concise roundup of problems in the AI world, which are well known but cannot be repeated too often in the current information environment.

    This is a good paragraph (my emphasis):

    Cheerleaders miss that LLMs do not reason but are probabilistic prediction engines. A system which trawls existing data, even assuming that is correct, cannot create anything new. … Rather than fully generalisable intelligence, generative models are regurgitation engines…

    The emphasized points here are impossible to argue with, yet they are the death knell of the whole technology. Almost every claim for AI assumes that its output will be beyond anything humans have been able to do to date, yet its output will, at best, consist of precisely what humans have been able to do to date.

    However the following paragraph goes off the rails in my opinion (my emphasis, once again):

    AI models can take over certain labour-intensive tasks like data driven research, journalism and writing, travel planning, computer coding, certain medical diagnostics, testing and routine administrative tasks like handling standard customer service queries. Its loftier aims may prove elusive.

    Das acknowledges that AI produces strictly rote material that is unreliable and unfixable. This precludes using it for anything where mistakes are not tolerable, which obviously includes anything safety-critical like research, medical diagnostics and computer coding, but also anything requiring imagination or creativity, or even just getting better over time, like journalism and writing, travel planning, administration, and interfacing with customers.

    In fact, it’s actually quite difficult to think of any task that’s worth doing, but where mistakes are of no consequence whatsoever.

  9. Brian Westva

    Thanks for this article. I have been skeptical about AI from day one. There isn’t anything about this technology that can be sustained over the long term, such as the financing, electricity demand, water use, etc. I watched an interview the other day of Ed Zitron by Newsweek. Ed pointed out that AI doesn’t do much. It isn’t reliable, accurate, or effective in many of the tasks it is asked to do. I also read about the executive order that Trump signed about AI last week, and there has been speculation that AI will be bailed out by the government. So, putting two and two together and donning the tin foil hat, I do wonder if the government won’t end up using AI as part of the massive surveillance state apparatus that already exists. Huge amounts of surveillance data are available on everybody, and AI will have a field day predicting who will commit the next crime. If AI gets the facts wrong, good luck proving that you didn’t commit the crime you are accused of. Jury trials are going away in some cases in the UK. What better justification to keep America “safe”.

  10. Simple John

    I appreciate the fine summary of many points that have exposed gen AI as average, as in mediocre.
    I’ve never prompted an LLM or image diffuser for the following reason:
    Intelligence can’t exist without goals. AIs don’t have goals. Therefore I’d just be looking at algorithm-generated words and pictures. I keep coming back to kaleidoscopes as an analogy: a marvellously large number of possible patterns generated almost magically, but hardly worth investing in.
    Chatbots, with the emphasis on “chat”, have become a tech marketers’ ultimate dream product – pornography (artificial engagement) to meet all one’s social needs, not just the erotic ones. Sam Altman is a pied piper enjoying our children’s innocence while we hook up with a bot.
