Nvidia’s narrative took a major hit this week from multiple directions: the emergence of a credible rival, OpenAI’s struggles, and a trade-war pincer that has the company caught between Trump and China.
Is a Government Backstop the Bull Case?
Nvidia’s narrative, which it kicked off with the launch of the Ampere architecture and the A100 chip in 2020, subsumed any competing storylines in the post-pandemic American stock markets in 2022 and has engulfed the entire American economy under Trump.
Nvidia’s narrative that Large Language Models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini ARE the future of technology and the global economy has made it the world’s largest corporation by market cap.
The Trump administration may be all-in on Nvidia’s narrative and may be signaling its willingness to backstop the industry to prevent the AI bubble from popping.
PayPal mafioso and Trump tech czar David Sacks indicated earlier this month that the government would not be backstopping OpenAI.
I argued elsewhere that Sacks’ tweet contributed to the downturn in AI stocks we saw in the first weeks of this month.
But on the 24th, Sacks seemed to signal a change in his (very influential) thinking, in response to a Wall Street Journal piece headlined “How the U.S. Economy Became Hooked on AI Spending” and subtitled “Growth has been bolstered by data-center investment and stock-market wealth. A reversal could raise the risk of recession.”
According to today’s WSJ, AI-related investment accounts for half of GDP growth. A reversal would risk recession. We can’t afford to go backwards.
— David Sacks (@DavidSacks) November 24, 2025
Paired with the White House’s executive order titled “Launching the Genesis Mission,” released the same day, Sacks’ tweet was taken by many as bullish news for Nvidia’s narrative and the larger AI bubble.
The closest thing I could find to an action word in the EO was this promise to:
…build an integrated AI platform to harness Federal scientific datasets — the world’s largest collection of such datasets, developed over decades of Federal investments — to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.
Critics of the Nvidia narrative will of course point out that when a government backstop is the bull case, the bubble is in trouble.
Maybe that’s why the handful of naysayers who’ve been fighting the tide are now being joined by some major investors who are putting their money where the critics’ keyboards have been.
Team Bear Beefs Up
Pioneering Nvidia bears like AI scientist Gary Marcus and journalist Ed Zitron have now been joined by notable investors, including “Big Short” contrarian Michael Burry and billionaire Stanley “I killed the Pound” Druckenmiller, who sold all 214,060 of his funds’ Nvidia shares.
Marcus and Zitron remain opinion leaders in the space, however.
Marcus is currently dealing with the emergence of a class of rival AI experts who’ve been on the AGI (Artificial General Intelligence) bandwagon and are now getting off.
AGI is the patently nonsensical claim that LLMs are just a few months away from creating super-intelligent, self-replicating machines.
Naturally, belief in AGI has been the conventional wisdom in Silicon Valley for the last couple of years and continues to be a big part of the bulls’ case for the Nvidia narrative.
Marcus has taken lots of heat for calling bullshit on LLMs as the road to AGI from the get-go, and is now expressing mixed feelings about the big names who are joining him on the critical side.
Those names include Meta’s Chief AI Scientist Yann LeCun and OpenAI co-founder Ilya Sutskever.
As for Ed Zitron, his latest, “The Hater’s Guide To NVIDIA,” is well worth the subscription price and the estimated 54-minute reading time.
It includes an excellent, concise explanation of why Nvidia has become the hero of the Nvidia narrative that has reshaped the U.S. economy:
Back in 2006, NVIDIA launched CUDA, a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company.
The thing is, GPUs are great for parallel processing – essentially spreading a task across multiple, by which I mean thousands, of processor cores at the same time – which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example.
CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that Nvidia’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs.
Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating off other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition would give NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem.
This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of specialized GPUs for AI workloads.
…
Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly…NVIDIA has been printing money, quarter after quarter, going from a meager $7.192 billion in total revenue in the third (calendar year) quarter of 2023 to an astonishing $50 billion in just data center revenue (that’s where the GPUs are) in its most recent quarter, for a total of $57 billion in revenue, and the company projects to make $63 billion to $67 billion in the next quarter.
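To make the parallel-processing point in that excerpt concrete, here’s a minimal Python sketch of my own (the function name and matrix sizes are invented for illustration; this is not NVIDIA’s or Zitron’s code): every element of a matrix product, the math that underpins LLMs, is an independent dot product, which is exactly the kind of work a GPU can spread across thousands of cores at once.

```python
# Toy illustration, not production GPU code: each output element of C = A @ B
# depends only on one row of A and one column of B, never on any other output
# element. That independence is what lets a GPU assign every (i, j) pair to its
# own core, while a single CPU core grinds through the nested loop below.
import numpy as np

def matmul_one_core_at_a_time(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Serial matrix multiply: the work a GPU would spread across thousands of cores."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            c[i, j] = np.dot(a[i, :], b[:, j])  # one of m * n fully independent tasks
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 32)), rng.random((32, 48))
    # The slow serial loop and NumPy's optimized matmul produce the same result.
    print(np.allclose(matmul_one_core_at_a_time(a, b), a @ b))  # True
```

Not every workload decomposes that cleanly, which is Zitron’s caveat about parallel processing not helping every task.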
But never fear, Zitron also puts a stake through the heart of the Nvidia narrative over the course of several thousand words. Here’s a key point:
NVIDIA makes so much money, and it makes it from a much smaller customer base than most companies, because there are only so many entities that can buy thousands of chips that cost $50,000 or more each.
Zitron cites pseudonymous finance poster “Just Dario” as someone who’s provided key insights into the workings of Nvidia. Dario’s latest piece on the company is worth reading in full, but the TL;DR version of his role in the larger Nvidia narrative wars can be grasped from glancing at these tweets about whether or not Enron is a valid comparison point for Nvidia:
Nvidia says it’s not Enron. I actually agree with them, they are Envidia https://t.co/8zN308lZKK pic.twitter.com/IeahPNSd4Q
— JustDario 🏊♂️ (@DarioCpx) November 25, 2025
I should note that Shanaka Anslem Perera, a pretty much unknown Substack writer (and, I suspect, an AI-using one), got the credit from Yahoo Finance for triggering Nvidia’s now-infamous “we’re not Enron” memo.
Yahoo also quoted Jim Chanos, who is “famous for predicting the fall of Enron” and who “thinks the comparison between Nvidia and Lucent bears weight.”
“They’re [Nvidia] putting money into money-losing companies in order for those companies to order their chips,” Chanos said.
As for “Big Short” Burry, his new Substack is a bit rich for my blood (though serious investors will likely find it a bargain). His latest contribution to Nvidia’s narrative involves comparing Nvidia to Cisco before the dot-com bust:
Michael Burry's case against Nvidia: pic.twitter.com/XafV3SLYVT
— Nat Wilson Turner (@natwilsonturner) November 26, 2025
This adds to Burry’s previous X.com post alleging that the AI industry’s accounting practice of “understating depreciation by extending useful life of assets” to artificially boost earnings is “one of the more common frauds of the modern era.”
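To see why that matters, here’s a toy, back-of-the-envelope sketch with invented numbers (mine, not Burry’s, and no relation to Nvidia’s actual figures): under straight-line depreciation the annual charge is simply cost divided by assumed useful life, so every year added to that assumed life deletes expense and adds the same amount to reported earnings.

```python
# Hypothetical figures for illustration only; not any company's real accounts.
# Straight-line depreciation: annual expense = asset cost / assumed useful life.

GPU_FLEET_COST = 6_000_000_000               # assumed spend on AI accelerators
PROFIT_BEFORE_DEPRECIATION = 4_000_000_000   # assumed annual operating profit, pre-depreciation

def reported_earnings(useful_life_years: int) -> float:
    """Annual earnings after the straight-line depreciation charge."""
    depreciation = GPU_FLEET_COST / useful_life_years
    return PROFIT_BEFORE_DEPRECIATION - depreciation

if __name__ == "__main__":
    for life in (3, 6):
        print(f"{life}-year useful life: ${reported_earnings(life) / 1e9:.1f}B in reported earnings")
    # Output:
    # 3-year useful life: $2.0B in reported earnings
    # 6-year useful life: $3.0B in reported earnings
    # Same chips, same cash out the door; stretching the schedule "adds" $1B a year on paper.
```

The hardware ages on its own schedule regardless of what the spreadsheet assumes, which is the crux of Burry’s allegation.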
And of course, Burry’s nearly billion-dollar bet against Nvidia’s narrative has impacted our story as well.
The Mid-Wits Weigh In
No debate in 2025 would be complete without one of the Abundance bros weighing in.
Naturally, Ezra Klein’s “Abundance” co-author Derek Thompson (co-writing with Understanding AI founder Timothy B. Lee) comes down in the middle with “Six reasons to think there’s an AI bubble — and six reasons not to” and shrewdly saves the bull case for paying subscribers. Talk about knowing your audience.
The Real Bulls Include Jim Cramer and AGI Crazytown’s Finest
But I’ll leave the real bull case to the legendary CNBC commentator Jim Cramer (I should note that the Inverse Jim Cramer ETF has been getting killed lately).
His case boils down to growth:
I am a huge believer in growth stocks, and …growth stocks are what draws me to the hyperscalers for my travel trust. Now we own many, many stocks from other universes: drug stocks, aerospace materials, data center builders. They have long term growth stories, but they’re not turbocharged with the resources of these gigantic companies.
That’s what always brings me back to the Mag-7. These stocks are prominent because of their success. The companies they represent have bountiful profits, which is why they could rise to their lofty trillionaire status in the first place. It’s why I don’t kick them out when they’re down. In fact, it’s why I might buy them for the trust.
But the far more entertaining bull case for the Nvidia narrative is made by Utopia believers like Tomas Pueyo, who is helping his 119,000 Substack subscribers prepare for a post-scarcity Utopia brought about by “super-intelligence.”
Admittedly, I have an immediate and utter disdain for anyone pitching imminent Utopia, but after a couple of gummies Pueyo’s stuff becomes quite entertaining. Here’s a taste from his latest, “AI: How Do We Avoid Societal Collapse and Build a Utopia?”:
In the previous article, we saw that we’ll eventually live in a utopia, a world of full abundance and without the conflicts over scarce resources that have plagued humanity throughout history.
Well then.
But lest readers think Mr. Pueyo isn’t concerned with the obstacles in our path, there’s more:
intelligence will not be infinite until there’s infinite energy and infinite compute, which will also need plenty of raw materials and land. So the scarcity of intelligence might not even be completely eliminated until more of these inputs are sufficiently abundant.
Assuming AIs still serve humans, how will they prioritize? They will need a signal of what matters most to humans. How will humans convey that? Through money.
…
So it’s very unlikely that we’re going to get rid of capitalism. We need the price signal to convey the optimal allocation of capital.
But in that world, where humans are not working anymore because they’ve been fully automated by AIs more intelligent than themselves, how do you decide how much money each person should have?
I’ll leave it up to readers who are really dying to know just how many angels are dancing on the top of his pinhead to read more of his work.
Just be aware that the Nvidia narrative bears are up against a whole lot of people who really, really, really want to believe and are clapping as hard as they can to keep Tinkerbell alive and the AI bubble inflated.
Unfortunately for the bulls and Nvidia’s narrative, AI slop is having very real-world impacts. The kind people notice.
— Nat Wilson Turner (@natwilsonturner) November 26, 2025
Posts Related to Nvidia’s Narrative:
- How Google Is Winning Struggle Among AI Giants
- Yet More AI Bubble Worries
- Has Ed Zitron Found the Fatal Flaw with OpenAI and Its Flagship ChatGPT?
- OpenAI Laying the Groundwork for Massive Federal Bailout
- OpenAI Slipped Shopping Into 800 Million ChatGPT Users’ Chats
- Meta’s Mark Zuckerberg Loves To Throw Good Money After Bad
- OpenAI as The Money Pit

