“If you don’t have AI, there’s nothing going on.” Those are Peter Thiel’s words in an interview with Ross Douthat for the podcast Interesting Times. To put it into context, Thiel is referring to something he calls the “stagnation hypothesis.”
Thiel’s view is not unique—others such as Tyler Cowen and Robert Gordon have proposed something similar. The idea is that there has not been any significant advancement in any field (the sort that might change our understanding of it) since the 1970s—or, at least, not in the past 50 years in a way that clearly signals progress.
According to Thiel, for a significant breakthrough to happen, risk must be taken in any field—be it medicine, transportation, or science. He argues this isn’t happening due to a risk-averse culture, excessive regulation, and an over-financialized economy yielding low returns, among other reasons.
The argument is that without a breakthrough to propel a new wave of economic growth, the social fabric as we know it—in the West—will begin to collapse. To avoid that, we need to seek progress through various means, including exploring new forms of government, ventures into radical biology—such as transhumanism—or missions to Mars.
Thiel’s view differs from that of accelerationism. Though both share the stagnation hypothesis and the need for radical transformation, Thiel seeks to reinvigorate capitalism—a new wave of unbridled, unregulated capitalism—while accelerationists seek to overcome capitalism entirely, aiming for a yet-undefined system.
There’s another point of agreement: the belief that artificial intelligence might be the source of this desired transformation. Though Thiel is more cautious and less enthusiastic, he is adamant that it be deregulated.
Perhaps that’s why Trump’s flagship tax bill included a provision barring states from regulating artificial intelligence. According to Bloomberg, “Trump allies in Silicon Valley, including venture capitalist Marc Andreessen, defense tech firm Anduril Industries Inc. founder Palmer Luckey, and Palantir Technologies Inc. co-founder Joe Lonsdale all advocated for including the restriction.”
Matt Stoller goes into detail about how and why the Senate voted to strip the AI provision from Trump’s tax bill. But, as he says, “Don’t be fooled by the lopsided vote—this AI regulation ban was much closer to being enacted into law than it appears. The attempt to eliminate regulation of automated decision-making and AI systems will return. Big business is going to have an open checkbook going forward—amounts of money that are unfathomable—to enact their agenda.”
Since Trump—whom Thiel supported early on and now advises—returned to office for a second term, he has dismantled federal regulation meant to oversee the development of AI and has promoted its integration with the government. This is part of what Musk’s DOGE achieved: complete access to fiscal, health, and other sensitive, confidential data to feed into an algorithm.
But the story of AI’s entanglement with the government predates Trump and Musk’s affair. Thiel’s company, Palantir, was developed with CIA funding to later sell software to the U.S. Department of Defense. According to some reports, this began around 2003–2005. In fact, in a Reddit AMA from 2014, when asked if Palantir was a front for the CIA, Thiel responded: “The CIA is a front for Palantir.”
Obviously, that statement seems far-fetched, but it does reflect how blurry the boundaries have become. And it might even end up being true. If that sounds exaggerated, consider how Palantir’s Mosaic—the AI model developed for the International Atomic Energy Agency (IAEA)—was used to justify Israel’s and the U.S.’s attack on Iran, even over the CIA’s own intelligence.
Palantir’s AI platform, Mosaic, has been integrated into the IAEA’s monitoring systems since 2015. It processes massive datasets—satellite imagery, surveillance footage, social media, and even Mossad intelligence—to detect anomalies in Iran’s nuclear program. The report that prompted the IAEA to declare Iran in breach of its non-proliferation agreement was likely based on Mosaic’s prediction.
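Palantir has never published Mosaic’s internals, so any technical specifics are guesswork. Purely as a hedged sketch of the general technique described above (anomaly detection over fused data sources), here is a miniature version using scikit-learn’s IsolationForest; every feature name and number is invented for illustration and claims nothing about the real system.

```python
# Hedged sketch only: Palantir has not published Mosaic's design. This is a
# generic illustration of anomaly detection over fused data sources.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical fused features per monitored site: e.g. satellite-derived
# activity index, shipment counts, power draw, comms volume (all invented).
baseline = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # "normal" history
novel = rng.normal(loc=4.0, scale=1.0, size=(5, 4))        # outlying events

model = IsolationForest(n_estimators=200, random_state=0).fit(baseline)

# decision_function: lower scores = more anomalous. Note the output is a bare
# score; nothing in it explains *why* a site was flagged.
print(model.decision_function(novel))
```

What the sketch makes plain: the output is a bare score. Nothing in it says why a site was flagged, which is exactly the trust problem discussed below.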
Of course, there are reports claiming that the IAEA has a bias toward Israel, that Mosaic used Mossad’s data in its predictions, and that Palantir is also deeply involved in Israel’s operations in Gaza through a similar algorithm called Lavender. But the relevant point here is that this explains why Trump had the audacity to dismiss the assessment of his own Director of National Intelligence, Tulsi Gabbard, when she said there was no intelligence to confirm that Iran was pursuing nuclear weapons.
Trump’s “intelligence” was probably based on Israeli sources, which likely relied on the IAEA’s predictions—which were, in turn, based on Mosaic’s algorithm. This would also explain why Rafael Grossi later denied that the IAEA had any verifiable intelligence regarding Iranian nuclear weapons.
Whether ignoring CIA intelligence in favor of Palantir’s was a deliberate attempt to create a casus belli against Iran, or whether White House decision-makers genuinely preferred Mosaic’s prediction, the result is the same: the “intelligence” of an AI model was prioritized over that of an actual human intelligence agency.
This signals a major turning point. The CIA works—or should work—with verifiable information. Even when that information is fake or distorted to fit a narrative, it should still be technically possible to review sources and assess their credibility.
With AI, it’s different. Researchers increasingly admit they don’t fully understand how these systems function. Given that models draw from hundreds of millions of data points and can refine their own reasoning, at some point their conclusions become unverifiable to the human mind. We simply can’t follow the logic behind them. Which means we can’t verify those conclusions—we have to trust them. Blindly. Or, put differently, we’re being asked to have faith in them.
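To make that concrete, here is a minimal sketch, assuming nothing about Mosaic or any deployed model: a stock scikit-learn neural network trained on invented data. The “conclusion” it returns is a probability backed by thousands of learned weights, not an argument anyone can audit; production models scale this by six or seven orders of magnitude.

```python
# Hedged illustration of the opacity point above, not a claim about any
# deployed system: even a tiny neural network yields confident answers whose
# "reasoning" is just thousands of learned weights with no human-readable trace.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))                 # synthetic "evidence" vectors
y = (X @ rng.normal(size=20) > 0).astype(int)  # synthetic labels

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

n_weights = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print(clf.predict_proba(X[:1]))  # a confident probability...
print(n_weights)                 # ...backed by ~5,500 numbers, not an argument
```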
Paradoxically, this was Henry Kissinger’s fear, according to his biographer Niall Ferguson in an interview with Noema: “The insight that he had, long before anyone had heard of ChatGPT, was that we had created technologies that were doing things and delivering outcomes that we could not explain.” Kissinger traced the development of AI back to the application of the scientific method developed during the Enlightenment.
However, the unregulated pursuit of AI may have the opposite effect of what the Enlightenment intended. Back then, the goal was to explain everything through reason—what could not be explained was considered speculation at best. Now, we are making critical decisions based on processes we do not understand.
>>>However, the unregulated pursuit of AI may have the opposite effect of what the Enlightenment intended. Back then, the goal was to explain everything through reason—what could not be explained was considered speculation at best. Now, we are making critical decisions based on processes we do not understand.
I would say that we have come full circle to pre-Enlightenment thinking, except for the surprising intellectual rigor of Catholicism, including in philosophy, during the Middle Ages. It’s like our society, through its “elites,” has decided to chuck logic, reasoning, common sense, and even faith for mindless worship of, and obedience to, whatever it is we are supposed to be worshipping, if that makes any sense.
Pre-Enlightenment indeed, and per the quotation, I would add that it’s not that “the unregulated pursuit of AI may have the opposite effect of what the Enlightenment intended” but that it will have the opposite effect.
On this point, recall the first two paragraphs of Kant’s What is Enlightenment? and it becomes clear where we’re heading:
https://www.nypl.org/sites/default/files/kant_whatisenlightenment.pdf
The use of AI is not “Have courage to use your own understanding!” Quite the opposite. It is laziness, cowardice, and lifelong immaturity, as Kant put it.
“More rationality does not necessarily wise up the individual.” — C. Wright Mills
He meant that the rationalization of institutions makes them function goal-rationally, but not in a substantively rational way. So Captain Ahab madness is always an open possibility, as it can be with AI.
It would be a good bet that a major use of AI will be to reduce liability and risk to large institutions by effectively externalizing costs and damage to the rest of society and nature. Thiel is going to get exactly what he wants to avoid: a society completely constrained by fear of risk.
If nothing is going on, it is because you have AI.
At least with AI, the practice of augury will no longer need to disembowel the fowl.
As for “irrational,” it seems like politicians have always done what they wanted to do for “reasons,” not on the basis of irrationality but because they don’t want to admit the real reasons for their actions. AI augury is just a more modern way to hide the ball. AI is never going to tell us NOT to bomb someone, or NOT to waste billions on some obsolete weapon system for the MIC.
Also a way to deflect blame for the decision.
The fowl-disemboweling was better, if you ask me. Our lily-liver’d leaders haven’t the stomach to face a stomach for answers anymore. Besides, once the bird’s dressed you can make a nice chicken kiev…er, kyiv.
AI decision-making systems are like putting idiot savants or autistic children in charge.
I am sorry, but I am going to write it in Spanish: Peter Thiel es un fantasma. Literally, P.T. is a ghost, but it means someone who talks big about implausible stuff.
Nope, he is a ghoul.
AI has never been fair or neutral. Mosaic is a modern version of the “Niger documents” and was most likely created solely to justify attacks on Iran.
Excellent piece, Curro, on a hot and critically important topic. Many thanks.
Having watched the Douthat interview you cite along with Jordan Peterson’s interview of Thiel, my impression, aside from Thiel’s stuttering, mumbling and you-knowing, is that Thiel’s “stagnation hypothesis” has no coherent, deep, philosophical source. It is the product of Thiel’s boyish frustration at not being able to say, “Earl Grey, hot,” and have a door open in the wall with his tea, along with his desperate fear of death and the infirmities of old age. It is obvious to most sane observers that the pace of technological change has been far too rapid for our society to assimilate successfully, in part because the profit-driven focus of that technology has been on how to wage more horrible wars, control the ever more abused citizenry more completely, and assuage that citizenry’s growing list of physical and psychological ailments resulting from that abuse with pills that treat only symptoms and must be taken until death, lest the treated malady return with even more severe symptoms.
In the Peterson interview, Thiel blames—who else?—the damn hippies and Woodstock for this “calamity,” along with the precautionary principle, which we know so well here at NC, not as an impediment to growth or technological advancement but, in this context, as our protection against tech that becomes ever more destructive as its capabilities increase. In the Douthat interview, he doesn’t go after the dirty hippies but instead blames his nominee for the antichrist, Greta Thunberg. It’s all quite funny to observe his insanity until one remembers how much power he wields.
It’s a difficult question as to how we can stop these crazy assholes before they kill us all. We can hope that some will self-destruct as Musk appears to be doing, and certainly their astronomical levels of hubris have invited a Nemesis takedown, but some will likely stay sane enough to continue with their plans. We might also take heart in the likelihood that each will first seek to establish his dominance over the other TechBros even though internecine warfare among them will surely have plenty of collateral damage among us plebes. As to other options, well, I’m of the view that they must be stopped if we and ours are going to survive, much less thrive.
As Thiel admits, capitalism, the West, and the TechBros’ power cannot survive much longer without some sort of boost that keeps the bezzle going, along with these crazy dreams of trips to Mars and immortality. Nate Hagens and Daniel Schmachtenberger talk about our world as a carbon-powered Super Organism that is devouring our Earth and humanity at an ever-faster rate. The TechBros are building data centers at an extraordinary rate, often subsidized by us, and these data centers don’t only steal our water and power; they also multiply the power of these hyper-dangerous people many times over, and they will accelerate the already exponentially increasing rate of planetary destruction and human misery wrought by the Super Organism. At a conference in Sweden where Hagens and Kate Raworth also spoke, Schmachtenberger gave this answer to the perennial question of, “What then shall we do?”
Thanks again, Curro, for bringing this to our attention.
I’m of the opinion that these tech bros are exactly the sort who would push the big red button to unleash nuclear armageddon, secure in the knowledge that they’d be safe in their bunkers while us upstart plebs would be wiped out. And that they’d get to repopulate the earth in their image. I mean, it wouldn’t work out that way in the end. But I’m becoming more and more certain that is their viewpoint.
I call BS: they know enough about AI to get the desired result, and they will change the inputs to get it if needed.
All this hype in certain circles about the power of AI shows, in fact, a lack of knowledge and understanding of how these systems actually work and how they are “parametrized” (the hundreds of millions of data points…). And American politicians are on average stupid and easily bribed. Make that politicians in general.
I keep getting invites via LinkedIn for jobs to parametrize AI in the biological/medical field for $40/hour. Karen Hao, in her interview on Novara Media
https://www.youtube.com/watch?v=8enXRDlWguU&t=3s
and her book
https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/
describes a PhD from Venezuela living in Colombia who gets “jobs” training AI. But it is almost like a bidding process, and it looks horrible.
I am more for training mentats and having more Dr. Houses, instead of these probabilistically driven systems that ultimately lie.
I do see the contours of the Butlerian Jihad coming to life in a not-too-distant future.
But is AI really influencing decisions or is it merely a modern sounding pretext generation machine? After all Israel is a country devoted to the proposition that thousands of years ago God gave Palestine to the ancestors of people who may have some connection to later 20th century settlers from Middle Europe. There’s nothing “rational” about it. It’s all pretext.
I think Michael Hudson summed things up pretty well when he said that Silicon Valley thought bubbles and vaporware were supposed to replace heavy industrial America as a way of continuing US dominance over world culture. It hasn’t worked out that way and the cliff edge approaches. You can’t fool all of the people all of the time.
I’d say the researchers know quite well how their ML systems work, why they work the way they do, and specifically what the limits of their systems are. During the few years I was pitching/selling an AI-based system for a very narrow purpose, it became apparent that neither the putative clients nor the CEO and CMO of my company had any idea of the limits, capabilities, and uses of the system.
I could have explained it well enough in 20 minutes for any other person “experienced in the field” to be able to build a very similar system rather quickly.
So, as in most of IT, in AI/ML it’s all the MBAs who get it seriously wrong and sell stuff that doesn’t actually work as promised.
The movement that supposedly influences Thiel et al. is called Dark Enlightenment for a reason. They do want Enlightenment values to go dark. Thiel has said democracy is finished, women never should have been given the vote, and his Prospera “freedom city” sounds like a 21st century decorated dictatorship to me. He is neither wise nor good, just rich and powerful.
Lewis Mumford nailed it more than half a century ago in his two-volume The Myth of the Machine: he predicted the current Electronic Dark Ages…
AI does epistemology? Does it give the alpha and beta risks (Type I and Type II error rates) of accepting or rejecting its conclusions?
If the AI input included “intelligence”! How do you assure “verifiable information”?
Is AI more than souped-up data mining? It may be “taught” to filter data to form knowledge. Decision-grade?
Problem is, as always: garbage in, garbage out.
How the AI determines truth, how it does its verifying, is also taught.
Information depends on taxonomy (AI may do Rosetta-stone work), fact/accuracy, and timeliness (pedigree).
What AI does is what it is taught, with what it receives.
When decisions became too big to analyze, with several degrees of freedom and a spectrum of good and bad outcomes at risk, they were run through uncertainty theory.
Would you trust the minimax and the maximin from an AI? (A toy sketch of those two rules follows below.)
More than one from Hegseth and Razin’ Caine?
A lot cheaper than AI.
What did the AI predict were the chances of 14 B-2s completing 37-hour missions?
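For readers who haven’t met the terms: “minimax and maximin” refer to classic decision rules for choosing under uncertainty. Here is a minimal sketch of one common reading, Wald’s maximin plus Savage’s minimax regret, over an invented payoff table; nothing here models any real strike planning.

```python
# Toy illustration of decision rules under uncertainty (Wald's maximin and
# Savage's minimax regret), not any real targeting model. Payoffs are invented.
import numpy as np

# Rows = possible actions, columns = possible states of the world.
# payoff[i, j] = (made-up) utility of action i if state j turns out true.
payoff = np.array([
    [ 2, -5,  1],   # action A
    [ 0,  0,  0],   # action B (do nothing)
    [ 5, -9,  3],   # action C
])

# Maximin: pick the action whose worst case is least bad.
maximin_choice = payoff.min(axis=1).argmax()

# Minimax regret: regret = best achievable in each state minus what you got;
# pick the action whose worst regret is smallest.
regret = payoff.max(axis=0) - payoff
minimax_regret_choice = regret.max(axis=1).argmin()

print(maximin_choice, minimax_regret_choice)  # B for maximin, A for regret
```

Maximin picks the do-nothing row (its worst case is least bad); minimax regret picks a different action entirely. Two “rational” rules, two different answers, which is rather the point.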
Holy crap, the IAEA was using Palantir as its system of record for the nuclear inspections? That is all kinds of problematic.
I had been wondering why Iran cut ties with the IAEA, but I think it’s pretty clear now.
That and the wee matter of at least one MI6 agent in the IAEA relaying bombing co-ords to the US and Israel.
And now it’s not only Iran that has no reason to trust the IAEA, but pretty much everybody else (except Israel, who will always get a free pass and no hard questions).
Well done, IAEA.
Palantir is a great name. It is the name of the ‘far-seeing stones’, telecommunications tech. from the Atlantis-like lost civilization of Numenor. When we see them in the Lord of the Rings, one of them has long been captured by the evil Sauron, who overpowers any other user on the system. He uses them to deceive, spy on, and manipulate the ‘Leaders of the West’.
Now, rather than stovepiping intel to plant NYT stories to then reference as a justification for invasion as with Iraq, “AI” can simply get straight to the confabulating.
The efficiency gains boggle the mind!