Coffee Break: Silicon Valley Ideologies as a Lens for Viewing Current Events

Awareness of the various emerging Silicon Valley ideologies may provide a helpful lens through which to analyze current events.

The techbros have control of the technologies that increasingly run our lives, billions and billions of dollars to influence our politics, and a ruthless drive for power and control.

Turns out they’ve been spending quite a bit of time debating the big questions among themselves and now their philosophies are spilling over into our real lives.

I’ve been intending to supplement my mini-series on the companies pushing AI (aka Large Language Models) with a look at the putative “thought systems” that have fascinated Silicon Valley.

Then I read Matt Stoller’s “Is There a Silicon Valley Plan to Subvert Elections?” and Emile P. Torres’ “Meet the Radical Silicon Valley Pro-Extinctionists!” and knew it was time to get on that.

Not to mention Ross Douthat’s “Peter Thiel and the Antichrist: The original tech right power player on A.I., Mars and immortality” in the New York Times.

Stoller’s piece discussed:

“…the creation of a new political slush fund by titans in Silicon Valley. I don’t want to be alarmist, but if it goes to plan, it could functionally subvert elections in America.”

…if someone can just spend an infinite amount of money to call you a trans-loving pedophile, you will likely lose your race. For instance, in Ohio in 2024, long-time Senator Sherrod Brown faced $40 million of crypto spending alleging all sorts of things, and that chipped away at his popularity such that he lost.

Today, Fairshake can flip most politicians without spending a dime, secure in the knowledge that aspiring office-seekers wouldn’t want to lose just over what they perceive as a minor policy around finance. These companies got everything they wanted; they are now running crypto policy for Trump, and have terrified most members of Congress into voting for whatever they want. Fairshake has amassed another big war chest for 2026, and it’s unlikely that crypto’s power will be dented until there’s a financial crash.

Unfortunately, the lesson of Fairshake was not lost on others in Silicon Valley. Marc Andreessen, who is on the board of Meta and involved in Fairshake, has been organizing this strategy in other areas. Meta CEO Mark Zuckerberg, and AI venture capital investors, have now chosen to launch their own Fairshake-style slush funds, to make it impossible to regulate generative AI or big tech.

The net effect of these pots of money is that it could become functionally impossible to enact public policy around AI through our democratic system. As AI becomes more important, that means American law will look the way Andreessen and a few other titans want it to look. Moreover, other corporate giants will start playing in their areas, closing off other spaces to democracy.

Now, it’s always been difficult, especially in the Citizens United era, to make progress, as big money does drown out a lot of good policy. Indeed, what we’re really seeing is the final stages of an organized attempt from the 1970s onward to allow money to overwhelm democracy. The Lever’s Master Plan is an excellent podcast series on it. These massive slush funds could mean that voting really has become ornamental.

Torres, in the second of his three-part series on Silicon Valley pro-extinctionism, wrote (part 1 is here):

A journalist asked me the other day what I think is most important for people to understand about the current race to build AGI (artificial general intelligence). My answer was: First, that the AGI race directly emerged out of the TESCREAL movement. Building AGI was initially about utopia rather than profit, though profit has become a significant driver alongside techno-utopian dreams of AGI ushering in a paradisiacal fantasyworld among the literal heavens. Hence, one simply cannot make sense of the AGI race without some understanding of the TESCREAL ideologies.

Second, that the TESCREAL movement is deeply intertwined with a pro-extinctionist outlook according to which our species, Homo sapiens, should be marginalized, disempowered, and ultimately eliminated by our posthuman successors. More specifically, I argue in a forthcoming entry titled “TESCREAL” for the Oxford Research Encyclopedia that views within the TESCREAL movement almost without exception fall somewhere on the spectrum between pro-extinctionism and (as I call it) extinction neutralism. Silicon Valley pro-extinctionism is the claim that our species should be replaced, whereas extinction neutralism says that it doesn’t much matter whether our species survives once posthumanity arrives.

Torres goes on to give capsule summaries of the thought of various Silicon Valley figures, including Carnegie Mellon’s Hans Moravec, Google co-founder Larry Page, Turing Award winner Richard Sutton, shitposter Beff Jezos aka Guillaume Verdon, Singularity prophet Ray Kurzweil, and:

Sam Altman (facepalm)
Altman is not only a major reason the race toward AGI was launched and has been accelerating, but he believes that uploading human minds to computers will become possible within his lifetime. Several years ago, he was one of 25 people who signed up with a startup called Nectome to have his brain preserved if he were to die prematurely. Nectome promises to preserve brains so that their microstructure can be scanned and the resulting information transferred to a computer, which can then emulate the brain’s functioning. By doing this, the person who owned the brain will then suddenly “wake up,” thereby attaining “cyberimmortality.”

Is this a form of pro-extinctionism? Kind of. If all future people are digital posthumans in the form of uploaded minds, then our species will have disappeared. Should this happen? My guess is that Altman wouldn’t object to these posthumans taking over the world — what matters to many TESCREALists, of which Altman is one, is the continuation of “intelligence” or “consciousness.” They have no allegiance to the biological substrate (to humanity), and in this sense they are at the very least extinction neutralists, if not pro-extinctionists.

Peter Thiel (blech)
Thiel holds a particular interpretation of pro-extinctionism according to which we should become a new posthuman species, but this posthuman species shouldn’t be entirely digital. We should retain our biological substrates, albeit in a radically transformed state. As such, this contrasts with most other views discussed here. These other views are clear instances of digital eugenics, whereas Thiel advocates a version of pro-extinctionism that’s more traditionally eugenicist — in particular, it’s a pro-biology variant of transhumanism (a form of eugenics).

Torres also references a telling hesitation on Thiel’s part when he was interviewed by NYT conservative midwit Ross Douthat:

Thiel was asked whether he “would prefer the human race to endure” in the future. Thiel responded with an uncertain, “Uh —,” leading the interviewer, columnist Ross Douthat, to note with a hint of consternation, “You’re hesitating.” The rest of the exchange went:

Thiel: Well, I don’t know. I would — I would —

Douthat: This is a long hesitation!

Thiel: There’s so many questions implicit in this.

Douthat: Should the human race survive?

Thiel: Yes.

Douthat: OK.

Torres doesn’t get into Douthat’s attempt to get Thiel to reconcile his views with his claimed Christianity:

But it still also seems like the promise of Christianity in the end is you get the perfected body and the perfected soul through God’s grace. And the person who tries to do it on their own with a bunch of machines is likely to end up as a dystopian character.

Thiel: Well, it’s — let’s articulate this.

Douthat: And you can have a heretical form of Christianity that says something else.

Thiel: Yeah, I don’t know. I think the word “nature” does not occur once in the Old Testament. And so there is a word in which, a sense in which, the way I understand the Judeo-Christian inspiration is it is about transcending nature. It is about overcoming things. And the closest thing you can say to nature is that people are fallen. That’s the natural thing in a Christian sense, that you’re messed up. And that’s true. But there’s some ways that, with God’s help, you are supposed to transcend that and overcome that.

Douthat: Right. But most of the people — present company excepted — working to build the hypothetical machine god don’t think that they’re cooperating with Yahweh, Jehovah, the Lord of Hosts.

Thiel: Sure, sure. But ——

Douthat: They think that they’re building immortality on their own, right?

Thiel: We’re jumping around a lot of things. So, again, the critique I was saying is: They’re not ambitious enough. From a Christian point of view, these people are not ambitious enough.

I should also quote from Torres’ earlier work for TruthDig to explain his acronym TESCREAL, which combines the first letters of the ideologies transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism:

“…the constellation of ideologies behind the current race to create AGI, and the dire warnings of “human extinction” that have emerged alongside it…

At the heart of TESCREALism is a “techno-utopian” vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.

Those ideologies, we believe, are a central reason why companies like OpenAI, funded primarily by Microsoft, and its competitor, Google DeepMind, are trying to create “artificial general intelligence” in the first place.

…In (the view of Marc Andreessen), the most likely outcome of advanced AI is that it will drastically increase economic productivity, give us “the opportunity to profoundly augment human intelligence” and “take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.” Developing AI is thus “a moral obligation that we have to ourselves, to our children and to our future,” writes Andreessen.

Torres also pointed me to David Z. Morris, whose “DeepSeek and the AI Murder Cult” argues that “Rationalism links a wave of murders, FTX embezzlement, and crashing markets.”

From his piece:

(Rationalism) lurks at the heart of Sam Bankman-Fried’s rampant embezzlement at #FTX, of which $500 million dollars went to Anthropic, an “AI Safety”-fueled startup that employs Amanda Askell, ex-wife of Effective Altruism founder Will MacAskill. $5 million in money stolen by SBF also went directly to the Center for Applied Rationality, one of Yudkowsky’s two organizations. Half a million in FTX funds also helped facilitate the purchase of a hotel that became the headquarters of a CFAR subsidiary called Lightcone Research, which notoriously featured several eugenicists and white supremacists at events.

It also helps explain, I think, why OpenAI and other U.S. artificial intelligence startups just got embarrassingly annihilated by a Chinese hobbyist: because they’re driven by some of the same ideas that have led fringe Rationalists into madness.

There have now been at least EIGHT violent deaths over the past three years tied, to varying degrees, to splinter factions of the Rationalist movement founded by Eliezer Yudkowsky in San Francisco. The Rationalist community is eager to disown the perpetrators, and it’s true that the factionalists have been in conflict with the main group for years. More to the point, they seem simply insane.

But, I would tentatively argue, the source of the conflict is that these bad actors took Yudkowsky’s basic ideas, above all ideas about the imminent destruction of humanity by AI, and played them out to a logical conclusion – or, at least, a Rationalist conclusion. This wave of murder is just the most extreme manifestation of cultish elements that have bubbled up from the Rationalist movement proper for going on a decade now, including MKUltra-like conditioning both at Leverage Research – another splinter group seemingly pushed out of Rationalism proper following certain revelations – and within the Center for Effective Altruism itself.

In his piece “FTX, Rationalism, and U.S. Intelligence: A Conspiracy Theory” (an excerpt from his book “Stealing the Future: Sam Bankman-Fried, Elite Fraud, and the Cult of Techno-Utopia”), Morris connects some alarming dots:

the Center for Applied Rationality, which received (and has resisted returning) funds stolen from FTX customers by Sam Bankman-Fried and his co-conspirators, bears a striking resemblance to the agendas for both individual brainwashing and large-scale social engineering that drove some of the Central Intelligence Agency’s most disturbing programs.

Now, with the revelation that a group of rogue Rationalists known as the “Zizians” have been tied to a wave of murders across the U.S., it seems justified to explore the possibility that the Rationalist movement is not merely a misguided ethos turned toxic by cult-like insularity. Placed in a broader context, its tenets and practices begin to resemble both the Human Potential Movement centered around institutions like the Esalen Institute; and, in fringe sub-groups that have splintered from Rationalism proper, the illicit human experimentation conducted by the CIA starting in the 1950s under the code name MKUltra.

Morris’ piece “What is TESCREALism? Mapping the Cult of the Techno-Utopia” can help us get back to current events:

the AGI myth is why reality-based efforts to make existing AI algorithms safe for currently-living humans have almost zero traction among the loudest proponents of “AI safety.” In just the same way that Sam Bankman-Fried stole customer funds to make long-term bets, today’s AI leaders are actively and vocally dismissing the current, material risks of machine learning algorithms, and focusing instead on a long-term future that they confidently predict without a shred of actual evidence. (Just two baseless assumptions of the doomer fantasy are that A.I. will become self-improving, and that it will easily master nanotechnology.)

This patent display of foolishness might be the deepest underlying reason the tech industry had to purge Timnit Gebru. The vision of AI shared by people like Sam Altman is substantially derived from sci-fi like James Cameron’s Terminator, and going as far back as Karel Capek’s R.U.R., the origin of the word “robot.” Capek’s 1923 play far preceded anything like AI, making clear that the intentional, humanoid, thinking “robot” has always been primarily a metaphor for the much more complex dialectic by which man-made technology becomes a threat to human essence. The Singularitarians have made the childish error of mistaking these simplified storybook tales for the complexity of reality, and as long as Gebru and her cohort remain committed to describing how technology actually works, the collective fantasy of superintelligent yet incredibly dangerous AI is threatened.

Morris also connects TESCREALism to the newly launched publication The Argument and the Abundance bros in his “Effective Altruism In a Skinsuit: ‘The Argument’ is Laundering Austerity”:

The launch of new “liberal” news outlet The Argument has been unambiguously hilarious, fundamentally because most of their marquee writers, particularly Matt Yglesias and Kelsey Piper, are not so much “liberal” in any commonly understood American sense as “center-right-to-secretly-eugenicist.” Piper and Yglesias are both formerly tied to Vox, and The Argument also features Derek Thompson as a staff writer – Ezra Klein’s partner in the ideologically very similar “Abundance Liberalism” project, which is largely about co-opting right-wing deregulation rhetoric.

When you look at the funding for The Argument it becomes very clear why this “liberal” publication is devoted to undermining the case for a welfare state. The Argument is primarily funded and staffed, not by “liberals,” but by a mix of Effective Altruists like Dustin Moskovitz strategically shifting away from that brand after the FTX debacle showed its strategic and ideological emptiness; and entities tied to far-right funding sources including Peter Thiel and the Koch Brothers. This is “liberalism” in 2025.

If you know Yglesias and Piper, you know their entire shtick is maintaining a strategic ignorance that serves their ideological aims.

Freddie deBoer has some supplementary thoughts on Ezra Klein that don’t explicitly link back to Silicon Valley ideologies but provide additional insight:

Klein, in his earnest credulity towards the claims of AI maximalists, shows us one way this refusal plays out. Ezra’s entranced by the prospect of radical technological transformation, by the possibility that generative models or robotics or biotech are going to utterly remake the human condition.

He’s interviewed dozens of people on the subject, and though he hedges and qualifies, there’s always an underlying openness to the idea that we are at the brink of a sci-fi future. “Person after person… has been coming to me saying… We’re about to get to artificial general intelligence[!]” says Ezra, in his breathless style, not pausing to acknowledge that every one of those persons is someone who has direct financial investment not in AGI being real and imminent but in the impression that AGI is real and imminent.

Klein does not want to let go of the possibility that he might live in Star Trek or Blade Runner or Terminator; he wants to believe that our lives can be so thoroughly altered that the weight of ordinary existence will be lifted. And I promise I’m not blowing smoke when I say that, where I find most AI evangelists to be disingenuous charlatans, I find everything Ezra says to be aching with sincerity and sentiment. Which, analytically, is of course the exact problem. He is too eager to believe.

Klein wants the AI story to mean that we are on the verge of a post-scarcity society, that the hard grind of politics and labor might soon be obviated by miraculous machines; he’s savvy enough not to say the other part out loud, which is that he wants to pilot a mech on the sands of Mars, to guide his X-Wing into the mouth of a wormhole that will lead him who knows where.

Klein’s fantasies risk destroying the world economy.

The thing that deBoer gets is that Klein is desperate to believe in magic. What he’s missing is that Klein’s fantasies are structured and guided by Silicon Valley “thinkers” who are equally committed to a fantasy-life vision of reality.

Unfortunately for everyone else, they’ve got the money and power to impose those fantasies on the rest of us.

21 comments

  1. Bugs

    These people are absolutely delusional freaks who believe their own made up Sci fi mythology about humanity and technology, have very limited personal intellectual means to bring them to fruition, but access to a class network that enables them to continue to spout off nonsense that gets money thrown at it. It’s like a hundred spoiled brats, equivalent in scamdom to someone like Elizabeth Holmes, are running around managing C-suites and actually getting trillions in money and engineering skills tossed at their absurd ideas. Imagine what we could do if such resources were directed to peace and prosperity, under popular democratic control.

    1. Carolinian

      So they all think they are going to be raptured into the inside of a computer. The new religion is just like the old religion. As Caligula said (maybe): “I think I am becoming a god.”

      Meanwhile Trump thinks he has eternity all sewed up if he can just get that Peace Prize–sort of like the Wizard pinning a medal onto the Cowardly Lion.

      Who are we mere mortals to object?

      Freaks indeed.

  2. Lee

    This notion of uploading the mind sans corpus reminds me of the Tibetan Buddhists’ Bardo when, after death, the mind exists absent all physical organs of consciousness or the senses. In that state, which is at least initially accompanied by sheer terror, the mind either finds its way to the great white light or, failing to do so, will come to inhabit another body. The notion that these would-be immortals could find themselves stuck forever in a digital netherworld between enlightenment and reincarnation, locked in as it were, and therefore unable to trouble the rest of us with their wackadoodle schemes, I find quite appealing. Admittedly, this is very un-Buddhist of me.

  3. wsa

    A book I’ve been recommending a lot lately is Adam Becker’s More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. It covers many of the tech lords’ pathologies in a very readable way.

    1. Nat Wilson Turner (post author)

      Been waiting for that one to become available from my local library. 26 holds on 2 “electronic copies” — what stupid BS that is

  4. RookieEMT

    Trump ally and activist Charlie Kirk shot in the neck at an event. I would say he’s pretty much dead as his carotid was seemingly destroyed.

    Escalation leading to more violence, counter-violence and more escalation.

    1. amfortas

      aye. and twitx feed is filled with “see?! the LEFTISTS really want us all dead!!”–(or” have declared war on us!!”)

      i friended and corresponded with that baldheaded McCarthy guy who used to edit TAC, a week or two ago(as i said to him, ive always had a soft spot for russel kirk/wendell berry ‘conservatives’)…regarding this very thing…that dems aint left, and neither is woke.
      he agreed.
      perhaps he’ll step forward and stay this madness,lol.

      and that 10 sec video clip was awful.
      i cohere with(watch me now!) Murray fucking Rothbard on the subject of violence: no first use of force, ever.

  5. Fastball

    The entire concept of an AGI in any hands where a billionaire or corporatist can ruthlessly exploit it forever seems completely antithetical to me, though I can’t put my finger on exactly why. The term “Artificial Slave” seems more apt. My instinct as a software developer is that we develop software to serve our interests and wants as human beings (or corporate monsters, whatever the case) as we see fit.

    To me this argues against an AGI ever developing in the first place, especially one developed by a billionaire to serve the billionaire’s interests, at least not one remotely resembling that of a human. The whole raft of behaviors involving sapience presuppose things no navel gazing billionaire would ever want; from exploration to self awareness to play to self preservation. And, to be realistic, such mindsets have to emerge organically rather than being programmed in as the kind of command override mentality of an ordinary software developer, to say nothing of the quintessential user parasite that is the ordinary billionaire.

    They don’t want their creations to run free or even develop freely and for many applications an AGI is not something most of society would want either. One does not have to imagine a Terminator style hyper dystopia where robots were running around trying to kill off all of humanity to imagine a world in which AGIs are not doing anything we humans might want as a species.

    If all humans want is Artificial Slaves, then the race for AGI is just trying to do something to see if it can be done, not to do any good in the world.

    1. Nat Wilson Turner (post author)

      Excellent point.

      They’re starting out with bad intent, which the Buddha reminds us is never a good thing.

  6. Watt4Bob

    I’ve been watching ‘Foundation’, the sci-fi series on Apple TV based on Isaac Asimov’s novels.

    The plot involves a galactic empire with hundreds of thousands of planets and a population of trillions.

    I’m also reading ‘Tech Agnostic’ (How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation) by Greg M. Epstein, MIT Press.

    Epstein explains how the Tech Religionists envision a universe populated with trillions of trans-human beings whose reason for being seems to be the allowance of unlimited numbers of tech-bro billionaires, because, unlimited customers.

    So far, the world is able to support only so many billionaires, but a universe of trillions allows folks like the DOGE boys to fantasize a rich future of unlimited opportunity.

    The fact that Asimov wrote Foundation in 1951, and even at that time, understood that maintaining the Empire might require destruction of a few planets, and their inhabitants/subjects/customers, leads me to think that the Tech-Bros and their minions are following an age-old, greed-driven script, a script that is sure to include a nod toward “…absolute power corrupts absolutely.”

    They want to get really rich, and they want to live forever.

    They have a plan;

    “…and if it turns out that God is not on our side, we’ll create one who is.”

        1. Carolinian

          Thanks for the link.

          Alex Cockburn teed off on Wiesel and said much of his Night was fiction. Asimov presents as the dim memory of a more sensible time.

      1. GrimUpNorth

        Spoiler alert:

        Does that make Trump the Mule?

        The books are a fantastic read. They’re loosely based on the Roman Empire; I might read them again now after 40 years.

        As to the main post, I don’t understand the motives of the American people, so I have no chance understanding their overlords.

        1. Nat Wilson Turner (post author)

          The motives of the overlords bear no relation to the motives of the American people, which range from naive and humble aspirations of providing for families to complete dissolution and pursuit of the pleasure principle.

          I’ll say this for the techbros, at least TESCREAL is an ethos, man.

  7. Mikel

    I’ll ask again: what happened to them getting their own little islands to go off to and rule? Wasn’t that working out?
    And it’s total BS because they don’t want any “extinction”…They want to be worshipped for their bad ideas.
    There will be a bunch of idiots in the future pretending all this fake shit works.

    1. Watt4Bob

      There will be a bunch of idiots in the future pretending all this fake shit works.

      Unfortunately, you’re describing the present.

    2. Nat Wilson Turner (post author)

      There are already a ton of idiots pretending this fake shit works. Who do you think bought all those NFTs & crypto?

      My favorite is the guy who claims he made millions flooding Amazon with AI written books.

  8. HH

    To paraphrase FDR: We fear losing fear itself.

    The remarkable backlash against accelerating technical progress suggests a strange nostalgia for fear and suffering. How else to explain the popularity of dystopian and apocalyptic fiction? Taking away scarcity forces a rethinking and remaking of society on a sweeping scale. If fusion power technology and AGI arrive, there will undoubtedly be a wave of global abundance. Certainly human folly could misdirect that wealth, but we are not headed back to feudalism.

    To assume that leading developers of advanced technology are super-villains is attributing more importance to them than they deserve. The light bulb would have been invented without Edison, and powered flight would have happened without the Wright brothers. AGI is desirable and inevitable because it will greatly increase the application of intelligence to the problems of man. Human society will adapt to its benefits and drawbacks as it has accommodated prior technological transformations.

    As for trans-humanism, many of us are already augmented with knowledge and communications resources far beyond anything available 50 years ago. The devices in our pockets allow us to see and speak with anyone in the world instantly, translate languages, and access a colossal store of digital information. Why should we fear having more wealth, longer life spans, greater mental capacities, and expanded life choices? We should be willing to sacrifice some fear to gain those advantages.

    1. Nat Wilson Turner (post author)

      WWI effectively dispelled the myth of progress. Everything since is just remedial classes for those who didn’t pay attention.

