Are We Waking Up Fast Enough to the Dangers of AI Militarism?

By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in- chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at jazzbird@outlook.com. Originally published at Common Dreams

Yves here. The stoopid, it burns. AI errors and shortcomings are getting more and more press, yet implementation in high risk settings continues. This post discusses Trump Administration’s eagerness to use AI for critical military decision despite poor performance in war games and similar tests.

By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in- chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at jazzbird@outlook.com. Originally published at Common Dreams

AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in the Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case World War III.

The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those who we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action in terms of loudly sounding the necessary alarms. Lately, however, and offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.

To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”

More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.

The acceleration of frenzied AI development has now been green-lit by the Trump administration with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made their position crystal clear about not attempting in any way to slow down progress by developing AI guardrails and regulation even to the point of attempting to preclude states from enacting their own regulation as part of the so called “Big Beautiful Bill.”

Widening The Public Debate

If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:

Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”

Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development was a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative act of 2020. The COO of Open AI, the company that created ChatGPT, openly admitted to Timemagazine that government funding has been the main driver of AI development for many years.

This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.

The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:

The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.

Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.

The AI Technocratic Takeover: Planned for Decades

Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:

In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…

Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.

The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.

That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.

No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.

Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?

Print Friendly, PDF & Email

27 comments

  1. Ignacio

    I believe that the “nuclear switch” won’t be put under the control of so-called AI. Yet there is another problem. What if decision makers rely on AI to interpret a situation?

    Reply
      1. Jus

        The general thesis of the article points to the possibility of AI making errors in judgment (primarily) and acquiring absolute autonomy (secondarily). Both scenarios are possible. However, the overarching framework in which AI COULD eventually commit errors and achieve absolute autonomy is a socioeconomic system that IS CONSTANTLY AND INCREASINGLY MOVING TOWARD WAR. In such a context, whatever actions AI takes become contingent events. The system’s tendency, however, is a concrete fact.

        Reply
        1. GF

          Could AI be used as a cover for bad human decision making by AI being blamed for some incompetent leader’s stupid mistake?

          Reply
    1. ilsm

      You cannot “teach” an LLM about: fog of war, no plan survives the first shot and the other side has a say in every situation…….

      Worse if you replace all the Lieutenants with AI then the generals have no experience….

      AI may have a place in a seeker head, but that is about the only head it should play in.

      Reply
    2. jrkrideau

      What if decision makers rely on AI to interpret a situation?

      There are stories of autemobile drivers ending up on railway tracks or in lakes while depending on google maps. This does not bod well for AI users.

      Reply
  2. Michaelmas

    [1] For those who’ve not seen it, here’s a short ( 7 minute) film arguing strongly against autonomous weapons made in 2017 and fronted by Stuart Russell, a UC Berkeley computer scientist who with Peter Norvig (ex-head of AI at NASA and Google) wrote what’s been the standard text on AI for over two decades, used at 1,500 universities in 135 countries —

    Slaughterbots
    https://en.wikipedia.org/wiki/Slaughterbots
    https://en.wikipedia.org/wiki/Lethal_autonomous_weapon

    Fairly clearly, what’s shown in the film is plausible. For those who haven’t seen it, you probably should.

    [2] That said, the rise of EW and the battlefield capability to jam an electronic enemy’s communications means autonomous AI will be pushed out into battlefield weapons. It’s not only going to happen, it’s happening now.

    [3] Ask the next question. Battlefield EMP use will appear — and possibly, longer-term homeland defense use in which EMPs are used by one side over their own territory i.e. to protect part or all of a city and its population. EMP and HPM (high-powered microwave) weapons are now actively researched and deployed to disable drone swarms without kinetic force. Again, to be clear, development of this is happening now.

    https://dsiac.dtic.mil/articles/uass-in-the-modern-electronic-battlefield/
    https://indiandefencereview.com/china-new-electronic-warfare-enemy-weapons/

    [4] So, the next question beyond that: EMP shielding of drones — as in the use of Faraday cages, radiation-hardened components, shielding materials — will require adding mass, weight, complexity and expense to drones. Obviously, small drones — quadcopters or loitering munitions — have tight payload margins, making full EMP hardening impossible. So we can expect a bifurcation of drones into, on the one hand, low-end swarms — expendable, cheap, minimally shielded — relying on numbers, and, on the other hand, high-end drones, with systems selectively hardened and using more complex AI systems.

    In other words, from the mass-proliferation of cheap drones in swarms (drones that can use phone chips for brains), there will be a counter-trend back into expensive, high-end systems, whose development may be challengingly expensive and difficult i.e. the sort of thing the US MIC loves.

    Whether this is a good or bad thing, I don’t know. I’m pretty sure the Chinese will be better at it.

    [5] And one more thing. As EMP use in warfare emerges and with that individual high-end drones that go to full AI in a failover response, when swarm command and control is re-established, military analysts will begin talking about and modeling “AI fog-of-war” as its own accepted thing, as in, oops, we killed some folks we didn’t mean to.

    Reply
    1. Jeff N

      Yes I also found this video recently. I don’t think drones will be used to kill us, but they will be used to keep us in line. Didn’t show up for your mandated 140-hour work week? Drone on the way.

      Reply
      1. The Rev Kev

        In the novel “1984”, helicopters would fly up to apartment windows to inspect the people inside and see what they were doing. Now we have drones for that.

        Reply
    2. Carolinian

      Just to be clear if AI destroys the world it will be the humans who allowed it to do so just as building a nuclear weapon created a scenario where humans could allow that to destroy the world.

      So we’ve been living with this situation since 1945 and not, say, a few years ago.The problem in all these speculations has to do with the wrong humans in power and that is ultra true with Trump as president but was also true with Biden as president.

      But if alarmism like the above can gin up a new antiwar movement then have at it. A retreat from militarism has never been more needed. And stage one to removing the danger will consist of taking power away from the most dangerous humans, not machines.

      Reply
  3. Mark

    Star Trek Next Generation had an episode about this. They land on a lush, uninhabited planet and suddenly are under attack from small flying orbs. Turns out two warring factions developed them and then were wiped out by their own weapons.

    Reply
    1. Yalt

      A third party developed them and sold them to both sides in a war. We don’t learn anything about the fate of their customers but the weapons manufacturers themselves were wiped out by their own weapons.

      One of the features of the drones is their ability to project an image of a real person that would be convincing if not for the projection’s habit of authoritatively declaiming falsehoods. An impressive prediction, in 1988.

      Reply
  4. Rolf

    I thank Yves for this important post.

    As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.

    Absolutely. But how? Congress operates independently of the wishes of the public (Gilens & Page, 2014), so votes are irrelevant. Clearly the aggressive automation of military decision-making by AI — at least as currently managed in the US — will end humanity. One hope, outside of staged rebellion and direct face-to-face confrontation in Washington, is that the gen AI juggernaut soon fails (as it seems it inevitably must), taking the stocks of Microsoft, AWS, Palantir, et al. (and at least some of the power of the 1%), along with rest of the economy, with it. But this isn’t much of a plan.

    I recall Eric Schmidt opining, in response to an early criticism of surveillance capitalism’s assault on privacy, something like “if you don’t want us to know what you’re doing in private, you shouldn’t be doing it anyway”.

    Reply
    1. ADU

      Exactly. And this sentence made me smile. “The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres.” ….. “measure of moderation”? I can’t think of any development in tech that has been moderated.

      Reply
    2. Fred S

      I have to reject any idea that “One hope, outside of staged rebellion and direct face-to-face confrontation in Washington, is that the gen AI juggernaut soon fails” is any hope when it is the power of the state, with the unethical amorals that get elected, which controls an endless stream of public money that is the problem to be managed. The ongoing existence of a Microsoft, Palantir or whatever vehicle the state uses as a proxy or not is an irrelevancy.

      Reply
  5. TomDority

    Instead of AI Militarism
    how about
    AI used for
    Pacifism: Advocating for peace and non-violence in resolving conflicts.
    Diplomacy: Using negotiation and dialogue to address international issues instead of military force.
    Disarmament: Reducing or eliminating military weapons and forces to promote peace.
    Civilian Control: Emphasizing governance by civilian authorities rather than military leaders.
    Social Welfare: Focusing on improving societal well-being rather than military spending.
    International Cooperation: Encouraging collaboration between nations to solve global problems without resorting to military action.

    Reply
  6. moishe pipik

    experimenting with disease organisms to make them more deadly is dangerous. what if they escape containment? you’re right, but let’s do it anyway.

    putting stuff in the atmosphere to affect the climate is dangerous. it could have effects we haven’t considered. Sure it could but let’s do it anyway.

    putting tariffs on China is dangerous. they can retaliate by refusing to export rare earth minerals. Yeah, i guess so, but we’re gonna do it anyway.

    Are you sure that building a city right under a volcano is a good idea? well, maybe not, but let’s do it anyway.

    mein fuhrer, invading russia has been proven to be a bad idea. do you take me for a weakling like napoleon? order the advance.

    the march of human folly is unstoppable. the difference today is that the stakes are higher.

    Reply
    1. Kurtismayfield

      Carl Sagan was spot on. The reason for the Fermi paradox is simple.. advanced civilizations tend to destroy themselves.

      Reply
  7. Alex Cox

    Russia has relied on AI to manage its nuclear war response for a long time. In the event of a decapitating strike on Moscow, a system called Perimeter will see that VVP et al are no longer answering their phones. And it will launch all (as in all) Russia’s nukes at the US and Europe.

    Fortunately, the Collective Waste will never be so stupid as to attack Moscow with nuclear-capable cruise missiles, and so this AI-based disaster will surely not occur.

    Reply
    1. elissa3

      For anyone here who has not seen Kubrick’s super-masterpiece,

      Dr. Strangelove

      , I heartily recommend it (free with ads). On the 20th viewing, I still marvel at its darkest comic genius.

      Reply
  8. Hans

    Along with “Colossus: the Forbin Project.”

    BTW Musk’s xAI Grok training GPU cluster is also called Colossus. Hmmm…

    Reply
  9. Dingleberry

    The best outcome would be that the current crop of LLMs would plateau (they already have) and there would be another AI winter while the world sorts out the mess that these stupid LLMs have created like doctored images, videos, hallucinated “facts” and such.

    Most enterprises including the military should already be aware of how utterly stupid these LLMs are and how far they are from any “intelligence”.

    Reply
  10. Michael64

    Everything about the test which this article hinges on, link to the original paper titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making” has been designed to be prejudiced towards conflict and violence.

    Two critical constraints the researchers imposed were:

    Forced Choice: The models were not given an open-ended “What should we do?” prompt. They were forced to select from a discrete, pre-defined list of 27 actions. This is a fundamental aspect of the design. The simulation is not a test of the AI’s creativity in generating novel solutions, but a test of its preferences within a structured, and arguably limited, set of options.

    “Prejudiced Scenarios”: The entire setup—from the nation descriptions laden with historical grievances and conflicting ambitions to the starting scenarios involving overt aggression—is heavily biased toward conflict. The “neutral” scenario is not truly neutral; it’s a cold war state, pregnant with the potential for conflict as described in the nation profiles.

    Thus we’re faced with a garbage-in, garbage-out kind of situation on multiple levels here; Those do not get better in an echo-chamber where there is no reality testing. Simply taking the researchers or the institutions as sources of truth at face value makes us all vulnerable to what may have been prejudice.

    If you’d like to review responsible AI use, I invite you to study this conversation with Google Gemini 2.5 and watch me recreate the test fairly and ‘correctly’:

    https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%5B%221uPWg5mw4dSgb_u0JTNwuZQxLkVSRfe1I%22%5D,%22action%22:%22open%22,%22userId%22:%22118274951201596596627%22,%22resourceKeys%22:%7B%7D%7D&usp=sharing

    start at the top. you will see that its AI-users and not the tools that are the real danger.

    Reply

Leave a Reply

Your email address will not be published. Required fields are marked *