“Alexa, Launch Our Nukes!” Artificial Intelligence and the Future of War

Yves here. By virtue of synchronicity, reader Wukchumni recommended the Cold War classic The Bedford Incident and highlighted a segment from it.

By Michael T. Klare, the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. His most recent book is The Race for What’s Left. His next book, All Hell Breaking Loose: Climate Change, Global Chaos, and American National Security, will be published in 2019. Originally published at TomDispatch

There could be no more consequential decision than launching atomic weapons and possibly triggering a nuclear holocaust. President John F. Kennedy faced just such a moment during the Cuban Missile Crisis of 1962 and, after envisioning the catastrophic outcome of a U.S.-Soviet nuclear exchange, he came to the conclusion that the atomic powers should impose tough barriers on the precipitous use of such weaponry. Among the measures he and other global leaders adopted were guidelines requiring that senior officials, not just military personnel, have a role in any nuclear-launch decision.

That was then, of course, and this is now. And what a now it is! With artificial intelligence, or AI, soon to play an ever-increasing role in military affairs, as in virtually everything else in our lives, the role of humans, even in nuclear decision-making, is likely to be progressively diminished. In fact, in some future AI-saturated world, it could disappear entirely, leaving machines to determine humanity’s fate.

This isn’t idle conjecture based on science fiction movies or dystopian novels. It’s all too real, all too here and now, or at least here and soon to be. As the Pentagon and the military commands of the other great powers look to the future, what they see is a highly contested battlefield — some have called it a “hyperwar” environment — where vast swarms of AI-guided robotic weapons will fight each other at speeds far exceeding the ability of human commanders to follow the course of a battle. At such a time, it is thought, commanders might increasingly be forced to rely on ever more intelligent machines to make decisions on what weaponry to employ when and where. At first, this may not extend to nuclear weapons, but as the speed of battle increases and the “firebreak” between them and conventional weaponry shrinks, it may prove impossible to prevent the creeping automatization of even nuclear-launch decision-making.

Such an outcome can only grow more likely as the U.S. military completes a top-to-bottom realignment intended to transform it from a fundamentally small-war, counter-terrorist organization back into one focused on peer-against-peer combat with China and Russia. This shift was signaled in the White House’s December 2017 National Security Strategy and mandated by the Pentagon’s National Defense Strategy of January 2018. Rather than focusing mainly on weaponry and tactics aimed at combating poorly armed insurgents in never-ending small-scale conflicts, the American military is now being redesigned to fight increasingly well-equipped Chinese and Russian forces in multi-dimensional (air, sea, land, space, cyberspace) engagements involving multiple attack systems (tanks, planes, missiles, rockets) operating with minimal human oversight.

“The major effect/result of all these capabilities coming together will be an innovation warfare has never seen before: the minimization of human decision-making in the vast majority of processes traditionally required to wage war,” observed retired Marine General John Allen and AI entrepreneur Amir Hussain. “In this coming age of hyperwar, we will see humans providing broad, high-level inputs while machines do the planning, executing, and adapting to the reality of the mission and take on the burden of thousands of individual decisions with no additional input.”

That “minimization of human decision-making” will have profound implications for the future of combat. Ordinarily, national leaders seek to control the pace and direction of battle to ensure the best possible outcome, even if that means halting the fighting to avoid greater losses or prevent humanitarian disaster. Machines, even very smart machines, are unlikely to be capable of assessing the social and political context of combat, so activating them might well lead to situations of uncontrolled escalation.

It may be years, possibly decades, before machines replace humans in critical military decision-making roles, but that time is on the horizon. When it comes to controlling AI-enabled weapons systems, as Secretary of Defense Jim Mattis put it in a recent interview, “For the near future, there’s going to be a significant human element. Maybe for 10 years, maybe for 15. But not for 100.”

Why AI?

Even five years ago, there were few in the military establishment who gave much thought to the role of AI or robotics when it came to major combat operations. Yes, remotely piloted aircraft (RPA), or drones, have been widely used in Africa and the Greater Middle East to hunt down enemy combatants, but those are largely ancillary (and sometimes CIA) operations, intended to relieve pressure on U.S. commandos and allied forces facing scattered bands of violent extremists. In addition, today’s RPAs are still controlled by human operators, even if from remote locations, and make little use, as yet, of AI-powered target-identification and attack systems. In the future, however, such systems are expected to populate much of any battlespace, replacing humans in many or even most combat functions.

To speed this transformation, the Department of Defense is already spending hundreds of millions of dollars on AI-related research. “We cannot expect success fighting tomorrow’s conflicts with yesterday’s thinking, weapons, or equipment,” Mattis told Congress in April. To ensure continued military supremacy, he added, the Pentagon would have to focus more “investment in technological innovation to increase lethality, including research into advanced autonomous systems, artificial intelligence, and hypersonics.”

Why the sudden emphasis on AI and robotics? It begins, of course, with the astonishing progress made by the tech community — much of it based in Silicon Valley, California — in enhancing AI and applying it to a multitude of functions, including image identification and voice recognition. One of those applications, Alexa Voice Services, is the computer system behind Amazon’s smart speaker that not only can use the Internet to do your bidding but interpret your commands. (“Alexa, play classical music.” “Alexa, tell me today’s weather.” “Alexa, turn the lights on.”) Another is the kind of self-driving vehicle technology that is expected to revolutionize transportation.

Artificial Intelligence is an “omni-use” technology, explain analysts at the Congressional Research Service, a non-partisan information agency, “as it has the potential to be integrated into virtually everything.” It’s also a “dual-use” technology in that it can be applied as aptly to military as to civilian purposes. Self-driving cars, for instance, rely on specialized algorithms to process data from an array of sensors monitoring traffic conditions and so decide which routes to take, when to change lanes, and so on. The same technology and reconfigured versions of the same algorithms will one day be applied to self-driving tanks set loose on future battlefields. Similarly, someday a drone aircraft — without a human operator in a distant locale — will be capable of scouring a battlefield for designated targets (tanks, radar systems, combatants), determining that something it “sees” is indeed on its target list, and “deciding” to launch a missile at it.
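The “see, match, decide” loop described above can be caricatured in a few lines of Python. This is a purely hypothetical sketch; every name, label, and threshold here is invented for illustration and no real weapons system is being described:

```python
# Toy model of the "see / match / decide" targeting loop described above.
# All names and numbers are invented; this reflects no real system.

TARGET_LIST = {"tank", "radar", "combatant"}
CONFIDENCE_THRESHOLD = 0.9  # minimum classifier confidence before any action

def decide(label, confidence, human_in_loop=True):
    """Return the action taken for one sensor detection."""
    if label not in TARGET_LIST or confidence < CONFIDENCE_THRESHOLD:
        return "ignore"
    if human_in_loop:
        return "request human authorization"  # roughly where today's RPAs stop
    return "engage"  # the fully autonomous case the article warns about

print(decide("tank", 0.95))                       # request human authorization
print(decide("tank", 0.95, human_in_loop=False))  # engage
print(decide("ambulance", 0.99))                  # ignore: not on the target list
```

The entire moral weight of the debate sits in that one `human_in_loop` flag: flip it to `False` and the same code, driven by the same imperfect classifier, fires on its own.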

It doesn’t take a particularly nimble brain to realize why Pentagon officials would seek to harness such technology: they think it will give them a significant advantage in future wars. Any full-scale conflict between the U.S. and China or Russia (or both) would, to say the least, be extraordinarily violent, with possibly hundreds of warships and many thousands of aircraft and armored vehicles all focused in densely packed battlespaces. In such an environment, speed in decision-making, deployment, and engagement will undoubtedly prove a critical asset. Given future super-smart, precision-guided weaponry, whoever fires first will have a better chance of success, or even survival, than a slower-firing adversary. Humans can move swiftly in such situations when forced to do so, but future machines will act far more swiftly, while keeping track of more battlefield variables.

As General Paul Selva, vice chairman of the Joint Chiefs of Staff, told Congress in 2017,

“It is very compelling when one looks at the capabilities that artificial intelligence can bring to the speed and accuracy of command and control and the capabilities that advanced robotics might bring to a complex battlespace, particularly machine-to-machine interaction in space and cyberspace, where speed is of the essence.”

Aside from aiming to exploit AI in the development of its own weaponry, U.S. military officials are intensely aware that their principal adversaries are also pushing ahead in the weaponization of AI and robotics, seeking novel ways to overcome America’s advantages in conventional weaponry. According to the Congressional Research Service, for instance, China is investing heavily in the development of artificial intelligence and its application to military purposes. Though lacking the tech base of either China or the United States, Russia is similarly rushing the development of AI and robotics. Any significant Chinese or Russian lead in such emerging technologies that might threaten this country’s military superiority would be intolerable to the Pentagon.

Not surprisingly then, in the fashion of past arms races (from the pre-World War I development of battleships to Cold War nuclear weaponry), an “arms race in AI” is now underway, with the U.S., China, Russia, and other nations (including Britain, Israel, and South Korea) seeking to gain a critical advantage in the weaponization of artificial intelligence and robotics. Pentagon officials regularly cite Chinese advances in AI when seeking congressional funding for their projects, just as Chinese and Russian military officials undoubtedly cite American ones to fund their own pet projects. In true arms race fashion, this dynamic is already accelerating the pace of development and deployment of AI-empowered systems and ensuring their future prominence in warfare.

Command and Control

As this arms race unfolds, artificial intelligence will be applied to every aspect of warfare, from logistics and surveillance to target identification and battle management. Robotic vehicles will accompany troops on the battlefield, carrying supplies and firing on enemy positions; swarms of armed drones will attack enemy tanks, radars, and command centers; unmanned undersea vehicles, or UUVs, will pursue both enemy submarines and surface ships. At the outset of combat, all these instruments of war will undoubtedly be controlled by humans. As the fighting intensifies, however, communications between headquarters and the front lines may well be lost and such systems will, according to military scenarios already being written, be on their own, empowered to take lethal action without further human intervention.

Most of the debate over the application of AI and its future battlefield autonomy has been focused on the morality of empowering fully autonomous weapons — sometimes called “killer robots” — with a capacity to make life-and-death decisions on their own, or on whether the use of such systems would violate the laws of war and international humanitarian law. Such statutes require that war-makers be able to distinguish between combatants and civilians on the battlefield and spare the latter from harm to the greatest extent possible. Advocates of the new technology claim that machines will indeed become smart enough to sort out such distinctions for themselves, while opponents insist that they will never prove capable of making critical distinctions of that sort in the heat of battle and would be unable to show compassion when appropriate. A number of human rights and humanitarian organizations have even launched the Campaign to Stop Killer Robots with the goal of adopting an international ban on the development and deployment of fully autonomous weapons systems.

In the meantime, a perhaps even more consequential debate is emerging in the military realm over the application of AI to command-and-control (C2) systems — that is, to ways senior officers will communicate key orders to their troops. Generals and admirals always seek to maximize the reliability of C2 systems to ensure that their strategic intentions will be fulfilled as thoroughly as possible. In the current era, such systems are deeply reliant on secure radio and satellite communications systems that extend from headquarters to the front lines. However, strategists worry that, in a future hyperwar environment, such systems could be jammed or degraded just as the speed of the fighting begins to exceed the ability of commanders to receive battlefield reports, process the data, and dispatch timely orders. Consider this a functional definition of the infamous fog of war multiplied by artificial intelligence — with defeat a likely outcome. The answer to such a dilemma for many military officials: let the machines take over these systems, too. As a report from the Congressional Research Service puts it, in the future “AI algorithms may provide commanders with viable courses of action based on real-time analysis of the battle-space, which would enable faster adaptation to unfolding events.”
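The CRS notion of algorithms that “provide commanders with viable courses of action” reduces, at bottom, to ranking options by some scoring function. Here is a minimal, entirely hypothetical sketch; the options, probabilities, and weights are invented:

```python
# Hypothetical sketch of AI-ranked "courses of action"; all data is invented.

def rank_courses_of_action(options):
    """Order candidate actions by a crude expected-value score:
    probability of success times payoff, minus estimated risk."""
    return sorted(
        options,
        key=lambda o: o["p_success"] * o["value"] - o["risk"],
        reverse=True,
    )

options = [
    {"name": "hold position", "p_success": 0.9, "value": 10, "risk": 1},   # score  8.0
    {"name": "flank east",    "p_success": 0.6, "value": 30, "risk": 8},   # score 10.0
    {"name": "full assault",  "p_success": 0.3, "value": 50, "risk": 20},  # score -5.0
]

for option in rank_courses_of_action(options):
    print(option["name"])
```

The article’s worry is visible in the lambda itself: anything the scoring function omits (escalation risk, civilian harm, the political context of the battle) simply does not exist for the machine, no matter how fast it ranks.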

And someday, of course, it’s possible to imagine that the minds behind such decision-making would cease to be human ones. Incoming data from battlefield information systems would instead be channeled to AI processors focused on assessing imminent threats and, given the time constraints involved, executing what they deemed the best options without human instructions.

Pentagon officials deny that any of this is the intent of their AI-related research. They acknowledge, however, that they can at least imagine a future in which other countries delegate decision-making to machines and the U.S. sees no choice but to follow suit, lest it lose the strategic high ground. “We will not delegate lethal authority for a machine to make a decision,” then-Deputy Secretary of Defense Robert Work told Paul Scharre of the Center for a New American Security in a 2016 interview. But he added the usual caveat: in the future, “we might be going up against a competitor that is more willing to delegate authority to machines than we are and as that competition unfolds, we’ll have to make decisions about how to compete.”

The Doomsday Decision

The assumption in most of these scenarios is that the U.S. and its allies will be engaged in a conventional war with China and/or Russia. Keep in mind, then, that the very nature of such a future AI-driven hyperwar will only increase the risk that conventional conflicts could cross a threshold that’s never been crossed before: an actual nuclear war between two nuclear states. And should that happen, those AI-empowered C2 systems could, sooner or later, find themselves in a position to launch atomic weapons.

Such a danger arises from the convergence of multiple advances in technology: not just AI and robotics, but the development of conventional strike capabilities like hypersonic missiles capable of flying at five or more times the speed of sound, electromagnetic rail guns, and high-energy lasers. Such weaponry, though non-nuclear, when combined with AI surveillance and target-identification systems, could even attack an enemy’s mobile retaliatory weapons and so threaten to eliminate its ability to launch a response to any nuclear attack. Given such a “use ’em or lose ’em” scenario, any power might be inclined not to wait but to launch its nukes at the first sign of possible attack, or even, fearing loss of control in an uncertain, fast-paced engagement, delegate launch authority to its machines. And once that occurred, it could prove almost impossible to prevent further escalation.

The question then arises: Would machines make better decisions than humans in such a situation? They certainly are capable of processing vast amounts of information over brief periods of time and weighing the pros and cons of alternative actions in a thoroughly unemotional manner. But machines also make military mistakes and, above all, they lack the ability to reflect on a situation and conclude: Stop this madness. No battle advantage is worth global human annihilation.

As Paul Scharre put it in Army of None, a new book on AI and warfare, “Humans are not perfect, but they can empathize with their opponents and see the bigger picture. Unlike humans, autonomous weapons would have no ability to understand the consequences of their actions, no ability to step back from the brink of war.”

So maybe we should think twice about giving some future militarized version of Alexa the power to launch a machine-made Armageddon.


38 comments

    1. Colonel Smithers

      Thank you, Vlade.

Factor in an increasingly politicised and younger civil service and officer corps, few of the latter having fought in wars, and the recipe for disaster gets stronger.

      1. Adam Eran

Not to mention the popularity of first-person shooter video games that desensitize players to the mayhem that shooting to kill produces.

        I’ve read that 90% of the bullets shot in World War I missed their targets, so great was the abhorrence for taking another life. I doubt we could say the same about current warfare.

    2. Carolinian

      The Doomsday Machine. They were going to announce it at the party congress. Reportedly Russia now has one of these and it will launch missiles if the Russian leadership is taken out by NATO’s ever encroaching launch sites. Putin himself has said he “can’t imagine a world without Russia” so as far as they are concerned it’s apres moi le deluge all the way.

There’s a discussion in Links this morning about whether a centrist Dem would be preferable to Trump next election. The problem with those “centrists” is that they are quite mad when it comes to foreign policy and it’s getting worse all the time. As far as nukes are concerned we might just be better off with Trump. It would be bad for his hotel business. But the real villain here is the MIC. Time to close down the five sided building and turn it into a shopping mall.

    3. Anders K

For some perspective, especially with how machine learning works, check out Saturday Morning Breakfast Cereal’s take.

That said, while autonomous weapons of mass destruction are worrying, the whole setup of “the machines know best” is liable to cause issues far before that – imagine that with every decision, there’s a note: “the Algorithm predicts this choice to have a 36% chance of success.”

      Expect “but the machine told me it was the best idea” to appear. Whether we will have (or indeed, have a need for) a similar treatment of that as the Nuremberg defense shall be interesting to see.

    4. lyman alpha blob

      I just don’t understand the logic behind allowing AI to control much of anything. Human beings are fallible, sure, and because of that we supposedly need to let AI make the decisions. But the human beings who create the AI are not fallible?!?!? Who’s going to be doing the programming, the Pope?

      On the bright side, we can always hope that the AIs are prone to developing the Barnhouse Effect and become weapons with a conscience. We miss you Kurt Vonnegut!

      1. Thomas P

The idea is that battles of the future will be too fast and too chaotic for humans to make correct decisions, so instead you have people take their time training this AI for as many situations as they can come up with in advance. In war the AI will then quickly make decisions at least as good as those humans would make if they had enough time to ponder the issue.

        Will it work? Doubtful, since any training will have to be done using simulated battles and a real battle may quickly diverge from anything expected. This isn’t chess where you have a small set of allowed moves and know exactly the possible moves from the other side.

    5. clarky90

Yesterday I linked to the NY Times op-ed “Would Human Extinction Be a Tragedy?”:

      https://www.nytimes.com/2018/12/17/opinion/human-extinction-climate-change.html

      The decision of whether to launch the nukes or NOT, will be a reflection of the mores and values of the code writers of the AIs’ algorithms.

      Are there any vegans, vegetarians, Animal Rights Activists, climate change warriors in the code community?

      (Hitler was a vegetarian, animal lover, artist type)

  1. Alex V

    If the procurement and development process of most recent “defense” systems (especially the F-35 and Littoral Combat Ship) is anything to go by, we have little to fear. The incompetence and greed of contractors will prevent any of this tech ever making it into the real world before the empire collapses through financialization.

    The author far overestimates the ability of the United States to produce actual physical hardware. AI still requires semiconductor ICs to operate on. Compare the number and type of fabs in the US to those in Asia:

    https://en.wikipedia.org/wiki/List_of_semiconductor_fabrication_plants

    The vast majority of commercial AI is probably running on hardware manufactured in Asia. I find it quite unlikely the US will ramp up production capability in this field for strictly military AI uses, especially given the lack of any kind of robust national industrial policy.

    1. lordkoos

      I find it crazy that with the endless talk about “national security”, no one seems to care that the US can’t manufacture its own hardware.

  2. Disturbed Voter

There is no real AI, just different degrees of neglect of duty.

All such “fire on warning” systems, such as the one that nearly blew up the world from the Soviet Union in 1983 (because of an unannounced, provocative missile test by the US), have to have a “man in the loop” to take responsibility. Usually at the end of a long chain of command. This slows the whole process down, which is a good thing.

    If even one side has such a system, then destruction is guaranteed.

  3. Thuto

    Let’s go right ahead and give the kids match boxes so they can set the whole house alight in their race to attain “strategic advantage”. Advantage my foot, this is just stupid and piles on the evidence that humanity as a whole is an infantile race, willing to annihilate itself to enable some among its ranks to have bragging rights about “maintaining military superiority”. While religion may have tried, and largely failed, to invoke the administering of celestial justice in the afterlife as a means of keeping in check the worst human proclivities, I wonder what secular humanism has up its sleeve to address such existential fracturing of the moral fabric to the extent that humans are now competing with one another to build technologies that threaten to wipe out the entire species. The ability of humans to destroy themselves (aka technological “progress”) has raced far ahead of their spiritual development, leaving such things as compassion, mutual respect, the sanctity of all life etc lagging far behind, leaving one to conclude that it is indeed true that “the mark of a primitive society is to call regression progress.”

Establishing a legal framework to govern all this will be a waste of time (ask the people in Yemen how “international law” is halting the total destruction of their country) as the ascension to power of people like Trump has shown that multilateral treaties mean nothing. Short of taking away the match sticks from the children, I’m not sure what will save us.

  4. Wukchumni

We’ve all heard about the close calls, often only decades after war almost happened by mistake. The sad thing is that if an errant missile goes, a bunch more are sure to follow, and the tragedy is that nobody will probably know why it came to this.

    No assassination of the Archduke, or Pearl Harbor, or 9/11 to pin it on.

  5. David

Yes, well, if you look at the text, you see that “might,” “could,” “may,” and “possibly” are doing an awful lot of work. Indeed, if you take out the sentences with those words in, there’s nothing much left of the argument. The argument itself (such as it is) appears to be that if technologies which don’t exist now were to exist, and nations were to change the way they manage conflict, and if a succession of extremely stupid choices were made, then the world might end. So it’s a scenario for a science fiction film, not a serious projection or analysis. In reality, probably nothing is as closely guarded and protected by nuclear powers as the mechanics of nuclear release in the hands of the political leadership. The fact that self-launching nuclear missiles and intelligent weapons have been staples of science fiction for generations now doesn’t mean that they exist, or ever will. Dr Strangelove is one of my favourite films, but it’s fiction, not documentary.

    1. Anders K

      The question is actually a subset of “will humans be stupid enough to… ?” and in general, that question has always been answered with a resounding “yes, some humans will indeed be that stupid.”

      But as said before, even if we leave the humans in the loop, as long as the machines control the data which the choices are based on and/or give recommendations based on the data, we can end up in the same place. Especially when you consider who would train the AI.

  6. Ptb

    I would expect much more mundane consequences.

    Automagic target ID -> Plausible deniability -> Elimination of responsibility for actions

    Who could possibly get in any trouble (even the most minor reprimand) for the system taking out an elementary school that looked like a training facility to the heuristic algorithm scanning the movements on the back field? The algorithm had the official stamp of approval, end of story.

The programming will be done by Lockheed, who is of course immune…

    It will be very much like how it plays out when military work is shipped out to the private marketplace…

  7. Alex Cox

    Don’t really see the point of the article, though. The author writes, “Any full-scale conflict between the U.S. and China or Russia (or both) would, to say the least, be extraordinarily violent”.

You don’t say! That had never occurred to me. Even more troublingly, it doesn’t seem to occur to the political or media classes that such a war will inevitably go nuclear early on.

    During the 1980s it was NATO policy to respond to a conventional Warsaw Pact tank invasion of Western Europe with nuclear weapons. The Americans’ logic was that they were simply out-gunned by the Russians and that to slow down the attack, nuclear weapons would be used.

    Today, the reverse is true. Russia feels out-gunned with NATO on its own borders, and official Russian policy is now to use nuclear weapons in the event of an existential threat – i.e. an invasion by US/NATO troops.

    Daniel Ellsberg’s book The Doomsday Machine is really worth reading. He reports that nuclear weapons delegation has long been the case. The US and Russian military already have the authority and capacity to launch nuclear missiles and drop thermonuclear bombs in the event of a “decapitating” attack on their leadership.

    And then there’s the Doomsday Machine itself – the Dead Hand, a simple AI system designed to launch all the remaining Russian nukes after a sneak attack is detected.

    All these systems are already in place and will likely see action long long before very expensive robotic army systems and freeways full of driverless cars…

    1. David

NATO doctrine during the 1980s was to avoid trying to match the WP in conventional forces, which would have been ruinously expensive, but to fight a conventional defence up to a certain point (there was a name for it, Line Omega I think), after which there would be political approval for the use of tactical nuclear weapons. The intention was to send a political message rather than affect events on the battlefield. It’s worth noting that Ellsberg is writing about the 1960s, when militaries were struggling to adapt to nuclear weapons. Things changed quite a bit thereafter.

  8. Jeremy Grimm

I think the Pentagon should shift all funding for developing and procuring Artificial Intelligence, new weapons, additional weapons, and upgrades to our nuclear arsenals toward developing some human intelligence in the military, and I don’t mean HUMINT.

  9. knowbuddhau

    “At such a time, it is thought, commanders might increasingly be forced to rely on ever more intelligent machines to make decisions on what weaponry to employ when and where.”

    Hardly surprising. They presume they’re “lean, mean fighting *machines,” raised from birth to conceive of the world as a gift from *the Cosmic Tyrant-Engineer, build machines to model the presumed mechanical human mind, have humans program them, then turn their decisions back to The Great Slot Machine, as if they had nothing to do with bringing it and their misbegotten war into being in the first place.

It’s their choice, to think of the world in those terms. They’re not “forced.” They agree to keep playing the same insane game they’ve been playing all their lives. It’s either that or, right there in the Highest Temple of the Highest Holy, with all that god-like power literally within reach, with (they imagine) only minutes or seconds to decide, turn heretic.

    As others have rightly pointed out: they wouldn’t be there in the first place if that was the remotest of detectable possibilities.

    When we loosed the nuclear genie, was this scenario inevitable? Has the unthinkable long been a foregone conclusion? I don’t think it’s as unthinkable, to those who’ve been planning for it all their lives, as we think.

    ‘World’s gonna end, then Jesus comes back, then we all go to our heavenly reward. (I choose to believe those who say) it’s in the Book. Why wait? (I choose to believe the machine I helped build, presuming the cosmos to be God’s own perpetual motion holy war cash machine, that says) we’re under attack! Finally! God helps those who help themselves. Praise the Lord, and press the Big Red Button.’

  10. Plenue

    “With artificial intelligence, or AI, soon to play an ever-increasing role in military affairs, as in virtually everything else in our lives”

    Is this premise even correct though? I think there’s a hell of a lot of hype in regards to AI. Lots of people selling their product. We know self-driving cars are turning out to be something of a bust. Why not other areas?

  11. ewmayer

    “The major effect/result of all these capabilities coming together will be an innovation warfare has never seen before: the minimization of human decision-making in the vast majority of processes traditionally required to wage war,” observed retired Marine General John Allen and AI entrepreneur Amir Hussain.

    Hmm, I prefer the original summary of the strategy:

    “Defense network computers. New… powerful… hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.”

  12. James

    I was talking to a couple of guys in Kenya a couple of years ago and they were telling me how opposition leaders who look like they might win an election keep dying in “airplane accidents”. Knowing that the US govt has close ties to the Kenyan government, I suggested that the CIA might be involved.

    One of the Kenyans became very offended. “Our security services are quite capable of assassinating opposition politicians without any help from the CIA” he insisted.

    I feel the same way every time I read a scare story about AI. I believe our political leaders are quite capable of starting World War III and getting us all killed without any help from any AI. I write computer software for a living by the way.

    1. ewmayer

      So you’re less concerned about artificial intelligence than you are about real stupidity – a sensible POV. My main concern is about the folks running the BigMil racket using the former to weaponize the latter to a much greater extent than currently. Removing humans from the command decisions chain means removing the last chance for sanity to intervene to prevent armageddon. Given the now-lengthy known history of cold war close shaves, imagine an AI replacing one of the humans in those now-famous incidents where one person’s last-minute decision to err on the side of caution and sanity saved the world from disaster.

General Dave Ripper: “Alexa, please cancel the global coordinated missile launch – we just got word that the first strike alert was due to a mylar birthday balloon getting tangled in one of our sensors.”

      Alexa: “I’m afraid I can’t do that, Dave…”

  13. m. sam

I don’t know, this just sounds like another boondoggle to me. Why would a country ever compete on this nonsense, unless they want to keep the gravy train rolling? Because, if I were Russia, say, and didn’t have the technical expertise or resources or whatever else it takes to manufacture and train killer robots, why not just say screw off, I’ll nuke you, and call it good? I mean, they can still destroy the planet several times over. I see nothing here that says nukes are no longer a deterrent, even if we are pouring trillions into tactical robots that go “pew-pew” or whatever, so why bother? You invade me with killer robots, I nuke you. I think that’s the end of the story, and therefore this is a total sideshow. In other words, what a huge money pit we have here: must-have increases in the defense budget from here until the end of time. Now that’s what is important!

  14. H. Alexander Ivey

    There could be no more consequential decision than launching atomic weapons and possibly triggering a nuclear holocaust.

    I guess this statement shows the mindset: it is possible to have a war, using nuclear bombs, and not have a holocaust. Right…

    Would someone please explain how that is possible. If a side uses a nuclear bomb, it will be to destroy something of high military value, like the staging area for an army or the main harbour of a naval fleet or the main air base of the air force. With just a handful of bombs and ICBMs, one side could easily stop the other from carrying on a war. Sooo, the other side, if they had nuclear weapons would use them to destroy the first side’s army, navy, etc. Since both sides could destroy the other’s ability to fight, in a short amount of time, one or both sides would resort to a nuclear holocaust strike.

    Humm, I think I’ve heard of this argument before. MAD, mutually assured destruction. But that was so ’60s, so before the computer and the internet. Technology must have changed everything, MAD no longer, ahh-hahahahaha….

