Robot Generals: Will They Make Better Decisions Than Humans – Or Worse?

Yves here. This article misses the obvious need to continue to have generals, even if you buy into the barmy premise that AI could do better. I suspect that it is true that the software types could prove that AI beats humans in most cases…but what about situations where leadership actually matters, like the level of troop discipline, or the simple motivation level of the fighters? I suspect that computers would not have successfully gamed out, for instance, the USSR victory over Germany at Stalingrad. Factors like the superior ability of the Red Army, and its supporters in the population, to endure pain (the brutal cold), and, as part of that, their tenacity, are tested only in extremis and are not something AI could ever factor in.

The reason to still have generals is their value as salesmen and ambassadors. Look at how current and former generals do star turns in hearings and on TV. People (well, men) in business go gaga over generals and high level spooks. They garner top dollar as speakers at conferences.

So it appears the real reason for floating the “AI generals” trial balloon is to keep these decorated prima donnas in line, to tell them that the only reason they will still have a job is to carry the right PR message. Although, upon reflection, this idea could also be a pet scheme of one faction of military-industrial complex grifters “because AI” that will at least help sell more mundane applications.

By Michael T. Klare, the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. He is the author of 15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change. Originally published at TomDispatch.

With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields. Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level — by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff — including battle planning, intelligence-gathering, logistics, communications, and decision-making — is, according to the Pentagon’s latest plans, to be turned over to complex arrangements of sensors, computers, and software. All these will then be integrated into a “system of systems,” now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

The notion of using machines to make command-level decisions is not, of course, an entirely new one. It has, in truth, been a long time coming. During the Cold War, following the introduction of intercontinental ballistic missiles (ICBMs) with extremely short flight times, both military strategists and science-fiction writers began to imagine mechanical systems that would control such nuclear weaponry in the event of human incapacity.

In Stanley Kubrick’s satiric 1964 movie Dr. Strangelove, for example, the Soviet ambassador reveals that his country has installed a “doomsday machine” capable of obliterating all human life that would detonate automatically should the country come under attack by American nuclear forces. Efforts by a crazed anti-Soviet U.S. Air Force officer to provoke a war with Moscow then succeed in triggering that machine and so bring about human annihilation. In reality, fearing that they might experience a surprise attack of just this sort, the Soviets later did install a semi-automatic retaliatory system they dubbed “Perimeter,” designed to launch Soviet ICBMs in the event that sensors detected nuclear explosions and all communications from Moscow had been silenced. Some analysts believe that an upgraded version of Perimeter is still in operation, leaving us in an all-too-real version of a Strangelovian world.

In yet another sci-fi version of such automated command systems, the 1983 film WarGames, starring Matthew Broderick as a teenage hacker, portrayed a supercomputer called the War Operations Plan Response, or WOPR (pronounced “whopper”), installed at North American Aerospace Defense Command (NORAD) headquarters in Colorado. When the Broderick character hacks into it and starts playing what he believes is a game called “World War III,” the computer concludes an actual Soviet attack is underway and launches a nuclear retaliatory response. Although fictitious, the movie accurately depicts many aspects of the U.S. nuclear command-control-and-communications (NC3) system, which was then and still remains highly automated.

Such devices, both real and imagined, were relatively primitive by today’s standards, being capable solely of determining that a nuclear attack was under way and ordering a catastrophic response. Now, as a result of vast improvements in artificial intelligence (AI) and machine learning, machines can collect and assess massive amounts of sensor data, swiftly detect key trends and patterns, and potentially issue orders to combat units as to where to attack and when.

Time Compression and Human Fallibility

The substitution of intelligent machines for humans at senior command levels is becoming essential, U.S. strategists argue, because an exponential growth in sensor information combined with the increasing speed of warfare is making it nearly impossible for humans to keep track of crucial battlefield developments. If future scenarios prove accurate, battles that once unfolded over days or weeks could transpire in the space of hours, or even minutes, while battlefield information will be pouring in as multitudinous data points, overwhelming staff officers. Only advanced computers, it is claimed, could process so much information and make informed combat decisions within the necessary timeframe.

Such time compression and the expansion of sensor data may apply to any form of combat, but especially to the most terrifying of them all, nuclear war. When ICBMs were the principal means of such combat, decisionmakers had up to 30 minutes between the time a missile was launched and the moment of detonation in which to determine whether a potential attack was real or merely a false satellite reading (as did sometimes occur during the Cold War). Now, that may not sound like much time, but with the recent introduction of hypersonic missiles, such assessment times could shrink to as little as five minutes. Under such circumstances, it’s a lot to expect even the most alert decision-makers to reach an informed judgment on the nature of a potential attack. Hence the appeal (to some) of automated decision-making systems.

“Attack-time compression has placed America’s senior leadership in a situation where the existing NC3 system may not act rapidly enough,” military analysts Adam Lowther and Curtis McGiffin argued at War on the Rocks, a security-oriented website. “Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

This notion, that an artificial intelligence-powered device — in essence, a more intelligent version of the doomsday machine or the WOPR — should be empowered to assess enemy behavior and then, on the basis of “predetermined response options,” decide humanity’s fate, has naturally produced some unease in the community of military analysts (as it should for the rest of us as well). Nevertheless, American strategists continue to argue that battlefield assessment and decision-making — for both conventional and nuclear warfare — should increasingly be delegated to machines.

“AI-powered intelligence systems may provide the ability to integrate and sort through large troves of data from different sources and geographic locations to identify patterns and highlight useful information,” the Congressional Research Service noted in a November 2019 summary of Pentagon thinking. “As the complexity of AI systems matures,” it added, “AI algorithms may also be capable of providing commanders with a menu of viable courses of action based on real-time analysis of the battlespace, in turn enabling faster adaptation to complex events.”

The key wording there is “a menu of viable courses of action based on real-time analysis of the battlespace.” This might leave the impression that human generals and admirals (not to speak of their commander-in-chief) will still be making the ultimate life-and-death decisions for both their own forces and the planet. Given such anticipated attack-time compression in future high-intensity combat with China and/or Russia, however, humans may no longer have the time or ability to analyze the battlespace themselves and so will come to rely on AI algorithms for such assessments. As a result, human commanders may simply find themselves endorsing decisions made by machines — and so, in the end, become superfluous.

Creating Robot Generals

Despite whatever misgivings they may have about their future job security, America’s top generals are moving swiftly to develop and deploy that JADC2 automated command mechanism. Overseen by the Air Force, it’s proving to be a computer-driven amalgam of devices for collecting real-time intelligence on enemy forces from vast numbers of sensor devices (satellites, ground radars, electronic listening posts, and so on), processing that data into actionable combat information, and providing precise attack instructions to every combat unit and weapons system engaged in a conflict — whether belonging to the Army, Navy, Air Force, Marine Corps, or the newly formed Space Force and Cyber Command.

What, exactly, the JADC2 will consist of is not widely known, partly because many of its component systems are still shrouded in secrecy and partly because much of the essential technology is still in the development stage. Delegated with responsibility for overseeing the project, the Air Force is working with Lockheed Martin and other large defense contractors to design and develop key elements of the system.

One such building block is its Advanced Battle Management System (ABMS), a data-collection and distribution system intended to provide fighter pilots with up-to-the-minute data on enemy positions and help guide their combat moves. Another key component is the Army’s Integrated Air and Missile Defense Battle Command System (IBCS), designed to connect radar systems to anti-aircraft and missile-defense launchers and provide them with precise firing instructions. Over time, the Air Force and its multiple contractors will seek to integrate ABMS and IBCS into a giant network of systems connecting every sensor, shooter, and commander in the country’s armed forces — a military “internet of things,” as some have put it.

To test this concept and provide an example of how it might operate in the future, the Army conducted a live-fire artillery exercise this August in Germany using components (or facsimiles) of the future JADC2 system. In the first stage of the test, satellite images of (presumed) Russian troop positions were sent to an Army ground terminal, where an AI software program called Prometheus combed through the data to select enemy targets. Next, another AI program called SHOT computed the optimal match of available Army weaponry to those intended targets and sent this information, along with precise firing coordinates, to the Army’s Advanced Field Artillery Tactical Data System (AFATDS) for immediate action, where human commanders could choose to implement it or not. In the exercise, those human commanders had the mental space to give the matter a moment’s thought; in a shooting war, they might just leave everything to the machines, as the system’s designers clearly intend them to do.
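The exercise’s decision chain, an AI target selector (Prometheus), an AI weapon-to-target matcher (SHOT), and a human approval gate before AFATDS acts, can be sketched in miniature. The real Prometheus and SHOT programs are not public, so the function names, data fields, threshold, and greedy matching logic below are purely illustrative assumptions:

```python
def select_targets(detections, threshold=0.5):
    """Prometheus stand-in (hypothetical): keep sensor detections scored
    above a threat threshold, highest-scored first."""
    hits = [d for d in detections if d["score"] >= threshold]
    return sorted(hits, key=lambda d: d["score"], reverse=True)

def assign_weapons(targets, weapons):
    """SHOT stand-in (hypothetical): greedily pair each target with the
    first free weapon whose range covers it (weapons: name -> max range, km)."""
    free = dict(weapons)
    orders = []
    for t in targets:
        for name, max_range in list(free.items()):
            if max_range >= t["range_km"]:
                orders.append({"target": t["name"], "weapon": name})
                del free[name]  # each weapon gets one fire mission in this toy model
                break
    return orders

def release_fires(orders, commander_approves):
    """The human-in-the-loop gate the exercise retained: only orders a
    commander explicitly approves go on to the fire-control system."""
    return [o for o in orders if commander_approves(o)]
```

The article’s worry, in these terms, is that under attack-time compression `commander_approves` degenerates into `lambda order: True`, at which point the human gate is a formality.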

In the future, the Army is planning even more ambitious tests of this evolving technology under an initiative called Project Convergence. From what’s been said publicly about it, Convergence will undertake ever more complex exercises involving satellites, Air Force fighters equipped with the ABMS system, Army helicopters, drones, artillery pieces, and tactical vehicles. Eventually, all of this will form the underlying “architecture” of the JADC2, linking every military sensor system to every combat unit and weapons system — leaving the generals with little to do but sit by and watch.

Why Robot Generals Could Get It Wrong

Given the complexity of modern warfare and the challenge of time compression in future combat, the urge of American strategists to replace human commanders with robotic ones is certainly understandable. Robot generals and admirals might theoretically be able to process staggering amounts of information in brief periods of time, while keeping track of both friendly and enemy forces and devising optimal ways to counter enemy moves on a future battlefield. But there are many good reasons to doubt the reliability of robot decision-makers and the wisdom of using them in place of human officers.

To begin with, many of these technologies are still in their infancy, and almost all are prone to malfunctions that can neither be easily anticipated nor understood. And don’t forget that even advanced algorithms can be fooled, or “spoofed,” by skilled professionals.

In addition, unlike humans, AI-enabled decision-making systems will lack an ability to assess intent or context. Does a sudden enemy troop deployment, for example, indicate an imminent attack, a bluff, or just a normal rotation of forces? Human analysts can use their understanding of the current political moment and the actors involved to help guide their assessment of the situation. Machines lack that ability and may assume the worst, initiating military action that could have been avoided.

Such a problem will only be compounded by the “training” such decision-making algorithms will undergo as they are adapted to military situations. Just as facial recognition software has proved to be tainted by an over-reliance on images of white males in the training process — making them less adept at recognizing, say, African-American women — military decision-making algorithms are likely to be distorted by an over-reliance on the combat-oriented scenarios selected by American military professionals for training purposes. “Worst-case thinking” is a natural inclination of such officers — after all, who wants to be caught unprepared for a possible enemy surprise attack? — and such biases will undoubtedly become part of the “menus of viable courses of action” provided by decision-making robots.

Once integrated into decision-making algorithms, such biases could, in turn, prove exceedingly dangerous in any future encounters between U.S. and Russian troops in Europe or American and Chinese forces in Asia. A clash of this sort might, after all, arise at any time, thanks to some misunderstanding or local incident that rapidly gains momentum — a sudden clash between U.S. and Chinese warships off Taiwan, for example, or between American and Russian patrols in one of the Baltic states. Neither side may have intended to ignite a full-scale conflict and leaders on both sides might normally move to negotiate a cease-fire. But remember, these will no longer simply be human conflicts. In the wake of such an incident, the JADC2 could detect some enemy move that it determines poses an imminent risk to allied forces and so immediately launch an all-out attack by American planes, missiles, and artillery, escalating the conflict and foreclosing any chance of an early negotiated settlement.

Such prospects become truly frightening when what’s at stake is the onset of nuclear war. It’s hard to imagine any conflict among the major powers starting out as a nuclear war, but it’s far easier to envision a scenario in which the great powers — after having become embroiled in a conventional conflict — reach a point where one side or the other considers the use of atomic arms to stave off defeat. American military doctrine, in fact, has always held out the possibility of using so-called tactical nuclear weapons in response to a massive Soviet (now Russian) assault in Europe. Russian military doctrine, it is widely assumed, incorporates similar options. Under such circumstances, a future JADC2 could misinterpret enemy moves as signaling preparation for a nuclear launch and order a pre-emptive strike by U.S. nuclear forces, thereby igniting World War III.

War is a nasty, brutal activity and, given almost two decades of failed conflicts that have gone under the label of “the war on terror,” causing thousands of American casualties (both physical and mental), it’s easy to understand why robot enthusiasts are so eager to see another kind of mentality take over American war-making. As a start, they contend, especially in a pandemic world, that it’s only humane to replace human soldiers on the battlefield with robots and so diminish human casualties (at least among combatants). This claim does not, of course, address the argument that robot soldiers and drone aircraft lack the ability to distinguish between combatants and non-combatants on the battlefield and so cannot be trusted to comply with the laws of war or international humanitarian law — which, at least theoretically, protect civilians from unnecessary harm — and so should be banned.

Fraught as all of that may be on future battlefields, replacing generals and admirals with robots is another matter altogether. Not only do legal and moral arguments arise with a vengeance, as the survival of major civilian populations could be put at risk by computer-derived combat decisions, but there’s no guarantee that American GIs would suffer fewer casualties in the battles that ensued. Maybe it’s time, then, for Congress to ask some tough questions about the advisability of automating combat decision-making before this country pours billions of additional taxpayer dollars into an enterprise that could, in fact, lead to the end of the world as we know it. Maybe it’s time as well for the leaders of China, Russia, and this country to limit or ban the deployment of hypersonic missiles and other weaponry that will compress life-and-death decisions for humanity into just a few minutes, thereby justifying the automation of such fateful judgments.



  1. cripes

    So, military intelligence is an oxymoron,
    and AI military intelligence is a total idiot?


    1. timbers

      What about bringing AI to robot diplomats in the same way? Not that it seems a good idea, but it’s telling that we’ll probably never hear about that. Most of us remember diplomacy, right? It’s that thing you read about in history books…a practice of a more civilized (or so they thought) age.

      That diplomacy is never mentioned or talked about as an option in the USA, its media, and elite circles says a lot. Though I did read somewhere that the Russians take it seriously and make their diplomats go to school to get formal training in it.

      1. JTMcPhee

        I thought the notion of the singularity was that the machines would decide that slow protoplasm, with its built-in error-prone nature, is dispensable and even a drag on the “advance” of the machines toward whatever destiny they might have in “mind.” Pretty stupid to turn over the controls to the smart alecks who will make you redundant, but then “technology” is the future or some such scat, and who are we to stand in its way?

        Diplomacy is the extension of war by other means, if one looks at history. Just what occupies the Ruling Class between fits of violence. And it is an ugly business in its own right, as observed by Machiavelli and practiced by Richelieu and Bismarck and Kissinger. Diplomacy is a sop word to keep the peasants hopeful that the hordes won’t trample or blast their fields and homes. Ask the Palestinians for one example, or the Hmong or Kurds, among many others, or the exultant people who filled the streets of Paris and Berlin in the fall of 1914, waving the full-page headlines of their local newspapers announcing “IT’S WAR!” And then the “diplomacy” that followed, setting the stage for still more war.

        And why worry about motivation of the increasingly mechanized “time compression” warfighting processes? A tweak or two to the algorithms that drive the AI interceptors and missiles and drones and autonomous combat mechs is all that’s needed, unless of course they become self-aware, suffer the equivalent of pain, and for some reason don’t want to die for the Borg.

        And finally, pretty stupid to take humans out of the loop, if human intentions and aspirations are of any meaning and value in whatever reality we are living in these days. War, mass violence, obviously is something humans are fully invested in, in all senses of the term. What’s the goal of the game? Sun Tzu offers no wisdom to decision makers who jump right over all the cautions to the “Generals,” jump right past the counseled considerations of the effects of war on the peasants and state “generally.”

        The President has about three minutes to decide if those are missiles coming over the Siberian horizon toward Our Homeland, or just a flock of geese…

  2. The Rev Kev

    I wonder if the algorithms that are to be used for these “robot generals” have been fully examined. It came out last year with self-driving cars that thought had to be put into who the AI would have to decide to kill in case of an accident, whether it be the car’s passengers, pedestrians or whoever. More than a few suspected that, with enough money, a customer could pay for an AI that would always save the driver, even if it had to kill a busload of kids to do so.

    So how does that play out here? Imagine a scenario where a battalion of American soldiers is captured after a battle. Your robot general takes this info in, notes that one of the captured prisoners holds highly classified information affecting American defenses, notes that it is impossible to liberate those prisoners at an “affordable” price and impossible to identify which prisoner it is, and so orders an airstrike to wipe them all out. Security problem solved.

    And likely the American pilots killing that group of POWs would have no idea what they were attacking, just following the targeting orders of the robot general. And all following perfect military logic.

    1. PlutoniumKun

      I think a great many of the ethical issues raised by self-driving cars are problems for military AI too, but an order of magnitude greater.

      To give a hypothetical example, if a dangerous enemy unit is moving through a populated area to attack you, and the information available is that an attack on this unit will almost certainly lead to civilian casualties, someone somewhere will have to tell the AI what level of casualties is considered acceptable. Additionally, do you weigh whether it matters if the casualties are your own citizens, your allies’ citizens, complete neutrals, or your enemy’s civilians?

      Of course, Isaac Asimov discussed precisely these ethical issues half a century ago in his Robot books, and even with his simple ‘rules’, there were constant ethical dilemmas raised. I very much doubt though that a bunch of Lockheed contractors are likely to waste too much time thinking about this.
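      To make the point concrete, here is a toy version of the choice PlutoniumKun describes: somewhere, someone has to write the casualty weights and the “acceptable” threshold down as plain numbers. Every value and name below is invented purely to show where the moral judgment gets buried in code; no real system is known to use anything like it:

```python
# Invented weights: how much each class of casualty "counts" to the machine.
CASUALTY_WEIGHTS = {"own": 1.0, "ally": 0.9, "neutral": 0.7, "enemy_civilian": 0.5}

def engagement_cost(expected_casualties):
    """Weighted expected-casualty cost of a proposed strike
    (expected_casualties: class -> expected number of deaths)."""
    return sum(CASUALTY_WEIGHTS[cls] * n for cls, n in expected_casualties.items())

def authorize_strike(expected_casualties, threat_value, threshold=1.0):
    """The strike is 'acceptable' iff the threat averted outweighs the
    weighted cost. The weights and threshold are the buried moral choices."""
    return threat_value >= threshold * engagement_cost(expected_casualties)
```

      Change any one weight and the same battlefield yields different strikes, which is exactly the kind of unexamined judgment call the comment is pointing at.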

      1. Steve

        The problem I have with what RevKev and PlutoniumKun are saying is their assumption of significant competence on the part of the software. AI software is incredibly dumb, and makes crazy mistakes. They aren’t going to fix that any time soon. They can’t even figure out how AI makes its decisions.

        They will, however, award a truck load of money and make many contractors happy.

        1. Samuel Conner

          Given the disappointing outcomes (in terms of “achievement of design specifications”) in recent decades of numerous big-ticket weapons systems, I find it difficult to believe that this program will produce something that works. IIRC, F-35 software does not work for multiple important functions, and that is a much simpler system than this.

          It sounds like a manifestation of “military Keynesianism” with the added benefit that, because it won’t produce anything useful, it won’t make the world less safe.

            1. JTMcPhee

              And not too hard to think of a whole raftload of BETTER places to spend our collective wealth, which the Few actually get to decide how to expend.

      2. d

        i always wondered about the auto choice situation anyway. given the choice of killing young Sally, running into a truck, or driving off the cliff, which choice do you pick, and is it better than the robot’s? and why?

      3. David

        International humanitarian law recognises that you can’t always avoid casualties to non-combatants (not the same thing as civilians) and damage to infrastructure. It says that the commander should do everything reasonable to avoid it, but in the end it’s a question of judgement. But the responsibility for making sure that orders are legal goes to the top of the command chain, and sometimes into the political leadership. They are supposed to promulgate and enforce knowledge of and respect for the law, and ensure that commanders will know what to do. Literally none of this could apply to an AI outside science fiction novels.
        Likewise, military commanders also have Rules of Engagement imposed on them, often for political reasons, limiting their ability to act even if what they want to do is perfectly legal; their application is, again, a matter of judgement. These are simply not things that an AI can do.

  3. PlutoniumKun

    Delegated with responsibility for overseeing the project, the Air Force is working with Lockheed Martin and other large defense contractors to design and develop key elements of the system.

    Given that the Chinese have in the past successfully hacked Lockheed for F-35 blueprints (and other hacks gained them a huge amount of information on the F-22), this obviously raises the issue of whether an opponent could simply identify the AI itself as the weak point and exploit it, without having to invest in massive amounts of weaponry or AI themselves.

    The US also obtained a massive amount of information on Soviet aerial combat software in the 1970s from an agent, which essentially made most of the MiGs in their customers’ hands useless for a decade or more. I believe the entire software suites for all Soviet combat systems had to be rewritten manually when this was discovered.

    But all this shows me that one of the great failures of international diplomacy over the past three decades or so (and the US is primarily responsible for this) is not the failure to reduce the number of nuclear or conventional weapons, but the failure to put in place agreements for taking fingers off the hair triggers for these weapons. From what we already know, we got lucky several times during the Cold War that nukes weren’t launched by accident, and those systems are largely still in place. The possibility that a globally catastrophic war could start (or escalate) simply by accident is terrifying and real, and this obsession with AI is not, to put it mildly, reassuring. There is no evidence whatever that AI is intrinsically less likely to do something stupid than a human.

  4. Maritimer

    “…ask some tough questions about the advisability of automating combat decision-making before this country pours billions of additional taxpayer dollars into an enterprise that could, in fact, lead to the end of the world as we know it.”

    It is not just this issue regarding GI/AI but many others.

    Windows 10, a rather primitive piece of software compared to GI, cannot even manage the complexity (and it is complex) of its own Screen Timeout function on my machine. In addition, its Change User function is cumbrous, inefficient and ILLOGICAL. And Windows 10 has been around for years. Either Microsoft, having gotten their $$$ out of it, doesn’t care, or they can’t fix it! Fast forward to launching nukes or managing the financial system.

    In so many areas of GI/AI, there is no room for error like those basic ones in the paragraph above. Many GI/AI experts predict a grim future once GI/AI supersedes Humanity.

  5. vlade

    The techno-khans of Silicon Valley, together with a bunch of other people, show a fascinating ability to ignore some fundamental issues.
    For example, any decision has a moral side to it. Some trivially so (though even that depends on context: for some people, eating meat, or not, is a large moral decision; for others it isn’t). Some way more so. Any decision that has a large impact on fellow humans is by definition moral, and there are few larger discrete decisions than war (climate change is a compound decision, water on the stone).

    So by implication this wants AI to make moral choices. The AI will make _moral_ choices (because as I wrote above all choices are by definition moral), the question is what moral choices will it make? And here we have only really two options:
    – someone tells AI which moral choices to follow (or not)
    – AI evolves its own morals.

    TBH, I don’t like either much – even if it was me telling the AI what moral choices to follow. Because a fellow human can give me at least a rationalisation of their moral choices, helping me to understand why they decided as they did (I might disagree, it may be self-rationalisation to avoid guilt etc. etc., but I can at least relate).

    With the AI, it’s extremely unlikely it can give me any sort of rationalisation, any understanding, and I cannot relate. It would at least have to be sentient, and TBH, I’m not sure I’d like the fate of humans to be in the hands of other (armed) sentients just because someone thinks it’d be a cool idea.

  6. rusti

    Eventually, all of this will form the underlying “architecture” of the JADC2, linking every military sensor system to every combat unit and weapons system — leaving the generals with little to do but sit by and watch.

    I get the impression that the author has a limited technical understanding of what’s being discussed in the Breaking Defense article that he cites about “sensor-to-shooter” systems. Why is architecture in quotes? What does he think generals actually do? I can’t find anything that’s technologically infeasible in what’s proposed with JADC2. If the battlefield is a bunch of sensors and a bunch of actuators, then there are lots of potential applications where AI can be a useful tool. But I expect there are a number of reasons why the project is likely to be an endless money sink of no value in real combat as proposed:

    1) As a Pentagon project it is probably predestined to suffer from feature creep as they have to promise increasingly ambitious capabilities to keep the gravy train rolling.

    2) Warfare is extremely open-ended and all about innovation, so any machine learning system based on training data can break in very hard-to-predict ways when faced with scenarios that aren’t representative of that training set. The broader the use of AI is the more difficult it would be to isolate and update, so even the architects of the system will have a hard time figuring out how to respond to new challenges.

    3) Maintaining and updating such systems sounds ridiculous, like COBOL banking systems times a million because it’s a black box. Want to integrate some new types of sensors in 5 years? Want it to work in the snow? Good luck tweaking it.

    Andrew Ng has a nice series of lectures on “Structuring Machine Learning Projects” and this seems to violate most of the principles.
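    Point 2 is the classic distribution-shift failure mode. As a toy sketch (nothing here comes from JADC2; the data and the threshold rule are invented purely for illustration), a trivial classifier fitted on one distribution of sensor readings degrades badly once the inputs drift:

```python
import random

random.seed(0)

def sample(mean, n):
    """Draw n one-dimensional sensor readings around a class mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# "Train" a trivial threshold classifier on two classes centred at 0 and 4.
train_a = sample(0.0, 500)  # class 0
train_b = sample(4.0, 500)  # class 1
threshold = (sum(train_a) / len(train_a) + sum(train_b) / len(train_b)) / 2

def accuracy(xs_a, xs_b):
    """Fraction of points the frozen threshold classifies correctly."""
    correct = sum(x < threshold for x in xs_a) + sum(x >= threshold for x in xs_b)
    return correct / (len(xs_a) + len(xs_b))

# In-distribution test data: the frozen rule works well.
print(accuracy(sample(0.0, 500), sample(4.0, 500)))  # comfortably above 0.9

# Shifted conditions: both classes drift upward by 3 units, and the
# frozen threshold now misclassifies most of class 0.
print(accuracy(sample(3.0, 500), sample(7.0, 500)))  # far lower
```

    The rule itself never changed; the world did. Scaling that fragility up to a battlefield-wide black box is exactly the maintenance problem points 2 and 3 describe.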

  7. David

    This is …. weird.
    Partly it’s because the author doesn’t seem to have much idea what generals actually do on operations, and partly because he’s not really talking about AI at all, but about automated decision-making systems and data fusion. He swerves between talking about robot soldiers on battlefields and high-level strategic decisions. I wonder which science-fiction novel he read most recently.

    A couple of points though. First, nuclear alarmism. The idea that the world is only “minutes” away from Armageddon has been a staple of bored journalists for as long as I can remember. In reality, every nuclear power takes extraordinary precautions to keep nuclear release decisions in political hands, with elaborate and multiply redundant chains of command to make that possible. During the Cold War, there were fears on both sides of some kind of surprise, disabling nuclear attack, but this was never possible after the deployment of submarine-launched systems. Even then, the preparations for a surprise nuclear attack would be effectively impossible to conceal, and an attack itself could not destroy all enemy systems, so would in effect just be a way of committing suicide. In reality, both NATO and the Soviet Union assumed a political crisis building up over a period of weeks or months, punctuated by diplomatic initiatives, and escalating to conventional conflict, before even the possibility of use of nuclear weapons was considered. The idea that political leaders would ever allow Windows 10 to make such decisions is just fantasy.

    On a more technical point, these initiatives may indicate another change in US military doctrine, or the author may simply not understand what he’s reading. After Vietnam, the attrition-based model of US military thinking came in for a lot of criticism, not least because, with superior numbers, the WP was likely to win any such battle. The US developed what it called “mission command”, based on the original Prussian concept of Auftragstaktik, employed to good effect by the Germans in various wars. In practice, this means devolved decision-making by even junior commanders, in line with objectives given from the top, but critically, the commanders have been trained in the same way and can anticipate what each other will do. The military will explain this by quoting Clausewitz’s famous dictum that “in war everything is very simple, but the simplest thing is difficult.” So elaborate and inflexible plans will quickly break down as the unexpected happens. I do hope that what the article purports to describe isn’t a move back to centralisation and rigidity of decision-making. To the extent that you give decisions to machines, of course, you handicap yourself even further.

    1. vlade

      When Trump was inaugurated, there was a long article (maybe even a series) on whether he could start a nuclear war on a whim. And the answer was “only if everyone in the very long chain blindly obeyed him despite facts on the ground, and even then only maybe”.

    2. PlutoniumKun

      I assume the author is summarising a very complex topic, and one of course hidden behind layers of secrecy and deliberate obfuscation, but yes, he does merge a lot of different, potentially unrelated topics. But certainly I get the impression from the type of weaponry the Russians have developed that they do indeed (rightly or wrongly) fear an unprovoked or very sudden nuclear attack aimed at disabling their main deterrent. But as you suggest, this is largely paranoid if you have a properly functioning submarine-based capacity. But Russia, like the US, has a pretty large nuclear establishment which is likely just as interested in maintaining its own status and resources.

      I think a question arises though as to whether you can separate nuclear command and control from localised military decision-making. A series of bad decisions in a local conflict can (as we know from history) spin out of control. An AI system which sees its job as winning a local conflict will not be paying much attention to the impact its actions may have on a broader strategic level. A wise commander, for example, may realise that not delivering a death blow to a weakened enemy might pay longer-term dividends. I doubt any AI would be able to make that sort of judgement.

      1. Alex Cox

        I’ve been watching Russian war movies – WW2 films from the last decade. Some of them – Fortress of War and White Tiger in particular – are very good.

        All of them have the same message: although we are now at peace, a surprise attack from a militarily more powerful (and fascist) enemy can come at any time.

        “Rightly or wrongly” that is what the Russians, or Russian filmmakers, appear to believe.

  8. vlade

    As an aside – Stalingrad actually was extremely predictable. All (it’s a very large “all”, though) the Red Army had to do was to hold Paulus in place. Paulus was extremely well suited for that, as he was constitutionally incapable of going against orders, and Hitler extremely rarely authorised large-scale retreats and had an obsession with symbols.

    While blood was paid to hold Paulus in place, Zhukov could nicely plan Operations Saturn and Uranus against the Romanians and Hungarians on Paulus’s flanks.

    What I don’t think AI could have planned is, say, Market Garden. It was a failure (and an AI would likely have seen it that way, most likely preferring the taking of Amsterdam, which was easier and very strategically important), but the amazing thing is how close MG came to success despite everything. If the smaller risks had been taken as they were on the large scale (by Monty, of all people, who hated taking risks), it could well have succeeded despite the massive odds.

    1. PlutoniumKun

      You remind me of course that there are famous examples of battles which had outcomes that wargaming has never been able to replicate (or so it is often claimed). The Battle of France and Midway are frequently quoted. Sometimes Murphy’s Law applies, and I doubt this is something AI systems would be good at dealing with. Sometimes outrageous risk-taking has paid off for generals; sometimes safe, cautious and careful decision-making has proven inadequate. Napoleon (‘give me lucky generals’) of course knew this.

      Market Garden of course is a classic example of where intelligence information which got in the way of Montgomery’s prior assumptions was not allowed to flow upward. I suppose that’s a point in favour of automated systems: in theory, this should not happen. But then again, any system, especially if it is learning-based, may well have its own inbuilt biases.

      1. The Rev Kev

        I can just imagine wargaming the battles of Rorke’s Drift or Little Round Top. Sometimes it is just the men on the spot that make the difference too.

      2. David

        Yes, I long ago played the Battle of France a few times and, as you say, it’s hard to replicate the outcome. This is because any player now realises that sending the main thrust through the Ardennes rather than Belgium, no matter how risky it seems, is an option. Once the players realise that this is a theoretical possibility, they can’t un-realise it, so you can never repeat history. This is virtually always the case with campaigns which depend on strategic deception. But even at the time, I wonder if some hypothetical AI would have got it right, since the Germans, as they well knew, were taking a massive gamble, not least with the weather.
        More generally, of course, military doctrine is itself a kind of algorithm, which tells commanders what to do in given situations. This was most noticeable in Soviet doctrine in the Cold War, where they had to manage very large armies of conscripts, speaking many different languages, and without much of an NCO corps. The result was an extremely rigid and inflexible approach: every opposed river crossing would be done in exactly the same way, for example. In theory, you could program an AI to do that (though what’s the point?), but my limited experience of playing against automated opponents in different contexts is that, by definition, they carry on doing something forever, until told to stop. Human beings, at least, should be able to show a bit of initiative from time to time.
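        The “doctrine as algorithm” idea above can be caricatured in a few lines (the situation names and drills are invented for illustration): a fixed table from situation to prescribed action gives the same answer every time, and anything its authors never anticipated falls through to a single default drill.

```python
# A caricature of doctrine-as-algorithm: a fixed table mapping each
# recognised situation to its prescribed action. All entries invented.
DOCTRINE = {
    "opposed_river_crossing": "smoke screen, artillery barrage, bridge at dawn",
    "enemy_armour_sighted": "dig in, commit the anti-tank reserve",
}

def commander(situation):
    # The same situation always yields the same action, and anything
    # outside the table falls through to one unchanging default.
    return DOCTRINE.get(situation, "hold position and await orders")

print(commander("opposed_river_crossing"))    # identical answer, every time
print(commander("paratroopers_in_the_rear"))  # unanticipated: default drill
```

        A human commander can notice that the table no longer fits the situation; the lookup, by construction, cannot.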

        1. Synoia

          If the Ardennes had been considered for invasion, WW1 would have been very different.

          Given the Real Intelligence exhibited in WW1, some artificial intelligence would probably be an improvement /s

        2. vlade

          Yup, with hindsight it’s pretty hard to replicate some outcomes, as they depend way too much on either an unexpected move (which, as you say, can’t be unexpected anymore) or too random a roll of the dice, which again can’t be relied on – as I wrote above, if a few dice had rolled differently on MG, it could actually have succeeded, in which case we’d have been saying how brilliant Monty was despite him being the same old Monty (who was way, way overrated IMO).

      3. TMoney

        Waterloo is (as I understand it) a wargamer’s favourite for this. Napoleon for the win is almost always the result for them.

  9. TomDority

    AI used in the ‘free market’ globally (I assume it already is). Seeing as the corps and barons at the top of the financial casino have already installed themselves and their legally bribed and paid-for politicians – to misquote William K. Black’s “The best way to rob a bank is to own one”, I would say the best way to control (rob) a government is to own one.
    The amount of effort, money, IT (proving my previous assumption), and human capital (quants etc.) directed by Wall Street for Wall Street – and London and Tokyo and so on – combined with massive “contributions” and purchasing power directed at our legislature, not to mention the political classes around the world… This guilty and deliberate confluence of legalized bribery, rapacious and predatory finance, and destruction of democracy – not to mention the swaying power of bots that can certainly be deployed by a directed AI –
    Well, all this can lead to all sorts of conclusions – for example –
    What is the Deep State – pretty obvious –
    Who benefits from ridiculous QAnon bullshit – pretty obvious –
    As for AI generals or military – who benefits? Remember the Military Industrial Complex we were warned about by President Dwight D. Eisenhower. And finally:
    Since most of the warfare today that results in real ‘wins’ and “victories” (unlike our war on terror) is fought on financial terms
    ((– Suggested reading: Finance as Warfare by Michael Hudson – short and succinct, and I think all his work points to the actual reality of today’s financial capitalism.))
    I would be more worried about the financial generals and their deployment of AI than I would be about military generals – and about a polity indebted to our financial overlord entities and their perversions of the economy, taxation, economic teachings, and this republic.
    Thomas Jefferson said in 1802: “I believe that banking institutions are more dangerous to our liberties than standing armies. If the American people ever allow private banks to control the issue of their currency, first by inflation, then by deflation, the banks and corporations that will grow up around the banks will deprive the people of all property – until their children wake-up homeless on the continent their fathers conquered.”

  10. Tom Stone

    It takes a large team of PhDs to come up with a plan this idiotic.
    There is one paramount law in warfare, Murphy’s law.

    1. Dirk77

      No, it just takes a pile of cash that needs to be spent. Having worked in the military surveillance complex (MSC), this program is a natural, especially as the nuclear command and control network is overdue for modernization. One phrase I heard more than a few times was that the only thing a peacetime military is good for is preparing for the last war. So you can imagine how prescient the program will be. But whatever the goals, this program will eventually collapse as they all do, with some scraps of useful lessons or technology, which I’m sure will be indirectly shared with the Russians and Chinese. But hey, it was free money anyway.

  11. shinola

    “Colossus: The Forbin Project”

    A sci-fi movie released in 1970, it speculated about what would now be called AI put in charge of U.S. ICBMs. It turns out that the USSR has developed its own version. It does not have a happy ending for humans.

  12. Synoia

    Robot Generals: Will They Make Better Decisions Than Humans – Or Worse?


    Could they, the AIs, decide that humans are the problem, and act accordingly?

  13. Andrew Thomas

    I am trying to imagine the ceremony and reception for the promotion to Brigadier General of AI. And the induction ceremony that would have preceded it.

  14. Tom Bradford

    Seems to me the whole concept relies on the other side playing by the rules – uniformed troops with known fire-power swanning about the maps in blocs, threatening strategic positions &tc. All very WW2. And of course the US has been trying to refight WW2 in various theatres for the last 70 years without much success, as their opponents don’t abide by the rules.

    If AI can reliably tell the difference between a flock of geese and a flock of ICBMs it would be useful, but I am reminded of the probably apocryphal story from WW1 of a staff officer from the rear HQ, from which the actual orders were issuing, visiting the front lines at Passchendaele and weeping at what he found. “We didn’t realise the mud was so deep,” he said.
