A compact yet comprehensive discussion of the present state and future prospects of AI technologies applied to armed conflict is an impossible mission, but I will undertake it here. What gives me encouragement is the great difficulty of prognostication regarding AI: I believe AI will be the most profound transformation of human affairs in the history of civilization, and nobody has a good grasp of where it is going, so the deficiencies of my predictive efforts may not stand out among the shortcomings of others.
Warfare has a structural hierarchy that recapitulates its evolution. Early hand-to-hand combat and tribal battles were entirely tactical. As tribes aggregated into forces resembling armies, operational logistics and strategies became necessary, and when nation states formed enormous armies, navies, and air forces, grand strategies developed. Technological advances have generally permeated all levels of the art of war. Cavalry, gunpowder, high explosives, steel ships, mechanized armor and transport, aviation, and computerization have all affected tactical, operational, and strategic levels of warfare. I will discuss AI in terms of its role in each of these levels of armed conflict.
Tactical AI
Starting at the bottom of the art-of-war hierarchy, consider a soldier with a rifle. It currently takes four to eight weeks for the military to train a highly proficient marksman. Such a soldier, a squad designated marksman, can hit targets effectively at 600 yards. Today, rudimentary AI-integrated electronics in a smart rifle sight can give almost anyone an even higher degree of shooting accuracy. This equipment automatically adjusts aim for wind, temperature, atmospheric pressure, rifle position, target motion, and range, and it does so consistently, even in the heat of battle.
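To make this concrete, here is a minimal sketch, in Python, of the kind of first-order ballistic arithmetic such a sight performs. The drag model, the constants, and the example inputs are all illustrative assumptions for this sketch, not the firmware of any real product.

```python
import math

def air_density(temp_c: float, pressure_pa: float) -> float:
    """Dry-air density from the ideal gas law (kg/m^3)."""
    R_SPECIFIC = 287.05  # J/(kg*K) for dry air
    return pressure_pa / (R_SPECIFIC * (temp_c + 273.15))

def to_mrad(offset_m: float, range_m: float) -> float:
    """Convert a linear offset at the target to milliradians of hold."""
    return 1000 * math.atan2(offset_m, range_m)

def aim_correction(range_m: float, muzzle_velocity: float,
                   crosswind_ms: float, temp_c: float,
                   pressure_pa: float) -> tuple[float, float]:
    """Return (elevation_mrad, windage_mrad) hold-off for a point target.

    First-order model: constant drag deceleration, flat fire, no spin
    drift. Real fire-control solvers integrate a measured drag curve;
    this only illustrates the shape of the computation.
    """
    rho = air_density(temp_c, pressure_pa)
    k = 0.0009 * (rho / 1.225)          # illustrative drag constant (1/m)
    v_avg = muzzle_velocity * (1 - k * range_m / 2)  # drag-reduced average speed
    tof = range_m / v_avg               # time of flight (s)
    drop = 0.5 * 9.81 * tof ** 2        # gravitational drop (m)
    # Classic wind-drift lag rule: drift grows with the *extra* flight
    # time caused by drag, not with raw time of flight.
    drift = crosswind_ms * (tof - range_m / muzzle_velocity)
    return to_mrad(drop, range_m), to_mrad(drift, range_m)

# Example: 550 m shot, ~10 km/h crosswind, cool high-pressure day.
elev, wind = aim_correction(550, 850, 2.8, 5.0, 102_000)
print(f"hold {elev:.1f} mrad up, {wind:.1f} mrad into the wind")
```

Production fire-control solvers add spin drift, Coriolis terms, and full drag tables; the point here is only that the correction is deterministic arithmetic an embedded processor can redo in milliseconds for every shot.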
Developments like smart rifle sights foreshadow the disappearance of human soldiers from the battlefield. The same high accuracy will be available to any aerial or terrestrial drone armed with an automatic rifle. Ultimately, human soldiers will not be able to survive on future battlefields where smarter, faster, and deadlier robotic combatants are deployed.
Another tactical weapon prominent on today’s battlefields is the guided combat drone. Although most attack drones are directed over a video link by a drone operator, AI technology is enabling autonomous operation for advanced drones capable of loitering over the battlefield and identifying and striking targets with minimal human guidance. Moreover, the concept of AI-enabled intelligent drone swarms is under development by multiple militaries, and this will add a new dimension of lethality to drone weaponry.
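Part of what makes swarms worrying is that their coordination is decentralized. The toy sketch below, my own construction rather than any fielded system, implements the simplest "boids"-style rule: each member steers by its neighbors plus a shared goal, so there is no central controller to disable and the behavior survives the loss of individual members.

```python
import random

def swarm_step(positions, goal, neighbor_radius=5.0):
    """One update of a minimal flocking rule: each drone averages the
    positions of nearby neighbors (cohesion) and nudges toward a
    shared goal. A toy sketch of decentralized swarm coordination."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        nbrs = [(px, py) for j, (px, py) in enumerate(positions)
                if j != i and abs(px - x) + abs(py - y) < neighbor_radius]
        if nbrs:
            cx = sum(p[0] for p in nbrs) / len(nbrs)
            cy = sum(p[1] for p in nbrs) / len(nbrs)
        else:
            cx, cy = x, y   # no neighbors in range: hold formation term
        # Blend cohesion with goal-seeking (gains chosen arbitrarily).
        nx = x + 0.1 * (cx - x) + 0.05 * (goal[0] - x)
        ny = y + 0.1 * (cy - y) + 0.05 * (goal[1] - y)
        new_positions.append((nx, ny))
    return new_positions

random.seed(0)
swarm = [(random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)) for _ in range(6)]
for _ in range(50):
    swarm = swarm_step(swarm, goal=(20.0, 20.0))
cx = sum(x for x, _ in swarm) / len(swarm)
cy = sum(y for _, y in swarm) / len(swarm)
print(f"swarm centroid after 50 steps: ({cx:.1f}, {cy:.1f})")
```

Even this crude rule keeps converging on the goal if you delete members mid-run, which is precisely why swarms resist attrition-based defenses.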
Operational AI
At the operational level, military advantage is attained by the large-scale direction of resources through superior information and communication. This pertains to areas such as logistics, handling of casualties, and detection and targeting of enemy forces. AI technology can confer advantages across all operational domains. A chilling example of such AI utilization is the Israeli “Lavender” system used to direct strikes against Hamas in the current war in Gaza.
U.S. technology companies, such as Palantir Technologies, have come under increasing criticism for supplying components of military AI systems to foreign governments, particularly Israel. U.S. citizens are concerned that these systems are or will be used domestically for repressive purposes.
Strategic AI
At the highest levels of strategic military planning, AI can potentially contribute significantly to the quality of decision making. With its ability to rapidly integrate vast amounts of information across many domains, an AI decision support system can provide leaders with clearer estimates of strategic threats, risks, and opportunities than ever before. However, if AI systems are granted autonomy to make strategic decisions, there is a grave danger of undesired conflict escalation resulting from undiscovered flaws in the AI programming.
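One simple way such multi-domain integration can work is Bayesian fusion of indicators into a single threat estimate. The sketch below uses naive log-odds updating; the indicators, their likelihood ratios, the prior, and the independence assumption itself are all inventions for this example.

```python
import math

def fuse_log_odds(prior: float, indicators: list[tuple[float, bool]]) -> float:
    """Naive Bayes fusion of independent indicators into a threat
    probability. Each indicator is (likelihood_ratio, fired), where
    the ratio is P(signal | threat) / P(signal | no threat)."""
    log_odds = math.log(prior / (1 - prior))
    for ratio, fired in indicators:
        if fired:
            log_odds += math.log(ratio)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical multi-domain indicators of an imminent strike.
indicators = [
    (8.0, True),    # unusual logistics traffic (SIGINT)
    (4.0, True),    # dispersal of mobile launchers (imagery)
    (2.5, False),   # leadership relocation (HUMINT): not observed
]
print(f"threat estimate: {fuse_log_odds(0.02, indicators):.1%}")
```

Note how sensitive the output is to the prior and to each likelihood ratio: a subtly miscalibrated value in either place shifts the estimate dramatically, which is exactly the kind of undiscovered flaw the paragraph above warns about.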
The development of safeguards for strategic military AI systems is hindered by the wall of secrecy behind which national defense projects are developed. Independent review mechanisms should be created to allow oversight of classified systems, particularly those related to cyberwarfare and nuclear weapons.
Escalation and the Dangerous Future of Military AI
Contemporary military theory puts a high premium on speed of execution of what is called the OODA loop (Observe, Orient, Decide, Act). An adversary that can execute this loop faster has a potent advantage over a slower opponent. Because AIs can accelerate all aspects of military decision making, there is powerful evolutionary pressure to grant AI systems increasing autonomy for fear of losing the decision-loop race against an enemy. This creates the potential for very rapid escalation of an armed conflict between opposing AIs. If unchecked, such escalation could culminate in a global nuclear war.
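The loop-race dynamic can be caricatured in a few lines of Python. The policy, the veto probability, and the five-rung escalation ladder below are all invented for this toy model; it shows only how removing a slow human veto lets two tit-for-tat loops ratchet each other to the top in a handful of cycles.

```python
import random

random.seed(1)

LEVELS = ["posture", "jamming", "limited strike", "general strike", "strategic"]
TOP = len(LEVELS) - 1

def ooda_response(own: int, observed: int, human_gate: bool) -> int:
    """One Observe-Orient-Decide-Act cycle under a toy policy
    (an invented assumption, not any real doctrine): match the
    opponent's observed level and go one step higher to regain
    the initiative. A human on the loop vetoes many escalations,
    at the price of a slower decision cycle."""
    decided = min(max(own, observed) + 1, TOP)
    if human_gate and random.random() < 0.6:
        decided = own  # reviewer holds the line this cycle
    return decided

def cycles_to_top(human_gate: bool, budget: int = 50) -> int:
    """Cycles until both sides reach the highest escalation level."""
    a = b = 0
    for cycle in range(1, budget + 1):
        a, b = ooda_response(a, b, human_gate), ooda_response(b, a, human_gate)
        if a == b == TOP:
            return cycle
    return budget  # never fully escalated within the budget

print("autonomous loops :", cycles_to_top(False), "cycles to maximum")
print("human on the loop:", cycles_to_top(True), "cycles to maximum")
```

The fully autonomous pair maxes out in four cycles, while the gated pair takes far longer: the gate is the safety margin, and the competitive pressure described above is pressure to remove it.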
AI Ensembles
Perhaps the greatest source of uncertainty regarding future AI developments is the evolution of ensembles of collaborative AI systems. Just as military effectiveness is enhanced by close cooperation among manned units, future military AI architectures will benefit from close integration of multiple AI systems. But the compounding effects of anomalies in the interacting systems could produce emergent failure modes undetectable in unit testing, a disturbing possibility in systems associated with weaponry. A related danger is the potential enabling of self-modifying capabilities in the AIs, which would permit learning-based enhancement of functionality but with unpredictably hazardous consequences. The extraordinary speed and power of AI ensembles would be a double-edged sword that could result in serious unrecoverable errors, such as friendly fire incidents or false threat detections triggering counter-strikes.
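The unit-testing point can be made concrete with a deliberately tiny example. Below, a threat classifier and an alert filter, both invented for this sketch, each pass their unit-style checks in isolation; wired together, their feedback coupling converts one transient sensor spike into a permanent false alarm that neither component's tests could see.

```python
def classify(sensor: float, alert_level: float) -> float:
    """Threat score in [0, 1]. Fine in isolation: with a calm alert
    context, benign readings score low. The context term (an invented
    design choice for this sketch) makes the classifier more
    suspicious while alerts are active."""
    return min(1.0, sensor + 0.9 * alert_level)

def update_alert(alert_level: float, score: float) -> float:
    """Alert filter: rise sharply on high scores, decay slowly
    otherwise. Also fine in isolation against clean scores."""
    if score > 0.6:
        return min(1.0, alert_level + 0.5)
    return max(0.0, alert_level - 0.1)

# Unit-style checks: each component looks correct on its own.
assert classify(0.2, 0.0) < 0.6        # benign reading, calm context
assert update_alert(0.0, 0.2) == 0.0   # calm filter stays calm

# Ensemble run: one transient noise spike, then benign readings.
readings = [0.7] + [0.2] * 9
alert = 0.0
for t, sensor in enumerate(readings):
    score = classify(sensor, alert)
    alert = update_alert(alert, score)
    print(f"t={t} sensor={sensor:.1f} score={score:.2f} alert={alert:.2f}")
# The feedback term locks the pair at alert=1.00 forever: an emergent
# failure mode invisible to either component's unit tests.
```

The obvious mitigation is adversarial testing of the integrated ensemble, not just its parts, which is exactly what the secrecy discussed earlier makes difficult.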
Conclusion
There is an AI arms race underway spanning all aspects of armed conflict. Like the nuclear arms race, it is dangerously unstable. If AI military systems are granted increasing autonomy there is a possibility of rapid conflict escalation that could be catastrophic. For decades the world has been bedeviled by unexpected flaws and bugs in relatively simple software. The vastly greater complexity of AI systems and the likely future interaction among multiple AI entities substantially increases the risk of unknown operational failure modes. Of particular concern is the potential for AIs to modify their own programming and implement capabilities unknown to their human proprietors. Political leaders and policy makers should work to develop international protocols and treaties implementing safeguards to prevent runaway AI technology from launching enormously destructive wars. Never has human intelligence been more necessary to guide its artificial counterparts.
The scary thing about all this is that you have half-smart politicos hiring wildly overconfident tech bros to implement systems no one really understands.
It’s not like anything could go wrong 😏
My first thought is actually: if you ultimately remove soldiers from the battlefield, what then? Do we have zero-casualty conflicts?
It seems more likely instead that targeting your opponent’s civilian population centers becomes even more central. There you’ll find the means of production, and those doing the production. I don’t think you can divorce killing from war.
There are so many possible avenues to all of this. Sadly an end to violent conflict is likely not one of them.
“This creates the potential for very rapid escalation of an armed conflict between opposing AIs. If unchecked, such escalation could culminate in a global nuclear war.”
Years ago, I read a list compiled by Google of various ways experiments with AI systems based on neural networks went awry.
A number of these trials dealt with multi-player games of the EVE kind — with AI systems configured and trained to be participants in complex simulated wars. Not only did the AI players cheat liberally, some also inferred that since the objective was to prevent adversaries from winning, then the best option was to go nuclear right away and blow up everything (including self)…
Maybe. But the military has always over-promised what their latest wunderwaffen could do. I remember during the Vietnam War when the latest sensors were said to “make the jungle transparent.” The Vietnamese hung bags of urine in trees and the Americans bombed acres of empty jungle. Forty-five years ago Reagan promised his “star wars” anti-missile system would stop ICBMs. I don’t think they ever had a successful, honest test. I’m always skeptical of claims made for new weapon systems, particularly when the claims are made at appropriations time.
I keep seeing that phrase “what could go wrong?” or some variation of it in comments skeptical of AI, and I think it will be similar to the last tech boondoggle (self-driving cars): less than we expect, since it doesn’t actually work and is 99% marketing.
Not that we shouldn’t be concerned & wary of it, but it’s just the latest “e-shit” from an industry running on fumes, trying desperately to squeeze $$$ out of an increasingly dry American economy.
Do not exclude the factor of quantity, which, as we all know, has a quality all its own.
Perhaps a few isolated AI systems (which, as you surmise, may well not be as effective as advertised, or may not work properly at all) will not cause much trouble. Put many of them into action simultaneously, and the result might be sheer destructive chaos.
A relevant example, since you mention self-driving cars: they fall short of what the hype promises, but individually their failures have always had a limited impact. However, in cities such as Austin and San Francisco, where driverless shuttle services have been put into operation on a significant scale, those defective AI-based capabilities have already generated unexpected, massive traffic jams.
mostly grift is just about right, i expect.
but every once in a while, the miltech works more or less as advertised (drones upon weddings)… just not against peer great powers.
but they’ll work well enough on uppity americans who don’t toe the line, i suppose.
i have long considered… when driving around the pasture in the Falcon (dead for a year now, lack of $)… the need for something akin to giant shotguns pointed up all around my place.
don’t know how to do that (i’m not really a gun guy, but can hit a rabbit at 100 yards)
alternative is painting a giant red cross on the roofs of my various structures.
and… in local news… this is supposedly the last day of rain for me… we’ll see how it shakes out, lol.
it’s rained every day but one since july 3rd.
and it’s beginning to harsh my mellow.
i miss 12% humidity in july, dernit.