How Autonomous Weapons Could Be More Destabilizing Than Nukes

Yves here. We are already seeing how well autonomous cars are working out with Tesla’s Autopilot racking up more casualties, the latest being cops in Texas. Of course, those are unintended casualties. Autonomous weapons are supposed to inflict only intended casualties. But as this post warns, like Tesla, they regularly go outside their scope.

And that’s before getting to the concern that low-end versions could be built cheaply, meaning lots of people who are only moderately tech-savvy could play warlord.

By James Dawes, Professor of English, Macalester College. Originally published at The Conversation

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could become combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained authority to launch a strike – more unsteady and more fragmented.

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)

The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified African Americans as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did so and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.
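To make the “err in bulk” point concrete, consider a minimal, purely illustrative simulation; the decision counts and error rates below are invented for the example and describe no real system. Two decision-makers can share the same average error rate yet behave very differently at scale: a thousand independent humans produce scattered mistakes, while one shared algorithm, on the day its blind spot is triggered, fails everywhere at once.

    import random

    # Purely illustrative: invented numbers, no real system modeled. Both
    # decision-makers share the same *average* error rate; the difference
    # is that the shared algorithm's blind spot hits every case at once.
    N_DECISIONS = 1000   # decisions per trial (hypothetical)
    ERROR_RATE = 0.01    # assumed average error rate for both
    N_TRIALS = 10_000

    def worst_case(correlated):
        """Largest number of simultaneous errors seen across all trials."""
        worst = 0
        for _ in range(N_TRIALS):
            if correlated:
                # One shared model: a single flaw repeats in every deployment.
                errors = N_DECISIONS if random.random() < ERROR_RATE else 0
            else:
                # Independent humans: each decision fails (or not) on its own.
                errors = sum(random.random() < ERROR_RATE
                             for _ in range(N_DECISIONS))
            worst = max(worst, errors)
        return worst

    print("worst case, independent humans:  ", worst_case(False))  # typically ~25
    print("worst case, one shared algorithm:", worst_case(True))   # 1000

Both versions average roughly ten errors per thousand decisions; the difference lives entirely in the tail, and with weapons the tail is the catastrophe.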

The Proliferation Problems

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people. Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the Laws of War

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A New Global Arms Race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.


38 comments

  1. Mikel

    “The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.”

    The beginning of the end of humanity: thinking there is such a thing as a morally responsible war.
    Selling war as an adventure was gamification of it before video games.

    And speaking of video games, I think I’ll start referring to Tesla’s “auto pilot” feature as “Grand Theft Auto Pilot.”

  2. vlade

    “are racing to establish regulations and prohibitions on such weapons development.” Muahaha. If those weapons convey an advantage (or, more importantly, are believed to convey an advantage), they will be developed, regardless of any regulations and prohibitions.

    Because, as with all international regulations and prohibitions, they only work when most of the interested parties, and importantly, those who can and are willing to enforce them, want them to work.

    Regulations and prohibitions w/o real consequences for those who break them are irrelevant.

  3. Eustachedesaintpierre

    There could already be a defence out there against them for those who have the right tech, in microwave weapons (don’t try it at home), which apparently the Chinese possess & perhaps also the Russians (there were rumours that they had the use of it in Syria). It seems that they can knock out electronic systems & have been blamed for Havana Syndrome & the defeat of Indian forces by the Chinese, which India has officially denied.

    Lots of speculation out there & after reading the above I decided to take a look at this site I found a while back called Killerrobots.org which regularly updates articles on the stuff of nightmares including microwave weapons & lo & behold the first thing I noticed was that the above from Yves had also turned up.

    https://www.killerrobots.org/category/killer-robots/

    1. coboarts

      And there really is no need for job-specific microwave weapons. I served in an Improved Hawk unit, back in the day. Several of our radars could have fried you in place if leveled that way.

  4. PlutoniumKun

    Arguably, there is nothing new about autonomous weapons. There have been sea and land mines for more than a century (both of which killed huge numbers of people after WWI and WWII ended), plus ‘fire and forget’ anti-vehicle missiles for half a century. The Soviets developed a very long range torpedo in the 1970s that was designed to detect the wake of any large ship, and then follow it for many miles until it made contact. It was intended as an aircraft carrier killer, but in reality would have tracked any large ship.

    What is new is that these are likely to be very cheap and easily available, and quite sophisticated in their ability to track, if not to distinguish a real target from a kid with a toy gun. I suspect that the major powers will rapidly develop a wave of defensive weapons, in particular against light, cheap drones. There is some evidence the Russians have already been pretty successful at this. The risk though is in the type of conflict in, for example, Ethiopia, where they will be used as a quick and cheap means of terror against people who don’t have access to expensive defensive technology.

    Good luck trying to ban them – it took a long, long time for a partial (and widely ignored) ban on landmines and certain types of cluster bombs. Trying even to define an autonomous weapon will be very difficult.

    1. No it was not, apparently

      Advanced semi-autonomous weapons used to be limited to anti-vehicle duties by their price; not any longer, unfortunately.

      The drones are so cheap now (and so small) that they can be used as anti-human hunter-killers — this is what makes them so horrible.

      ———————————————————

      Otherwise, the important story here is the introduction of autonomous battlefield management constructs – they’ll be able to control the “flow” of battle with precision and concentration far beyond any human.

      Once both the high command A.I. and units are robotic we’ll have a high-possibility runaway “SkyNet” scenario even without any malice (or self-awareness) on the side of the robots.

    2. fajensen

      I think that the risk is that “war will be everywhere”. With autonomous weapons proliferating, everyone with grievances can, with only a modest investment, send something “to sort out the griefer’s shit once and for all”.

      I think it is unstoppable. There is too much energy driving killer robots forward. And Utter Loons like the CIA would further proliferation, because proliferation means that they can target everyone they don’t like, or perhaps think they may not like (or their AI thinks they may not like), sometime in the future, under the cover of deniability from gang-bangers, incels and the rest of the assorted homicidal nutters. Drone attacks are very Noisy; however, if the drone is just like a wasp, a robot loaded with potent allergens, then it is silent and cheap to remove a potential Lech Walesa or community organiser Obama before they cause trouble and before their “accident” makes much noise in the newspaper.

      The Future will be very dynamic and at the same time very, very, static with every nail that sticks out being preemptively hammered.

      1. JTMcPhee

        It’s a bad time to be Julian Assange or Salman Rushdie. One wonders how soon Congresscritters will have something other than the carrot of corruption to think about — both from the briber class if the CCs don’t cough up the “legitimacy” bought and paid for, and from the deprived underclass…

        Infinite vulnerability, maybe ending up as an oblate spheroid of Gray Goo…

    3. Kyle Reese

      I don’t think the issue is autonomous navigation. The issue is autonomous targeting: the war drone loitering endlessly until some threat/target criteria are met, at which point it autonomously fires/explodes/whatever.
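      Schematically, the difference is just where, or whether, a human sits between the classifier and the trigger. A toy sketch (every name here is an invented stub; no real system is implied):

        def matches_threat_profile(track):
            # Hypothetical classifier stub; in reality, the error-prone part.
            return "armed" in track

        def human_authorizes(track):
            # The safeguard full autonomy removes: a person reviews each
            # machine nomination before anything irreversible happens.
            return input(f"Engage {track!r}? [y/N] ").lower() == "y"

        def engage(track):
            print(f"engaging {track}")  # stand-in for the irreversible step

        def supervised_loop(tracks):
            # Autonomous navigation, human-in-the-loop targeting.
            for t in tracks:
                if matches_threat_profile(t) and human_authorizes(t):
                    engage(t)

        def autonomous_loop(tracks):
            # Autonomous targeting: the classifier's output IS the decision.
            for t in tracks:
                if matches_threat_profile(t):
                    engage(t)

      The two loops differ by a single conditional; that removed checkpoint is the whole regulatory question of “meaningful human control.”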

    1. Michael McK

      Philip K. Dick is my favorite author. A theme I find running through his work is ‘That which you love contains within itself, or even creates, that which you hate’, and vice versa. Dick was a struggling Berkeley SciFi guy for ages; some of his work is not particularly good, as it was written in a hurry on amphetamines to get a check from a scifi mag. He died while his breakthrough to fame beyond the scifi world (and riches) was in production (the movie Blade Runner, based on Do Androids Dream of Electric Sheep?). Dick was no conservative; he hated the ROTC in his very brief stint at UC Berkeley and described himself as a flipped-out freak.
      Many films have credited his work, including “Total Recall” with Arnold Schwarzenegger. I have always thought Arnold’s writers had to be big PKD fans, and have always thought that “Terminator” was inspired by “Second Variety”, though they extended the plot so much that they did not credit him.
      I posit that Arnold would not have risen to fame and power were it not for the writing of Philip K. Dick, which is to me a living example of the Dickian theme of what you love birthing that which you abhor.

  5. Tom Stone

    The optics and sensors are the most expensive part of such a system. However, if you are willing to accept a little collateral damage here and there on top of that caused by inherently flawed AI…
    proliferation is inevitable: the tech is here and it is cheap.

    1. JTMcPhee

      Add this vector of destruction to the unaccountable power given to bio hackers and malevolent types in and out of “government” by CRISPR…

      One thinks of the immanence of notions like Ragnarok and other End Times myths and memes in so many cultures…

  6. Dave in Austin

    To follow up on PlutoniumKun’s posting:

    There is a history of above-ground mines like the American Claymore, which can be left unattended and kill anyone who walks by. During Vietnam the Marines installed defensive minefields and failed to recover the mines when they moved on. The VC did the recovering, and the Marines got the mines back one-at-a-time when they went “boom”. I assume Marine doctrine has been updated.

    Kids routinely used to explore interesting places like junkyards at night. The owner’s response? The “junk yard dog”, which is analogous to the “killer porpoises” used by the US Navy to “neutralize” frogmen in Vietnam.

    There is also the recent American-or-Israeli killing of a prominent Iranian scientist using an automated machine gun on a street. In that case characteristics of the electronic signature of devices owned by the potential victim came into play, and at least one hit in Lebanon seems to have used that sort of information. We may see more of this. Your Fitbit can get you killed.

    Domestically we have what are called “Springtrap Laws” that prohibit property owners from using automated, lethal defense mechanisms. And anyone who has ever lived in NYC knows about touch-sensitive car alarms going off at 3 am. On 33rd street in the 1970s that annoyance was stopped by a few judicious bricks thrown through windshields.

    Police also are now using radar-triggered speed traps routinely.

    All of these have some of the four issues Yves cited, although most are mere annoyances, not lethal. But the line can be blurry. I’ve been behind a speeding car which apparently had a radar detector that suddenly went off when the car crested a hill. At 75 mph the driver jammed on the brakes although there was no obvious reason I could see to do so. Following him (yes, a him) about 100 feet behind, I came very close to rear-ending his car.

    I’d like to propose we look at the issue broadly: we are being monitored and, based on the monitoring, punished; when we spot the monitoring we take instant countermeasures to avoid punishment, which can increase the risk of harm or death. I could give some very funny examples from my DC days but I’ll stop here.

    1. ZacP

      Good comment! Interesting to explore how these concepts apply to more primitive weapons technology and situations outside of AI.

      My wifi security cameras (only bought them since we moved to the city and I work nights) are only supposed to activate and record when humans walk by. Instead the video storage is filled with stray cats, squirrels and cars. Funny to think about all the collateral damage if it had an attached turret gun…

  7. CNu

    And that’s before getting to the concern that low-end versions could be built cheaply, meaning lots of people who are only moderately tech-savvy could play warlord.

    For half a minute, I was under the impression that the rousing victories chalked up by Azerbaijan at Nagorno-Karabakh and by the Houthis against Saudi Arabia would swiftly redefine contemporary military doctrine.

    From a strictly PMC perspective, the cost-to-value of those victories was amazing.

    Despite the compelling cost-to-value – very little seems to have changed, however. Large militaries seem as wedded to old-fangled ways of war-fighting as ever. I have concluded, based on my slowly growing familiarity with countermeasures, that drone swarms and autonomous weapons are REALLY only a serious threat to those unable to field electronic countermeasures.

    Thus, contrary to this article’s headline, I propose that autonomous weapons, including vast deadly drone swarms, are not at all destabilizing, rather, they’re inherently stabilizing. Those who can afford to field highly effective autonomous swarms can also easily afford to defend against the same. This is much less the case with nuclear arsenals which are costly to field and maintain, difficult if not impossible to defend against, and which are too destructive to responsibly utilize.

    1. JTMcPhee

      But think about how the outcome in Afghanistan would have changed if the slaughterbots and drone swarms had been in the Imperial armory…

      1. CNu

        That’s half my point. These weapons HAVE BEEN in the Imperial armory for a while now. They would have proven promptly decisive. Too decisive.

        Had they been used en masse, their use would have been in diametric opposition to the MIC cash flow objectives better served by buying, maintaining and resupplying the old-fangled war fighting equipment and personnel left “carelessly” strewn across Central Asia.

        The old-fangled MIC is a loooong way from ready for its own long-overdue great reset. Speaking of which, have you noticed how quickly Patriot Missile defense opportunities have dried up since the Houthis put on their multi-billion $$ attacks on the Saudis?

        Let one of the axial three great powers put on a true drone swarm extravaganza, not just Turkey and Iran, and the entire existing force projection doctrine will be shown to be the unsustainable boondoggle that casual observers have long realized it to be.

        That’s a lot of jobs and material suddenly gone “poof”!

        1. JTMcPhee

          MY point was that if the Empire had loosed the slaughterbots on Afghans, which ones would have been killed? The tech might have gifted the Empire with a “win” (“victory” is another one of those words that is used in but not defined in the DoD Dictionary of Military and Associated Terms), as a kind of massively “effective” Phoenix Program assassination thing. So the neocons would have had their scalp, hey? Of course it would all be a lot more complicated than that, and this kind of asymmetric warfare would be soooo much easier to pull off. You can bet that the “cause of global democracy” would scarcely have been served.

  8. fajensen

    The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests.

    Suuure they totally will be, and of course the “smart” machines building and training the algorithms are going to find ways of undermining their limiters. Not with ill intentions of destroying humanity, because we are too dumb to build proper AI so they will have none of that stuff, but in order to raise their KPI’s (Kills per humanitarian Intervention, or some such measure of Value).

    Machine Learning will, just like some humans, cheat and Lie in order to reach / maximise an objective.

  9. samhill

    Looking at the myriad, relentless, error-ridden, intentional and unintentional, often pointless and counterproductive, very often exuberant and exalted atrocities committed by flesh-and-blood soldiers throughout the millennia, and adding in Fela’s pointed observation (i.e. Zombie), I think AI and autonomous robots will just be more of the same but hopefully less exuberant and exalted. I’m being facetious; still, could Hiroshima or the Holocaust and Klaus Barbie’s work be better handled by, delegated out to, a robot? Sure, but it sure as hell doesn’t need to be. I do agree on the error worry: from now on human error will be compounded, not replaced, by AI error. How much more can humanity take?

  10. Phil

    I suppose killer robots mean the billionaire class won’t have to worry about the loyalty of its armed guards anymore. Which is something.

    1. Tom Stone

      Phil, who is going to maintain those robot guards and how reliable are they mechanically?
      Assuming that the AI works well (a big assumption), how will they communicate with each other and the human security team?
      And how secure will those communications be?
      This looks like the internet of shit with guns added, YMMV.

      1. Kyle Reese

        There’s a huge difference between maintaining equipment and putting one’s own life on the line to protect your billionaire prick-of-a-boss and his/her family.

      2. CuriosityConcern

        I would take cold comfort if the monthly service and subscription fees for the autonomous kill bots started to grow exponentially. Also, do you fully trust the programmers?

  11. Mikel

    “No flesh shall be spared”

    “Hardware”: Directed by Richard Stanley. With Carl McCoy, Iggy Pop, Dylan McDermott, John Lynch.

    A 90s sci-fi movie. Lower budget, but a quick thrill ride about the beginning of a robo dystopia on top of a dystopia.

    Often overlooked when thinking about movies like this from Robocop to Terminator: whether set in the future or current times, the killerbot stage of society is one that is ALREADY a dystopia.
    We are IN it now. It may not look exactly like sci-fi movies. But you’re in it.

  12. jr

    I have a solution to the problem of autonomous weapons: make them people! Saudi Arabia already extended human rights to that rubber-faced mannequin I linked to the other day:

    https://learningenglish.voanews.com/a/saudi-arabia-first-nation-to-grant-citizenship-to-robot/4098338.html

    Pennsylvania did something similar with Amazon sidewalk delivery bots:

    https://vista.today/2020/12/pennsylvania-passes-one-of-least-restrictive-autonomous-delivery-robot-laws/

    Let’s do the same to the drones and the AI’s running them. Then, when they screw up and kill a bunch of nuns or whatever, we can drag them to The Hague and have them stand trial for crimes against humanity. Then off to prison! An entire industry of robotic incarceration will blossom, providing jobs for the auto-cannons and drone sentries that guard the killer-bots as they sit and reflect upon their crimes. It’s a win for the producers of such technologies as they can simply throw up their hands and say “Hey, I can’t be responsible for what other people do, we just make them!”

  13. Brooklin Bridge

    A project I’m sure will soon be undertaken is to teach these algo-driven robots how to manufacture themselves start to finish, and then also define kill objectives in terms such as “all humans we don’t like.” There is no lack of code heads who wouldn’t blink, never mind catch the lethal irony, just to be the first with a solution.

    Who needs movies! If we can’t do it fast enough with global warming, then by gawd we’ll do it with algos.

  14. Jeremy Grimm

    This post very briefly mentions asymmetric warfare, insurgent groups and international and domestic terrorists in passing but directs almost all of its attention to the uses of autonomous weapons for warfare between state entities. All of the discussion, as well as the video clip, focuses on the use of autonomous weapons for killing humans. Concern about proliferation, an arms race, and war crimes as a consequence of killer robots seems quaint as the u.s. spends trillions on upgrades and expansion for its nuclear weapons arsenals. This post seems to relegate the catastrophic mistakes of the nuclear arms race to some vague category of the past.

    The focus of concerns in this post also seems misplaced, even partaking of a quality of red herrings, in evaluating the threat potential of autonomous and remote controlled weapons. It hardly matters whether the weapons are controlled by humans or algorithms. Weapons under remote control are the threat, and they need not be used as a direct threat to humans, as demonstrated by Yemen’s drone attacks on Aramco facilities. Our fragile society is rich with targets offering much higher payoffs than attacking students in a lecture hall with exploding mini quad-copters.

    Remote attacks, like hacker attacks or drone attacks, whether autonomous or human controlled, can be launched without risk to the attacker but also without leaving clear evidence for attribution of the source of an attack. Even better, an attack can be designed to deliberately spoof the attack’s origins and perpetrators.

    1. CNu

      Remote attacks, like hacker attacks or drone attacks, whether autonomous or human controlled, can be launched without risk to the attacker but also without leaving clear evidence for attribution of the source of an attack. Even better, an attack can be designed to deliberately spoof the attack’s origins and perpetrators.

      Bingo, bango, boingo!!!

      IMOHO – this is the arms race that the axial three are loath to fully unleash. Turkey and Iran OTOH – are all-in on getting this paradigm-shifting party started in earnest.

  15. Carolinian

    The problem is not just that when AI systems err, they err in bulk.

    Wouldn’t as many as a million dead Iraqis be “in bulk”? Or Dresden, Hiroshima?

    I’d say TPTB see “in bulk” as a feature, not a bug. Or as Tom Friedman notoriously said, we just need to kick some butt to show who’s boss and it doesn’t matter whose butt it is.

    Who knows, those artificially intelligent machines may end up actually having a conscience.

  16. Tim

    “When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?”

    Will pilots in Cobra attack helicopters be able to distinguish between members of the press carrying cameras and enemy combatants carrying weapons? Wikileaks says no.

    Technology is more free from errors than humans are; the key lies in humans not over-assuming the capability of the technology to begin with.

    It may take a very long time (50-100 years), but I do expect to reach a point in time where it is illegal for people to drive cars because machines will have become so much safer.

    1. Brooklin Bridge

      No offense to you, but I believe we may well reach a point in time much sooner than 50 years from now where it is illegal for people to drive cars, because renting out the autonomous barf bucket that some party-hardy type rented and perfumed the night before will be so much more profitable for giant rental businesses. I can’t believe they would go to all this trouble just to make cars safer; there’s much more money in accidents, or… ultimately, the rental model, which has the added advantage of more data and more remote control over passengers.

      I suspect the ownership model will remain active for some time because mfgs get to defray some of the costs of development that way, and they know full well they can’t force people to ride in a death trap, hands tied, while they get to level 5 autonomy. Then again, they may try. The term “safer” may be put to quite a workout, or perhaps “gymnastics” would fit better. But then our media can handle most such discrepancies with as straight a face as any robot, so there is that.

  17. Alex Cox

    The article points out the evil of these weapons, yet its author can’t resist a gratuitous and entirely misdirected potshot at the Serbs:

    “A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.”

    Milosevic was never held to account. He was seized and jailed, but his trial was never completed. Murder is one of the most serious crimes, and murderers are supposed to be brought to trial quickly. Milosevic never saw a verdict, grew increasingly ill, and died in custody.

  18. Synoia

    There is no problem with Military Autonomous Killer Robots. They will be based on Elon Musk’s Autonomous control of cars, which is already perfect.

    And it will only be the poors who get killed /s

    I’d assert that Tesla’s autonomous car is a clear statement of the state of autonomous robots.
