AI-Enabled Kill Webs and the Slippery Slope towards Autonomous Weapons Systems

Originally published by the Campaign to Stop Killer Robots on Medium


The Campaign to Stop Killer Robots aims to preserve meaningful human control over the use of force. The technologies being tested as part of Project Convergence demonstrate many of our concerns. Can an operator make a sound decision about whether to strike a newly detected target in under 20 seconds, or are they just hitting an ‘I-believe button’ to rubber-stamp the system’s recommendation, delegating the true decision-making authority to software? In a sociotechnical system that is explicitly optimized to reduce the time from detecting a potential threat to destroying it, an individual may not be rewarded for exercising vigilance. The idea of authorizing attacks via the very limited user interface of a smartphone is also troubling.

Nowhere in the public reporting on Project Convergence is there any discussion of human factors in the design of software interfaces, of what training users receive on how the targeting systems work (and on those systems’ shortcomings), or of how to ensure operators have sufficient context and time to make decisions. That’s consistent with the Defense Innovation Board (DIB) AI Principles, published last year, which also omit any mention of human factors, computer interfaces, or how to deal with the likelihood of automation bias (the tendency for humans to favor suggestions from automated decision-making systems).

The DIB AI Principles do include reliability: ‘DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.’ This principle contrasts with the approach taken for Project Convergence, which uses ‘plug-and-play interface[s] […] to get a new pod to work on the drone for the first time, without laborious technical integration or time-consuming safety recertifications’, and a network system ‘that significantly improves the warfighting capability of our maneuver brigades, but […] was not fielded to do the things we’re doing’.

The sorts of cobbled-together systems being used as part of Project Convergence may not be quite what many of us think of as an autonomous weapons system: there is automated targeting, but the use of force is separated from target selection, and there is a human decision maker in the loop (although we are not sure that the decision maker always has sufficient time and context).

However, these systems are, at the very least, a significant step along a slippery slope towards fully autonomous weapons. Arthur Holland Michel calls these kinds of tools ‘Lethality Enabling Systems’ and notes that ‘in the absence of standards on such matters, not to mention protocols for algorithmic accountability, there is no good way to assess whether a bad algorithmically enabled killing came down to poor data, human error, or a deliberate act of aggression against a protected group. A well-intentioned military actor could be led astray by a deviant algorithm and not know it; but just as easily, an actor with darker motives might use algorithms as a convenient veil for an intentionally insidious decision.’

These kinds of concerns are exactly why the Campaign to Stop Killer Robots calls for the retention of meaningful human control over the use of force as a general obligation, rather than seeking to regulate any specific technology. The time to act is now. This article has discussed the United States’ Project Convergence as a timely (and well-reported) example, but many major military powers are exploring AI-based targeting, and the arms industry is building systems with increasing levels of autonomy.

Photo: Mufid Majnun

This is 2020. We’ve seen the worst global pandemic in a hundred years. We’re in a climate crisis, and forests across the globe have burned at record rates. We’re facing economic downturns, the exposure of long-standing systemic inequalities, and food crises. With every crisis that 2020 has thrown our way, however, people around the world have stepped up to play their part for humanity. So while 2020 makes for grim accounting, we can still choose a future that won’t add ‘AI-enabled kill webs’ and autonomous weapons to the list. The technological developments are not slowing down, but there is still time to act, if we act fast.

To find out more about killer robots and what you can do, visit: www.stopkillerrobots.org