Technology is steadily marching toward greater autonomy, a change that will undoubtedly influence weapon platforms in the future. The notion of offensive use of lethal autonomous weapon systems (LAWS), systems that can independently identify targets and take lethal action, has already stirred disquiet in the international community (even though no such capability yet exists). While discussion of the legal and ethical ramifications of LAWS is welcome and crucial, it often becomes tangled in the technicalities of autonomous systems and artificial intelligence (AI). The “killer robots” rhetoric could stifle valuable technological advances that might produce greater precision and discrimination.
Attention around LAWS skyrocketed in 2012 when Human Rights Watch released Losing Humanity: The Case Against Killer Robots, which argued for a legal ban on the development of fully autonomous weapons and for a code of conduct governing R&D on autonomous robotic weapons. The report spurred the 2014 UN Convention on Certain Conventional Weapons (CCW) Meeting of Experts, which convened again last month.
There’s also concern about LAWS closer to home. At a Senate Committee hearing last month on the use of unmanned platforms by the Australian Defence Force, witnesses from the Red Cross raised concerns about the development of fully autonomous systems and their capacity to target discriminately. (You can read the testimonies to the committee here (PDF), including my contribution with Andrew Davies.)
It’s a great sign that the CCW and other bodies are anticipating the challenges posed by LAWS. The US stirred up serious consternation when it first deployed Predators armed with Hellfire missiles after 9/11, but there were no meetings of experts or inquiries beforehand. A decade on from the first lethal drone strikes, concerns about lethal unmanned aerial vehicles persist despite a consensus that the technology doesn’t contravene international humanitarian law. But a bad reputation is hard to shake, and LAWS have already been saddled with the “killer robot” label. The provocative branding has started an important conversation about the extent to which the world is comfortable with autonomous targeting.
But budding discussions on the potential legal and normative challenges of LAWS don’t clearly define what LAWS actually are; the UN is still without an official definition. That creates confusion over whether to include capabilities such as missile defence systems that autonomously identify and destroy incoming missiles and rockets. There’s also a complex and evolving spectrum of technological autonomy to take into account. At one end, there’s technology in use today with autonomous functions, such as missile defence systems. At the other end, there are systems with advanced ‘reasoning’ and adaptive problem-solving skills, which could more accurately be described as artificially intelligent rather than autonomous. Systems with human-like reasoning skills don’t yet exist, but they’re certainly on the agenda of research groups like DARPA.
Confusion on this subject is created in large part by the novelty of autonomous systems and AI. While we’re only in the early stages of development, general unease is reflected in the blanket bans proposed by Human Rights Watch and other initiatives like the Campaign to Stop Killer Robots. These groups assume that LAWS would undermine international humanitarian law (IHL) and challenge the status of civilians in warfare because they would lack human judgement and decision-making. But nothing in IHL currently states that only a human can make lethal decisions, and there’s no reason to suggest that such systems won’t eventually be capable of distinguishing between civilians and lawful targets at least as well as humans can.
As Kenneth Anderson and Matthew Waxman have argued, LAWS of the future might actually make for more discriminate and proportionate weaponry. The processing speed of LAWS and their ability to remain on station for extended periods without interruption could lead to greatly enhanced battlefield awareness; ‘dumb’ drones already provide some of these benefits. There’s also the possibility that removing human emotions, which can cloud decision-making, could result in fewer civilian casualties. A ban on R&D would suppress potentially ground-breaking developments.
There are many unknowns surrounding the future of autonomous systems and AI. The technology has a long way to go before we can field a system capable of decision-making, reasoning and problem-solving in a complex environment on a par with a highly trained soldier. There’s also no guarantee that science will ever deliver that level of AI. As Chris Jenks observed in his recent lecture on autonomous systems at the ANU, humans are tremendously poor predictors of the future, especially when it comes to technology.
For now, the international community should work to develop an accepted definition of LAWS. It needs to be flexible enough to account for the many unknowns, and capable of evolving alongside the development of autonomous systems and AI. Establishing a definition will be challenging, but it’s needed to advance the important dialogue on the laws and norms governing any offensive use of LAWS. The use of inflammatory labels like “killer robots” should be discouraged; they serve only to spread falsehoods and engender confusion about LAWS.