Red Cross is seeking rules for the use of ‘killer robots’
Posted By Brendan Nicholson on December 18, 2018 @ 06:00
As autonomous weapons rapidly become more lethal, the International Committee of the Red Cross is racing to develop a legal framework for the use of ‘killer robots’.
Netta Goussac, a legal adviser with the ICRC’s Geneva-based arms unit, tells The Strategist that nations need to consider how much control people retain over autonomous weapons, which can select and attack a target without human intervention.
‘They need to do it urgently because technological developments are moving very, very quickly’, Goussac says. ‘We think states should not consider this to be an inevitable development but rather make conscious choices now about what is and isn’t acceptable.’
Once a capability has been acquired, it’s extremely difficult to convince states not to use it, she says. ‘It’s easier to reach agreement on what is and isn’t acceptable before it’s a reality.’
An Australian, Goussac previously worked as the principal legal adviser in the Attorney-General’s Department’s Office of International Law.
She says the international discussion has to focus on the role of the humans who deploy autonomous weapons. Those sending them onto the battlefield must take all feasible measures to prevent violations of international humanitarian law.
These responsibilities cannot be delegated to the device, because only humans are responsible for complying with the law, she says.
As the world’s armed forces rely increasingly on technology, artificial intelligence, algorithms and machine learning for military decision-making, judgements must be made about the level of control a human deploying an autonomous weapon must retain in order to meet their legal and ethical responsibilities.
That involves examining the person’s ability to stop the weapon, to supervise it, to communicate with it and to predict reliably what it will do in the environment in which it’s being used.
Guns and explosives still cause the greatest humanitarian harm, and the Red Cross applies the same approach to new technologies as it does to them. ‘We ask, what are the real and foreseeable humanitarian consequences of these weapons, and what does the law say about their use?
‘We’ve applied that logic to chemical weapons, to landmines, and now we’re applying it to cyber warfare and to autonomous weapon systems. Do they pose any challenges to complying with the rules of international humanitarian law that require parties to a conflict to distinguish between civilians and combatants, to use force proportionally and to exercise caution?’
Technology developed to benefit society in general is also driving advances in arms, as militaries show a clear preference for greater autonomy in weapons systems: they want more precision, faster decision-making and longer range.
An autonomous weapon is distinct from a remote-controlled system, such as a drone, in which a human selects and attacks the target using a communication link that gives them constant control and supervision over the deployed system.
‘With autonomous weapon systems, the human designs and programs the system, the human launches the system, but it’s the system that selects and attacks the target’, Goussac says.
‘Yes, the system is running a program that’s created by the human, but the human who launches the system doesn’t necessarily know where and when the attack will take place.’
The more widely autonomous weapon systems are deployed, the greater the humanitarian risks they create, she says.
With autonomous systems, the human’s decision to use force can be distant in both geography and time.
‘It’s that distance between the human and the effects of their decisions that we’re concerned about because we think that if you stretch that too far you make it difficult for the human to comply with the rules that they’re required to comply with, to make the legal judgements that they have to make at the time they decide to use force.’
A key question, says Goussac, is whether an autonomous weapon system hinders the human’s ability to stop an attack if the circumstances change. What if, for instance, civilians arrive in a killing zone?
In some cases, autonomous systems are used in a very predictable and controlled environment—generally in the air or on the sea—where there’s no likelihood of civilians or ‘non-targetable objects’ being hit.
‘But the more complex the environment, the more mixed it is, the more dynamic it is, the less predictable it is, and the more important it is to have that supervision and ability to control it once the system has been launched’, Goussac says.
‘It’s not just the technical characteristics of the weapon that are important, it’s the circumstances of use. What an appropriate level of control over a system might mean in one context is totally different in another context.’
A range of defensive systems is designed to autonomously select and attack targets in airspace where there are no civilian aircraft and where targets fly at high velocity (the Iron Dome system is one example). ‘There’s been a certain pre-determinacy here’, Goussac says, ‘but it’s an acceptable level of pre-determinacy’.
She says it’s difficult to set rules based on technical characteristics. ‘We’re really more interested in talking about the role of the human because that’s what we think is universal in all of this.’
‘At what point do we start having ethical concerns about the delegation of decisions to kill or injure, or to destroy property, to machines?’