
Balancing the lopsided debate on autonomous weapon systems

Posted on December 20, 2019 @ 11:33

The question of whether new international rules should be developed to prohibit or restrict the use of autonomous weapon systems has preoccupied governments, academics and arms-control proponents for the better part of a decade. Many civil-society groups claim to see growing momentum in support of a ban. Yet broad agreement, let alone consensus, about the way ahead remains elusive.

In some respects, the discussion that’s occurring within the UN Group of Governmental Experts (GGE) [1] on lethal autonomous weapons systems differs from any arms-control negotiations that have taken place before. In other respects, it’s a case of déjà vu.

To begin with, disagreements about the humanitarian risk–benefit balance of military technology are nothing new. Chemical weapons and cluster munitions provide the clearest examples of such controversies.

Chemical weapons have come to be regarded as inhumane, mainly because of the unacceptable suffering they can cause to combatants. But the argument has also been made that they’re more humane than the alternatives: some have described the relatively low ratio of deaths and permanent injuries resulting from chemical warfare as an ‘index of its humaneness’.

Cluster munitions, meanwhile, have been subjected to regulation because of the harm they can inflict on civilians and civilian infrastructure. Yet many have claimed [2] that these weapons are particularly efficient against area targets, and that banning them is therefore counter-humanitarian because it leads to ‘more suffering and less discrimination’.

Autonomous weapon systems have triggered a similar debate: each side claims to be guided by humanitarian considerations. But the debate remains lopsided.

The autonomous weapons debate is unique at least in part because its subject matter lacks proper delimitation. Existing arms-control agreements deal with specific types of weapons or weapon systems, defined by their effects or other technical criteria. The GGE, in contrast, is tasked with considering functions and technologies that might be present in any weapon system. Unsurprisingly, then, it has proven difficult to agree on the kinds of systems that the group’s work should address.

Some [3] set the threshold quite high and see an autonomous weapon system as a futuristic system that ‘can learn autonomously, [and] expand its functions and capabilities in a way exceeding human expectations’. Others consider autonomy to be a matter of degree, rather than a matter of kind, so that the functions of different weapon systems fall along a spectrum of autonomy. According to that view, autonomous weapons include systems that have been in operation for decades, such as air-defence systems (Iron Dome [4]), fire-and-forget missiles (AMRAAM [5]) and loitering munitions (Harop [6]).

All of this has made it harder [7] to pin down the object of the discussion. The GGE so far hasn’t made much headway on clarifying the amorphous concept. Indeed, rather than treat autonomous weapon systems as a category of weapons, the group’s recent reports refer circuitously to ‘weapons systems based on emerging technologies in the area of lethal autonomous weapons systems’. No wonder participants in the debate keep talking past each other.

Because it remains uncertain what autonomous weapon systems actually are, their adverse effects can only be hypothesised. The regulation of most other weapons has been achieved in large part because their humanitarian harm was demonstrable or clearly predictable. This is true even of blinding laser weapons [8], whose pre-emptive prohibition is often touted as a model for autonomous weapon systems: early evidence of the battlefield effects of laser devices enabled reliable predictions about the humanitarian consequences of wide-scale laser weapons use.

When autonomous weapon systems are treated as a yet-to-exist category, it’s only possible to talk about potential adverse humanitarian consequences—in other words, humanitarian risks. The possible benefits of autonomous weapon systems are likewise uncertain. However, the limited autonomous functionality of existing systems allows some generalisations and projections to be made.

The range of risks has been discussed in detail and explicitly referenced in the consensus-based GGE reports [9]. Such risks include harm to civilians and combatants in contravention of international humanitarian law, a lowering of the threshold for use of force, and vulnerability to hacking and interference.

Potential benefits of autonomous functions—for example, increased accuracy in some contexts or autonomous self-destruction, both to reduce the risk of indiscriminate effects—barely find their way into the GGE reports. The closest the most recent report [10] gets to this issue is a suggestion that consideration be given to ‘the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with … applicable international legal obligations’. This vague language has been used despite some [11] governments highlighting a range of military applications of autonomy that further humanitarian outcomes, and others [12] noting that autonomy helps to overcome many operational and economic challenges associated with manned weapon systems.

The issue has become politicised and ideological: many see any discussion of benefits in this context as legitimising autonomous weapon systems, and therefore as an obstacle to a ban.

We do not wish to suggest that the risks of autonomous technology be disregarded. Quite the opposite: thorough identification and careful assessment of the risks associated with autonomous weapon systems remain crucial. However, rejecting the notion that there might also be humanitarian benefits to their use, or refusing to discuss them, is highly problematic and likely to jeopardise the prospect of finding a meaningful resolution to the debate.

Reasonable regulation cannot be devised by focusing on risks or benefits alone; some form of balancing must take place. Indeed, humanitarian benefits might sometimes be so significant as to make the use of an autonomous weapon system not only permissible, but also legally or ethically obligatory [13].



Article printed from The Strategist: https://aspistrategist.ru

URL to article: /balancing-the-lopsided-debate-on-autonomous-weapon-systems/

URLs in this post:

[1] Group of Governmental Experts (GGE): https://unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument

[2] claimed: https://www.unog.ch/80256EDD006B8954/(httpAssets)/AC4F9F4B10B117B4C125722000478F7F/$file/14+USA.pdf

[3] Some: https://unog.ch/80256EDD006B8954/(httpAssets)/E42AE83BDB3525D0C125826C0040B262/$file/CCW_GGE.1_2018_WP.7.pdf

[4] Iron Dome: https://www.raytheon.com/capabilities/products/irondome

[5] AMRAAM: https://www.raytheon.com/capabilities/products/amraam

[6] Harop: https://www.iai.co.il/p/harop

[7] harder: https://blogs.icrc.org/law-and-policy/2018/12/11/machine-autonomy-constant-care-obligation/

[8] blinding laser weapons: http://www.weaponslaw.org/weapons/blinding-laser-weapons

[9] GGE reports: https://undocs.org/en/CCW/GGE.1/2018/3

[10] most recent report: https://unog.ch/80256EDD006B8954/(httpAssets)/5497DF9B01E5D9CFC125845E00308E44/$file/CCW_GGE.1_2019_CRP.1_Rev2.pdf

[11] some: https://unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf

[12] others: https://unog.ch/80256EDD006B8954/(httpAssets)/8B03D74F5E2F1521C12583D3003F0110/$file/20190318-5(c)_Mil_Statement.pdf

[13] obligatory: https://www.justsecurity.org/25333/regulating-autonomous-weapons-smarter-banning/

Copyright © 2024 The Strategist. All rights reserved.