
Ethical AI for Defence

Posted on August 20, 2019

The public discourse on artificial intelligence in the military context tends to get messy very quickly. There seems to be an AI variant of Godwin’s law [1] (which holds that as any online discussion grows longer, the probability of a comparison being made to Nazis approaches 1 [2]) at work, except in this case it’s the probability that a reference will be made to killer robots or the Terminator movies’ Skynet [3]. So it was useful to attend a recent conference on ethical AI for defence [4] to gain a better understanding of where things currently stand.

The conference was jointly sponsored by the Defence Science and Technology Group, the air force’s Plan Jericho [5] team, and the Trusted Autonomous Systems [6] Defence Cooperative Research Centre, which is largely funded by the Department of Defence. It’s not surprising, then, that the overall tenor of the conference was that the ethical issues are significant but manageable.

There was broad representation, ranging from defence practitioners to AI developers, philosophers, ethicists and legal experts, resulting in a range of opinions. Granted, some were sceptical of the value of AI. And occasionally it felt like some philosophers were mainly interested in posing increasingly exquisite variations of the trolley problem [7] (do I save the homeless orphans or nuns pushing prams from impending doom?).

Overall, however, there was broad consensus on key issues. One is that when we’re considering AI for defence uses, we’re talking about ‘narrow’ AI—that is, applications optimised for particular problems. Narrow AI is already deeply embedded in our lives. General AI—that is, a system with similar cognitive abilities to humans—is still a long way off. According to one presenter, over the past 50 years predictions for when that would be achieved have stubbornly remained 40 years in the future. Part of the messiness in public discussion tends to stem from a conflation of narrow and general AI.

It is perhaps useful to distinguish three levels of ethical issues, each of which needs to be addressed by different people in different ways.

The first are the macro-ethical issues, such as the implications of the ‘AI singularity’, the point at which machines become smarter than humans (if we do actually reach it in 40 years’ time). Or the question of whether autonomous systems replacing humans on the battlefield makes it easier for leaders to choose to use military means to resolve disagreements. Ultimately these are questions that must be resolved by society as a whole.

But it is important for Defence to be able to inform the broader discussion so that our service people aren't prevented from adopting AI while facing adversaries that have exploited its capability advantages. To do that, Defence needs credibility, which means demonstrating that it understands the ethical issues and is acting ethically.

At the other end of the scale are the daily, micro-ethical issues confronted by the designers and users of AI-driven systems in ensuring those systems embody our values. Certainly, this is not a mechanical task. While it's relatively easy to develop a list of values or principles that we want our AI-driven systems to adhere to (privacy, safety, human dignity, and so on), things get tricky when we try to apply them to particular tools.

That's because, in different circumstances, those principles may have different priorities, or they may in fact contradict one another. But the developer or user of the system doesn't need to resolve the macro questions in order to satisfactorily resolve the micro issues involved in, say, developing an AI-driven logistics application that distributes resources more efficiently.

In between are the medium-sized, or enterprise-level, ethical issues. For example, there are ethical challenges as we move to a business model in which people are consuming the outputs of AI, rather than generating those outputs themselves. If the point of keeping a human in—or at least on—the decision loop is to ensure that human judgement plays a role in decision-making, how do you make sure the human has the skill set to make informed judgements, particularly if you expect them to be able to overrule the AI?

Preventing de-skilling is more than an ethical issue; it’s about designing and structuring work, training and career opportunities, and human identity. So ethics has to be embedded in the way Defence thinks about the future.

Once you divide the problem space up, it doesn’t seem so conceptually overwhelming, particularly when many of Defence’s ethical AI issues are not new or unique. Another area of consensus is that the civil sector is already dealing with most of those issues. Designers of self-driving cars are well aware [8] of the trolley problem. So Defence isn’t in this alone and has models to draw upon.

Moreover, with a human in the loop or on the loop, even in specifically military applications (employing lethal weapons, for example) the ethical use of systems employing AI isn’t fundamentally different from the ethical use of manned systems, for which the ADF has well-established processes. Certainly those frameworks will need to evolve as AI becomes central to defeating evolving threats (such as hypersonic anti-ship missiles), but there’s already something to build upon.

In large organisations like Defence, the challenge facing innovators—like those seeking to leverage the opportunities offered by AI—is that they always face long lists of reasons why their ideas won’t work. After all, identifying risks is at the heart of Defence.

Encouragingly, islands of innovative excellence—such as Plan Jericho and the Trusted Autonomous Systems Defence CRC—have shown that they are capable of forging new approaches to developing and delivering defence capability.

So, while it's important for Defence to understand and address the ethical challenges posed by AI, it's just as important not to let those challenges blind it to the opportunities AI offers or discourage our defence planners from pursuing them.




URLs in this post:

[1] Godwin’s law: https://en.wikipedia.org/wiki/Godwin%27s_law

[2] approaches 1: https://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_probability

[3] Skynet: https://en.wikipedia.org/wiki/Skynet_(Terminator)

[4] ethical AI for defence: https://www.dst.defence.gov.au/news/2019/08/02/ethical-ai-defence-world-experts-gather-canberra

[5] Plan Jericho: https://www.airforce.gov.au/our-mission/plan-jericho

[6] Trusted Autonomous Systems: https://tasdcrc.com.au/

[7] the trolley problem: https://en.wikipedia.org/wiki/Trolley_problem

[8] well aware: https://www.insidescience.org/news/moral-dilemmas-self-driving-cars
