Building a brilliant ADF
Posted By Mick Ryan on April 20, 2018 @ 06:00
In 2017, a US Department of Defense study of the future operating environment [1] described artificial intelligence (AI) as the most disruptive technology of our time.
In this environment, says the report:
big data techniques interrogate massive databases to discover hidden patterns and correlations that form the basis of modern advertising—and are continually leveraged for intelligence and security purposes by nation states and non-state entities alike.
The potential applications of artificial intelligence, and the deep-learning capabilities it brings, may be among the most profound artefacts of the fourth industrial revolution [2]. Experienced leaders such as US Defense Secretary James Mattis [3] and former Deputy Secretary of Defense Bob Work [4] are questioning whether AI will change not just the character, but also the nature, of war. That highlights just how disruptive this technology is likely to be for society and commerce, as well as for human competition and conflict.
In future conflicts, we can expect decision cycles to outpace human cognition. Military command and control, and strategic decision-makers, will need AI that can process information and recommend decision options faster, or of higher quality, than the enemy can.
And as I’ve written previously [5], military organisations will contain thousands or even tens of thousands of unmanned and robotic systems, all with some type of AI. These swarms will demand AI-assisted command and control, as will the other composite human-automated military formations that are likely to exist in future areas of conflict.
While there’s a need to build capacity within the Defence organisation, the guiding principles apply to the wider national security community. Defence exists in an ecosystem of government organisations working towards national objectives, and it’s imperative to quickly introduce AI to support decision-making in this joined-up environment. Immediate action is therefore required for Defence (the department and the ADF) to rapidly deepen its understanding of the applications of AI, and to contribute to a national approach.
Frank Hoffman recently proposed [6] that military organisations may be at the dawn of a seventh revolution in military affairs that he calls the ‘autonomous revolution’. Underpinned by exponential growth in computer performance, improved access to large datasets, continuing advances in machine learning and rapidly increasing commercial investment, the future application of AI [7] and machine learning may change military organisations and, more broadly, how nations prepare for war.
However, as Max Tegmark has written [8], there’s debate among AI researchers about whether human-level AI is possible, and when it might appear. The most optimistic estimates are ‘in a few decades’; others predict ‘not this century’ or ‘not ever’.
But assisted, augmented and autonomous intelligence [9] capabilities are already in use or can be expected over the coming decade. AI needn’t replicate human intelligence to be a powerful tool. The intellectual preparation of Defence personnel, and of the wider national security community, to use AI effectively must begin now.
The first of four key imperatives is to start educating Defence and other national security personnel about AI. A Belfer Center report [10] finds that it’s vital for non-technologists to be conversant with the basics of AI and machine learning. The aim is to develop baseline AI literacy among more Defence and national security leaders to supplement the expertise of the few technical experts and contractors who design and apply algorithms. Reading lists, residential programs and online courses, as well as academic partnerships and conferences, will help.
Defence education must be adapted to deliver greater technical literacy so that personnel better understand machine learning and AI. This will build wider institutional capacity to exercise quality control and address the risks of misbehaving algorithms. Personnel must also be educated about the ethical issues of using AI for national security purposes. The overarching aim must be to develop a deep institutional reservoir of people who understand the use of AI, and who appreciate how human and AI collaboration can be applied most effectively at each level of command.
The second imperative is for Defence and other agencies to move beyond limited experimentation and thinking in key areas to a broader exploration of how to use AI. This might include finding new ways for the national security community to use AI to work together more effectively. It may also mean using AI to support decision-making in ways that drive changes to how the military and non-military elements of national security are organised.
Some aspects of this exploration will require leaps of faith in assessing how capable AI will be in the future. Between the wars, the German army used fake tanks to develop its combined arms operating systems [11]. So too might we use anticipated future capabilities to build new integrated national security and Defence operating systems using AI.
Potentially, this exploration could even permit consideration of significant changes in strategic decision-making processes and organisations—mostly leftovers from second [12] and third industrial revolution [13] mindsets.
The third imperative is for Defence to deepen collaboration with external institutions working on AI applications. All of the Group of Eight universities [14] in Australia conduct AI research and teach its applications. A number have partnerships with international institutions, including some that do work for foreign military and security agencies. They could help Defence and other government agencies explore the use of AI to support operational capabilities, decision-making in directing and running operations, and other strategic functions such as education. Broad collaborative research with our closest allies will permit the sharing of best practice and offers small nations like Australia the opportunity to develop bespoke applications that complement, rather than copy, overseas innovations.
Eventually, this could (and probably should) lead to a fourth imperative: developing an AI equivalent of the Australian Naval Shipbuilding Plan [15] as the focus for a sovereign capability in AI research and development. Such a national approach is important because it could provide resourcing for further collaboration between universities, government and commercial entities. It may also provide the basis for a larger national industry to support non-military AI functions.
Finally, if robotics is included, it may provide a basis to mobilise national effort if that’s necessary in the coming decades. Australia sits at the end of a long supply line for almost every element of sophisticated weaponry. Our successors may thank us if we have the forethought to develop an indigenous capacity to design and construct (using additive manufacturing) swarms of autonomous systems.
Sitting back and observing foreign developments isn’t an effective strategy. An aggressive national and departmental program of research, experimentation and education for Defence and other national security personnel is required. The knowledge for such a program exists in our universities, and changes in the regional and global security environment provide the strategic drivers for action.
To leverage these potential capabilities, we must start educating our people now, and we must develop a sovereign national capacity to harness AI for national security purposes. In this way we might develop a truly brilliant future ADF.
Article printed from The Strategist: https://aspistrategist.ru
URL to article: /building-brilliant-adf/
URLs in this post:
[1] study of the future operating environment: http://www.arcic.army.mil/App_Documents/The-Operational-Environment-and-the-Changing-Character-of-Future-Warfare.pdf
[2] fourth industrial revolution: https://www.weforum.org/agenda/archive/fourth-industrial-revolution
[3] Secretary James Mattis: https://www.c4isrnet.com/intel-geoint/2018/02/17/ai-makes-mattis-question-fundamental-beliefs-about-war/
[4] Bob Work: https://breakingdefense.com/2017/05/killer-robots-arent-the-problem-its-unpredictable-ai/
[5] written previously: https://thestrategybridge.org/the-bridge/2018/1/2/integrating-humans-and-machines
[6] recently proposed: https://ssi.armywarcollege.edu/pubs/parameters/issues/Winter_2017-18/5_Hoffman.pdf
[7] future application of AI: https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf
[8] has written: https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598
[9] assisted, augmented and autonomous intelligence: http://www.tgdaily.com/technology/assisted-augmented-and-autonomous-the-3-flavours-of-ai-decisions
[10] Belfer Center report: https://www.belfercenter.org/publication/machine-learning-policymakers
[11] develop its combined arms operating systems: https://www.amazon.com/Roots-Blitzkrieg-Seeckt-German-Military/dp/0700606289
[12] second: https://pdfs.semanticscholar.org/769c/a06c2ea1ab122e0e2a37099be00e3c11dd52.pdf
[13] third industrial revolution: https://www.economist.com/node/21553017
[14] Group of Eight universities: https://go8.edu.au/sites/default/files/docs/page/group_of_eight_universities_brochure_-_english_-final_low-res.pdf
[15] Australian Naval Shipbuilding Plan: http://www.defence.gov.au/navalshipbuildingplan/