Many have heralded artificial intelligence as a force-multiplier for defence and intelligence capabilities.
Do you want armed autonomous vehicles to comply with legal and ethical obligations as set out in the Royal Australian Navy’s robotics, autonomous systems and AI strategy? AI can help. Do you want to more effectively analyse intelligence to predict what an adversary will do next? AI can help. And AI’s proponents are right—it could, and likely will, do all of those things, but not yet.
Its ability to spot patterns, compute figures and calculate optimum solutions on an ‘if X happens then do Y’ basis is now unmatched by any human being. But it has a fundamental flaw: we do not measure human motivations solely by numbers.
Classical game theory has been trying to measure this since the 1940s, and its practitioners have had so little success that they labelled many such motivations 'irrational' and decided that quantitative modelling was not possible. Yet if we could model non-material payoffs, we could answer questions such as 'How will Russia change its defensive posture if Vladimir Putin loses face from military setbacks in Ukraine?' or 'Why would a rational person volunteer as a suicide bomber?'
Instead, game theory has only proposed high-level conceptual frameworks in an attempt to guide decision-makers. It has looked, for example, at whether we should have modelled the Cold War nuclear arms race as an iterated prisoner’s dilemma.
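To make the point concrete, consider a toy escalation game, loosely in the spirit of the prisoner's dilemma, in which a hypothetical 'loss of face' penalty is bolted onto an otherwise standard material payoff matrix. This sketch is not drawn from any real assessment: the actions, the numbers and the FACE_LOSS parameter are illustrative assumptions only. What it shows is how a single quantified non-material term can change which outcome a model predicts.

```python
# Minimal sketch: a one-shot two-player game in which a non-material payoff
# (a hypothetical 'loss of face' penalty for backing down while the rival
# escalates) is added to a standard material payoff matrix. All numbers are
# illustrative assumptions, not estimates of any real actor's preferences.

import itertools

ACTIONS = ("escalate", "de-escalate")

# Material payoffs for (row player, column player), prisoner's-dilemma-like.
MATERIAL = {
    ("escalate", "escalate"):       (-3, -3),
    ("escalate", "de-escalate"):    ( 2, -2),
    ("de-escalate", "escalate"):    (-2,  2),
    ("de-escalate", "de-escalate"): ( 1,  1),
}

FACE_LOSS = 4  # assumed cost of being seen to back down unilaterally

def payoff(row_action, col_action):
    """Material payoff plus the non-material 'face' term for each player."""
    row, col = MATERIAL[(row_action, col_action)]
    if row_action == "de-escalate" and col_action == "escalate":
        row -= FACE_LOSS
    if col_action == "de-escalate" and row_action == "escalate":
        col -= FACE_LOSS
    return row, col

# Print the full payoff table: once FACE_LOSS is large enough, mutual
# escalation becomes the predictable outcome even though both players do
# worse materially than if both had backed down.
for r, c in itertools.product(ACTIONS, ACTIONS):
    print(f"{r:12s} vs {c:12s} -> {payoff(r, c)}")
```

The design choice here is the whole argument in miniature: the material matrix is easy to agree on, but the value of FACE_LOSS is precisely the kind of judgement a humanities specialist, not a programmer, is equipped to make.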
This is not the fault of the economists and maths-trained game theorists, or of their successors, who are trying earnestly and for good cause to predict the probability of human actions. Many are experts in the art of programming, but not in the intricate detail of what explains why we humans do what we do. We cannot expect a programmer's life experience to substitute for thousands of years of philosophy, historical precedent and more recent psychological research. Programmers need back-up, and humanities departments can provide it.
The advent of AI has prompted consideration of its ethics and the involvement of humanities specialists, who are often employed to guide programmers with high-level, principles-based frameworks or to rule on whether AI tests and wargames are ethical. Both roles are vital to ensuring AI better understands humanity and our expectations of it, but this engagement is insufficient. To borrow an analogy from mathematics, the former supplies an example answer but no formula to apply, and the latter marks the answer but does not check the working-out.
We need humanities specialists involved at the coding level, helping programmers assign mathematical functions to the factors that influence human decision-making. It is not enough to say that love of money, family or duty motivates a person. A fit-for-purpose AI will need to know how strongly each factor motivates them and how those motivations interact. In short, we need a mathematical formulation of these factors.
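What might such a formulation look like? A minimal sketch follows, assuming a handful of named motivations, weights and one interaction term; every factor name, weight and the functional form itself is an illustrative assumption, and choosing them well is exactly the work that requires humanities expertise alongside the programmer.

```python
# Minimal sketch of motivations expressed as weighted terms in a utility
# function, with one explicit interaction term. Factor names, weights and
# the functional form are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Motivations:
    money: float   # 0..1, strength of financial motivation
    family: float  # 0..1, strength of family obligation
    duty: float    # 0..1, strength of sense of duty

def utility(m: Motivations,
            w_money: float = 0.3,
            w_family: float = 0.5,
            w_duty: float = 0.4,
            w_family_duty: float = 0.6) -> float:
    """Weighted sum of motivations plus one interaction term.

    The interaction term captures the idea that duty and family obligation
    can reinforce each other rather than simply adding up.
    """
    return (w_money * m.money
            + w_family * m.family
            + w_duty * m.duty
            + w_family_duty * m.family * m.duty)

# Two hypothetical individuals with the same average motivation score but a
# very different mix: the interaction term, not the average, separates them.
print(utility(Motivations(money=0.9, family=0.1, duty=0.2)))  # ~0.41
print(utility(Motivations(money=0.1, family=0.6, duty=0.5)))  # ~0.71
```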
Those who call such rigour 'onerous' are correct. It will be difficult and detailed, and it could be disastrous for our national security community if we do not try. A cursory review of published government AI programs shows just how high the stakes are.
The navy released its AI strategy with particular emphasis on autonomous undersea warfare systems, and the Australian Signals Directorate recently announced the REDSPICE investment to boost its AI capabilities; both mark a new era in the incorporation of AI. Similar developments are under way in police forces at the federal, state and territory levels. And while the national security community no doubt has more opaque AI operations, they are likely taking heed of the recent ASPI report highlighting noteworthy precedents from the US and UK for improving the use of AI.
The implications of this pervasive emphasis on AI were recently summarised by Michael Shoebridge in another ASPI report:
The national security implications of this for Australia are broad and complicated but, boiled down, mean one thing: if Australia doesn’t partner with and contribute to the US as an AI superpower, it’s likely to be a victim of the Chinese AI superpower and just an AI customer of the US.
Building Australia into an AI superpower will require collaboration, whether with private companies (such as Google) or academia (as in the Hunt Laboratory for Intelligence Research), employing the 'build on the low side, deploy on the high' methodology. Alternatively, the capability could be delivered in-house through agency-specific taskforces, the Office of National Intelligence's joint capability fund, or forums created under the new action plans from non-traditional security sectors of government.
Whatever the manner of collaboration, using humanities specialists to develop a common language for human motivations would solve the so-called Tower of Babel problem between qualitative and quantitative analysts. Developing it would be comparable to standardising the brick and mortar used in the construction industry, or the shipping containers used in the freight industry.
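One possible shape for that common language is a controlled vocabulary that both kinds of analyst agree on, mapping descriptive terms to numeric ranges, loosely in the spirit of words-of-estimative-probability schemes. The terms and ranges in the sketch below are illustrative assumptions only.

```python
# A sketch of a shared vocabulary mapping qualitative judgements of how
# strongly a factor motivates someone to agreed numeric ranges. The terms
# and ranges are illustrative assumptions only.

MOTIVATION_SCALE = {
    "negligible": (0.0, 0.1),
    "weak":       (0.1, 0.3),
    "moderate":   (0.3, 0.6),
    "strong":     (0.6, 0.85),
    "dominant":   (0.85, 1.0),
}

def to_numeric(term: str) -> float:
    """Translate a qualitative judgement into the midpoint of its agreed range."""
    low, high = MOTIVATION_SCALE[term]
    return (low + high) / 2

def to_term(value: float) -> str:
    """Translate a model output back into the shared qualitative vocabulary."""
    for term, (low, high) in MOTIVATION_SCALE.items():
        if low <= value <= high:
            return term
    raise ValueError(f"value out of range: {value}")

print(to_numeric("strong"))  # 0.725
print(to_term(0.45))         # 'moderate'
```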
Only by harnessing both ‘soft’ and ‘hard’ sciences to code our humanity can we give the national security community the tools needed for Australia to become an AI superpower.