At the turn of the millennium, the smartphone didn’t exist, Google wasn’t yet the dominant choice for navigating the internet, and the majority of investment in cutting-edge computing capability was driven by governments.
Since 2000, the speed at which technology is developed and deployed has accelerated rapidly. In the same period, technology crossed a threshold to become a major element in the lives of an increasing number of consumers across the globe.
In launching the iPhone in 2007, Steve Jobs was embarking on a project that altered the way that many of us interact with each other and the world.
Unfortunately, the acceleration in the development and use of technology has been matched by changes in the capability of those who would do us harm. State and non-state actors alike are actively leveraging technology to communicate, undertake information operations and conduct cyber attacks: ISIS’s use of Twitter and Twitter bots to organise and broadly market its message is a case in point.
In light of technology’s ever-increasing pace of change, it’s an important time for technologists, strategists, policy professionals and economists to collaboratively look to the future for the next technical trends and their security implications. Last December, the Australian Strategic Policy Institute and the SAP Institute for Digital Government hosted a roundtable to consider what the next 15 years of technology development might bring. Four key themes emerged during the roundtable: the growing use of drones; the changing nature of critical infrastructure; quantum computing and the rise of Artificial Intelligence (AI); and the changing nature of the internet.
Current experimentation in the use of drones has moved from single-drone activity to models that support eusocial behaviours. Eusocial behaviours are best represented by ants, bees and other forms of insect life that are capable of supporting complex social behaviour and acting in a highly coordinated manner despite the limited intelligence of individual units within the colony. Those patterns of behaviour have been modelled for the purposes of allowing drones to perform complex tasks in a coordinated fashion.
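The idea that simple local rules, applied by every unit, can produce coordinated group behaviour without a central controller can be illustrated with a minimal sketch. The code below is an illustrative toy (the function names and the single "cohesion" rule are assumptions for the example, loosely in the spirit of boids-style swarm models, not any real drone control system): each unit nudges itself toward the centre of mass of its peers, and the group converges as a whole.

```python
def cohesion_step(positions, index, weight=0.05):
    """Move one unit slightly toward the centre of mass of the others.

    A toy local rule: no unit knows the global plan, yet applying this
    rule everywhere yields coordinated, colony-like behaviour.
    """
    x, y = positions[index]
    others = [p for i, p in enumerate(positions) if i != index]
    cx = sum(p[0] for p in others) / len(others)
    cy = sum(p[1] for p in others) / len(others)
    return (x + weight * (cx - x), y + weight * (cy - y))

def simulate(positions, steps=200):
    """Apply the cohesion rule simultaneously to every unit, repeatedly."""
    for _ in range(steps):
        positions = [cohesion_step(positions, i)
                     for i in range(len(positions))]
    return positions

# Four units starting at the corners of a square drift toward a
# common point, with no unit ever being told where that point is.
swarm = simulate([(0, 0), (10, 0), (0, 10), (10, 10)])
```

Real swarm models layer further local rules (separation, alignment, obstacle avoidance) on top of cohesion, but the principle is the same: complexity at the colony level from simplicity at the unit level.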
Perhaps in the next 15 years we’ll see a shift to truly independent action by drones, leveraging elements of AI so that missions can still be completed when circumstances change.
The growing capabilities of drones will provide significant benefits for emergency response and combat scenarios, as well as a reduction in risk to responders. However, we may also see the hijacking of drones to undertake terrorist actions.
Critical national infrastructure is undergoing a change in definition and distribution. In the future some forms of critical infrastructure will be physically distributed but digitally concentrated.
This physical distribution will, like the original ARPANET’s, reduce the value of a physical attack on any single point of failure; however, the resilience that implies is highly contestable. Commercial update servers and peer-to-peer relationships between devices will allow for the rapid dissemination of viruses and malware that may cripple such infrastructure.
Commercial providers of devices and systems will need to significantly improve cyber security in light of the likely growth in our dependence on digitally distributed systems.
Our understanding of how we’ll leverage AI, and of how it will affect our society, is limited. The development of AI will have impacts on national security. And there’s a real possibility that AI will disrupt employment and social cohesion.
The largest IT companies on the planet—including Facebook, Google, IBM and Microsoft—are best placed to develop AI. The benchmark for AI is technology that emulates the human condition, rather than technology that improves it. As such, there’s a real possibility that someone will produce a machine intelligent enough to achieve a single goal, without any ability to understand the broader impact of its actions.
Contemporary approaches to software development can’t meet the needs of the exponentially growing computing power that will support quantum and AI-based systems. To support those new capabilities, we may see a move to intelligent systems that are decoupled from the underlying infrastructure.
In the future, programs may become independent consumers of resources, intelligently negotiating with other programs for resources across all domains, including but not limited to mobile devices, traditional server farms and mainframes. Those programs would follow biological models of behaviour: being born, reproducing as required, and dying when they’re no longer required.
But that kind of model isn’t without risks. A program could potentially become a pandemic in the digital world, propagating like bacteria and consuming all available resources. Robust protocols governing system behaviour, and investment in policing programs to ensure fair use, will be required to manage the risks arising from those programs.
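One simple form such a policing protocol could take is a per-program quota on a shared resource pool. The sketch below is purely illustrative (the `ResourcePool` class, its quota rule and all names are assumptions made for this example, not a real scheduler): a runaway program that requests everything is granted only its quota, so other programs can still be "born" and do their work, and a program that "dies" returns what it held.

```python
class ResourcePool:
    """A shared pool with a per-program quota: a toy policing rule
    that stops any one program monopolising all available resources."""

    def __init__(self, capacity, quota_fraction=0.25):
        self.capacity = capacity
        self.available = capacity
        self.quota = int(capacity * quota_fraction)
        self.held = {}  # program name -> units currently held

    def request(self, program, units):
        """Grant only what fits both the pool and the program's quota."""
        held = self.held.get(program, 0)
        grant = max(0, min(units, self.available, self.quota - held))
        self.available -= grant
        self.held[program] = held + grant
        return grant

    def release(self, program):
        """Called when a program dies: everything it held is returned."""
        self.available += self.held.pop(program, 0)

pool = ResourcePool(capacity=100)
# A runaway program asks for far more than exists, but the quota
# caps it at 25 units, leaving 75 for everything else.
granted = pool.request("runaway", 1_000)
```

A fixed quota is the crudest possible policy; the negotiation described above would in practice involve pricing, priorities and reputation, but the core idea is the same: behaviour constraints enforced by the environment, not trusted to the programs themselves.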
Divining future developments in technology—and their national security implications—is no easy task. The last 15 years of technological advancement are a mere sample of the potentially staggering change that will confront national security agencies as we approach 2030. How well governments respond to this change will depend on agility in policy development, technology adoption and programme implementation. The question remains: how do governments innovate to protect their citizens in a constantly changing national security landscape?