
De-risking authoritarian AI

Posted on July 27, 2023 @ 06:00

You may not be interested in artificial intelligence, but it is interested in you. Today, you might have used AI to find the quickest route to a meeting through peak-hour traffic. While you used an AI-enabled search to find a decent podcast, driver-assist AI might have applied the brakes just before you rear-ended the car in front, which braked suddenly for the speed camera attached to AI-controlled traffic lights. In the aftermath, AI might have helped diagnose your detached retina and recalculated your safe-driving no-claim bonus.

So, what’s the problem?

The problem—outlined in my new report [1] released by ASPI today—is that AI-enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose, they calculate our access to financial benefits as well as our transgressions, and now they can generate complex text, images and code just as a human can, but much faster.

It’s unsurprising that moves are afoot across democracies to regulate AI’s impact on our individual rights and economic security, notably in the European Union.

But if we’re wary about AI, we should be even more circumspect about AI-enabled products and services from authoritarian countries that share neither our values nor our interests. The People’s Republic of China is an authoritarian power hostile to the rules-based international order, and it routinely uses technology to strengthen its own political and social stability at the expense of individual rights. In contrast to other authoritarian countries, such as Russia, Iran and North Korea, China is a technology superpower with global capacity and ambitions, and a major exporter of effective, cost-competitive AI-enabled technology.

In a technology-enabled world, opportunities for remote, large-scale foreign interference, espionage and sabotage—via the internet and software updates—exist at a ‘scale and reach that is unprecedented’ [2]. AI-enabled industrial and consumer goods and services are embedded in our homes, workplaces and essential services. More and more, we trust them to operate as advertised, to always be there for us and to keep our secrets.

Notwithstanding the honourable intentions of individual vendors of Chinese AI-enabled products and services, those vendors are subject to direction from PRC security and intelligence agencies. So democracies need to ask themselves, against the background of growing strategic competition with China, how much risk they are prepared to bear. Three kinds of Chinese AI-enabled technology require scrutiny:

  • products and services (often physical infrastructure), where PRC ownership exposes democracies to risks of espionage (notably surveillance and data theft) and sabotage (especially disruption and denial of products and services)
  • technology that facilitates foreign interference (malign covert influence on behalf of a foreign power), the most pervasive example being TikTok
  • ‘large language model AI’ and other emerging generative AI systems—a future threat that we need to start thinking about now.

The report focuses on the first category and looks at TikTok through the prism of the espionage and sabotage risks posed by such apps.

The underlying dynamic with Chinese AI-enabled products and services is the same as that which prompted concern over Chinese 5G vendors: the PRC government has the capability to compel its companies to follow its directions, it has the opportunity afforded by the presence of Chinese AI-enabled products and services in our digital ecosystems, and it has demonstrated malign intent towards the democracies.

But this is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks. Telecommunications networks are the nervous systems that run down the spine of our digital ecosystems; they’re strategic points of vulnerability for all digital technologies. Protecting them from foreign intelligence agencies is a no-brainer and worth the economic and political costs. And those costs are bounded because 5G is a small group of easily identifiable technologies.

In contrast, AI is a constellation of technologies and techniques embedded in thousands of applications, products and services. So the task is to identify where on the spectrum between national-security threat and moral panic each of these products sits, and then pick the fights that really matter.

A prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive. Many businesses and researchers in democracies want to continue collaborating on Chinese AI-enabled products because it helps them to innovate, build better products, offer cheaper services and publish scientific breakthroughs. The policy goal is to take prudent steps to protect our digital ecosystems, not to economically decouple from China.

What’s needed is a three-step framework to identify, triage and manage the riskiest products and services. The intent is similar to that proposed in the recently introduced draft US RESTRICT Act, which seeks to identify and mitigate foreign threats to information and communications technology products and services—although the focus here is on teasing out the most serious threats.

Step 1: Audit. Identify the AI systems whose purpose and functionality concern us most. What’s the potential scale of our exposure to this product or service? How critical is this system to essential services, public health and safety, democratic processes, open markets, freedom of speech and the rule of law? What are the levels of dependency and redundancy should it be compromised or unavailable?
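To make the audit concrete, here is a minimal sketch of how those questions could be turned into a triage rubric that decides which systems go forward to Step 2. It is purely illustrative: the factor names, weights and threshold below are hypothetical placeholders, not criteria drawn from the report.

    from dataclasses import dataclass

    @dataclass
    class AuditScore:
        """Hypothetical triage rubric for one AI-enabled product or service.

        Each factor is scored 0 (negligible) to 5 (severe); the names and
        weights are illustrative only, not taken from the ASPI report.
        """
        exposure: int     # potential scale of exposure to the product or service
        criticality: int  # importance to essential services, safety and democratic processes
        dependency: int   # how hard the system is to substitute if compromised or unavailable

        def total(self) -> float:
            # Weight criticality most heavily: a compromised critical system
            # does the most damage regardless of raw market exposure.
            return 0.3 * self.exposure + 0.5 * self.criticality + 0.2 * self.dependency

    def needs_red_team(score: AuditScore, threshold: float = 3.0) -> bool:
        """Flag a system for Step 2 red-teaming when its weighted score is high."""
        return score.total() >= threshold

    # Example: a widely deployed product embedded in critical port infrastructure.
    cranes = AuditScore(exposure=4, criticality=5, dependency=4)
    print(needs_red_team(cranes))  # True -> escalate to Step 2

One advantage of scoring over simple labelling is that it forces auditors to record why a system was or wasn’t escalated, which matters once the inventory runs to thousands of products and services.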

Step 2: Red team. Anyone can identify the risk of embedding many PRC-made technologies into sensitive locations, such as government infrastructure, but in other cases the level of risk will be unclear. For those instances, you need to set a thief to catch a thief. What could a team of specialists—people with experience in intelligence operations, cybersecurity and perhaps military planning, working with relevant technical subject-matter experts—do if they had privileged access to a candidate system identified in Step 1? This is the real-world test, because all intelligence operations cost time and money, and some points of presence in a target ecosystem offer more scalable and effective opportunities than others. PRC-made cameras and drones in sensitive locations are a legitimate concern, but crippling supply chains by accessing ship-to-shore cranes would be devastating.

We know that TikTok data can be accessed by PRC agencies and reportedly also reveals a user’s location, so it’s obvious that military and government officials shouldn’t use the app. Journalists should think carefully about it, too. Beyond that, the merits of a general ban on technical security grounds are a bit murky. Can our red team use the app to jump onto connected mobiles and ICT systems to plant spying malware? What system mitigations could stop them getting access to data on connected systems? If the team revealed serious vulnerabilities that can’t be mitigated, a general ban might be appropriate.

Step 3: Regulate. Decide what to do about a system identified as ‘high risk’. Treatment measures might include prohibiting Chinese AI-enabled technology in some parts of the network, a ban on government procurement or use, or a general prohibition. Short of that, governments could insist on measures to mitigate the identified risk or dilute the risk through redundancy arrangements. And, in many cases, public education efforts along the lines of the new UK National Protective Security Authority may be an appropriate alternative to regulation.

Democracies need to think harder about Chinese AI-enabled technology in our digital ecosystems. But we shouldn’t overreact: our approach to regulation should be anxious but selective.




URLs in this post:

[1] new report: https://www.aspistrategist.ru/report/de-risking-authoritarian-ai

[2] at a ‘scale and reach that is unprecedented’: https://www.arnnet.com.au/article/576498/telstra-cio-cyber-attacks-foreseeable-events/
