Building trust in artificial intelligence: lessons from the EU AI Act
Posted by Roberto Viola on January 15, 2024
Artificial intelligence will radically transform our societies and economies in the next few years. The world’s democracies, together, have a duty to minimise the risks this new technology poses through smart regulation, without standing in the way of the many benefits it will bring to people’s lives.
There is strong momentum for AI regulation in Australia, following its adoption of a government AI strategy and a national set of AI ethics principles. Just as Australia begins to define its regulatory approach, the European Union has reached political agreement on the EU AI Act, the world’s first comprehensive legal framework for AI. That gives Australia an opportunity to learn from the EU’s experience.
The EU embraces the idea that AI will bring many positive changes. It will improve the quality and cost-efficiency of our healthcare sector, allowing treatments that are tailored to individual needs. It can make our roads safer and prevent millions of casualties from traffic accidents. It can significantly improve the quality of our harvests while reducing the use of pesticides and fertiliser, helping to feed the world. Last but not least, it can help fight climate change by reducing waste and making our energy systems more sustainable.
But the use of AI isn’t without risks, including risks arising from the opacity and complexity of AI systems and from intentional manipulation. Bad actors are eager to get their hands on AI tools to launch sophisticated disinformation campaigns, unleash cyberattacks and step up their fraudulent activities.
Surveys, including some conducted in Australia [1], show that many people don’t fully trust AI. How do we ensure that the AI systems entering our markets are trustworthy?
The EU doesn’t believe that it can leave responsible AI wholly to the market. It also rejects the other extreme, the autocratic approach in countries like China of banning AI models that don’t endorse government policies. The EU’s answer is to protect users and bring trust and predictability to the market through targeted product-safety regulation, focusing primarily on the high-risk applications of AI technologies and powerful general-purpose AI models.
The EU’s experience [2] with its legislative process offers five key lessons for approaching AI governance.
First, any regulatory measures must focus on ensuring that AI systems are safe and human-centric before they can be used. To generate the necessary trust, AI systems must be checked against core principles such as non-discrimination, transparency and explainability. AI developers must train their systems on adequate datasets, maintain risk-management systems and provide technical measures for human oversight. Automated decisions must be explainable; arbitrary ‘black box’ decisions are unacceptable. Deployers must also be transparent and inform users when an AI system generates content such as deepfakes.
Second, rules should focus not on the AI technology itself—which develops at lightning speed—but on governing its use. Focusing on use cases—for example, in health care, finance, recruitment or the justice system—ensures that regulations are future-proof and don’t lag behind rapidly evolving AI technologies.
The third lesson is to follow a risk-based approach. Think of AI regulation as a pyramid, with different levels of risk. In most cases, the use of AI poses no or only minimal risks—for example, when receiving music recommendations or relying on navigation apps. For such uses, no or soft rules should apply.
However, in a limited number of situations where AI is used, decisions can have material effects on people’s lives—for example, when AI makes recruitment decisions or decides on mortgage qualifications. In these cases, stricter requirements should apply, and AI systems must be checked for safety before they can be used, as well as monitored after they’re deployed. Some uses that pose unacceptable risks to democratic values, such as social scoring systems, should be banned completely.
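To make the pyramid concrete, the sketch below encodes these tiers and the example use cases from this article as a small Python program. The tier names, the use-case mapping and the obligations_for helper are illustrative assumptions of this sketch, drawn from the examples above rather than from the act’s actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "pre-market checks plus post-deployment monitoring"
    LIMITED = "transparency duties, such as labelling AI-generated content"
    MINIMAL = "no or soft rules"

# Illustrative mapping from use case to tier, based on the examples
# in this article; the act itself defines these categories in its annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_decisions": RiskTier.HIGH,
    "mortgage_qualification": RiskTier.HIGH,
    "deepfake_generation": RiskTier.LIMITED,
    "music_recommendations": RiskTier.MINIMAL,
    "navigation_apps": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    # Unlisted use cases default to the minimal tier purely to keep the
    # sketch total; a real assessment would be case by case.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations_for(case))
```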
Specific attention should be given to general-purpose AI models, such as GPT-4, Claude and Gemini. Given their potential for downstream use for a wide variety of tasks, these models should be subject to transparency requirements. Under the EU AI Act, general-purpose AI models will be subject to a tiered approach. All models will be required to provide technical documentation and information on the data used to train them. The most advanced models, which can pose systemic risks to society, will be subject to stricter requirements, including model evaluations (‘red-teaming’), risk identification and mitigation measures, adverse event reporting and adequate cybersecurity protection.
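The tiered obligations for general-purpose models can be expressed the same way. In the hypothetical sketch below, the baseline and systemic-risk duty lists paraphrase the obligations named in this article; the class and field names are this sketch’s own shorthand, not the act’s legal terms.

```python
from dataclasses import dataclass

# Duties applying to all general-purpose models, plus the extra duties
# for models posing systemic risk, paraphrased from the text above.
BASELINE_DUTIES = ["technical documentation", "training-data information"]
SYSTEMIC_RISK_DUTIES = [
    "model evaluations (red-teaming)",
    "risk identification and mitigation",
    "adverse event reporting",
    "adequate cybersecurity protection",
]

@dataclass
class GeneralPurposeModel:
    name: str
    poses_systemic_risk: bool = False

    def obligations(self) -> list[str]:
        duties = list(BASELINE_DUTIES)
        if self.poses_systemic_risk:
            duties += SYSTEMIC_RISK_DUTIES
        return duties

for model in (GeneralPurposeModel("niche-model"),
              GeneralPurposeModel("frontier-model", poses_systemic_risk=True)):
    print(model.name, "->", model.obligations())
```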
Fourth, enforcement should be effective but not burdensome. The act aligns with the EU’s longstanding product-safety approach: certain risky systems need to be assessed before being put on the market, to protect the public. The act classifies AI systems as high-risk if they are used in products covered by existing product-safety legislation, or in certain critical areas, including employment and education. Providers of these systems must ensure that their systems and governance practices conform to regulatory requirements. Designated authorities will oversee providers’ conformity assessments and take action against non-compliant providers. For the most advanced general-purpose AI models, the new regulation establishes an EU AI Office to ensure efficient, centralised oversight of the models posing systemic risks to society.
Lastly, developers of AI systems should be held to account when those systems cause harm. The EU is currently updating its liability rules to make it easier for those who have suffered damage from AI systems to bring claims and obtain relief, which will surely prompt developers to exercise even greater due diligence before putting AI on the market.
The EU believes an approach built around these five key tenets is balanced and effective. However, while the EU may be the first democracy to establish a comprehensive framework, we need a global approach to be truly effective. For this reason, the EU is also active in international forums, contributing to the progress made, for example, in the G7 and the OECD. To ensure effective compliance, though, we need binding rules. Working closely together as like-minded countries will enable us to shape an international approach to AI that is consistent with—and based on—our shared democratic values.
The EU supports Australia’s promising efforts to put in place a robust regulatory framework. Together, Australia and the EU can promote a global standard for AI governance—a standard that boosts innovation, builds public trust and safeguards fundamental rights.
URLs in this post:
[1] in Australia: https://www.oaic.gov.au/__data/assets/pdf_file/0015/2373/australian-community-attitudes-to-privacy-survey-2020.pdf
[2] EU’s experience: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence