{"id":84574,"date":"2024-01-15T14:30:06","date_gmt":"2024-01-15T03:30:06","guid":{"rendered":"https:\/\/www.aspistrategist.ru\/?p=84574"},"modified":"2024-01-16T04:52:15","modified_gmt":"2024-01-15T17:52:15","slug":"building-trust-in-artificial-intelligence-lessons-from-the-eu-ai-act","status":"publish","type":"post","link":"https:\/\/www.aspistrategist.ru\/building-trust-in-artificial-intelligence-lessons-from-the-eu-ai-act\/","title":{"rendered":"Building trust in artificial intelligence: lessons from the EU AI Act"},"content":{"rendered":"

Artificial intelligence will radically transform our societies and economies in the next few years. The world’s democracies, together, have a duty to minimise the risks this new technology poses through smart regulation, without standing in the way of the many benefits it will bring to people’s lives.

There is strong momentum for AI regulation in Australia, following its adoption of a government strategy and a national set of AI ethics principles. Just as Australia begins to define its regulatory approach, the European Union has reached political agreement on the EU AI Act, the world’s first and most comprehensive legal framework on AI. That gives Australia an opportunity to learn from the EU’s experience.

The EU embraces the idea that AI will bring many positive changes. It will improve the quality and cost-efficiency of our healthcare sector, allowing treatments that are tailored to individual needs. It can make our roads safer and prevent millions of casualties from traffic accidents. It can significantly improve the quality of our harvests, reducing the use of pesticides and fertiliser, and so help feed the world. Last but not least, it can help fight climate change, reducing waste and making our energy systems more sustainable.

But the use of AI isn’t without risks, including risks arising from the opacity and complexity of AI systems and from intentional manipulation. Bad actors are eager to get their hands on AI tools to launch sophisticated disinformation campaigns, unleash cyberattacks and step up their fraudulent activities.

Surveys, including some conducted in Australia, show that many people don’t fully trust AI. How do we ensure that the AI systems entering our markets are trustworthy?

The EU doesn’t believe that it can leave responsible AI wholly to the market. It also rejects the other extreme, the autocratic approach taken in countries like China of banning AI models that don’t endorse government policies. The EU’s answer is to protect users and bring trust and predictability to the market through targeted product-safety regulation, focusing primarily on the high-risk applications of AI technologies and on powerful general-purpose AI models.

The EU’s experience with its legislative process offers five key lessons for approaching AI governance.

First, any regulatory measures must focus on ensuring that AI systems are safe and human-centric before they can be used. To generate the necessary trust, AI systems must be checked against core principles such as non-discrimination, transparency and explainability. AI developers must train their systems on adequate datasets, maintain risk-management systems and provide technical measures for human oversight. Automated decisions must be explainable; arbitrary ‘black box’ decisions are unacceptable. Deployers must also be transparent and inform users when an AI system generates content such as deepfakes.

Second, rules should focus not on the AI technology itself, which develops at lightning speed, but on governing its use. Focusing on use cases (for example, in health care, finance, recruitment or the justice system) ensures that regulations are future-proof and don’t lag behind rapidly evolving AI technologies.

The third lesson is to follow a risk-based approach. Think of AI regulation as a pyramid with different levels of risk. In most cases, the use of AI poses no or only minimal risks, for example when receiving music recommendations or relying on navigation apps. For such uses, no rules, or only soft rules, should apply.

However, in a limited number of situations where AI is used, decisions can have material effects on people’s lives, for example when AI makes recruitment decisions or decides whether someone qualifies for a mortgage. In these cases, stricter requirements should apply, and AI systems must be checked for safety before they can be used, as well as monitored after they’re deployed. Some uses that pose unacceptable risks to democratic values, such as social scoring systems, should be banned completely.

Specific attention should be given to general-purpose AI models, such as GPT-4, Claude and Gemini. Given their potential for downstream use across a wide variety of tasks, these models should be subject to transparency requirements. Under the EU AI Act, general-purpose AI models will be subject to a tiered approach. All models will be required to provide technical documentation and information on the data used to train them. The most advanced models, which can pose systemic risks to society, will be subject to stricter requirements, including model evaluations (‘red-teaming’), risk identification and mitigation measures, adverse-event reporting and adequate cybersecurity protection.

Fourth, enforcement should be effective but not burdensome. The act aligns with the EU’s longstanding product-safety approach: certain risky systems need to be assessed before being put on the market, to protect the public. The act classifies AI systems into the high-risk category if they are used in products covered by existing product-safety legislation, or when they are used in certain critical areas, including employment and education. Providers of these systems must ensure that their systems and governance practices conform to regulatory requirements. Designated authorities will oversee providers’ conformity assessments and take action against non-compliant providers. For the most advanced general-purpose AI models, the new regulation establishes an EU AI Office to ensure efficient, centralised oversight of the models posing systemic risks to society.

Lastly, developers of AI systems should be held to account when those systems cause harm. The EU is currently updating its liability rules to make it easier for those who have suffered damage from AI systems to bring claims and obtain relief, which will surely prompt developers to exercise even greater due diligence before putting AI on the market.

The EU believes an approach built around these five key tenets is balanced and effective. However, while the EU may be the first democracy to establish a comprehensive framework, we need a global approach to be truly effective. For this reason, the EU is also active in international forums, contributing to the progress made, for example, in the G7 and the OECD. To ensure effective compliance, though, we need binding rules. Working closely together as like-minded countries will enable us to shape an international approach to AI that is consistent with, and based on, our shared democratic values.

The EU supports Australia’s promising efforts to put in place a robust regulatory framework. Together, Australia and the EU can promote a global standard for AI governance: a standard that boosts innovation, builds public trust and safeguards fundamental rights.
