{"id":86045,"date":"2024-03-21T07:52:07","date_gmt":"2024-03-20T20:52:07","guid":{"rendered":"https:\/\/www.aspistrategist.ru\/?p=86045"},"modified":"2024-03-21T14:37:49","modified_gmt":"2024-03-21T03:37:49","slug":"how-to-think-about-ai-policy","status":"publish","type":"post","link":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/","title":{"rendered":"How to think about AI policy"},"content":{"rendered":"

In Poznan, about 300 kilometers west of Warsaw, a team of tech researchers, engineers, and child caregivers is working on a small revolution. Their joint project, \u2018<\/span>Insension<\/span><\/a>\u2019, uses facial recognition powered by artificial intelligence to help children with profound intellectual and multiple disabilities interact with others and with their surroundings, becoming more connected with the world. It is a testament to the power of this quickly advancing technology.<\/span>\u00a0<\/span><\/p>\n

Thousands of kilometers away, in the streets of Beijing, AI-powered facial recognition is used by government officials to track citizens\u2019 daily movements and keep the entire population under close surveillance. It is the same technology, but the result is fundamentally different. These two examples encapsulate the broader AI challenge: the underlying technology is neither good nor bad in itself; everything depends on how it is used.<\/span>\u00a0<\/span><\/p>\n

AI\u2019s essentially dual nature informed how we chose to design the <\/span>European Artificial Intelligence Act<\/span><\/a>, a regulation focused on the uses of AI, rather than on the technology itself. Our approach boils down to a simple principle: the riskier the AI, the stronger the obligations for those who develop it.<\/span>\u00a0<\/span><\/p>\n

AI already enables numerous harmless functions that we perform every day\u2014from unlocking our phones to recommending songs based on our preferences. We simply do not need to regulate all these uses. But AI also increasingly plays a role at decisive moments in one\u2019s life. When a bank screens someone to determine if she qualifies for a mortgage, it isn\u2019t just about a loan; it is about putting a roof over her head and allowing her to build wealth and pursue financial security. The same is true when employers use <\/span>emotion-recognition software<\/span><\/a> as an add-on to their recruitment process, or when AI is used to <\/span>detect illnesses<\/span><\/a> in brain images. The latter is not just a routine medical check; it is literally a matter of life or death.<\/span>\u00a0<\/span><\/p>\n

In these kinds of cases, the new regulation imposes significant obligations on AI developers. They must comply with a range of requirements\u2014from running risk assessments to ensuring technical robustness, human oversight, and cybersecurity\u2014before releasing their systems on the market. Moreover, the AI Act bans all uses that clearly go against our most fundamental values. For example, AI may not be used for social scoring or subliminal techniques to manipulate vulnerable populations, such as children.<\/span>\u00a0<\/span><\/p>\n

Though some will argue that this high-level control deters innovation, in Europe we see it differently. For starters, rules designed to stand the test of time provide the certainty and confidence that tech innovators need to develop new products. But more to the point, AI will not reach its immense positive potential unless end-users trust it. Here, even more than in many other fields, trust serves as an engine of innovation. As regulators, we can create the conditions for the technology to flourish by upholding our duty to ensure safety and public trust.<\/span>\u00a0<\/span><\/p>\n

Far from challenging Europe\u2019s risk-based approach, the recent boom in general-purpose AI (GPAI) models like ChatGPT has only made it more relevant. While these tools help scammers around the world produce alarmingly credible phishing emails, the same models could also be used to detect AI-generated content. In the space of just a few months, GPAI models have taken the technology to a new level, in terms of both the opportunities it offers and the risks it introduces.<\/span>\u00a0<\/span><\/p>\n

Of course, one of the most daunting risks is that we may not always be able to distinguish what is fake from what is real. GPAI-generated deepfakes are already causing scandals and hitting the headlines. In late January, fake pornographic images of the global pop icon Taylor Swift reached <\/span>47 million<\/span><\/a> views on X (formerly Twitter) before the platform finally suspended the user who had shared them.<\/span>\u00a0<\/span><\/p>\n

It is not hard to imagine the damage that such content can do to an individual\u2019s mental health. But if applied on a broader scale, such as in the context of an election, it could threaten entire populations. The AI Act offers a straightforward response to this problem: AI-generated content will have to be labeled as such, so that everyone knows immediately that it is not real. That means providers will have to design their systems so that synthetic audio, video, text, and images are marked in a machine-readable format and are detectable as artificially generated or manipulated.<\/span>\u00a0<\/span><\/p>\n

Companies will be given a chance to bring their systems into compliance with the regulation. If they fail to comply, they will be fined. Fines will <\/span>range<\/span><\/a> from \u20ac35 million ($38 million) or 7% of global annual turnover (whichever is higher) for violations involving banned AI applications, to \u20ac15 million or 3% for violations of other obligations, and \u20ac7.5 million or 1.5% for supplying incorrect information. Nor are fines the only sanction: noncompliant AI systems will also be barred from the EU market.<\/span>\u00a0<\/span><\/p>\n

Europe is the first mover on AI regulation, but our efforts are already helping to mobilize responses elsewhere. As many other countries start to embrace similar frameworks\u2014including the United States, which is <\/span>collaborating<\/span><\/a> with Europe on \u2018a risk-based approach to AI to advance trustworthy and responsible AI technologies\u2019\u2014we feel confident that our overall approach is the right one. Just a few months ago, it inspired G7 leaders to agree on a first-of-its-kind <\/span>Code of Conduct on Artificial Intelligence<\/span><\/a>. These kinds of international guardrails will help keep users safe until legal obligations start kicking in.<\/span>\u00a0<\/span><\/p>\n

AI is neither good nor bad, but it will usher in a global era of complexity and ambiguity. In Europe, we have designed a regulation that reflects this. Probably more than any other piece of EU legislation, this one required a careful balancing act\u2014between power and responsibility, between innovation and trust, and between freedom and safety.<\/span>\u00a0<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"

In Poznan, 325 kilometers east of Warsaw, a team of tech researchers, engineers, and child caregivers are working on a small revolution. Their joint project, \u2018Insension\u2019, uses facial recognition powered by artificial intelligence to help …<\/p>\n","protected":false},"author":1327,"featured_media":86046,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false,"footnotes":""},"categories":[1],"tags":[1291,1025],"class_list":["post-86045","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general","tag-artificial-intelligence","tag-european-union"],"acf":[],"yoast_head":"\nHow to think about AI policy | The Strategist<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to think about AI policy | The Strategist\" \/>\n<meta property=\"og:description\" content=\"In Poznan, 325 kilometers east of Warsaw, a team of tech researchers, engineers, and child caregivers are working on a small revolution. 
Their joint project, \u2018Insension\u2019, uses facial recognition powered by artificial intelligence to help ...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/\" \/>\n<meta property=\"og:site_name\" content=\"The Strategist\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/ASPI.org\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-20T20:52:07+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-21T03:37:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2024\/03\/GettyImages-1238094209-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1707\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Margrethe Vestager\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ASPI_org\" \/>\n<meta name=\"twitter:site\" content=\"@ASPI_org\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Margrethe Vestager\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.aspistrategist.ru\/#website\",\"url\":\"https:\/\/www.aspistrategist.ru\/\",\"name\":\"The Strategist\",\"description\":\"ASPI's analysis and commentary site\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.aspistrategist.ru\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-AU\"},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#primaryimage\",\"url\":\"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2024\/03\/GettyImages-1238094209-scaled.jpg\",\"contentUrl\":\"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2024\/03\/GettyImages-1238094209-scaled.jpg\",\"width\":2560,\"height\":1707,\"caption\":\"31 January 2022, China, Peking: The Olympic rings on a flag can be seen behind the security cameras. The Beijing Winter Olympics will take place from 04-20.02.2022 under strict Corona conditions. 
Photo: Peter Kneffel\/dpa (Photo by Peter Kneffel\/picture alliance via Getty Images)\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/\",\"url\":\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/\",\"name\":\"How to think about AI policy | The Strategist\",\"isPartOf\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#primaryimage\"},\"datePublished\":\"2024-03-20T20:52:07+00:00\",\"dateModified\":\"2024-03-21T03:37:49+00:00\",\"author\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/6d45b05ed2e5adea354e01a918f9ed76\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#breadcrumb\"},\"inLanguage\":\"en-AU\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.aspistrategist.ru\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How to think about AI policy\"}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/6d45b05ed2e5adea354e01a918f9ed76\",\"name\":\"Margrethe Vestager\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1529f2a3ab07dfa8d718d83d98a4e85c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1529f2a3ab07dfa8d718d83d98a4e85c?s=96&d=mm&r=g\",\"caption\":\"Margrethe Vestager\"},\"url\":\"https:\/\/www.aspistrategist.ru\/author\/margrethe-vestager\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"How to think about AI policy | The Strategist","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/","og_locale":"en_US","og_type":"article","og_title":"How to think about AI policy | The Strategist","og_description":"In Poznan, 325 kilometers east of Warsaw, a team of tech researchers, engineers, and child caregivers are working on a small revolution. Their joint project, \u2018Insension\u2019, uses facial recognition powered by artificial intelligence to help ...","og_url":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/","og_site_name":"The Strategist","article_publisher":"https:\/\/www.facebook.com\/ASPI.org","article_published_time":"2024-03-20T20:52:07+00:00","article_modified_time":"2024-03-21T03:37:49+00:00","og_image":[{"width":2560,"height":1707,"url":"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2024\/03\/GettyImages-1238094209-scaled.jpg","type":"image\/jpeg"}],"author":"Margrethe Vestager","twitter_card":"summary_large_image","twitter_creator":"@ASPI_org","twitter_site":"@ASPI_org","twitter_misc":{"Written by":"Margrethe Vestager","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebSite","@id":"https:\/\/www.aspistrategist.ru\/#website","url":"https:\/\/www.aspistrategist.ru\/","name":"The Strategist","description":"ASPI's analysis and commentary site","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.aspistrategist.ru\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-AU"},{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#primaryimage","url":"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2024\/03\/GettyImages-1238094209-scaled.jpg","contentUrl":"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2024\/03\/GettyImages-1238094209-scaled.jpg","width":2560,"height":1707,"caption":"31 January 2022, China, Peking: The Olympic rings on a flag can be seen behind the security cameras. The Beijing Winter Olympics will take place from 04-20.02.2022 under strict Corona conditions. 
Photo: Peter Kneffel\/dpa (Photo by Peter Kneffel\/picture alliance via Getty Images)"},{"@type":"WebPage","@id":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/","url":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/","name":"How to think about AI policy | The Strategist","isPartOf":{"@id":"https:\/\/www.aspistrategist.ru\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#primaryimage"},"datePublished":"2024-03-20T20:52:07+00:00","dateModified":"2024-03-21T03:37:49+00:00","author":{"@id":"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/6d45b05ed2e5adea354e01a918f9ed76"},"breadcrumb":{"@id":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#breadcrumb"},"inLanguage":"en-AU","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.aspistrategist.ru\/how-to-think-about-ai-policy\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.aspistrategist.ru\/"},{"@type":"ListItem","position":2,"name":"How to think about AI policy"}]},{"@type":"Person","@id":"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/6d45b05ed2e5adea354e01a918f9ed76","name":"Margrethe Vestager","image":{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/1529f2a3ab07dfa8d718d83d98a4e85c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1529f2a3ab07dfa8d718d83d98a4e85c?s=96&d=mm&r=g","caption":"Margrethe 
Vestager"},"url":"https:\/\/www.aspistrategist.ru\/author\/margrethe-vestager\/"}]}},"_links":{"self":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts\/86045"}],"collection":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/users\/1327"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/comments?post=86045"}],"version-history":[{"count":5,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts\/86045\/revisions"}],"predecessor-version":[{"id":86064,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts\/86045\/revisions\/86064"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/media\/86046"}],"wp:attachment":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/media?parent=86045"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/categories?post=86045"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/tags?post=86045"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}