{"id":88375,"date":"2024-08-16T13:10:21","date_gmt":"2024-08-16T03:10:21","guid":{"rendered":"https:\/\/www.aspistrategist.ru\/?p=88375"},"modified":"2024-08-16T13:10:21","modified_gmt":"2024-08-16T03:10:21","slug":"ai-disinformation-lessons-from-the-uks-election","status":"publish","type":"post","link":"https:\/\/www.aspistrategist.ru\/ai-disinformation-lessons-from-the-uks-election\/","title":{"rendered":"AI disinformation: lessons from the UK’s election"},"content":{"rendered":"

The year of elections was also feared to be the year that deepfakes would be weaponised to manipulate election results or undermine trust in democracy.

The record-breaking 2024 figure of about 4 billion voters eligible to go to the polls across more than 60 countries coincided with the full-fledged arrival and widespread uptake of multimodal generative artificial intelligence (AI), which enables almost anyone to make fake images, videos and sound.

Have these fears been realised? Our centre has analysed the incidence of AI-generated disinformation around the UK election held on July 4 and found reasons for some reassurance, but also grounds for concern over long-term trends eroding democracy that these threats exacerbate.

In contrast to fears of a tsunami of AI fakes targeting political candidates, the UK saw only a handful of examples of such content going viral during the campaign period.

While there’s no evidence these examples swayed a significant number of votes, we did see spikes in online harassment against the people targeted by the fakes. We also observed confusion among audiences over whether the content was authentic.

These early signals point to longer-term trends that would damage the democratic system itself, such as online harassment creating a ‘chilling’ effect on the willingness of political candidates to participate in future elections, and an erosion of trust in the online information space as audiences become increasingly unsure which content is AI-generated and therefore which sources can be trusted.

Similar findings on the impact of generative AI misuse in 18 other elections since January 2023 are reported in a recent CETaS briefing paper.

There has of course been a sensible case for heightened vigilance this year. From India to the UK, and from France to the US, the outcomes of many of 2024’s elections have had, or will have, enormous geopolitical implications, thus giving malicious actors strong incentives to interfere.

The capability that generative AI gives users to create highly realistic content at scale using simple keyboard prompts has enhanced the disruptive powers of sophisticated state actors. But it has also dramatically lowered the barriers to access, such that even individual members of the public can pose risks to the integrity of democratic processes – including elections.

The latter threat has been underscored by comments from Australia’s Director-General of Security, Mike Burgess, last week, when he helped announce the raising of the country’s terrorism threat level. The basis for the increase, Burgess said, was in part that people with violent intent were ‘motivated by a diversity of grievances and personal narratives’ and were ‘interacting in ways we have not seen before’.

As a result, the risk of mis- and disinformation influencing election outcomes is much more serious.

Looking at the UK general election, however, generative AI turned out to play a lesser role than traditional automated threats. For instance, several investigations into election-related content on online platforms found hallmarks of bot accounts seeking to sow division over controversial campaign issues such as immigration.

Some had possible links to Russia and pushed pro-Kremlin narratives about the war in Ukraine. While these bot activities did include a few instances of AI-generated election material being circulated, the majority used a well-established tactic known as ‘astroturfing’, in which many automated accounts increase perceived popular support for a particular policy stance or political candidate by spamming thousands of fake comments on relevant social media posts.
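To make the tactic concrete, here is a minimal sketch of one way an analyst might surface astroturfing in a dump of platform comments: normalise the text, group near-identical messages, and flag any message pushed by an unusually large number of distinct accounts. The data layout, function names and threshold are illustrative assumptions, not a description of how the investigations cited above worked.

```python
from collections import defaultdict
import re

def normalise(text: str) -> str:
    # Lower-case, strip punctuation and collapse whitespace so that
    # trivially varied copies of the same comment group together.
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def flag_astroturf(comments, min_accounts=20):
    # comments: iterable of (account_id, comment_text) pairs.
    # min_accounts is an illustrative threshold, not an empirical one.
    accounts_by_text = defaultdict(set)
    for account_id, text in comments:
        accounts_by_text[normalise(text)].add(account_id)
    # Flag any normalised message posted by many distinct accounts.
    return {msg: ids for msg, ids in accounts_by_text.items()
            if len(ids) >= min_accounts}
```

Real platform detection pipelines also weigh account age, posting cadence and network structure, but even this crude grouping illustrates why astroturfing leaves a detectable statistical footprint.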

Alongside these bot incidents, the UK was targeted by a fake-news operation with strong connections to a Russian-affiliated disinformation network called Doppelganger. Known as ‘CopyCop’, the operation involved spreading fictitious articles about the war in Ukraine to confuse the UK public and reduce support for military aid. As part of CopyCop, real news stories were pasted into AI chatbots and rewritten to align them with the network’s strategic aims.

However, many of the rewritten articles had prompts left in, betraying obvious signs of AI editing, and therefore failed to attract much engagement. That said, some of these sources were picked up by Russian media influencers and spread across their channels to tens of thousands of users. Often, the real sources of the articles were concealed in a tactic called ‘information laundering’, in an effort to trick users into assuming the content originated from a credible news outlet.
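Prompt residue of this kind is cheap to screen for. Below is a minimal sketch of a first-pass filter that checks article text for telltale chatbot phrases before escalating to human review; the phrase list is an illustrative assumption, not one drawn from the CopyCop investigations themselves.

```python
# Telltale fragments of chatbot output or leftover prompts. This list is
# an illustrative assumption, not taken from the CopyCop investigations.
PROMPT_RESIDUE = (
    "as an ai language model",
    "rewrite this article",
    "here is the rewritten",
    "in a more conservative tone",
)

def residue_hits(article_text: str) -> list[str]:
    # Cheap first-pass filter: return any telltale phrases found, so
    # flagged articles can be escalated for human review.
    lowered = article_text.lower()
    return [phrase for phrase in PROMPT_RESIDUE if phrase in lowered]
```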

While these disinformation activities can be connected to hostile foreign states, most viral misleading AI content in the UK election came from members of the public. This content included deepfakes that implicated political candidates in controversial statements that they never made. Interestingly, many users behind the content claimed they were doing it for satirical or ‘trolling’ purposes. Others may have pushed the content to increase support for their political party or because they were disillusioned with conventional political campaigns. This range of motives across different users highlights the new sources of risk and the expanded threat landscape that stem from such wide access to generative AI systems.

Taken together, the most prominent disinformation problems during the UK election did not arise from novel AI technology, but from longstanding issues tied to social media platforms – including the role of influencer accounts and recommender algorithms.

As we look ahead to the US election in November, it is vital that these platforms co-ordinate with different sectors to invest in measures to protect users.

These measures include red-teaming exercises, clear labelling of AI-generated political adverts, and engagement with fact-checking organisations to detect malicious content before it goes viral.

And with Australia facing its own federal election in the next nine months, continued scrutiny of the risks and the malicious perpetrators – and of emerging measures to combat them – is also vital to the country’s interests.