Monday’s meeting between Donald Trump and Vladimir Putin in Helsinki caps a big few days on the topic of cyber election interference. On Friday, special counsel Robert Mueller—tasked with investigating Russian interference in the 2016 US election—indicted 12 Russian intelligence officers over the hacking of the Democratic National Committee and the Hillary Clinton campaign. The indictment, remarkable for publicly revealing the extent to which US intelligence services themselves have compromised Russian networks, is further confirmation of the US intelligence community’s unanimous assessment that the Kremlin interfered.
Trump doesn’t agree, expressing support for Putin’s denial of Russian involvement. With the 2018 US midterm elections approaching, the ‘warning lights are blinking red again’, and the world is watching to see how, or even whether, Washington will be willing and able to counter such activities. Tech companies are making significant preparations—even if they don’t always agree on what to do. But the White House has shown little urgency in taking concrete steps to protect the electoral process or even assist with private sector initiatives towards that end.
The 2016 US presidential election was a remarkable event in the history of US national security, both for the brazen nature of Russia’s interference (not to mention unresolved questions about the Trump campaign’s involvement) and for the unprecedented scale of the cyber tools deployed. Yes, foreign ‘influence operations’ are a long-established stratagem of intelligence services, but the leveraging of cyberspace, and social media in particular, to influence voter behaviour took most by surprise.
Our recent article in the journal Contemporary Politics develops an analytical framework for thinking about how cyber interference affects voting behaviour. We ask how cyber tools can shape individual decisions by making someone more or less likely to vote for a particular candidate, to vote at all, or to engage with political processes in other ways. We examine three cyber tools in particular.
Doxing involves obtaining confidential information and releasing it publicly, as detailed in Friday’s indictment. A notable example was the hacking of the email account of Clinton’s campaign chair, John Podesta, whose emails were subsequently published on WikiLeaks. The online avatar Guccifer 2.0, now revealed to be a creation of Russian intelligence, played a prominent role in disseminating the stolen information.
Fake news, now a familiar (though often misused) term, is false or misleading content usually spread via social media and fashioned to appear credible or authentic. For example, a US Justice Department indictment alleges that during the 2016 campaign, Russian operatives promoted (false) allegations of voter fraud by the Democratic Party, echoing Trump’s own message that the system was rigged.
Finally, trolling is the act of posting provocative and/or lurid content online to elicit emotional responses and widen existing social or political cleavages. Trolling behaviour isn’t just comments authored by automated bots or hired ‘troll farms’; it can also include paid advertisements on divisive social and political issues such as race, religion and gun control.
These tools were deployed with sophistication during the 2016 campaign. The hacked Podesta emails were published within minutes of the infamous Trump Access Hollywood tape emerging, distracting from that scandal. Paid advertisements targeted highly specific demographics such as Beyoncé fans or secessionist Texans. Facebook’s general counsel described Russia-linked advertisements as ‘an insidious attempt to drive people apart’. Our research unpacks how these activities might have affected the political behaviour of American voters and potentially swung a historically close election.
It’s impossible to know for sure whether Russia’s efforts were decisive, though James Clapper, the former director of national intelligence, believes they were. But there’s enough evidence to understand how these tools operated and identify local conditions that may have exacerbated or blunted their impact.
In particular, it’s instructive to compare the 2016 US election with the 2017 French presidential election, which saw similar tactics deployed. Emails from the campaign of frontrunner (and eventual winner) Emmanuel Macron were leaked just before the final-round vote; fake news stories included a report that the Macron campaign was being funded by Saudi Arabia and supported by al-Qaeda; and a coordinated effort, including trolling activity, was directed towards supporting Russia’s preferred candidate, far-right nationalist Marine Le Pen.
Analysing and comparing the two elections suggests that local factors matter in shaping the effectiveness of cyber voter interference.
First, an election season spanning months in the US (compared to a narrow five-week window in France, including a media blackout just prior to voting) gave ample time for hackers to infiltrate political campaigns. It also allowed the fatiguing psychological impacts of fake news and trolling to accumulate.
Second, the incumbent French government, the Macron campaign and tech companies each took more initiative in conducting cyber defence—for example, taking decisive and public action in the wake of the Macron email leaks to caution the public and remind the media of their duty to protect the integrity of the vote. The Obama administration, in contrast, was inhibited by both confusion about the nature of the attack and partisan resistance from Republicans.
Third, the mainstream media environment in the US was far more sensationalist, overwhelmingly negative in tone and light on policy. In addition to the profit motive of covering Trump himself, news outlets were vulnerable to manipulation by being fed stolen information and focusing on scandal—such as the content of stolen emails—rather than the implications of the theft itself.
As Thomas Rid observes, cyber voter interference seeks ‘to drive wedges into pre-existing cracks’, not create new ones. Highly polarised or divided democracies may be more vulnerable due to their tribal nature, but our comparison of the US and French elections suggests that the effectiveness of cyber tools also depends on how the sources and integrity of information entering public discourse are regulated.
Trust in the underlying integrity of public discourse is vital for the functioning of a liberal democracy. The policy challenge is to regulate but not suffocate flows of information, promoting transparency and civility while leaving space for a diversity of opinions and perspectives. These issues are just as complex closer to home—Australia’s new Electoral Integrity Task Force has much on its plate.