In a report released earlier this week, my ASPI colleagues Albert Zhang and Jake Wallis and I investigated a small-scale digital influence operation linked to Chinese-speaking actors. The activity targeted social media users in the United States, using heavily automated bot accounts on Facebook and Twitter to boost legitimate media coverage and social media content that presented negative or divisive views of the US. There was a particular focus on racial inequality, the Covid-19 response, the failings of the Trump administration and scandals linked to President Donald Trump himself.
The activity appears to have gained little direct traction with real social media users. However, the impact of inauthentic activity like this can be hard to gauge, because the goal isn’t necessarily to generate direct engagement. Social media algorithms often decide which content users are shown based on how much engagement that content has already received, so inauthentic engagement can push content in front of more real users than it would otherwise reach. For example, a bot that shares a New York Times article may itself receive no likes or shares, but it can still be considered successful if it prompts platform algorithms to promote that content to others.
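To make that amplification mechanism concrete, here is a deliberately simplified sketch in Python. It is a toy model, not any platform’s actual ranking logic (which is proprietary and far more complex); the Post class, rank_score function and share_weight parameter are hypothetical stand-ins for an engagement-weighted feed.

```python
# Toy model of engagement-weighted ranking (illustrative only; real
# platform ranking systems are proprietary and far more complex).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int = 0
    shares: int = 0

def rank_score(post: Post, share_weight: float = 3.0) -> float:
    """Hypothetical score: shares weighted more heavily than likes."""
    return post.likes + share_weight * post.shares

organic = Post("Organic article", likes=10, shares=2)
boosted = Post("Bot-boosted article", likes=10, shares=2)

# 50 bot accounts each share the article once. The bot posts themselves
# attract no engagement, but the inauthentic shares inflate the
# article's ranking signal, so the feed surfaces it to more real users.
boosted.shares += 50

print(rank_score(organic))  # 16.0
print(rank_score(boosted))  # 166.0
```

In this toy model the bots never need a single like of their own: their shares alone multiply the article’s score tenfold, which is the sense in which a zero-engagement bot campaign can still ‘succeed’.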
Despite being small and generating little direct engagement, the campaign makes for an interesting case study because of what it demonstrates about the information ecosystem writ large: digital foreign political interference is no longer the sole preserve of well-resourced actors. Today, even very small-scale operators can launch persistent, cross-platform (and, in this case, likely foreign) political influence efforts, run on the metaphorical smell of an oily rag.
This has broader implications. Even if each individual small-scale operation, such as this one, has little direct impact, the proliferation of coordinated inauthentic influence efforts run by anyone and everyone with a political axe to grind could have a serious and detrimental effect on the political information environment.
We’ve seen something similar happen before in another context. Over the past several years, the tools and techniques enabling cybercrime have become ever more accessible to small-scale and low-skilled criminals. Cyberattacks that not so long ago could have been perpetrated only by nation-state actors are now readily carried out by criminal groups using off-the-shelf malware and purchased exploits. Other hacker groups offer crime-as-a-service, effectively making the ability to conduct cyberattacks available to anyone with the money to pay for it.
Individually, each small ransomware operator or online fraudster may have a minimal impact, but in aggregate the global cost of cybercrime is estimated at trillions of US dollars each year.
That same dynamic is beginning to play out in the political information sphere, and the cost will be counted not in dollars, but in increased distrust, polarisation and fragmentation.
Disinformation that fails to achieve its intended goal still has effects. It distorts the authentic social media conversation in a range of ways, whether by skewing algorithms through inauthentic engagement or by feeding the suspicion that anyone with a divergent opinion is in fact a bot (the now widespread ‘anyone who disagrees with me is a Russian bot’ meme). That suspicion is in some senses a dehumanisation of political opponents, and it contributes to growing political polarisation.
The widespread proliferation of low-quality, unsophisticated but persistent disinformation and political influence efforts on social media has the potential to crowd the information space. The small scale of such operations may contribute to the problem. Large, noisy campaigns such as the Chinese-state-linked operation we researched in Retweeting through the Great Firewall are likely to be detected and removed, but small operations may simply continue to buzz along at a low level over a sustained period, as this one has, persisting through account removals and blending into the background noise of the internet—and, over time, changing the tune.