A new website launched by ASPI’s International Cyber Policy Centre is designed to identify nations that use deception operations to manipulate potential adversaries and their own populations.
The Understanding Global Disinformation and Information Operations website provides a visual breakdown of the publicly available data from state-linked information operations on social media. ASPI’s information operations and disinformation team has analysed each of the datasets in Twitter’s information operations archive to produce a longitudinal analysis of how each country’s willingness, capability and intent have evolved over time.
Our analysis demonstrates that a growing number of state actors are willing to deploy information operations targeting their own populations, as well as those of their adversaries. Russia, Iran, Saudi Arabia, China and Venezuela are the most prolific perpetrators. By making these complex datasets available in an accessible form, ASPI is broadening meaningful engagement with the challenge of state-actor information operations and disinformation campaigns among policymakers, civil society and the international research community.
Since October 2018, Twitter has released the tweets, media and details of associated accounts that the social network believes were part of state-linked information operations. The datasets originate from 17 countries, including the usual suspects (Russia, China and Iran) but also Armenia, Bangladesh, Cuba, Ecuador, Egypt, Honduras, Indonesia, Saudi Arabia, Serbia, Spain, Thailand, Turkey, the United Arab Emirates and Venezuela.
Analysis of information operations that exploit social media as a vector has tended to focus on individual takedown datasets, particularly those relating to high-profile state actors such as Russia, China and Iran.
As a taste, between October 2018 and March 2021, Twitter removed eight networks it believed originated in Russia and were attributed to the Internet Research Agency and other Russian state actors. ASPI’s analysis of all Russia-linked operations found that mentions of the US dwarfed those of all other countries, and that the most-used hashtags were heavily focused on hot-button US political issues, including President Donald Trump’s ‘MAGA’ slogan, QAnon and anti-Islam sentiment.
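For readers who want to reproduce this kind of hashtag tally themselves, the sketch below shows one straightforward approach, assuming the column layout Twitter used for its public takedown CSVs (a ‘hashtags’ field serialised as a bracketed list); the file name is a placeholder, not a specific release.

```python
# Hedged sketch: counting the most-used hashtags in one takedown CSV.
# Assumes Twitter's public release layout (a 'hashtags' column serialised as a
# bracketed list, e.g. "[MAGA, news]"); the file name is a placeholder.
from collections import Counter

import pandas as pd

df = pd.read_csv("russia_takedown_tweets_csv_hashed.csv",
                 usecols=["tweetid", "hashtags"], dtype=str)

counts = Counter()
for raw in df["hashtags"].dropna():
    # Strip the surrounding brackets and any stray quotes, then split on commas.
    tags = [t.strip().strip("'\"") for t in raw.strip("[]").split(",")]
    counts.update(tag.lower() for tag in tags if tag)

# Print the 20 most-used hashtags in the dataset.
for tag, n in counts.most_common(20):
    print(f"{tag}: {n}")
```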
US domestic politics wasn’t the only focus. Other narratives included efforts to undermine NATO in the eyes of European audiences, slander Ukrainian leaders, promote Russian foreign and military policy in Syria, and discredit candidates in US and European democratic elections.
Between November 2019 and March 2021, Twitter removed seven networks it believed originated in Iran and were backed by or associated with the Iranian government. Given that Twitter is banned in Iran, the campaigns sought to influence international perceptions of Iran while stirring up political division and encouraging unrest in adversary states. These networks also amplified content relating to social divisions in the US, such as the Black Lives Matter movement.
Unlike Russia-linked messaging—which was overwhelmingly focused on the US—Iran-linked messaging referenced countries in Iran’s region, including Pakistan, Palestine, Israel and Syria.
The networks’ fake personas were sometimes convincing, well-rounded characters, giving the appearance of locals concerned with particular political issues. Other assets may have been part of an influence-for-hire network. Some of these networks benefited from Iran’s sophisticated fake-news and state-media apparatus.
Between September 2019 and July 2020, as the pro-democracy movement erupted on the streets of Hong Kong, Twitter removed three networks of accounts that originated within China. That’s notable, given that the platform is blocked for most of the population there. In terms of geographical mentions, Hong Kong dominated the data, far outstripping China itself and the US.
The networks disclosed in these datasets generally sought to influence the attitudes of Chinese diaspora communities and citizens overseas on domestic and foreign policy issues of concern to the Chinese Communist Party. Tweets contained text in both simplified Chinese characters, which are used on the Chinese mainland, and traditional Chinese characters, which are used in Hong Kong and Taiwan. Notably, the posting pattern of the China-origin tweets mapped almost perfectly onto Chinese working hours, with posting peaking at 10 am and dipping for a lunch break around noon.
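That working-hours pattern is simple to surface from the raw data. The sketch below assumes Twitter’s standard ‘tweet_time’ column (UTC timestamps) and uses a placeholder file name: it shifts the timestamps to China Standard Time and counts tweets per hour of the day.

```python
# Hedged sketch: hourly posting pattern for a China-origin takedown dataset.
# Assumes Twitter's standard 'tweet_time' column in UTC; the file name is a
# placeholder, not an actual release name.
import pandas as pd

df = pd.read_csv("china_takedown_tweets_csv_hashed.csv",
                 usecols=["tweetid", "tweet_time"])

# Parse the timestamps, shift from UTC to China Standard Time (UTC+8),
# then count tweets per hour of the day.
local = pd.to_datetime(df["tweet_time"], utc=True).dt.tz_convert("Asia/Shanghai")
hourly = local.dt.hour.value_counts().sort_index()

# A working-hours operation shows a morning peak (here, around 10 am)
# and a dip over lunch.
print(hourly)
```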
Few research entities—internationally, let alone in the Indo-Pacific—have the technical and analytical capability to investigate the more complex takedown datasets, hindering the international community’s capacity to understand the tactics and tradecraft of actors willing to mobilise strategic deception as a tool of statecraft. Yet traits within the data help us determine who was responsible, who the targets were, what narratives were propagated, and what patterns of coordination and inauthentic behaviour were used.
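As one illustration of what a coordination signal can look like in practice, the hedged sketch below flags identical original tweets posted by many distinct accounts within a single release. The column names follow Twitter’s public CSV layout; the file name and the five-account threshold are placeholders, not part of any specific ASPI method.

```python
# Hedged sketch of one simple coordination signal: identical original tweets
# posted by many distinct accounts in the same takedown dataset.
import pandas as pd

df = pd.read_csv("takedown_tweets_csv_hashed.csv",
                 usecols=["userid", "tweet_text", "is_retweet"], dtype=str)

# Ignore retweets: identical retweet text is expected and tells us little.
originals = df[df["is_retweet"].str.lower() != "true"]

# For each unique tweet text, count how many distinct accounts posted it.
copied = (originals.groupby("tweet_text")["userid"]
          .nunique()
          .sort_values(ascending=False))

# Texts pushed verbatim by five or more ostensibly unrelated accounts.
print(copied[copied >= 5].head(20))
```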
Twitter’s information operations archive now contains sufficient longitudinal data for us to learn more about how actors behave over time. To that end, ASPI has built this unique website to analyse and compare all of the data in the archive in one place. Policymakers and researchers can now consistently compare the activity, techniques and narratives of each operation, see what states do differently from one another, and track how their activities change over time.
Twitter has been perhaps the most forward-leaning company in the social media industry in its public engagement on information operations. No other company has consistently provided complete datasets of state-linked information operations for public scrutiny. Twitter’s recent signalling that it will discontinue the information operations archive makes ASPI’s longitudinal analysis of these datasets all the more pertinent.