In the wake of the Christchurch attacks, the internet giants Facebook, Google and Twitter have come under pressure for failing to prevent the killer from using their platforms to share his message, including live-streaming the shooting.
While some of this criticism may be deserved, what this episode really shows is that automated systems for detecting and removing terrorist content are still no match for humans determined to promote vile messages.
New Zealand Prime Minister Jacinda Ardern said on Sunday that she intended to ask Facebook how the terrorist was able to live-stream his attack on its platform for a full 17 minutes. In a matter of moments, the footage was ricocheting around the internet, not just on Facebook but on YouTube, Twitter, Reddit and a host of smaller social media and video-sharing platforms.
‘I do think there are further questions to be answered’, Ardern said. ‘These social media platforms have wide reach. This is a problem which goes well beyond New Zealand … So whilst we might have seen action taken here, that hasn’t prevented [the video] being circulated beyond New Zealand’s shores.’
The social media giants have been quick to highlight their efforts to stop the footage from spreading. Facebook, which also owns Instagram and WhatsApp, said on Monday that it had removed over 1.5 million copies of the footage in just the first 24 hours after the attack, 80% of which were blocked at the point of upload.
Google, which also owns YouTube, deleted the terrorist’s YouTube account before he was able to upload footage of the attack, but said the number of copies and related videos being uploaded was ‘unprecedented in scale and speed, at times as fast as a new upload every second’. In response, Google called in its ‘war room’ of crisis responders and removed tens of thousands of videos which were automatically flagged as containing even a 5% match to the footage. For the first time, the company even ‘broke’ parts of YouTube’s own search function to make it more difficult for viewers to find copies of the video.
Twitter has been the least forthcoming of the three major platforms about what action it has taken to prevent the footage from proliferating, simply saying that it is ‘continuously monitoring’ content uploaded to the platform. As of Tuesday morning, it was still possible to find links to the entire, uncensored footage on Twitter within seconds with a simple search.
Under the circumstances, it’s fair to ask whether platforms are really doing enough. At the same time, it’s important to recognise the scale and nature of the challenge they’re up against. We shouldn’t allow (justifiable) anger at the platforms and their fallible algorithms to distract us from where the lion’s share of the blame really lies: the human users who are actively uploading, promoting and sharing this content.
The killer wanted his message to go viral and, as a native of the dark corners of the internet where far-right extremism grows like poisonous mushrooms, he knew how to make it happen.
There is significant evidence that he planned his communications campaign well in advance of the attack. Moments before the shooting began, he posted an 18,300-word ‘manifesto’ on Twitter, uploading copies across three platforms in a clear effort to make it harder to take down. (As of Tuesday morning, the manifesto was also still easily accessible online as the first result in a simple Google search, despite the apparent efforts of hosting providers like DocumentCloud to take it down.)
The gunman announced his attack and shared the link to the live-stream on 8chan’s /pol/ board, one of the mouldy internet corners frequented by the kind of people who could be relied on not only to watch but to gleefully promote his video and his message. ‘Please do your part by spreading my message, making memes and shitposting as you usually do’, he wrote.
And they did, or at least some of them did. Others argued over whether the stream was an attempt to entrap them in a communist, FBI, Islamic or Jewish conspiracy, because that’s how people in these forums think.
Recordings of the initial live-stream were downloaded, saved and shared across a network of far-right and white-nationalist forums. Users were uploading new versions faster than the platforms could take them down. Google says it deleted hundreds of YouTube accounts created after the attack specifically to share the footage or express sympathy with the perpetrator.
Some of the more tech-savvy users also took deliberate steps to circumvent the platforms’ automated systems for recognising terrorist content. Facebook, Google, Twitter and a number of other major digital companies share ‘hashes’ (in effect, digital fingerprints) of problematic content via a specialised database. This is what Google was referring to when it said it was taking down videos with even a 5% match to the footage: its algorithms were comparing every new video to the hash of the original live-stream, and blocking everything with even a partial match.
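To get a feel for how that sharing works, here is a deliberately simplified sketch in Python. The function names and data are invented purely for illustration; the companies’ real systems use more forgiving ‘perceptual’ fingerprints, and the shared database’s actual formats aren’t public.

```python
import hashlib

# Toy model of the shared industry database: member platforms contribute
# fingerprints of confirmed terrorist material, and every new upload is
# checked against them. (Illustrative only; not the real system.)
known_bad_hashes = set()

def register_known_content(file_bytes: bytes) -> None:
    """A platform contributes the fingerprint of confirmed terrorist material."""
    known_bad_hashes.add(hashlib.sha256(file_bytes).hexdigest())

def should_block_upload(file_bytes: bytes) -> bool:
    """Check a new upload against the shared database before it goes live."""
    return hashlib.sha256(file_bytes).hexdigest() in known_bad_hashes

# Once the original footage is registered, identical copies are blocked
# on upload -- but, as we'll see, only *identical* copies.
original_video = b"\x00\x01\x02" * 1000   # stand-in for the video file
register_known_content(original_video)
print(should_block_upload(original_video))   # True: exact copy caught
```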
But hashes are easy to break. Small alterations, such as resizing or skewing the video or adding a watermark, can be enough to stop the system from recognising it, and that’s precisely what many of the uploaders did. Millions of copies may have been caught on upload, but many more clearly slipped through the net. Realistically, this footage will almost certainly never be entirely scrubbed from the web.
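To see why, continue the sketch above: an exact hash changes completely if a single bit of the file changes, and even a similarity-based fingerprint that shrugs off a brightness tweak can be thrown by a geometric change like a skew or crop. Again, the numbers and functions here are toy illustrations, not the platforms’ actual algorithms.

```python
import hashlib

# 1. Exact hashes: flip one bit of the file and the digest is entirely
#    different, so the database lookup above no longer matches.
original = b"\x00\x01\x02" * 1000
altered = bytearray(original)
altered[0] ^= 1                             # e.g. a watermark or re-encode artefact
print(hashlib.sha256(original).hexdigest()[:12])
print(hashlib.sha256(bytes(altered)).hexdigest()[:12])   # no resemblance

# 2. A toy 'perceptual' hash fingerprints what the content looks like:
#    one bit per pixel, set if that pixel is brighter than the frame average.
def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return [int(p > mean) for p in pixels]

def match_score(h1, h2):
    """Fraction of matching bits: 1.0 is an exact match."""
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)

frame      = [10, 20, 200, 210, 15, 25, 190, 205]   # stand-in for one video frame
brightened = [p + 30 for p in frame]                # a mild filter or re-encode
skewed     = frame[1:] + frame[:1]                  # a geometric shift (skew/crop)

reference = average_hash(frame)
print(match_score(reference, average_hash(brightened)))  # 1.0: still caught
print(match_score(reference, average_hash(skewed)))      # 0.5: match broken
```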
This tragedy should serve as a wake-up call to the internet giants. The weaponisation of information networks is not new, but the digital architecture of the major platforms creates unprecedented opportunities for bad actors to spread their message across the globe in minutes, and to keep it out there for as long as extremists continue to lurk, like colonies of mould, in the nooks and crannies of the internet.
And as this episode shows, for all their expertise and resources, the platforms’ automated systems are not yet enough to stop the spores from spreading.