Serious flaws in EU plan to automate detection of terror material online
Posted by Elise Thomas on February 20, 2019
The EU’s recently proposed regulation [1] to prevent the online dissemination of terrorist content has sparked significant concern among experts and civil society groups. One of the most controversial elements of the proposal is the requirement for hosting service providers to take proactive measures to remove or block access to terrorist content on their platforms, including through the use of automated detection systems.
To date, technologies for automatically detecting terrorist content have been seriously flawed. The worry is that the EU’s proposal may drive online platforms to implement these tools despite high levels of inaccuracy. In addition to concerns about endangering freedom of speech or censoring legitimate news, the regulation could pose a major risk to digital evidence of human rights abuses and war crimes.
The spread of camera-equipped smartphones allows conflicts to be documented in a way that was never before possible. This footage can be invaluable to human rights advocates and can form the basis of potential war crimes investigations. By their very nature, these images and videos often contain graphic violence, explosions, weapons and other content that is visually very similar to terrorist propaganda material.
The accuracy of automated tools for content analysis varies widely depending on the type of content. Automatically detecting terrorist content in a video relies on a different set of underlying technologies than detecting written terrorist propaganda, for example. Despite their differences, however, there are a number of weaknesses which virtually all such systems share.
Arguably the greatest and most common weakness is the software’s inability to understand context. An algorithm might be capable of detecting whether a video contains explosions or graphic violence, but may struggle to determine whether that video is terrorist propaganda or a news report. An innocuous phrase like ‘I totally bombed in that meeting’ might be enough to trip up a system based on textual analysis, while euphemistic or even lightly coded hate speech, such as neo-Nazis’ use of triple parentheses [2], may fly completely under the radar, as the sketch below illustrates.
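To make the context problem concrete, here is a minimal sketch of a naive keyword filter, in the spirit of textual-analysis systems but not any platform’s actual implementation; the keyword list and both test messages are invented for illustration. It flags the harmless meeting remark while missing the coded message entirely.

```python
import re

# Hypothetical keyword list. Real systems are more sophisticated, but
# keyword- and pattern-based approaches share the same core limitation:
# they match surface features of the text, not its meaning.
SUSPICIOUS_TERMS = {"bomb", "bombed", "explosion", "attack"}

def is_flagged(text: str) -> bool:
    """Flag text that contains any 'suspicious' keyword."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in SUSPICIOUS_TERMS for word in words)

# A harmless idiom trips the filter (false positive)...
print(is_flagged("I totally bombed in that meeting"))  # True

# ...while coded hate speech sails through (false negative): the hostile
# meaning is carried by the triple parentheses around a word, not by any
# keyword the filter knows about.
print(is_flagged("keep an eye on (((them))) online"))  # False
```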
One example of automated detection pointed to by all sides of the debate over the EU plan is the ‘hash database’, through which internet giants such as Google, Facebook, Microsoft and Twitter share hashes (digital ‘fingerprints’) of terrorist content. In theory, this enables content identified on one platform to be automatically recognised by the others.
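In outline, that sharing scheme works like the sketch below. One important caveat: this illustration uses a plain SHA-256 digest, which only matches byte-identical files, whereas the industry database is reported to use perceptual hashes of images and videos that tolerate re-encoding. The function and database names here are hypothetical.

```python
import hashlib

# Hypothetical shared store of fingerprints of known terrorist content.
shared_hash_db = set()

def fingerprint(data: bytes) -> str:
    """Compute a digital 'fingerprint' of a file.

    Assumption: an exact SHA-256 digest, for illustration only. A
    perceptual hash would also match re-encoded or cropped copies;
    an exact digest matches identical bytes and nothing else.
    """
    return hashlib.sha256(data).hexdigest()

def share_fingerprint(data: bytes) -> None:
    """Platform A flags content and contributes its hash to the pool."""
    shared_hash_db.add(fingerprint(data))

def is_known(data: bytes) -> bool:
    """Platform B checks a new upload against the shared pool."""
    return fingerprint(data) in shared_hash_db

video = b"...raw video bytes..."
share_fingerprint(video)          # platform A identifies the content
print(is_known(video))            # True: platform B auto-recognises it
print(is_known(video + b"\x00"))  # False: one changed byte defeats an exact hash

# Note what the pool does *not* record: who decided the content was
# 'terrorist', under what definition, or whether the decision was right.
```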
While the EU’s proposal singles out the hash database and its close collaboration with Europol as a positive example of industry taking action against online terrorist content, human rights advocates and civil society groups have expressed concern [3] over the database’s lack of transparency or accountability. They point out that there is almost no public information about how ‘terrorist content’ is being defined, how accurate the system actually is in detecting it and what unintended harm the system might be causing.
Automated detection on YouTube, for example, has led to the deletion of thousands of videos [4] flagged by the Syrian Archive, a civil society organisation [5] dedicated to preserving evidence of human rights abuses in the Syrian conflict. Facebook has been accused of banning, blocking or deleting potential evidence of war crimes, including during the ethnic cleansing campaign [6] conducted against the Rohingya in Myanmar. The destruction of this digital evidence may make prosecution of those responsible much harder.
The EU has attempted to build safeguards into the proposed legislation, including a requirement for hosting providers to offer a complaints mechanism for users who believe their content has been wrongfully removed. Hosts are obliged to store the removed content for six months to allow for complaints or for access by security and law enforcement agencies. This does present a potential window to rescue crucial digital evidence removed by the algorithms—but only if a human being is actively fighting to keep it.
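As a rough sketch of how a host might honour that retention window (the six-month period comes from the proposal; everything else, including the function name, is hypothetical):

```python
from datetime import datetime, timedelta, timezone

# The proposal obliges hosts to keep removed content for six months,
# approximated here as 183 days.
RETENTION_PERIOD = timedelta(days=183)

def may_purge(removed_at: datetime, now: datetime) -> bool:
    """Return True once removed content may be deleted for good.

    Until then it must stay stored, so that a wrongful-removal
    complaint or a law-enforcement request can still recover it.
    """
    return now - removed_at >= RETENTION_PERIOD

removed_at = datetime(2019, 2, 20, tzinfo=timezone.utc)
print(may_purge(removed_at, removed_at + timedelta(days=30)))   # False: still held
print(may_purge(removed_at, removed_at + timedelta(days=200)))  # True: window lapsed
```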
The key point of a redress model based on complaints is that there has to be someone able to make a complaint. In the context of preserving digital evidence of human rights abuses, this presents a fundamental problem. People caught up in violent conflicts or living under authoritarian regimes are busy trying to stay alive and keep themselves and their families safe. It is absurd to think that someone living through the Syrian conflict or coming under attack from Boko Haram has the time, capacity or desire to go through a protracted, bureaucratic complaints process with some distant tech company.
It’s not clear whether a third party such as a human rights group might be able to make a complaint on a person’s behalf. However, even this would require the uploader to realise that their content had been removed, and to have the time and ability to reach out for help to someone less concerned with running for their life.
The sheer amount of content uploaded to the internet every day makes some degree of automated detection of terrorism-related material an inevitability. However, legislators and hosting service providers should think carefully about how to implement such measures—in particular, whether a mechanism which puts the onus on the uploader is really the best way of managing the risk of inaccurate decision-making by an algorithm.
It would be a bitter irony if, in the effort to prevent terrorists from spreading their messages online, authorities and hosting service providers end up destroying the very evidence that could have helped bring them to justice.
URLs in this post:
[1] proposed regulation: https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-preventing-terrorist-content-online-regulation-640_en.pdf
[2] neo-Nazis’ use of triple parentheses: https://mic.com/articles/144228/echoes-exposed-the-secret-symbol-neo-nazis-use-to-target-jews-online#.Lq9wknZTE
[3] expressed concern: https://cdt.org/insight/letter-to-members-of-the-european-parliament-on-concerns-with-terrorism-hash-database/
[4] deletion of thousands of videos: https://www.nytimes.com/2017/08/22/world/middleeast/syria-youtube-videos-isis.html
[5] civil society organisation: https://syrianarchive.org/en/about
[6] ethnic cleansing campaign: https://www.theguardian.com/technology/2017/sep/20/facebook-rohingya-muslims-myanmar