Terrorist definitions and designations lists

What technology companies need to know

[Photo caption: Vishant Patel, senior manager of investigations at the Microsoft Digital Crimes Unit, displays a heat map of Citadel botnet attacks on computers in Western Europe at the Microsoft Cybercrime Center in Redmond, Washington, 11 November 2013. REUTERS/Jason Redmond]
Editor's note:

This publication is part of a series of papers released by the Global Research Network on Terrorism and Technology, of which the Brookings Institution is a member. The research conducted by this network seeks to better understand radicalisation, recruitment and the myriad ways terrorist entities use the digital space.

Introduction

Terrorist groups pose a profound challenge for technology companies. The ‘blitzscaling’ model pioneered by YouTube, Instagram and others has enabled social networks and file-sharing services to gain tens and even hundreds of millions of users globally before they make meaningful revenue, much less profits. By the time technology companies can afford to hire a counterterrorism expert, it is often too late: any application with tens of millions of users worldwide but little oversight is ripe for terrorist exploitation. Worse, even when companies are able to hire counterterrorism experts, they are often unable to do so at a scale commensurate with the problem.

For the vast majority of technology companies, developing an in-house competence in counterterrorism is thus not a viable strategy for moderating potential terrorist accounts. Instead, most companies must choose between two imperfect strategies. The first is to adjudicate possible terrorist accounts on an ad hoc basis. The downside of this approach is that it is arbitrary, as Cloudflare CEO Matthew Prince made clear when he ‘woke up one morning and decided’ to take the Daily Stormer, a popular online forum for white nationalists, offline after the Charlottesville attack. The second is to rely on third-party terrorist definitions and designation lists. Although this approach offers a more principled means of account moderation, it is not without drawbacks. Most notably, off-the-shelf definitions and designation lists all contain biases and limitations that may not be obvious to non-experts and that could unwittingly skew a company’s efforts at platform governance. Just as companies lack the competence to identify terrorist actors, they also lack the expertise to discriminate between the various definitions and lists – often with significant consequences.

Relying on third-party definitions and lists is preferable to ad hoc adjudication, but companies that adjudicate terrorist accounts based on such lists should understand how to evaluate them. The aim of this policy paper is to provide such an understanding.
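To make the list-based approach concrete, the minimal Python sketch below illustrates one way a platform might flag accounts whose names closely resemble entries on a third-party designation list. It is purely illustrative: the file path, column name and matching threshold are assumptions, and real moderation pipelines must also handle aliases, transliterations and contextual signals before any enforcement decision is made.

```python
# Illustrative sketch only: screen reported account names against a third-party
# designation list supplied as a CSV file. The file name, 'name' column and
# similarity threshold are hypothetical assumptions, not any company's pipeline.

import csv
from difflib import SequenceMatcher


def load_designation_list(path):
    """Load designated entity names from a CSV file with a 'name' column."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row["name"].strip().lower() for row in csv.DictReader(f)]


def screen_account(account_name, designated_names, threshold=0.85):
    """Return designated names whose similarity to the account name exceeds the threshold."""
    account_name = account_name.strip().lower()
    matches = []
    for name in designated_names:
        score = SequenceMatcher(None, account_name, name).ratio()
        if score >= threshold:
            matches.append((name, round(score, 2)))
    return matches


if __name__ == "__main__":
    designated = load_designation_list("designation_list.csv")  # hypothetical path
    hits = screen_account("Example Organisation", designated)
    if hits:
        print("Flag for human review:", hits)
    else:
        print("No match against the designation list.")
```

Even in this toy form, the sketch shows why the choice of list matters: whatever biases or omissions the underlying designation list contains are inherited directly by the screening step, which is the central concern of this paper.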

  • Footnotes
    1. ‘Blitzscaling’ is a business strategy that prioritises rapid growth by leveraging cloud computing: from the mid-2000s on, companies could scale quickly and globally without investing in massive data centres and personnel. For instance, YouTube had 50 million users but only 65 employees when it was purchased for $1.65 billion, while Instagram had 30 million users and 13 employees when it was purchased for $1 billion. Both companies were only two years old when purchased. See Reid Hoffman and Chris Yeh, Blitzscaling: The Lightning-fast Path to Building Massively Valuable Businesses (New York, NY: Currency, 2018).
    2. The technology platform with the largest known staff of terrorism experts is Facebook, which had hired over 150 terrorism analysts by 2017. Yet even Facebook still struggled to moderate potential terrorist accounts on its network. See Jeremy Kahn, ‘Facebook Enlists AI, Human Experts in New Push Against Terrorism’, Bloomberg, 15 June 2017.
    3. Will Oremus, ‘Cloudflare’s CEO Is Right: We Can’t Count on Him to Police the Internet’, Slate, 17 August 2017.