How to create a terrorism designation process useful to technology companies

Editor's note:

To moderate terrorist content online, technology companies rely on existing terrorist designation lists that may not be suitable for their use. The technology sector and representatives from civil society, academia, and government should work together to develop a global, unbiased, and real-time database of possible terrorist entities, argues Daniel Byman. This piece originally appeared in Lawfare.

On August 3, a shooter opened fire at a crowded Walmart in El Paso, Texas, killing 22 people. Shortly beforehand, he appears to have posted a screed on the online message board 8chan, framing the shooting as an act of terrorism against what he saw as the increasing Latino population of Texas. The El Paso shooting was the third act of mass violence this year with a link to 8chan—and by the end of the weekend, the network provider Cloudflare decided to pull its services from the website, making it more difficult for 8chan to stay on the web.

In justifying the decision, Cloudflare CEO Matthew Prince wrote that 8chan had crossed a boundary: “[T]hey have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths.” But, he indicated, he was uncomfortable with Cloudflare’s ability to unilaterally decide what websites should and should not have the protections necessary to remain online. “What’s hard,” he said, “is defining the policy that we can enforce transparently and consistently going forward.”

Prince is correct that making such determinations is a difficult process. Cloudflare used its own judgment—but as my colleague Chris Meserole and I have argued, to stop terrorists and extremists from using their platforms, it is not enough for technology companies to simply rely on U.S. government lists or other single sources of information:

Many technology companies refer to third-party terrorist definitions and designation lists when moderating potential terrorist accounts. However, those definitions and lists are often produced for specific legal, political or academic purposes and may not be suitable for general use. Technology companies should understand such lists’ relative strengths and limitations before relying on them.

As Benjamin Wittes and Zoe Bedell argue in a series on Lawfare about Twitter and designated foreign terrorist organizations (see here, here, and here), technology companies that knowingly provide services to a designated foreign terrorist organization are committing a criminal act. Unfortunately, the designation process is at best incomplete and of limited use for keeping the worst elements off the internet.

U.S. government definitions themselves vary from agency to agency, with right-wing terrorist groups not usually included. The difference in what is labeled as terrorism grows even larger when we look overseas, as global technology companies must. Some countries label enemies of the regime as terrorists, while others pay little attention to radical groups operating on their soil. As Meserole and I point out, even among the “Five Eyes” countries, all liberal democracies that have a close security partnership, only 11 terrorist groups appear on all five countries’ terrorism lists, and almost half of the groups listed appear on only one country’s list. In addition, while jihadists are well represented, right-wing groups are systematically and grossly undercounted around the world. Politics, not surprisingly, play a big role in which groups are listed and which are not.
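To make that divergence concrete, the gap can be expressed as simple set arithmetic over the lists themselves. The sketch below is purely illustrative: the group names and list contents are placeholders, not real designations, and nothing here assumes how any government actually publishes its list.

```python
# Illustrative sketch: measuring overlap across national designation lists.
# All group names and list contents below are placeholders, not real data.

five_eyes_lists = {
    "US": {"Group A", "Group B", "Group C"},
    "UK": {"Group A", "Group B", "Group D"},
    "Canada": {"Group A", "Group C", "Group E"},
    "Australia": {"Group A", "Group B", "Group F"},
    "New Zealand": {"Group A", "Group G"},
}

# Every group that appears on at least one list.
all_groups = set().union(*five_eyes_lists.values())

# Groups designated by every country: the intersection of all five lists.
on_all_lists = set.intersection(*five_eyes_lists.values())

# Groups designated by exactly one country.
on_one_list = {
    g for g in all_groups
    if sum(g in lst for lst in five_eyes_lists.values()) == 1
}

print(f"{len(on_all_lists)} of {len(all_groups)} groups appear on all five lists")
print(f"{len(on_one_list)} of {len(all_groups)} groups appear on only one list")
```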

Nor can technology companies look elsewhere for answers. Academic lists might be less biased, but they are rarely timely or global. Civil society organizations such as the Southern Poverty Law Center often have an excellent grasp of one type of group (e.g., right-wing violence) but are not comprehensive and often focus on “hate” rather than violence per se.

With all this in mind, technology companies must go beyond existing sources. Meserole and I contend that technology companies and civil society should work together to develop a vetted list of terrorist groups that should be banned and otherwise blocked from using their platforms:

The technology sector and representatives from civil society, academia and government should work together to develop a global, unbiased and real-time database of possible terrorist entities. The database could be used to produce different designation lists based on various inclusion criteria.
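One way to picture this is as a single shared database from which different lists are filtered. The sketch below is a minimal illustration of that idea rather than a proposed schema; the field names, entities, and inclusion criterion are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: entities carry structured evidence, and different
# designation lists are derived by applying different inclusion criteria
# to the same underlying records. All fields and entries are illustrative.

@dataclass
class Entity:
    name: str
    uses_violence: bool
    ideology: str                       # e.g. "jihadist", "right-wing"
    provides_social_services: bool
    evidence_sources: list[str] = field(default_factory=list)

def derive_list(database: list[Entity], criteria) -> list[str]:
    """Produce one designation list by filtering the shared database."""
    return [e.name for e in database if criteria(e)]

database = [
    Entity("Example Group 1", True, "jihadist", False, ["court records"]),
    Entity("Example Group 2", True, "right-wing", False, ["news reports"]),
    Entity("Example Group 3", True, "nationalist", True, ["government list"]),
]

# One possible inclusion criterion: violent groups with no civilian wing.
strict_list = derive_list(
    database,
    lambda e: e.uses_violence and not e.provides_social_services,
)
print(strict_list)  # ['Example Group 1', 'Example Group 2']
```

The appeal of such a design is that debate centers on the shared evidence in the records, while each government or company remains free to apply its own inclusion criteria to produce its own list.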

Starting the process of creating such a database is difficult. The biggest actors—major governments and big technology companies—face a credibility problem, as each would be perceived as biased; technology companies also have political, business, and legal reasons to avoid taking the lead on this issue. Many smaller companies and local civil society organizations do not have the resources to play a major role. Finally, there is a collective action problem: Although having standards is in the interests of both governments and technology firms, getting the ball rolling is difficult given the sheer number of actors and the problems associated with each.

To build on the thinking that Meserole and I have done, below are my ideas for how to start the process.

Convening a working group is the first step. Ideally, this convening would be done by a respected but relatively neutral democratic government with a record of support for liberal values, along with the Global Internet Forum to Counter Terrorism (GIFCT). Other government officials would attend but not be formally represented. Together they would make initial decisions on which civil society and technology representatives to invite and the criteria for doing so. Norway, New Zealand, or Denmark might be ideal convening countries. Civil society actors would be chosen to represent different cultures and intellectual perspectives.

The initial group would start small, focusing on a select group of countries and developing procedures for listing. Over time, as initial mistakes and omissions are ironed out, the group would expand the number of countries regulated and, ideally, become truly global.

Technology companies with large numbers of users and significant revenue would help fund the process. Ideally, foundations would contribute—the more types of funders, the less risk of bias—and larger civil society groups would donate their time and people. Governments might help convene and otherwise contribute logistically and with information, but to avoid accusations of bias they should not directly fund the process after it is up and running. Smaller companies would benefit from its decisions but would not join in the process (and bear any associated costs and opprobrium) until they reached a large number of users. Costs would include full-time and part-time staff and consultants, travel and technology associated with the process, and regular face-to-face conferences among participants.

Total agreement among the group is not likely (nor, in fact, desirable), and paralysis or underlisting might result if unanimity were required. Instead, the working group should develop multiple lists, with a different level of responsibility attached to each. At the top would be uncontested terrorist groups, such as al-Qaeda. Another list might contain groups that use violence but also engage in an array of social and political activities, such as Hamas, to ensure that a hospital website in Gaza linked to the Hamas-run government there is not taken down when the group’s military wing’s propaganda is. Still another might be gray-area groups whose actions and rhetoric often cross the line but that are not clearly on one side or the other.

In addition to following U.S. and other relevant laws regarding formally designated groups, companies (and countries) might respond by banning groups in Category One, doing partial takedowns in Category Two, putting restrictions on users in Category Three, and so on. The working group could also note that some members list a particular group in one category while others favor another interpretation; technology companies could be “soft” or “hard” in their response, but they would do so recognizing the uncertainties and disagreements involved.
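As a rough illustration of how a company might encode such a tiered response, consider the sketch below. The category names, the action for each tier, and the rule that a “soft” posture downgrades a contested listing by one tier are all assumptions made for illustration, not part of any actual proposal.

```python
from enum import Enum

# Hypothetical sketch of a tiered response policy: each category maps to a
# default enforcement action, and contested listings let a company choose a
# "soft" or "hard" posture. Names and actions here are illustrative only.

class Category(Enum):
    UNCONTESTED = 1      # e.g., al-Qaeda: clearly terrorist
    MIXED_ACTIVITY = 2   # violent wing plus social/political activities
    GRAY_AREA = 3        # often crosses the line, but not clearly

ACTIONS = {
    Category.UNCONTESTED: "ban",
    Category.MIXED_ACTIVITY: "partial takedown",
    Category.GRAY_AREA: "user restrictions",
}

def respond(category: Category, contested: bool, posture: str = "soft") -> str:
    """Pick an enforcement action, softening by one tier when a listing
    is contested and the company takes a soft posture."""
    if contested and posture == "soft":
        milder = Category(min(category.value + 1, Category.GRAY_AREA.value))
        return ACTIONS[milder]
    return ACTIONS[category]

print(respond(Category.UNCONTESTED, contested=False))    # -> ban
print(respond(Category.MIXED_ACTIVITY, contested=True))  # -> user restrictions
```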

Companies, of course, are private entities and are not obligated to treat groups on the list in any particular way. At the very least, however, the designation process would provide useful information to the companies. It would also be a convenient and resource-efficient way for companies to avoid being singled out politically while still taking action: They would simply be respecting an agreed-upon process, and if a government or organization complains, it can take the matter up with the working group. Finally, if the process gains a foothold, a shaming dynamic would emerge over time, with companies that do not meet the standards criticized for being either too permissive or too strict.

The criteria and the process must be transparent. The new working group would make mistakes, and new evidence would come to light that should change designations and categorizations. More important, part of the purpose of the rules is to change extremist behavior online and to influence future groups. If violent extremists do change their behavior for a sufficient period of time, this should be recognized and policies adjusted.
