Commentary

How big tech and policymakers miss the mark when fighting online extremism

U.S. President Donald Trump speaks about the shootings in El Paso and Dayton in the Diplomatic Room of the White House in Washington, U.S., August 5, 2019. (REUTERS/Leah Millis)

Why is it still so easy to find violent white supremacist content online, even though social media companies keep claiming that they are working overtime to delete it? Last weekend, the El Paso shooter, just like the Christchurch shooter in March and the San Diego shooter in April, posted his manifesto to the same extremist website and claimed to take inspiration from what he had read there. Mainstream social media companies say they are trying to stop the spread of this kind of violent content, but today’s domestic terrorists are highly motivated to stay online, and they are using every trick in the book to do so. Fighting a motivated, evolving adversary is a classic challenge of cybersecurity, but the difference here is that lives—not dollars or passwords—hang in the balance.

Platforms are ripe for abuse

“We need to strike hard, and strike fast. We need complete elimination of the enemy.” On July 24, three neo-Nazi podcasters introduced their latest show without mincing words. Men of the white race, they said, should adopt an accelerationist and exterminationist mindset, openly embracing violence against people of color, Jews, and LGBT people. “We’re going to need our soldiers in the field, not only murdering our enemies…and also being killed in return. There’s going to be martyrs.”

After 81 minutes, the hosts concluded their show, and this video, like all the other videos produced by this streaming network, was immediately removed from YouTube—by its own hosts. They know it is far too risky to leave the video up on their channel. YouTube’s guidelines prohibit violent speech, and because of its three-strikes policy, if enough people flag the video, the entire channel could be banned. Plus, the hosts know they really only need YouTube for its interactive chats and high bandwidth during the livestream itself. So why take the risk of leaving the show up?

Instead, the hosts relocated the video to Bitchute, a small, UK-based peer-to-peer video hosting service with looser rules governing acceptable content. As of this writing, the video has been flagged but not removed from Bitchute. Since then, the podcasters have uploaded four additional videos this same way, one of which lauded the “body count” of the El Paso terror attack (“20 dead, 26 wounded? Fantastic, fantastic.”) before reading the manifesto in its entirety and providing commentary.

I relate this story because it is a canary in the coal mine for future battles over content moderation. “Big Tech” continues to promise increasingly complex AI-based moderation schemes while exhibiting willful blindness about the simple ways in which its services are being abused. These podcasters are just one example of persistent, motivated threat actors intentionally probing the limitations of content moderation to see what they can get away with, and modifying their behavior accordingly. YouTube, owned by a company with an $800 billion market cap, seems unable or unwilling to acknowledge that its content moderation has been trampled for over a year by three guys with a microphone and a couple of extra Gmail accounts. After the Christchurch attack, Facebook announced new video streaming rules that are similarly reactive, designed to stop the last mass shooter, not to anticipate what the next one will do.

Focused on yesterday’s problems

Now, critics of Big Tech are lining up to plead, cajole, and threaten—from the left and right. In June, Senators Cruz (R-TX) and Hawley (R-MO) proposed legislation that would require social media companies over a certain size to prove no political bias when moderating content. Convinced that conservative voices are being unfairly targeted, they followed up with a letter to the Federal Trade Commission asking for an investigation into moderation policies on Facebook, YouTube, and Twitter. In July, Tulsi Gabbard (D-HI), a candidate for the Democratic nomination for President, filed suit against Google for, among other things, “playing favorites, with no warning, no transparency—and no accountability” in its content moderation.

As with the reactive policies implemented by the media companies themselves, this legislation and lawsuit are also naïvely focused on yesterday’s problems. They do not acknowledge the way the platforms are actually being gamed today, nor how they will be abused tomorrow.

Serial harassers and trolls will always figure out tricks to avoid the bans, and even if they somehow catch a block that sticks, the growing “Alt-Tech” ecosystem, made up of decentralized, niche services often located in other countries, is more than willing to scoop them up. Finally banned from YouTube? Try Bitchute or DLive. De-platformed from Facebook or Twitter? Try Telegram, Parler, or Gab—the site favored by the Pittsburgh synagogue shooter back in October 2018—or the so-called “image boards” like 8chan, where the Christchurch, San Diego, and El Paso shooters all posted their manifestos.

There are dozens of these barely-moderated havens for extremism online right now, and none of them would be affected by legislation that supposedly aims to rid US-based Big Tech of hatred. In fact, most of these sites do not bother to conduct any meaningful self-regulation, and they are too small to qualify for monitoring under the proposed legislation. Making things worse, whenever extremist users or content are removed from a “Big Tech” site, an “Alt-Tech” clone rises up to take its place. Thus, the real danger slips right through the cracks.

Expand your expertise

And so, we are at an impasse. Legislative and corporate policies are designed to solve a specific problem for a particular stakeholder at a set time and place. In contrast, the online hate ecosystem is volatile, unpredictable, constantly changing, and deliberately confusing. Battling hate and extremism online has much in common with the attack-and-defend world of cybersecurity, in which the attacker only has to exploit one crack in the system, while the defender must guard 100% of it.

So, just as with cybersecurity, solving the problem of extremism online will require significant investment in expertise. Social media companies must acknowledge that their current understanding of the groups and individuals that pose the most risk is lacking, at best. While dozens of virulently racist podcasters and video streamers routinely game the system and build their audiences, Big Tech instead issues bans of high-profile clowns and trolls. These companies need to broadly expand their in-house knowledge so they can tackle white supremacist extremism as it really looks online today. If such a rapid skill-up is not possible, then they need to call in outside experts with real experience—quickly.

The Alt-Tech clone ecosystem currently survives on extremely thin margins and succeeds because of the short attention span of the media. For example, 8chan is in the news right now as mainstream hosting providers drop it, but its software is open source, and a copycat site with a slightly different name can be created easily. Alt-Tech companies are also scrambling to provide replacement infrastructure services for sites that have been de-platformed.

But we saw similar attention-boredom cycles in August 2017 following the deadly Unite the Right events in Charlottesville. The Daily Stormer and Stormfront, two notorious neo-Nazi sites, were banned after a high-profile but short-lived cycle of media outrage. The resilience of Stormfront in particular is troubling: it is one of the longest-running hate sites on the internet, and its users have been tied to over 100 murders. Without strong public pressure resulting from consistent, long-term media coverage, these sites easily resurface within hours or days of a ban.

As for policymakers, instead of cutting budgets for extremism prevention, why not expand them? Instead of creating distracting sideshows and political theater that betray a lack of any real understanding of violent extremism and how it spreads, focus on hammering out long-term strategies to tackle an increasingly decentralized, encrypted internet and a rapidly expanding Alt-Tech ecosystem.

It will be impossible to make real headway in combating the spread of violent extremism online without a realistic understanding of how internet platforms are being abused today, and without an admission from social media companies and policymakers that they do not really understand the phenomenon. Expertise and transparency remain critical to that fight. Otherwise, extremists will continue to stay one step ahead, using technology to fundraise, recruit, and radicalize with impunity.


Facebook and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and not influenced by any donation.
