The push for content moderation legislation around the world

United States Senator Lindsey Graham (Republican of South Carolina), Chairman, US Senate Judiciary Committee, left, US Senator Richard Blumenthal (Democrat of Connecticut), center, and US Senator Dianne Feinstein (Democrat of California), Ranking Member, US Senate Judiciary Committee, right, chat prior to the committee markup of the "Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act of 2020," and judicial nominations in Russell Senate Office Building on Capitol Hill in Washington, DC, USA, Thursday, July 2, 2020. Photo by Rod Lamkey/CNP/ABACAPRESS.COM

The summer of 2020 was highly consequential for online speech. After years of national debate in the United States, several reform initiatives around the world, and the added pressure of the global pandemic, the demand for policy action finally boiled over. We are witnessing a shift in the primary driver of regulation from protecting innovation at all costs to ostensibly protecting aggrieved citizens at all costs. The U.S., Europe, and Brazil are in the throes of a fundamental fight over intermediary liability: who deserves safeguarding, what are the major threats, and can governments rewrite the rules without pulling the plug on the internet as we know it? Let’s review how the debate is shaping up across the world and what it means for government action.

A hot legislative summer

In May 2020, France passed the “Fighting hate on the Internet” law, built in the image of Germany’s much-maligned 2017 Network Enforcement Act (NetzDG), one of the most stringent intermediary liability laws on the European continent. The law required social network companies to almost instantly take down material deemed “obviously illegal,” at the risk of heavy fines and without judicial decision-making safeguards. Shortly after its passage, the French Constitutional Council struck it down, finding it an attack on freedom of expression, among many other concerns. Meanwhile, in June, Germany decided that NetzDG was not enough; it introduced and passed a reform in the Bundestag. The new law requires social media platforms not just to take down violent hate speech, but also to report it to the police.

Also in June 2020, one of Brazil’s legislative chambers passed a bill fighting fake news, the “Brazilian Law of Freedom, Liability, and Transparency on the Internet,” whose initial drafts also mirrored the original NetzDG text. The final version, not without controversy, tackled intermediary liability only by mandating transparency reports and disclosure of political content, and by ensuring due process and appeals for content moderation decisions.

Similarly, in the U.S., the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act has been hotly contested, not just over content moderation but also over its potential to break strong encryption. The bill’s initial draft was entirely different from the version that passed its congressional committee vote in July 2020. Originally, it changed platforms’ liability standard from “actual knowledge” of child sexual abuse or exploitation material to the mere existence of such material. The proposed bill would also have created a 19-member national commission, chaired by the attorney general, charged with creating a set of mandatory “best practices” for intermediaries to follow or else lose their liability protection. Ultimately, the version that passed the committee vote scrapped the change of standard and made the best practices optional, while adding a questionable carve-out of Section 230 for state laws against child sexual abuse material.

The legislative pushback

The build-up to these bills highlights some general trends. Germany’s bill suffered significant pushback, but it neither originated from nor went through a public fact-finding commission. France and Brazil, on the other hand, had set up committees to understand the problem of content moderation and the full suite of potential solutions. The French government backed down after its original draft bill was panned not just for its damage to freedom of speech and potential harms to disadvantaged groups, but also for its failure to fight hate, disinformation, and other unsavory online content. It seemingly settled into a longer, more thorough process, informed by a nuanced and well-researched executive branch commission report.

Similar to France, by the end of 2019 the Brazilian National Congress had created an ad hoc misinformation investigative committee. Unlike in France, the committee was unable even to hold hearings with representatives of social media platforms, let alone issue a report, before the pandemic hit. The pandemic shifted priorities for both countries. In France, it meant rushing the bill through under the cover of national security despite the nuanced perspective of the report. In Brazil, it meant no report at all, and the introduction of a bill that received a series of online public hearings and an entirely revised text after strong pushback.

While no external committee was even suggested, the trajectory of the EARN IT Act resembles that of Brazil’s fake news bill: a universally criticized initial draft is introduced, stakeholders rush in to explain its potential damage, and the version that passes the first vote is materially different and watered down while barely addressing the earlier criticisms.

Unlike the others, the eminently bureaucratic and consultative nature of the European Union lends itself to a long and overly thorough process as it attempts to reform its decades-old eCommerce Directive through the Digital Services Act. Incidentally, it is the only bill whose text is not available before the global consultations wrap up. However, the general trend is worrisome: all the legislation discussed so far started from the premise that something had to be done and that the NetzDG censorship model was the best way to do it. Left to their own devices, and unencumbered by open debate or impartial fact-finding, lawmakers would have largely followed this model: by December 2019, 13 countries had approved laws in the spirit, if not also the letter, of NetzDG. The most recent, Turkey’s, is billed as the strictest. As a harbinger of potential future global reforms, NetzDG itself is getting stricter.

Key takeaways

Vigilance across stakeholder groups has so far led to meaningful, if limited, success in changing free speech- and privacy-encroaching regulations across the world, which may be enough to send a strong message to the drafters of the EU’s Digital Services Act. While France and Germany have passed legislation, the fate of the Brazilian and U.S. bills remains uncertain. The EARN IT Act’s drafters, specifically Senator Lindsey Graham (R-SC), were hoping to pass the legislation in the Senate before the August recess. The Brazilian bill’s status is unclear as it awaits discussion and passage in the country’s other chamber, but with mounting national and international criticism, there may still be hope for positive change.

High-profile bills get attention and, with it, national and international pushback, but it is worrisome that the default template for intermediary liability legislation seems to be the draconian NetzDG or underdeveloped concepts like a duty of care. From what little is known of the Digital Services Act so far, it is the only bill that is not solving for a perceived immediate problem, such as disinformation, child sexual abuse, or hate speech, without regard to the potential aftermath.

Speaking more generally, the 2020 bills mark a change of mindset away from the focus on innovation and freedom of expression that catalyzed the original legislation now marked for reform. Besieged by disinformation, harassment, and threats of violence or deplatforming, users have demanded new legislation to protect not just themselves but also the platforms they paradoxically hold as both integral to and infringing on their fundamental rights. The “do something” ethos behind the reform bills is a direct answer to this phenomenon. However, replacing the myopic view of moderation as mostly inconsequential with the equally myopic view of forced moderation regardless of larger systemic implications will not make us any less blind.