Online content moderation lessons from outside the US


After President Donald Trump’s executive order and related tweets, Senator Josh Hawley (R-MO) is working on a bill that would reform Section 230, the intermediary liability regime for internet platforms. While the US has seen several failed attempts at reforming the law, other countries have created, passed, reformed, or invalidated their own legislation. This post explains the general trends in regulation around the world, highlighting two cases, Europe and Brazil, where reforms similar to those called for in the US are underway, before drawing conclusions.

Around the world

One of the oldest intermediary liability regimes outside the U.S. is India’s IT Act of 2000, which creates a safe harbor for intermediaries. Unlike Section 230, which provides immunity with limited exceptions, India’s safe harbor shields a platform from liability only if it meets certain conditions. These conditions were tightened in 2011, expanding both the required actions and the types of content platforms must take down once made aware of them (via users flagging them) in order to keep their safe harbor protection. A 2018 draft bill, yet to be voted on, would have pushed platforms to proactively monitor content and to remove illegal content within 24 hours (down from the 36 set in 2011) when flagged by a court order or government agency.

The 2017 Kenyan Guidelines similarly required platforms to “moderate and control” any undesirable content within 24 hours of notification, but they were struck down by Kenya’s High Court and criticized by UN Special Rapporteurs. An unratified 2018 bill in Honduras does not call for constant monitoring, but it backs its demands with significant fines or outright blocking, while also creating a non-judicial external body with an ambiguous mandate and no avenue for redress.

Other countries impose even more stringent rules and consequences for specific types of content. Passed in 2019 in response to the livestreamed Christchurch, NZ mosque attack, an amendment to Australia’s Criminal Code forces companies not just to proactively moderate content that promotes or streams acts of “abhorrent violence,” but also to report it, with large fines or even prison sentences for non-compliance. The strictest is Thailand’s Computer Crimes Act of 2007, amended in 2016, which in practice forces platforms to preemptively monitor and moderate all content, and to take down anything the military junta demands, or face large fines and prison time.

At the other end of the spectrum is Canada’s Copyright Modernization Act of 2011, which codifies liability immunity only in matters of copyright. Intermediaries are not tasked with taking down content, but with serving as the conduit for the exchange of notices and counter-notices.

Europe

In 2000, the European Union passed the E-commerce Directive which, like India, opted for a safe harbor approach: a platform is shielded if it is a “mere conduit,” or if it expeditiously removes illegal content once made aware of it. The Directive also expressly states that member countries should not require platforms to actively monitor the content they host. Two decades later, much as in the US, there are mounting calls for an update to the Directive, including from the European Commission itself. Unlike in the US, however, there has already been significant movement at the member-state level.

Unlike other countries preoccupied with the cases in which platforms may be shielded from liability, Germany’s 2017 Network Enforcement Act (NetzDG) and France’s 2020 “Fighting hate on the Internet” bill focus on how and when internet companies will be punished, via heavy fines. Created in an attempt to halt the dissemination of unlawful and harmful content online, these laws give platforms a brief (24 hours in Germany) or very brief (1 hour in France) window to take down allegedly unlawful content that individuals have flagged. While they stop short of requiring constant monitoring, such deadlines arguably push platforms toward more restrictive moderation practices.

The UK, no longer part of the European Union, released a 2019 White Paper on Online Harms that would go further than any EU regulation. Beyond the now-standard requirement that platforms have a mechanism for taking down unlawful content, it introduces an undefined “duty of care” that entails proactive and constant scanning of posts (still not permitted within the EU), all under the oversight of a new regulatory institution with ultimate authority, including the power to create and enforce best practices, issue fines, and even hand down prison sentences.

Brazil

Brazil is debating a bill that could eventually tighten the liability regime it has already codified for platforms. In 2014, the country passed the landmark Marco Civil da Internet (Bruna Santos was part of the bill’s drafting team), which sought to safeguard users’ rights and create duties for internet corporate actors. The law shields certain platforms from civil damages, except when they fail to comply with judicial orders to take down “offensive” content (a few specific content types do not require judicial orders).

The heavily contested 2018 election, which saw wide dissemination of disinformation and ended with Jair Bolsonaro’s victory, led progressive lawmakers to conclude that the Marco Civil was not enough to hold platforms accountable. A new bill, the Brazilian Law of Freedom, Liability and Transparency on the Internet, has been introduced; its original version reproduced the structure of NetzDG (and the 2020 French bill) but with an even wider pool of covered content. Subsequent versions moved away from the European approach, but veered toward constant monitoring, as well as mass identification and traceability of users on messaging apps like WhatsApp.

Conclusions

The international intermediary liability ecosystem is a chaotic one, as most laws address specific slices of content. Each issue, be it copyright, disinformation, hate speech, or sexually abusive material, has its own unique solution. As the example of Brazil shows, even a regime established to ensure freedom of expression can draw calls for change. Issues like disinformation and hate speech have taken center stage and stress-tested legislation’s boundaries. American reformers should also heed the fallout of stricter laws like Germany’s NetzDG, which has been criticized for encouraging overzealous takedowns and reducing the availability of lawful content online.

There are many questions that drafters of new intermediary liability laws should answer. What is clear is that the U.S. should steer clear of mandating generalized proactive monitoring of content, something that even the European Union has so far avoided and that high courts around the world have invalidated. Lawmakers can also take cues from the human rights perspective of the Manila Principles and the 2018 UN Special Rapporteur’s report, which argue for shielding intermediaries from liability, involving an independent judiciary, and focusing on transparency and due process. The more US-focused 2019 Principles for Policymakers (David Morar was a signatory) argues that any reform should not require the removal of constitutionally protected speech, while also not discouraging platforms from moderating content. If reform is on the horizon, the United States can once more set a benchmark for liability legislation, or, at a minimum, learn from the mistakes of other countries.