
What U.S. policymakers can learn from the U.K.’s Online Safety Bill

(Left to right) Advocates from Girlguiding UK, Caitlyn, Maddie, Fran, and Phoebe, unveil a 2m x 2m Girlguiding badge reading 'ONLINE HARM IS REAL HARM. END IT NOW' before meeting with MPs to lobby for amendments to the Online Safety Bill to explicitly include violence against young women and girls, Westminster, London, February 9, 2022. (PA via Reuters)

On March 17, the United Kingdom's Department for Digital, Culture, Media and Sport submitted its revised Online Safety Bill to Parliament. It is a sweeping proposal to throw a regulatory net around social media companies and search engines while still preserving their role as public platforms for robust discussion of issues of public importance.

Parliament can still modify the details of this proposal, but the general outlines are clear enough to provide guidance for policymakers struggling with similar regulatory challenges and opportunities in the United States. There’s a lot to be learned from this comprehensive proposal, including requirements for dealing with illegal material, special duties to protect children, exemptions from content moderation rules for privileged content, duties concerning fraudulent advertising, and identity verification rules. This Lawfare podcast and this summary from Demos analyst Ellen Judson outline the major elements of the bill. But I want to concentrate on two overarching policy concepts that are politically feasible here in the United States and would be likely to withstand constitutional scrutiny.

One policy approach that could be imitated is to empower an independent agency to implement a system of regulation designed for the special characteristics of social media companies and search engines. The U.K. bill properly treats social media companies as unique in the media landscape, designs its regulatory requirements to fit those special features, and assigns the implementation and enforcement role to the Office of Communications (Ofcom), the U.K.’s traditional media regulator.

A second lesson is that this regulatory structure should focus on the systems and processes social media companies use to order content on their platforms and to take action against content that violates their terms of service. The U.K. bill does this by mandating disclosure of content rules, due process protections, transparency reports, risk assessments, and mitigation measures for harmful content. This systems approach is a promising way to reduce noxious but legal online material while preserving free expression. Crucially, the bill demonstrates that giving government agencies the power to order social media companies to remove or downgrade harmful but legal material is not a necessary feature of a regulatory structure that aims to improve content moderation.

Despite this attractive overall approach, the bill is missing a few crucial features that seem essential in an adequate regulatory regime for social media companies. The most important omission is the lack of mandated access to social media data for independent researchers and auditors. A new social media regulatory regime will be flying blind without this crucial independent mechanism to assess the adequacy of the steps social media companies are taking to improve content moderation without harming free expression. The bill also fails to give civil society groups, academics, technical experts, and industry representatives a substantial role in creating and evolving the details of the regulatory program.

A Sector-Specific Regulatory Regime for Digital Companies

Despite lodging the new regulatory regime for social media and search engines in Ofcom, the Online Safety Bill does not mimic the regulatory structure in place for broadcasting or communications common carriers. This is a very good thing: social media and traditional broadcasting are very different lines of business and require different regulatory treatment.

Companies in the U.K. need a license from Ofcom to provide an over-the-air broadcasting service or a satellite or cable channel. This licensing requirement enables Ofcom to enforce its Broadcasting Code, which contains rules related to under-eighteens; harm and offense; crime, disorder, hatred, and abuse; religion; due impartiality and due accuracy; elections and referendums; fairness; privacy; and commercial references.

Ofcom can issue warnings or fines for failure to abide by the Broadcasting Code. It reviews individual programs for compliance. For instance, in May 2020, Ofcom ruled that certain broadcasts about the 2019 Hong Kong protests were not duly impartial and sanctioned the Chinese TV channel CGTN, the broadcaster involved. In another case, a community radio station was sanctioned for allowing a talk show guest to make false claims about coronavirus.

In extreme cases, the agency can revoke a broadcast license, forcing the company to cease operations in the U.K. In February 2021, Ofcom revoked CGTN’s broadcast license, and a year later, in March 2022, after the start of Russia’s invasion of Ukraine, it revoked the license of the Russian TV channel RT.

But social media companies are very different from traditional broadcasting. The bill covers a “user-to-user service,” and defines that line of business as “an internet service by means of which content that is generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service.”

In its guidance for video-sharing platforms under an earlier sectoral regulation that will be replaced by the Online Safety Bill, Ofcom provides some background on the differences. Online services in general are “a medium to entertain, educate, share information, [and] influence opinions.” They have “the potential to influence large numbers of people in a similar manner to traditional television and broadcast services…”

Nevertheless, a video-sharing platform does not “have general control over what videos are available on it, but does have general control over the manner in which videos are organized on it.” Online services organize content in a variety of ways, including determining how content appears to users (through promotion or recommendation); tagging, sorting, or categorizing content; and sequencing content to determine the order in which it appears to users. Organization can be generalized, personalized, or both. It can rely on explicit inputs from users, such as sharing or liking content, or on implicit signals, such as time spent engaged with content, but how these inputs affect the organization of content is up to the platform.
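To make the distinction concrete, here is a minimal, purely hypothetical sketch (not drawn from the bill or from any platform's actual system) of how a service might combine explicit and implicit user signals to organize a feed whose contents it does not itself choose:

```python
# Hypothetical sketch: the platform does not choose which posts exist,
# but it does choose how to order them for each user.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int                  # explicit signal: users liked the post
    shares: int                 # explicit signal: users shared the post
    avg_seconds_viewed: float   # implicit signal: time spent on the post

def ranking_score(post: Post,
                  like_weight: float = 1.0,
                  share_weight: float = 2.0,
                  dwell_weight: float = 0.5) -> float:
    """Combine explicit and implicit signals into a single score.
    The weights are illustrative; how these inputs affect ordering
    is entirely up to the platform."""
    return (like_weight * post.likes
            + share_weight * post.shares
            + dwell_weight * post.avg_seconds_viewed)

def organize_feed(posts: list[Post]) -> list[Post]:
    # Sequencing: determine the order in which content appears to the user.
    return sorted(posts, key=ranking_score, reverse=True)
```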

Online services allow users to share content subject only “to their terms of service or enforcement of their content policies.” But undertaking content moderation is not the same as “exercising control over what videos are available” on the service. The key determinant of exercising control is “the role the service plays in actively choosing the selection of videos that is available on the service.” This defining characteristic means that the online service does not have “editorial responsibility” in the same way that traditional broadcasters do.

The Online Safety Bill takes account of these differences in several ways. It does not require social media platforms or their users to obtain a license from Ofcom. Crucially, as discussed below, Ofcom is not authorized to review individual pieces of content the way it reviews broadcast programs. The U.K. bill contains no mandate for a social media company to suppress legal material that might cause harm or offense to other users. It provides for a code of practice that has not yet been developed, but that code does not mandate content deletion or demotion; it is oriented toward systems and processes for dealing with harmful content and toward disclosure of platform policies.

While it lacks licensing authority and the power to review content, Ofcom does have strong enforcement powers, including “business disruption” powers that would allow the agency to ask a court to prevent other companies from providing access or other services to an offending platform, or to require companies to block access to it. Ofcom can also impose fines of up to 10 percent of global turnover. But the regulatory structure Ofcom enforces is quite different from broadcasting regulation, focused on systems and processes rather than content review.

Duties Relating to Material That Is Legal but Harmful to Adults

A good way to see how the U.K. bill focuses on systems and processes is to examine how it regulates material that is legal but harmful to adults. The bill requires the Secretary of State to designate categories of content that are harmful to adults. Parliament has to approve this list and any updates to this list that the Secretary of State recommends. The bill defines harm as “physical or psychological” harm and says it can arise from the content itself or from the fact or manner of its distribution.

The fact sheet accompanying the bill notes that the government-determined category of priority content that is harmful to adults is “likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders.” It also mentions that the category is likely to include “misogynistic abuse” and “disinformation.”

The bill requires the larger platforms to assess various risks associated with this legal but harmful material. This includes the risk that adult users will encounter this harmful content, especially through algorithmic dissemination, and the risk of harm to adults once they have encountered this material. Risk assessments must also identify the functionalities of the service that present a high risk of facilitating or disseminating this harmful material.

The bill requires the larger platforms to state in their terms of service how they treat the legal but harmful material designated by the Secretary of State. The treatment has to be one of the following: taking down the content, restricting users’ access to the content, limiting the recommendation or promotion of the content, or recommending or promoting the content. The companies must say in a clear and accessible way in their terms of service which of those four alternatives they have chosen and then apply those actions in a consistent way. This “consistency” requirement means that Ofcom can hold platforms to account for abiding by their own terms of service but will not impose its own content rules.
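As a purely illustrative sketch (the bill prescribes no particular implementation, and the category names below are invented), the duty amounts to publishing a fixed mapping from each designated category of legal but harmful content to one of the four permitted treatments, and then applying that mapping uniformly:

```python
# Hypothetical illustration of the "consistency" duty: each designated
# category of legal-but-harmful content maps to exactly one disclosed
# treatment, and that treatment is applied the same way every time.
from enum import Enum

class Treatment(Enum):
    TAKE_DOWN = "take down the content"
    RESTRICT_ACCESS = "restrict users' access to the content"
    LIMIT_RECOMMENDATION = "limit the recommendation or promotion of the content"
    RECOMMEND = "recommend or promote the content"

# Placeholder category names; the real list would be designated by the
# Secretary of State and approved by Parliament.
PUBLISHED_POLICY: dict[str, Treatment] = {
    "content encouraging self-harm": Treatment.RESTRICT_ACCESS,
    "content encouraging eating disorders": Treatment.LIMIT_RECOMMENDATION,
    "harassment": Treatment.TAKE_DOWN,
}

def apply_policy(category: str) -> Treatment:
    # Consistency: the same category always receives the disclosed treatment.
    # The regulator checks adherence to this published mapping, not the
    # merits of any individual post.
    return PUBLISHED_POLICY[category]
```

On this reading, the regulator's job is to verify that the published mapping is actually followed, not to second-guess any particular moderation decision.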

The fact sheet accompanying the bill says twice that the bill would not require companies to remove legal content. The bill requires platforms to disclose whether they permit such content and, if they do, how they treat it. Adults will still be able to access and post legal but harmful material “if companies allow that on their services.” Disclosure is the key element that will enable adults to “make informed decisions about the online services they use.”

The press release accompanying the introduction of the bill makes the same point that the mandate in the bill is transparency about systems and processes, not content regulation. The larger social media companies, it says, “will have to set out clearly in terms of service how they will deal with such content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content they will have to say so.”

The bill puts an additional burden on platforms that choose to carry legal but harmful material. They must develop features of their service that enable users to “increase their control over harmful content,” and they must describe those features “in clear and accessible provisions in terms of service.” These user control features include “systems or processes” available to users that are designed to “reduce the likelihood” that the user will encounter harmful content or, alternatively, that “alert the user to the harmful nature of” the material. This amounts to a mandate to establish and maintain filtering or labeling systems, but it does not require platforms to take down, demote, or refrain from amplifying the harmful content.
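A minimal sketch, again hypothetical rather than anything specified in the bill, of what such a user control feature might look like: each user's settings either filter a category out of that user's feed or attach a warning label to it, and nothing is removed from the platform itself.

```python
# Hypothetical sketch of a user control feature: per-user settings either
# filter a category out of the feed ("reduce the likelihood" of encountering
# it) or attach a warning label ("alert the user to the harmful nature" of
# the material). Nothing is removed from the platform itself.
from dataclasses import dataclass, field

@dataclass
class UserSettings:
    filtered_categories: set = field(default_factory=set)
    labeled_categories: set = field(default_factory=set)

@dataclass
class Item:
    text: str
    categories: set
    warning: str = ""

def apply_user_controls(feed: list, settings: UserSettings) -> list:
    visible = []
    for item in feed:
        if item.categories & settings.filtered_categories:
            continue  # filtered out: reduce the likelihood of encountering it
        if item.categories & settings.labeled_categories:
            item.warning = "This content may be harmful."  # label: alert the user
        visible.append(item)
    return visible
```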

Limitations

Stanford law professor Nate Persily has explained that regulators cannot make good social media policy “if they do not know the character and scale of what’s going on,” and why a legal mandate is needed to overcome platform resistance to disclosing data to qualified researchers.

But the Online Safety Bill does not require access to data. The bill only requires Ofcom to prepare a report on researchers’ access to information from social media companies to inform their research. This is a huge disappointment. The European Commission’s Digital Services Act, which is poised for final action in the coming months, contains a mandate for access to social media data for vetted researchers. In the United States, a bipartisan Senate bill, the Platform Accountability and Transparency Act (PATA), would require social media companies to provide vetted, independent researchers with access to social media data.

Susan Ness, a former Federal Communications Commission commissioner now with the German Marshall Fund, and Chris Riley, formerly with the State Department and Mozilla and now with R Street, argue in a recent article that implementing content moderation regulation will require a “modular” approach that can interoperate across jurisdictions. Civil society groups, academics, technical experts, and the regulated industry have an indispensable role in creating such a system. The U.K. bill, however, provides no significant role for these groups in crafting the details of regulatory requirements, a defect that U.S. policymakers should fix in their approach.

A Way Forward for the United States

Some measures in the U.K. bill probably would not withstand First Amendment scrutiny in the United States, in particular the authority of the government to define harmful but legal material. U.S. policymakers would be well-advised to allow the companies themselves to determine what categories of harmful material they will seek to control on their systems. The government’s role should be to ensure that they have procedurally adequate systems and processes in place to address the content they view as harmful. It is also hard to see how Ofcom could enforce the mandate that social media companies apply their own terms of service “consistently” without second-guessing the content moderation decisions the companies make. U.S. policymakers might want to avoid that requirement or narrowly constrain it to prevent a regulator from making individualized judgments about moderating content or accounts.

In her summary of the U.K. bill, Demos analyst Ellen Judson says that it contains “fundamental misunderstandings” of a systems-based approach because it takes social media content into account in determining social media duties. She would prefer only content-neutral measures and refers approvingly to various measures to slow all content distribution and new account growth.

Such content-agnostic measures would be worthwhile elements in any systems-based regulation. The U.K. bill allows for them, and in the United States a bipartisan Senate bill, the Social Media NUDGE Act, would authorize the Federal Trade Commission to impose them.

But they cannot be the whole story. As I’ve pointed out, to deal with harmful content, someone (the government, the social media companies, or some third party) will have to make judgments about which content causes harm and take steps to address that content while not interfering with other content. Content-neutral measures would reduce the availability of all content, not just the harmful material.

The U.K. bill puts regulatory authority over social media in the hands of the traditional media regulator. Ofcom is required to coordinate with the U.K. competition authority and the U.K. privacy regulator when it develops policies but, in the end, it decides what the content moderation rules are.

The counterpart in the United States would be to treat the Federal Communications Commission as the social media regulator. But U.S. legislators have gone in a different direction. One proposed bill after another assigns regulatory responsibility for competition, privacy, or social media content moderation to the Federal Trade Commission, effectively making the FTC the nation’s digital regulator.

This choice of the FTC as the single digital regulator has the advantage of lodging the different digital policy agendas in the same agency, which provides coherence and avoids the need for the kind of cumbersome regulatory cooperation forum that the U.K. has developed.

But the FTC is not an ideal choice. It is a generalist law enforcement agency, not a sector-specific rulemaking agency. Ultimately, a digital platform commission should be responsible for ensuring that social media companies and other digital companies operate in the public interest. Proposals from former FCC Chairman Tom Wheeler and his colleagues and from Public Knowledge advocate Harold Feld call for the creation of such an agency. On May 12, Sen. Michael Bennet (D-Colo.) introduced a bill that would establish such a commission. It would be an attractive way forward. Even if, for practical reasons, Congress initially lodges digital responsibility in the FTC, over time it should transfer authority to a separate digital regulator with responsibilities for promoting competition, protecting privacy, and preserving free speech in digital industries.