A dispute resolution program for social media companies

Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

In a speech following his 2020 Democratic senatorial primary victory, U.S. Sen. Ed Markey said triumphantly, “The era of incrementalism is over.” This call to “think big and take bold action” is part of a growing wave of policy innovation, especially with regard to regulating digital platforms that are the source of increasing public concern and scrutiny. New ideas for digital governance are most urgent in addressing the information disorder within the social media industry, including the hate speech and disinformation present on the largest platforms.

Regulating this information disorder, however, faces the special burden of preventing a partisan government from tilting public discussion to favor its own point of view. For this reason, proposals for government agencies to direct social media companies to remove harmful material are non-starters. An appealing alternative starts with mandated transparency and accountability rules that require companies to disclose their content standards and to follow through on them. As a baseline, this informs the public and forces social media companies to act consistently with their own rules.

But this idea faces a seemingly insurmountable enforcement problem. How could a government agency enforce consistency without analyzing content decisions—and thereby increasing the very real and present danger of partisan abuse of regulatory authority?

A non-governmental regulator supervised by a government agency might provide a way out of this enforcement dilemma. The Financial Industry Regulatory Authority (FINRA), the self-regulatory organization established to oversee broker-dealers, provides one model. Lessons can also be learned from the National Association of Broadcasters, which developed and enforced broadcast codes from the 1930s to the 1980s. Below, I show how such an enterprise could operate and why it would be beneficial.

An Attractive Ideal

Social media companies allow their users to both send and receive information. This capacity distinguishes these companies from broadcasters, cable channels, newspapers, and other media outlets, which are publishers of their own material. With this added power, however, come additional responsibilities, particularly the responsibility to moderate the content posted by social media users.

When these companies are large enough, they become systemically important and able to dominate the direction and content of public discussion. They are effectively unavoidable for people seeking to fully engage in their respective communities. Large social media companies face the additional responsibility of creating access to a broad range of ideas.

One minimum requirement to satisfy these obligations is that the larger, systemically important companies must operate their content moderation programs in accordance with publicly disclosed standards. This consistency requirement creates a right to access and an assurance that users will be afforded full access to the platform’s communication services if they stay within the company’s boundaries of acceptable discourse. The diversity of ideas on the platform is advanced through this consistency duty and the access right it implies.

Transparency and Accountability Rules

To implement and guarantee due process for social media users, Congress should pass legislation establishing transparency and accountability rules for the social media industry and authorizing a federal regulatory agency to implement and enforce these rules. The legislation should establish, among other things, that the agency’s guiding principle is to foster the right of users, set out in Article 19 of the Universal Declaration of Human Rights, “to seek, receive and impart information and ideas.” It would authorize the agency to define when a social media company is important enough to warrant coverage under the transparency and accountability rules.

A detailed description of transparency rules that Congress might mandate is available in my working paper for the Transatlantic Working Group on Content Moderation and Free Expression. A key element is a requirement for social media companies to establish and maintain content moderation programs consisting of publicly disclosed content rules, enforcement techniques, explanations of moderation decisions, and internal dispute resolution procedures. This requirement should allow sufficient flexibility for companies to update their standards to fit evolving challenges and community norms.

Without more legislation, however, this transparency framework leaves social media companies in complete control of their programs, with unfettered discretion to enforce their rules—or not—in an arbitrary and capricious fashion. To avoid this, Congress should also require that companies maintain consistency with their publicly announced standards and face some form of accountability for failure to do so.

The Enforcement Dilemma

It is hard to see how a government agency can enforce this requirement without creating an opportunity to manipulate social media discourse to favor a partisan perspective. Suppose a social media company announces a policy against hate speech. This creates two different enforcement issues. The first arises when material is taken down for violating this rule. How can users gain redress if they think the material is not hate speech, despite the company’s judgment? The second arises when material is not taken down even though it seems to violate the hate speech ban. How can users who complain about the material’s continuing presence on the platform obtain redress?

In either case, a government agency as the adjudicator of this dispute would have to compare the posted material to the company’s hate speech standard and decide. This need for agency judgment creates the danger of partisan abuse. Agency leaders in a conservative administration might be tempted to treat harsh speech from progressives as hate speech and seek its removal. Conversely, agency heads in a progressive administration might condemn right-wing speech as falling under the hate speech prohibition.

The danger of partisan administration of government responsibilities is always present and cannot be eliminated entirely. Still, it would be courting disaster to provide a government agency with the power to control political speech on social media platforms even in this indirect fashion. This means that government regulators cannot enforce even a content-neutral requirement for consistency with publicly disclosed standards.

The best defense against this very real and present danger of partisan abuse of regulatory authority would be to arrange a regulatory structure that insulates any independent review of social media content decisions from regulatory action by a government agency.

External Dispute Resolution

To provide necessary accountability without government review of content decisions, the legislation should mandate that—in addition to their internal dispute resolution procedures—the companies must establish and maintain external dispute resolution procedures. This would allow users to appeal content moderation decisions to an independent body empowered to second-guess company decisions. An independent reviewing body is a vital element in accountability. But if companies can pick their own review boards, the way Facebook has done in setting up its own review board, this independence is not genuine. Over the long term, the company could effectively control the outcomes of these supposedly independent reviews.


To avoid this, Congress should establish a non-governmental industry-public authority under the supervision of a federal regulatory commission to provide affordable and efficient arbitration and mediation services to social media companies to satisfy this obligation for independent review. The social media authority would have a board consisting of an equal number of representatives of the social media industry and representatives of the public. The legislation would require all covered social media companies to belong to this accountability council and to fund it through membership fees.

This non-governmental accountability council would have the necessary insulation from government to provide user redress through arbitration and mediation services, without the specter of government partisanship. The federal regulatory agency should be authorized to supervise the operations of the accountability authority, but only to ensure procedural regularity and to guard against inefficiency, neglect, or corruption. The agency itself should be explicitly forbidden to take any action to second-guess the judgment of social media companies or the accountability authority. Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.) take this approach in their bipartisan legislation, the Platform Accountability and Consumer Transparency (PACT) Act, which establishes a transparency regime for social media companies supervised by the Federal Trade Commission (FTC). The bill does not allow the FTC “to review any action or decision by a provider of an interactive computer service related to the application of the acceptable use policy of the provider.”

It is important to be clear that the social media accountability authority does not promulgate content standards. It does not mandate that social media companies ban harmful speech such as hate speech and disinformation. Content standards are set by the social media companies themselves. Transparency rules only require that social media companies publicly articulate what content they view as unacceptable, including any content rules they might adopt on hate speech or disinformation campaigns. The authority does not enforce its own interpretation of internationally accepted rights to free expression. It does not apply or interpret the law. It does not impose its own view of what is acceptable speech. It is there solely to enforce a consistency requirement.

Many details of such a dispute resolution system would need to be resolved before this proposal is ready for legislative consideration. The dispute resolution system established and operated by the Financial Industry Regulatory Authority (FINRA) for regulated broker-dealers might provide a model that would help to flesh out these crucial details.

The FINRA Model of a Non-Governmental Regulator Overseen by a Federal Agency

FINRA is a not-for-profit self-regulatory organization authorized by federal law to regulate the broker-dealer industry. The Securities and Exchange Commission (SEC) supervises FINRA’s operations and a board of governors, consisting of an equal number of public and industry representatives, governs it.

In 2019, its membership consisted of 624,674 registered brokers or broker-dealers, 3,517 securities firms, and the major exchanges, including NYSE, AMEX, and NASDAQ. That same year, FINRA conducted 6,740 exams and reviews, levied $39.5 million in fines, ordered $27.9 million in restitution to harmed investors, expelled six firms, suspended 415 brokers, and barred 348 of them from the industry. The organization had operating revenues of $899 million and maintained assets of $2.3 billion. It employs approximately 3,400 people, and its CEO was paid over $3 million in total compensation.

FINRA administers an elaborate regulatory program for the industry, consisting of rulemaking, guidance, supervision, monitoring, and examination to ensure compliance and enforcement. Member firms must adopt policies and procedures reasonably designed to ensure compliance with these rules. FINRA functions like a trade association, except that broker-dealer firms must be members and individual registered representatives must register with FINRA. Sanctions against firms and individuals include fines, suspensions, or barring from the securities industry. The entire regulatory structure is subject to oversight and review by the SEC.

FINRA runs an arbitration and mediation service for its members—securities firms and their registered employees—and investors. Most broker-dealer contracts with their customers require arbitration to settle disputes, and FINRA’s Code of Arbitration Procedure allows a customer to compel a broker-dealer or person associated with a broker-dealer to arbitrate a dispute at the customer’s request.

FINRA maintains a roster of 7,900 arbitrators, some of whom are or have been affiliated with the securities industry; others, called public arbitrators, are not. The parties select their arbitrators—either a sole arbitrator or a three-person panel—from a short list of arbitrators generated at random from the master roster. Investors can demand public arbitrators chosen from this short list, but the parties have to agree on who will resolve their disputes, which are almost always monetary and usually end with a settlement.

Arbitrators must complete a training program before FINRA will add them to the roster. They are not employees of FINRA, but rather independent contractors paid by FINRA. The party initiating a claim must pay an initial filing fee, which can be waived upon a showing of financial hardship. The arbitration award determines how the parties share these fees and other administrative costs. The arbitration award is final and binding on the parties. Because arbitrations are non-judicial proceedings, awards create no precedents and cannot be appealed to FINRA, the SEC, or the courts.

Lessons Learned from FINRA

The major lesson for social media regulation from the FINRA example is that establishing a self-regulatory structure would be neither simple nor cheap, but it can be done. In addition, FINRA’s dispute resolution system provides guidance for designing a similar process without the danger of partisan abuse by a government agency. One outline of such a structure is the following.

The social media accountability authority would offer dispute resolution services to social media users, who would be able to bring issues to the authority for resolution. Users would be required to pay a small initial filing fee, refundable if the award is favorable to their complaint. Administrative fees and arbitration fees would be paid by the social media authority. The company and the users involved in a dispute would have to agree on a three-person arbitration panel chosen from a list selected by the council.


Arbitrators could not be employed by the social media industry or provide professional services to the industry. However, they could have held such positions in the past and become eligible to serve as arbitrators after a certain period of time. Arbitration panels should have no more than one person formerly associated with the industry and would be required to include at least one member each from civil society and academia. The social media authority would maintain a list of arbitrators who are experts in various fields associated with content moderation, particularly hate speech and disinformation.

Social media arbitrators would function as independent contractors and not employees of the social media authority. They would also be paid an honorarium for their service on a panel. Each of the covered social media companies would provide free training for arbitrators in its content moderation program standards, enforcement techniques, and internal dispute resolution system. In addition, the social media authority would operate its own training program to make sure that arbitrators understood their function of applying the social media company’s standards, not their own, in resolving disputes. The social media council would be responsible for ensuring that the pool of arbitrators reflects the community, including through diversity of age, gender, and race. The arbitration hearings would be conducted entirely online.
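To make these selection rules concrete, the sketch below models one way a panel-selection step might work: a short list drawn at random from a master roster, followed by a check that a three-person panel includes no more than one former industry arbitrator and at least one member each from civil society and academia. This is a minimal illustration only; the roster entries, category labels, and selection logic are my own assumptions, not features of any proposed statute or of FINRA’s actual procedures.

```python
import random

# Hypothetical master roster of (name, background) pairs, for illustration only.
ARBITRATOR_ROSTER = [
    ("Arbitrator A", "former_industry"),
    ("Arbitrator B", "civil_society"),
    ("Arbitrator C", "academia"),
    ("Arbitrator D", "civil_society"),
    ("Arbitrator E", "academia"),
    ("Arbitrator F", "former_industry"),
]

def panel_is_valid(panel):
    """Check the composition rules described above for a three-person panel."""
    backgrounds = [background for _, background in panel]
    return (
        len(panel) == 3
        and backgrounds.count("former_industry") <= 1  # at most one former industry member
        and "civil_society" in backgrounds             # at least one civil society member
        and "academia" in backgrounds                  # at least one academic member
    )

def draw_short_list(roster, size=6):
    """Draw a random short list from the master roster, FINRA-style."""
    return random.sample(roster, min(size, len(roster)))

def select_panel(roster, attempts=1000):
    """Propose random three-person panels from the short list until one satisfies the rules."""
    short_list = draw_short_list(roster)
    for _ in range(attempts):
        candidate = random.sample(short_list, 3)
        if panel_is_valid(candidate):
            return candidate
    raise ValueError("No valid panel could be drawn from this short list.")

if __name__ == "__main__":
    for name, background in select_panel(ARBITRATOR_ROSTER):
        print(f"{name} ({background})")
```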

The decision of the panel would be final and binding. It would not be appealable to the social media authority, regulatory agency, or courts. The arbitration panel would be empowered to order the social media company to take down or to reinstate content, to end demotion and delay measures or to impose them, to order accounts suspended or restored, and so on. In short, it would act as an appeals court authorized to reverse any of the content moderation enforcement actions taken by the social media company itself. It would not be authorized to impose fines or penalties on the social media company or the plaintiff.

A possible criticism of the dispute resolution approach is that it does not provide the sought-for consistency. Arbitration is aimed at resolving immediate conflicts—not establishing precedents that help users understand speech standards. A series of idiosyncratic decisions from different arbitration panels could just as easily engender confusion as reassurance.

One response is that social media companies retain the flexibility to learn from dispute resolution decisions. If one or more dispute resolution decisions seem to suggest that a responsible outside interpretation of a social media standard is not what the company wanted or intended, then the company would be free to clarify its own standards. Ultimately, the company is the keeper of its own acceptable speech rules. Dispute resolution mechanisms can help companies adjust those standards to avoid misinterpretations and to meet changing conditions.

Much more would need to be done to build out the details of this dispute resolution mechanism. However, the model of FINRA and these suggestions toward adapting that model to the case of social media oversight show that such a program can be operated successfully on a large scale and without direct government control, and can be funded through industry fees with no burden on the taxpayers.

A Larger Regulatory Role for the Social Media Authority

This paper seeks a solution to the difficulties of enforcing consistency with publicly disclosed content standards. It focuses on setting up a social media dispute resolution system run by an accountability authority insulated from government control. But the social media accountability authority could be given a larger regulatory role, similar to FINRA’s functions, of supervising the industry transparency and accountability program more broadly.


Several proposals for increasing competition and protecting consumers in digital markets recommend an industry code of conduct developed by a self-regulatory organization with supervision by a government digital platform agency, including the report by Jason Furman on digital markets for the U.K. government and a recent report from the Shorenstein Center by Tom Wheeler, Phil Verveer, and Gene Kimmelman.

The Shorenstein Center report calls for a new regulatory paradigm to protect competition and consumers in digital markets. It proposes an innovative institutional structure consisting of a new three-person federal regulatory commission, the Digital Protection Agency, and a Code Council with equal membership from the industry and the public. The division of responsibilities—the Code Council drafts “enforceable behavioral rules for the affected companies” and the DPA approves and enforces them—is meant to provide agility for regulation to keep pace with the technological and business developments in a rapidly evolving industry.

The Shorenstein Center report aims to overcome the limitations of current antitrust law, noting “it would be a serious mistake to rely on antitrust enforcement as the sole mechanism for securing our society’s interest in the workings of the ever more critical digital platforms.” But its dual regulatory structure is focused on remedies aimed at protecting competition, including “non-discrimination, access to data sets, interoperation, and similar requirements designed to lower barriers to competition with the major platforms.” The Furman report’s similar recommendation for a digital platform unit with authority to develop a code of conduct for digital companies with strategic market status also aims to remedy competitive difficulties in these markets.

This dual regulatory approach might be worth pursuing for social media regulation. Outsourcing transparency and accountability rulemaking, supervision, and enforcement to a self-regulatory organization might allow for more flexibility. Moreover, it could provide still more insulation from the dangers of a rogue regulatory agency using its implementation and enforcement authority in a partisan way to tilt content moderation decisions. As the FINRA model suggests, however, such an extension of the social media authority’s functions to a regulatory and supervisory role would be neither cheap nor simple.

The Broadcasting Content Codes

To clearly understand the role of the social media accountability authority and its dispute resolution program, it is helpful to contrast it with the role played by the National Association of Broadcasters (NAB) in developing and administering the radio and television broadcast codes.

The NAB created its first code of practice for radio during the era of the National Recovery Administration’s (NRA) encouragement of industry codes of fair competition. It survived the demise of the NRA in the Supreme Court’s 1935 Schechter decision, and the NAB extended it to television in the early 1950s. A division of the NAB applied the code, disciplining companies that violated it through fines and sanctions, including expulsion from the organization. No broadcaster had to belong to the trade association, but almost all did. Being a member in good standing allowed a broadcaster to display the NAB seal of good practice during broadcasts to assure the public that its programming abided by the code. The NAB administered its code free of interference from the industry regulator, the Federal Communications Commission (FCC), or any other government agency.

The NAB code itself was publicly available and contained extensive and explicit content rules. It barred profanity. Family life and law enforcement officials could not be portrayed in a negative light. It forbade irreverence for God and religion. Illicit sex, drunkenness, and addiction could not be depicted positively. On-air programs were not allowed to present cruelty, describe or show detailed techniques of crime, or use horror for its own sake.

The code was withdrawn in the 1980s because of antitrust and First Amendment issues. In its place, the broadcast television networks established and maintained their own individual codes enforced by standards and practices divisions that reviewed program and advertising content.

The particular defects of the NAB code that led to its demise need not infect the idea of a transparency and accountability code for social media. The antitrust issues had to do with limiting the number of commercial ads per hour, which the Department of Justice viewed as an attempt to restrict the supply of ads so as to increase their price. The First Amendment issue had to do with a particular rule in the NAB code relating to the family viewing hour, which the trade association adopted after an FCC chairman’s attempt to regulate in this area created the appearance of state action.

Lessons from the NAB Code

The major lesson of the NAB codes for social media companies is to avoid having the social media accountability authority write and enforce substantive content standards. One of the reasons to avoid empowering a regulatory agency to create and operate a substantive content code is the fear of compelled cultural and political uniformity. This danger is present in the private sector as well.


Even without a government mandate to join the NAB, the broadcaster code—like its companion for the motion picture industry, the Hays Code—exercised a strong influence toward cultural uniformity. Ideas and perspectives that would be of immense public value were removed from public discourse for the sole reason that they might offend some particular segment of the broadcasters’ mass audience. One of the major successes of the social media industry in promoting freedom has been opening public discourse to a wider range of ideas and experiences. Any attempt to impose a uniform perspective on this extraordinary diversity of ideas would be a step backward.

The social media authority’s dispute resolution panels will have to make content judgments and second-guess the decisions the social media companies have made. But they should not be allowed to impose the social media authority’s own view of acceptable speech.

The touchstone for these dispute resolution panels is not a set of standards that the authority has developed, nor standards derived from local or international law. The basis for the panels’ judgment is consistency with the social media company’s own content standards. Their function, in the spirit of Rousseau and Kant, is to require the social media companies to obey the laws they have made for themselves.

The transparency and accountability rules apply only to the few social media companies with the scope and size to become systemically important platforms for the nation’s discourse. The hundreds of smaller social media companies will have the freedom to set whatever standards they choose, or none at all, and unfettered discretion to enforce them as they see fit. In addition, the covered social media companies will also have the freedom to offer different content standards to their users. Thus, this system advances the goal that has been the touchstone of communications policy in the U.S. for generations, the Associated Press standard of “diverse and antagonistic sources of information.”

The First Amendment Challenge

Any mandated external review of social media content decisions creates two key First Amendment issues. If the review reverses a company’s decision to remove material, it forces the company to say things it does not want to say. If the review reverses a company’s decision to leave material on its system, it prevents the company from saying what it wants to say. The result is either government-compelled speech or government censorship, and either would open the law requiring external review to a First Amendment challenge.

Under First Amendment law, as currently understood and practiced, this challenge might be fatal. Courts seem to privilege the right of companies to speak or keep silent over the rights of users to be informed or protected. A social media accountability authority with the power to reverse the content moderation decisions of social media companies, to require them to take down or leave up material against their own judgment, would reverse that priority.

What could justify this reversal, making user rights paramount and relegating social media speech rights to a secondary role? One line of thinking starts with the recognition that larger social media companies hold a central position in the political, economic, and social life of this country and they are unavoidable for people who want to engage in their community. When users have no or restricted alternatives for communication services that are essential to modern life, their rights should be treated as paramount.

The defense against a First Amendment challenge to the specific program outlined in this report might also note that it accommodates the speech rights of social media companies, as well as the broader public interest in diversity of information and ideas, by providing these companies with discretion to set their own acceptable use standards. But once users conform to those freely chosen standards, they should not be forced to surrender access to the platform’s services. No one imposes external standards on the companies, but they should be required to live up to their announced standards. Without external review, there is no check on their discretion to enforce their own standards.


Perhaps the key element in this defense is that the mandate for external review would apply only to the social media companies with a central and dominant economic position. There are hundreds of social media companies, but only a few have the scale to play a systemically important role in the nation’s life. The accountability requirements would only apply to these companies.

The Supreme Court’s decision in Turner Broadcasting v. FCC opened the door to such a defense by allowing access requirements when companies play a crucial bottleneck role. The court ruled that the “increasing economic power in the cable industry” to cut off a vital avenue for broadcasters to reach their audience justified a carriage obligation. The Supreme Court’s Red Lion decision also endorses this basis for a First Amendment defense. That decision upheld broadcast access obligations and found that “it is the right of the viewing and listening public, and not the right of the broadcasters, which is paramount.” The broadcasters’ role as gatekeepers and the resulting lack of viewer and listener alternatives were the real basis for this decision. The decision does not stand for the idea, as it is often interpreted, that only spectrum scarcity and no other source of economic power can justify a fairness obligation. Rather, it points to spectrum scarcity as creating a central social and economic position for broadcasters and depriving viewers and listeners of broadly available alternatives. It is that central position and lack of alternatives that form the basis for the access duty.

Whether the courts would affirm the proposed external review system for the larger social media companies on the basis of their systemic role and restricted user choice is an open question. Given current jurisprudence, the composition of the courts, and their inclination to favor business rights, it might be a long shot. But the precedents are there for courts to do so if they want to and are persuaded that such a regulatory structure would be the least intrusive way to accomplish a compelling government interest.

The Way Forward

Today’s social media governance crisis underscores an urgent need to act quickly and address immediate problems. But policymakers aiming to form the right institutions should plan for the long term—not only for today’s crisis. They must put in place institutions that can deal with a large and evolving industry that will in many respects look different from what it looks like today. Mark Zuckerberg was almost certainly right when he told the House Antitrust Subcommittee in July of this year: “Things change fast in tech. … The nature of our industry is that someday a product will replace Facebook.” Policymakers cannot build a regulatory structure for the social media business based on our current concerns with Facebook.


As policymakers seek a regulatory net to contain information disorder on social media platforms without government censorship, a promising way forward is a transparency and accountability regime enforced by a federal regulatory commission and a dispute resolution system administered by an industry self-regulatory organization aiming to ensure consistency with a company’s publicly disclosed content standards. That’s a mouthful, but this intricate regulatory structure is needed to allow effective social control over increasingly powerful social media platforms without exposing the country to the very real dangers of partisan censorship by government officials.

Some will look at this proposal with skepticism, seeing it as a roadmap to impose partisan government control over social media discourse or, alternatively, as an industry power grab to legitimize arbitrary company control over public discourse. The dangers of government partisanship or regulatory capture by the industry are not fanciful, and the risks of abuse can only be mitigated, not eliminated. But the greater danger is to do nothing out of fear of making things worse.

The idea that we can live a life free of private-sector bottlenecks and gatekeepers without creating risks of government overreach is a dangerous illusion. Government power is required to check private power. Our fear of the abuse of government power becomes dangerous precisely when it leads us to do nothing to rein in the large and powerful business organizations that actually exercise dominion over us.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Facebook provides general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.