
Transparency is essential for effective social media regulation


In response to the information disorder on social media platforms, governments around the world are imposing transparency requirements in the hopes that they will improve content moderation practices. In the U.S., for instance, a new California law would impose a range of disclosure and openness requirements. In Europe, the recently finalized Digital Services Act is filled almost entirely with transparency requirements.

But a surprising number of academics, observers, and policymakers say “meh.” One participant in a recent Georgetown University tech policy symposium called transparency a “timid” solution. Another argued that disclosure rules imply that whatever the companies do is fine as long as they are transparent about it. Haven’t policymakers learned anything from the failures of privacy notice and consent rules? Transparency initiatives, the critics say, just distract from the hard work of developing and implementing more effective methods of control.

These critiques of transparency make two points. The first is that transparency requirements written into law wouldn’t ensure much useful disclosure. The second is that substantially increased disclosures wouldn’t do much to mitigate the information disorder on social media.

In a series of reports and white papers (here, here, and here), I’ve argued that transparency is a necessary first step in creating a regulatory structure for social media companies. The answer to the first criticism, that a legal requirement for disclosure will not produce useful disclosures, is to insist on the importance of a regulator. Disclosure requirements alone are not self-enforcing. A dedicated regulatory agency must define and implement them through rulemaking and must have full enforcement powers, including the ability to levy fines and issue injunctions. That will help to ensure that disclosure actually happens as mandated by law.

The response to the second criticism, that transparency by itself won’t do much to stem the tide of disinformation and hate speech, is that without transparency, no other regulatory measures will be effective. Whatever else governments might need to do to rein in content moderation failures, they have to mandate openness first, and that requires specific rules governing these disclosures.

Regulatory Oversight of Transparency

Transparency is not a single policy tool; it has several dimensions. Roughly, they are disclosures to users, public reporting, and access to data for researchers.

Disclosure to users includes information about the content moderation standards a social media company has in place, its enforcement processes, explanations of takedowns and other content moderation actions, and descriptions of complaint procedures, among other things. Each of these disclosures gives users an opportunity to complain about problematic content and to receive due process when social media companies take action against them.

While general requirements for disclosures to users can be written in statute, a regulator would have to determine the specifics, which might differ according to the characteristics of a company’s line of business. The regulator would have to specify, for instance, at what level of detail content rules and enforcement processes need to be disclosed to users, when and how often, and the adequacy and timing of notification and complaint procedures. This should be done through public rulemaking with input from civil society, industry, and academia. Without these regulatory specifications and enforcement, disclosures to users might well be useless.

The second dimension of transparency is transparency reporting. This includes reports on and internal audits of platform content moderation activity, the risks created by social media company activities, the role of algorithms in distributing harmful speech, and assessments of what the companies do about hate speech, disinformation, material harmful to teens, and other problematic content. Transparency reporting could also include a company’s own assessment of whether its activities are politically discriminatory, a favorite topic of political conservatives. For instance, a 2021 internal Twitter assessment disconfirmed conservative complaints of bias, finding instead greater algorithmic amplification of tweets by conservative political leaders and media outlets.

Regulators are absolutely key to implementing transparency reporting duties. They have to specify what risks must be assessed and what statistics have to be provided to assess those risks. It cannot be left up to the companies to determine the content of these reports, and the details cannot be specified in legislation. How to measure the prevalence of harmful material on social media is not immediately obvious. Is it views of hate speech, for instance, as a percentage of all views of content? Or is it hateful posts as a percentage of all posts?
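To see how much the choice of metric matters, consider a minimal sketch in Python. The counts and variable names are purely hypothetical, not drawn from any platform’s actual reporting:

```python
# Purely illustrative numbers showing how two reasonable prevalence
# metrics for the same reporting period can tell very different stories.

total_views = 1_000_000_000        # all content views in the period
hate_speech_views = 500_000        # views of content later judged hateful

total_posts = 10_000_000           # all posts created in the period
hate_speech_posts = 200_000        # posts judged hateful

views_prevalence = hate_speech_views / total_views    # share of views
posts_prevalence = hate_speech_posts / total_posts    # share of posts

print(f"Prevalence by views: {views_prevalence:.2%}")   # 0.05%
print(f"Prevalence by posts: {posts_prevalence:.2%}")   # 2.00%
```

Depending on which definition a regulator adopts, the same hypothetical platform looks either nearly clean or substantially troubled, which is precisely why the choice cannot be left to the companies themselves.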

The metrics that must be contained in these reports have to be worked out by the regulator, in conjunction with the industry and with the researchers who will use this public information to assess platform success in content moderation. There might be a place here, although not a determinative one, for a social media self-regulatory group, similar to the Financial Industry Regulatory Authority in the broker-dealer industry, to define common reporting standards. Almost certainly the important and relevant statistics will change over time, so there must be regulatory procedures to review and update them.

The third element of transparency, access to data for researchers, is a very powerful tool, perhaps the most important one of all. It requires social media companies to provide qualified researchers with access to the internal company data they need to conduct independent evaluations. These outside evaluations would not be under company control and would assess company performance on content moderation and the prevalence of harmful material. Data transparency would also allow vetted researchers to validate internal company studies, such as Twitter’s own assessment of political bias. The digital regulator, in conjunction with research agencies such as the National Science Foundation or the National Institutes of Health, would have to vet the researchers and the research projects. Researchers and civil society groups working with an industry self-regulatory organization can help define access parameters, but ultimately those parameters will have to be approved by a government agency.

The regulator must at a minimum ensure that companies do not frustrate the goals of access transparency by providing untimely or inaccurate data. But even assuming companies comply with the rules in good faith, there are many controversies in this area that only a regulator can resolve.

Regulators will need to decide whether researchers will be able to access data on premises, through application programming interfaces, or a combination of both. Resolving this might require the regulator to balance compliance burden against data utility. Another issue is whether the data has to be provided to researchers in a form that technologically protects privacy, and if so, which form: differential privacy, k-anonymity, or some other technique? Alternatively, some research might demand access to identifiable data, in which case privacy can be protected only by contract, such that a researcher involved in a serious privacy violation would be banned from further access to social data and might face financial penalties as well.
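To make one of those options concrete, here is a minimal sketch, assuming the Laplace mechanism, of how a platform might add differential privacy noise to an aggregate statistic before releasing it to researchers. The epsilon value, the count being protected, and the function name are illustrative assumptions, not anything specified by a regulator or used by any particular platform:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    Smaller epsilon means stronger privacy but a noisier, less useful answer.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many users viewed a particular flagged post?
true_count = 12_437  # illustrative number, not real platform data
print(f"Noisy count released to researchers: {dp_count(true_count):.0f}")
```

Even this tiny example surfaces a trade-off a regulator would have to adjudicate: choosing epsilon is a policy judgment about how much research utility to sacrifice for privacy, not a purely technical matter.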

Proprietary algorithms need to be protected from public disclosure, and yet researchers need access to algorithmic input data, output data, and the algorithm itself to assess the systematic effects of social media policies and practices on information disorder. Confidentiality conditions have to be specified and enforced by the regulator.

Another controversial issue is whether access should be limited to researchers with an academic or research institute affiliation. What about civil society groups, advocacy groups, and journalists? This could be specified in advance by legislation, but only a regulator can determine whether a particular institutional affiliation qualifies. In addition to vetting researcher qualifications and affiliations, the regulatory agency must develop techniques to avoid or mitigate the risks of partisan hatchet jobs or industrial espionage disguised as research. Resolving each of these issues will require the combined efforts of all the stakeholders and, ultimately, a regulatory decision to reach closure.

No one should pretend implementation of these transparency measures will be easy, but without an authoritative regulator to get to closure on these questions and to enforce decisions once made, the entire transparency project risks being an empty exercise.

Why Transparency Is Essential

Seeing all that is involved in implementing transparency measures brings us back to a basic question: Why do this at all? What good does transparency do? In general, the answer is that it provides public accountability. Without transparency, due process for social media users is impossible. If users do not know what the rules are, how can they stay within the guardrails? If they have no mechanism to complain about problematic content or about mistaken moderation decisions, how can the content moderation process satisfy the demands of procedural fairness? Disclosure and due process are so intimately linked that user transparency measures often include due process requirements. For instance, legislation often does not merely require platforms to disclose whatever complaint process they happen to have; it also requires them to have a complaint process.

Mandated public reports on moderation practices are likely to improve content moderation, just as accounting reporting requirements encourage good corporate financial conduct. When companies know that they will have to issue these detailed assessments, they are more likely to take steps to mitigate problems to reduce the reputational risks that would come with a poor report card.

Researcher access to internal company data allows independent outside assessments of company conduct and can indicate areas for improvement that the companies themselves might have missed or tried to cover up in their public reports. As one of the participants in the Georgetown tech workshop mentioned, in some cases companies might be involved in illegal activity, such as targeting housing ads on the basis of prohibited characteristics. Outside independent audits can help to detect that.

Of course, more might be needed than just transparency mandates. When I described this transparency approach to one government official recently, he said, “We don’t tell companies that pollute our air and water to describe the inner workings of their pollution control equipment. We put in place permissible emission levels and require them to meet these requirements.”

It’s a good analogy. Content moderation efforts suffer from an externality problem similar to environmental pollution. Polluting companies export the costs of pollution rather than bearing them themselves. Social media companies likewise externalize the harms they exacerbate. The Rohingya suffered far more from the hate speech that Facebook (now Meta) allowed on its platform than Facebook did. It might seem sensible, then, to mandate content controls on social media companies just as we mandate pollution controls on polluters.

The difference, of course, is that content moderation has to do with speech. There are good policy reasons to be careful about mandating speech controls—they constantly risk throwing out the baby with the bathwater, the good speech with the bad. There are also constitutional standards that would constrain, in the U.S. at least, efforts to ban harmful material such as hate speech and disinformation.

Still, there might be additional regulatory measures short of speech controls that would be helpful. A key advantage of transparency is that it can illuminate the areas where additional measures are needed and which ones are most likely to be effective. Maybe we should require use of neutral disinformation prevention techniques, such as the pre-bunking measures Google investigated recently, or the inoculation techniques researchers have tested to counteract the persuasive effects of repetition. Social scientists like Leticia Bode at Georgetown regularly test other misinformation prevention techniques, adding to our understanding of what works and what doesn’t.

Or perhaps, as another participant at the Georgetown symposium mentioned, it might be effective to mandate the recall of certain amplification techniques if they proved harmful, the way the Food and Drug Administration can recall a drug with serious toxic side effects. Along those lines, Senator Amy Klobuchar has proposed legislation to have the National Science Foundation identify “content neutral” restrictions on social media algorithms that might reduce the spread of harmful material.

The problem is that we don’t know enough yet to mandate any of these measures. In the face of this ignorance, it is pointless and maybe dangerous to mandate specific remediation techniques. In a recent conversation, a European regulator observed that transparency is an essential part of a social media regulatory feedback loop. Transparency produces better information about which content moderation techniques work, which leads to better regulation, which in turn produces better information, and the cycle continues. The more policymakers know about the inner workings of social media companies through transparency disclosures, the more likely it is that they will be able to devise ways to improve content moderation.

We know what the problems of social media content moderation are: hate speech, disinformation, amplification of terrorist material, and material that harms minors. Without further information about what remedies are effective, though, policymakers are shooting in the dark in setting rules to control these problems. Transparency provides the input that can reveal further steps that policymakers can take to improve social media content moderation.


Google and Meta are general unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.