Commentary

Internet referral programs are in urgent need of reform

July 13, 2023


  • A recent court injunction barring government agencies from collaborating with social media companies on content moderation raises concerns about the state of internet referral programs.
  • These internet referral programs, while valuable, lack an adequate system of procedural protections.
  • Article V of the European Union’s draft Regulation on Terrorist Content provides a guide for a regulated and transparent system of internet referral programs.

On July 4, 2023, the U.S. District Court for the Western District of Louisiana barred certain government agencies from working with social media companies for “the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.” 

The district court’s preliminary injunction was not a final ruling on the merits. In fact, the Biden-Harris administration might yet prevail in its request to the U.S. Court of Appeals for the Fifth Circuit for a stay pending further proceedings, filed after the trial judge rejected its earlier request for a stay. Still, this decision is a substantial victory for the conservative groups that have complained that these government programs are improperly targeted at conservative thought.

The ruling has apparently already led the State Department to cancel its regular meeting with Meta. Civil rights groups and academics worry that the order will undo ongoing efforts to preserve election integrity through disinformation monitoring research. The Department of Justice made this point in its appeal for a stay. And indeed, the injunction seems to threaten the myriad government programs—including those in the Department of Homeland Security, the Department of Justice, the Department of Health and Human Services, and the State Department—that seek to work with social media platforms to fulfill their missions, including assuring election integrity and safety. These operations function as internet referral programs, as they are called in many other countries where they exist, since they ask the companies to assess whether certain material on their systems is harmful or illegal.

Without a doubt, the ruling raises significant free speech issues for these internet referral programs. They can easily turn from conveying valuable information to social media companies into vehicles for coercion. Government agencies may not constitutionally order social media companies to delete or demote legal content. Nor can they do this indirectly by means of suggestions or referrals backed by threats to social media business interests. Some of the examples described in the court’s ruling appear to verge on intimidation, as when a government official’s call for certain content to be removed was accompanied by comments about antitrust action or initiatives to reform Section 230, the law that grants social media companies legal immunity for material posted by their users.

The case is likely to go up to higher courts, perhaps even to the Supreme Court. Still, the administration and Congress should not wait for years while the courts sort all this out. The best next step might be for the Biden-Harris administration, Congress, and social media companies themselves to work together to construct a regulatory regime for these cooperative referral programs. These rules should clearly forbid agencies from engaging in coercion or significant encouragement that leaves social media companies no choice when faced with an agency’s referral of material. The new rules should also forbid partisanship in either the content or the operation of referral programs.

An essential additional requirement is transparency. Government agencies should be required to disclose what content they referred to social media companies and why. Social media companies should disclose what they did in response to this contact and why. Researchers should have full access to these reports and underlying data so they can undertake independent assessments of these activities. In this way, the public, policymakers, and reviewing courts will have a better understanding of whether these programs avoid coercion and partisanship.

The district court’s decision should serve as a wake-up call to the executive branch, Congress, and social media companies that these programs, valuable as they are, have operated too long without an adequate system of procedural protections. Policymakers should act quickly to make sure these programs both protect the public from online information threats and prevent government abuse of social media companies and online users.

The court’s injunction

Supreme Court cases, including Bantam Books v. Sullivan and Blum v. Yaretsky, allow extensive contact between government agencies and private parties. But they draw the line at coercion or such significant encouragement as to amount to state action. The district court’s memorandum ruling, though not the injunction itself, contains a substantial discussion, based on this traditional standard, of how the operation of various government programs might have crossed the line to become impermissibly coercive.

But the injunction itself is problematic. It prohibits agencies from “urging, encouraging, pressuring, or inducing” social media companies to downgrade content. But it appears to allow actions of “informing,” “contacting and/or notifying,” and “communicating with” social media companies on a range of topics including “postings involving criminal activity” and “national security threats.” It also allows “permissible public government speech promoting government policies or views on matters of public concern.”

This vague language and the ad hoc list of exceptions should not be the last word on how to draw the vital distinction between permitted and impermissible government contact with social media companies. As Jameel Jaffer, the executive director of the Knight First Amendment Institute at Columbia University, says, it “doesn’t really offer any principled way of separating legitimate government speech from illegitimate government coercion.” As this case goes up for review, we can expect higher courts to revise or suspend the injunction and provide more nuanced parameters for government agencies seeking to operate internet referral programs within constitutional limits.

Some guidance to the courts might become available through new legal scholarship generated by a one-day conference on “jawboning” at the Knight First Amendment Institute this October. The workshop will assess informal government efforts to change social media content moderation policies and decisions. Much good legal work has already been done by Stanford University’s Daphne Keller, University of Chicago Law School professor Genevieve Lakier, and University of Arizona law professor Derek E. Bambauer. This workshop can only bring forth more high-quality legal scholarship to inform future policymaking and regulation.

The regulatory alternative

When the courts take up an issue, this typically freezes any further reform efforts until the courts have issued a final ruling. It would be unfortunate, however, if the executive branch, Congress, and the social media companies themselves were to put reform of internet referral programs on hold for years while the issues are sorted out in court. Reform is urgently needed now, especially with a strongly contested election coming up and legitimate worries about campaign integrity. Furthermore, developing and implementing a protective regulatory program for internet referral programs is beyond the capacity of the courts. The role of the courts would be to evaluate the protectiveness of the system of rules as a whole and to adjudicate complaints that those rules have not been followed in particular cases. It is up to the executive branch, Congress, and social media companies to design and implement the system of regulation.

What should such a regulatory program look like? Some preliminary guidance can be found in the first draft of the European Union’s Regulation on Terrorist Content.

Article V of the draft would have required platforms to assess on a priority basis referrals from competent national authorities concerning terrorist material on their platforms. These referrals had to be sent by electronic means and had to contain “sufficiently detailed information, including the reasons why the content is considered terrorist content.” Article V clearly left the decision to delete the material in the hands of the social media companies, and it contemplated no consequences for a company that decided not to delete referred content. It further required the company to respond to the referring agency describing “the outcome of the assessment and the timing of any action taken as a result of the referral.”
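To make the shape of this exchange concrete, here is a minimal sketch of the referral and response records that Article V contemplated, written as Python data classes. The draft regulation prescribed no particular data format, so every field and name below is a hypothetical illustration, not anything specified in the EU text.

```python
# A purely illustrative sketch of the Article V referral/response exchange.
# The draft prescribed no data format; all names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Referral:
    """A referral sent electronically by a competent national authority."""
    referring_authority: str   # the national agency making the referral
    content_url: str           # the material on the platform being referred
    details: str               # the "sufficiently detailed information"
    reasons: str               # why the content is considered terrorist content


@dataclass
class CompanyResponse:
    """The platform's reply; the removal decision stays with the company."""
    referral: Referral
    outcome: str                          # the outcome of the assessment
    action_taken_at: Optional[datetime]   # timing of any action taken, if any
```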

Article V was deleted from the final version of the regulation, apparently at the request of civil liberties groups that did not want these national referral programs to be validated by the European Union. From the perspective of trying to establish a regulatory system to control abuses in the operation of these programs, however, the removal of Article V can be seen as a failure to protect civil liberties, not a victory for user rights. The programs continue under loose national laws but without the protections built into the provisions stricken from the final pan-European law.

The approach of “authorize but regulate” seems a better way to protect user rights and civil liberties. Another important part of this regulatory system would be clear statements of goals and purposes. Agencies operating referral systems should be required to show that their missions would be advanced by cooperation with social media companies. The rules should make clear that such cooperation is entirely voluntary and that there will be no legal, regulatory, or other adverse consequences for companies that decline to participate. Coercion or significant encouragement would be explicitly banned in these rules. The legal community could contribute to crafting the precise standard used in this prohibition, informed by court precedent and ongoing research such as the Knight Institute’s workshop.

Administrative constraints should be imposed on government officials’ conduct in their social media interactions. The rules should adopt the EU’s proposed requirement that submissions of material for review be in written electronic form and be accompanied by a statement of reasons.

Many other issues would have to be dealt with. To prevent mission creep and incoherence in government efforts, each agency should be limited to issues directly and materially connected to its mission. Partisanship in the content or operation of referral programs should be banned. These programs should also go beyond asking companies to assess material against their own rules: agencies should inform social media companies when they detect material that appears to violate the law, and their referrals should identify the specific law the agency thinks has been violated and why. Allegations of illegality should be reviewed by appropriately high-level officials, including agencies’ legal departments. Some provision would also have to be made for consequences when agencies falsely accuse users of violating company rules or legal standards. Finally, agencies should be prohibited from circumventing these rules through arrangements with private parties, including academics and civil society groups, to pass on complaints to social media companies, although not in ways that limit those groups’ freedom of speech or academic freedom to undertake research.

Returning to transparency

But none of this will really do much good without transparency. As Genevieve Lakier has said, “Elucidating the rules that apply in jawboning cases thus can do only so much to prevent the private exercise of government power when it comes to online speech, absent much more robust transparency about the reasons why platforms take down or otherwise discriminate against individual speech acts or speakers.” She emphasizes the importance of a “public record of the government’s actions” to enforce whatever rules apply to government coordination with social media companies.

As I said in a commentary last October, if a government agency refers material that it thinks is illegal or violates a company’s terms of service, it should make that referral public, and not just transmit it to social media companies in secret. The agency should also publish regular summary reports of its activities. The reports and the underlying data should be available to independent researchers for review.

Private sector actors who pass on government referrals should also report on their activities in enough detail so that independent researchers can evaluate what they have done. These actors should also be transparent in real time and publish an after-the-fact summary of their activities.

On the social media side, the companies should reveal what referrals they receive, directly or indirectly, from government agencies, which ones were acted on, and why. This too should be done in real time, with notification to affected users that action was taken on their posts at the suggestion of a government agency and which agency was involved.

Of course, exceptions from public disclosure for vital national security, law enforcement, and other urgent public needs might have to be provided for. This might be especially important in connection with real-time disclosures, where publicity could interfere with the accomplishment of important aspects of an agency’s mission. But these exceptions should not be so open-ended as to provide a loophole from genuine oversight and accountability.

In sum, these transparency measures could reassure the public that these programs do not operate on a partisan basis, a charge that is widely believed today and was endorsed by the district court’s ruling. They could also indicate where even well-intentioned, public interest government interactions with social media companies go beyond communication and verge on compulsion. With a body of evidence before them, courts could then address the thorny question of whether particular programs go too far and thereby restrain speech that should properly be free of government censorship.

The next steps

The executive branch can put these internet referral rules in place, including the vital transparency mandate, through administrative rules governing the conduct of its agencies and departments. No action from Congress is needed. Many of the internet referral programs already have internal guidelines, policies, procedures, and reporting schedules, all designed to make them effective while preventing mission creep and abuse. Under coordinated guidance from the current administration, these agencies can and should build on these existing protective measures; revise, streamline, and update them as necessary; and make them more available to the public.

As to social media transparency, Michael W. McConnell, a professor of constitutional law at Stanford Law School and co-chair of Meta’s Oversight Board, has noted that disclosure by social media companies is already within their own voluntary power. As he points out, the Oversight Board has recommended that these companies “should be transparent and report regularly on state actor requests to review content.” The companies can take this action without waiting for any executive branch reform.

To be sure, transparency through executive branch rules and voluntary social media action might not be enough to reassure those skeptical of jawboning programs. One such skeptic, Arizona’s Derek E. Bambauer, notes, “[T]he executive is not likely to limit significantly its own enforcement powers and discretion… the political branches find jawboning too easy, attractive, and powerful to impose meaningful internal or interbranch checks on the practice.”

Ultimately, if a regulatory program for internet referrals is to reassure the public, it must be mandatory. In concert with the executive branch and social media companies, Congress should step up and mandate a regulatory program, including transparency rules, for all government interactions with social media companies aimed at downgrading content. In the absence of movement from Congress, however, the executive branch and social media companies need to act on their own.

It is important that reform not be seen as an attack on these programs themselves or as a call for their elimination. These activities often further vital public objectives and should continue. They can be carried out in ways that protect internet users from abuse, but they must be designed and overseen properly to do that. The executive branch, Congress, and social media companies should move ahead expeditiously in this reform effort.

Meta is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.