Commentary

We shouldn’t turn disinformation into a constitutional right

Darrell M. West, Senior Fellow - Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies

July 11, 2023


  • U.S. District Judge Terry Doughty granted an injunction against federal officials contacting tech companies about possible disinformation.
  • This ruling risks turning disinformation into a constitutional right by stymieing mitigation efforts.
  • The ruling creates a chilling effect on academic research on disinformation and could potentially endanger the 2024 elections by enabling a wave of disinformation.

Voters fill out ballots at Riverside University High School during the presidential primary election, held amid the coronavirus disease (COVID-19) outbreak, in Milwaukee, Wisconsin, U.S., April 7, 2020. REUTERS/Daniel Acker

Disinformation is material that is false, intentional, and malicious. The recent ruling by U.S. District Judge Terry Doughty of Louisiana granting a preliminary injunction against federal officials contacting tech companies about possible disinformation is problematic at many levels. With a few exceptions, it nearly turns disinformation into a constitutional right by stymieing certain types of mitigation efforts by individuals and organizations on First Amendment grounds.

The legal ruling

In recent years, there have been concerted efforts to fight disinformation by strengthening voluntary content moderation by social media platforms. The rationale is that technology accelerates disinformation, spreading it quickly around the world and creating problems for democracy, governance, and public problem-solving.

In the Louisiana case, Judge Doughty applied a free speech rationale to a wide range of possible disinformation activities, ruling that federal officials could not contact tech companies for the “removal, deletion, suppression, or reduction of content containing protected free speech posted by social-media platforms.” His opinion also prohibited officials from “collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group for the purpose of urging, encouraging, pressuring, or inducing in any manner removal, deletion, suppression, or reduction of content posted with social-media companies containing protected free speech.”

These are research projects at leading academic institutions that examine disinformation and seek to mitigate its deleterious consequences. As noted below, his ruling is very broad in its potential impact on areas such as election security, public health, climate change, and race relations, and it poses a number of problems for the fight against disinformation.

Stopping disinformation mitigation

The immediate impact of this injunction is to stop federal officials from informing social media platforms about content that they see as false, intentional, and malicious regarding election security, climate change, public health, and race relations, among many other topics. Each of those areas has been plagued by disinformation that spreads lies, damages public health, and harms our country’s ability to address important topics.

To the judge’s credit, he did specify several exemptions from his order. For example, federal officials could still contact tech companies in cases of illegal activity, national security risks, threats to public safety, and efforts to mislead voters about electoral requirements or procedures, to name a few. Those exceptions are helpful in the sense that the judge recognizes government officials have access to valuable information that can aid in addressing public safety and national security threats, and therefore have a legitimate right to relay that material to businesses that are able to deal with those threats.

Limiting content moderation

Another problem with this ruling is that it will likely limit the ability of social media firms to moderate the content that appears on their sites. This is not just a legal side effect of the ruling, but a risk that arises from two other developments in the technology area. One is that several prominent platforms already have indicated possible pullbacks in their efforts to moderate content. For example, Twitter has laid off a number of its trust and safety staff, which has weakened its ability to moderate content. And YouTube, which previously removed unsubstantiated claims about 2020 electoral fraud, now says it will let at least some of that material remain on its site.

The second development is the rise of generative AI tools that bring sophisticated capabilities to ordinary people. Today, nearly anyone can use AI to generate fake videos and audio recordings that put people in compromising positions, even when those depictions are completely false. These technologies have become democratized in the sense that they no longer require much technical expertise to deploy. Instead, they are readily accessible consumer tools available to nearly anyone who wants to use them. And, thanks to this decision, the government’s ability to act in the face of AI-generated falsehoods is significantly hobbled.

Chilling academic research

In conjunction with other developments, this ruling could chill academic research. For several months, under the guise of investigating disinformation mitigation, the House Judiciary Committee has requested emails, personal communications, and research materials from academics at Stanford University, the University of Washington, New York University, Clemson University, and elsewhere. These rather extraordinary requests are sometimes accompanied by subpoenas demanding personal documents.

Whether intended or not, the court ruling, combined with the aggressive Judiciary Committee actions, raises the legal and political risks for researchers because it potentially turns their efforts to fight disinformation into prohibited, illegal actions. Continuing to identify disinformation and ask tech companies to remove harmful content exposes scholars to legal penalties and could endanger their professional careers.

Endangering the 2024 elections

Finally, the combined side effects of the court ruling, platform content moderation pullbacks, new AI tools, and efforts at legislative intimidation could well create a “tsunami of disinformation” in the 2024 elections. The weakening of content moderation, the high stakes of the next presidential election, and the possible closeness of the outcome create tremendous incentives for bad political behavior. We saw considerable disinformation in 2016 and 2020, but 2024 could be a Wild West of lies and deception where anything goes. Candidates, reporters, and voters in general should be prepared for an onslaught of AI-generated falsehoods with the election outcome hanging in the balance. It would be a disaster if this crucial election were decided on the basis of fake videos and disinformation.