
How Biden can take the high road on misinformation

Photo: U.S. President-elect Joe Biden speaks about health care and the Affordable Care Act (Obamacare) at the theater serving as his transition headquarters in Wilmington, Delaware, November 10, 2020. REUTERS/Jonathan Ernst

Editor's note: This article originally appeared on the Lawfare blog.

In his last weeks in office, President Trump has dug in his heels on the annual defense spending bill, promising to veto the legislation over Congress’s refusal to rescind legal protections for internet platforms. As absurd as Trump’s insistence is, it shouldn’t be surprising. Throughout his term, Trump has threatened the platforms through tweets and executive actions, angling for more favorable treatment of the right-wing media and politicians—and undermining efforts to clean up the online information ecosystem in the years since 2016.

While President-elect Biden will certainly take an approach distinct from President Trump’s, it is not yet clear what that approach will be. “I’ve never been a fan of Facebook,” Biden told the New York Times editorial board during the campaign. Recently, senior Biden campaign staffer Bill Russo harshly criticized Facebook for its handling of posts by former Trump adviser Steve Bannon. While this frustration is understandable, Biden should refrain from focusing narrowly on individual grievances and instead take a broader, principled view. He should look to the idea of a systemic duty of care, which holds that because the platforms depend on their users’ social connections, they are obliged to reduce online harms to those users. Rooted in this theory, he can pressure the platforms to take bolder steps against misinformation.

Trump has not been subtle in his denunciation of social media companies for purported bias against the right. He has claimed repeatedly that technology platforms “totally silence conservatives voices” and that there are “Big attacks on Republicans and Conservatives by Social Media.” Of course, this is not true. An analysis done jointly by Politico and the Institute for Strategic Dialogue suggests that “a small number of conservative users routinely outpace their liberal rivals and traditional news outlets in driving the online conversation.” A nine-month analysis from left-leaning media watchdog Media Matters similarly found no evidence of conservative censorship.

But Trump is not completely wrong. He and his supporters are receiving different treatment because they are responsible for an overwhelmingly disproportionate share of online misinformation. An academic study of more than 1 million articles suggested that Trump himself was the “largest driver of the COVID-19 misinformation ‘infodemic.’” A working paper from the Berkman Klein Center similarly found that Trump was especially responsible for spreading disinformation about mail-in voter fraud. Related research also argues that right-wing media is increasingly isolated from journalistic standards and facts. This stream of misinformation has led to modest interventions by the platforms, including Twitter’s lukewarm “disputed” label tagging claims about election fraud and Facebook’s takedown of pages for several groups dedicated to election disinformation (though many more groups remain).

Trump has responded by denouncing the platforms’ modest interventions as evidence of further bias. In addition to his tweets, he signed an executive order on “Preventing Online Censorship” and instructed the Federal Communications Commission to clarify federal protections for social media companies. These actions may have influenced the platforms’ decision-making, though the economic incentives of allowing high-engagement political content also play a role. For instance, in late 2017, Facebook executives considered, but then rejected, algorithmic changes that would have reduced the spread of hyperpartisan junk news websites like the Daily Wire—instead tweaking the algorithm in a manner that harmed traffic to websites like Mother Jones, a progressive media site that also produces traditional journalistic work. This fits into an apparent pattern of behavior in which Facebook avoids interfering with right-wing content that breaks the platform’s stated rules, even over the protests of its own employees.

Clearly, Trump has pursued this strategy for political reasons. But his suggestions of how the platforms should behave roughly map onto a new theory of internet speech regulation called the online fairness doctrine, proposed most prominently by Republican Sen. Josh Hawley. This concept is an adaptation of the original fairness doctrine, which from 1949 to 1987 required broadcasters to devote airtime to opposing views on controversial issues of public importance. Under Hawley’s proposal, online platforms would lose legal liability protections unless they were certified by the Federal Trade Commission as politically neutral. As my colleague John Villasenor has written, the online fairness doctrine “suggests a view that private entities should be compelled to serve as viewpoint-neutral vehicles for dissemination of ‘government speech.’”

This line of reasoning might sound like a solution to the current political fights over the role of the platforms, but in practice it would benefit Trump and his supporters. Requiring platforms to be neutral to all political speech, regardless of its veracity and potential for harm, would systematically enable the more dishonest political party. For instance, it would prevent the platforms from taking any actions around Trump’s false claims of electoral victory—a dangerous limitation in an environment where 48 percent of Trump supporters said after the election that they expected Trump to be sworn in to a second term, despite President-elect Biden’s decisive victory in the Electoral College.

But the notion of an online fairness doctrine isn’t the only intellectual framework available for the government’s relationship with platforms. Biden has the opportunity to take a different approach from his predecessor, working with, and putting pressure on, the platforms to foster a healthier online discourse. Biden has not yet said much on these issues, though he has signaled an openness to repealing legal protections for the platforms. But a repeal of Section 230 of the Communications Decency Act requires congressional action and would be a dramatic intervention potentially out of step with the more prudent approach that Biden has promised for his administration. In any case, limiting Section 230 would affect only illegal content, not misinformation, so Biden needs an approach that is both more tempered and more effective.

The major platforms are now de facto public squares of the national conversation. They are entrenched in that role, and thus able to earn a profit, in part because they are built on real-world social relationships and user-contributed data and content. Just as private supply chains depend on public roads and bridges, the private platforms depend on the public’s connections. The central importance of the platforms, as well as their structural dependence on the relationships of their users, comes with a responsibility to the national discourse.

This approach is analogous to a systemic duty of care—a legal concept proposed by U.K. regulatory experts Lorna Woods and William Perrin, according to which platforms are evaluated by their holistic efforts to improve content moderation. The word “systemic” here is important. It underscores that the platforms are not to be held unduly responsible for preventing every piece of misinformation, hate speech or threat of violence—unfortunately an unreasonable standard for any combination of algorithmic and human content moderation. Instead, the platforms should be held responsible based on how well they are able to control the dissemination of harmful content at a broad, systemic level.

In theory, a systemic duty of care is backed by a regulator with authority to levy fines for insufficient action against illegal content. This was spelled out most explicitly in the U.K.’s white paper on online harms, which is expected to be put forward in legislation in 2021. For now, however, no such regulator exists in the United States. Former government officials have suggested empowering the Federal Trade Commission, while others are promoting a new agency, but no action looks imminent. Further complicating the idea, most misinformation, however harmful, is protected by the First Amendment.

But that’s not a reason to set aside a systemic duty of care altogether. Even without legal enforcement mechanisms, Biden could still advance the mandate through good-faith collaborative efforts with technology companies. During the presidential campaign, the Biden team proposed a new national task force on online abuse, aimed especially at harms against women, such as deepfakes, revenge porn and sexual assault. A consensus-driven approach may also yield results on misinformation, especially with the more proactive platforms, such as Pinterest, Twitter and Google Search.

Other platforms may need more prompting. One rough estimate suggests that health misinformation generated billions of views on Facebook over the course of a year. In November alone, YouTube videos supporting claims of election fraud in the 2020 presidential election garnered nearly 200 million views.

There is no shortage of actions the platforms can take to improve their systems. They can provide additional context from authoritative sources, such as the Centers for Disease Control and Prevention on health issues, the Cybersecurity and Infrastructure Security Agency and state election officials for voting information, the FBI for messaging from terrorist organizations, and independent fact-checking for conspiracy theories. When websites repeatedly and flagrantly promote dangerous misinformation, they should be downranked in searches and algorithmic dissemination.

The platforms should also make more data available for social science research, which has the potential to better identify problems and offer new solutions. If the platforms are hesitant to do this for fear of privacy liability, Biden could offer the Census Bureau as an intermediary to hold those private datasets securely and enable researcher access to them.

Platforms should also employ tools that aren’t tied to specific pieces of content, such as inoculation, education and friction. Inoculation uses warning messages to put people on guard against misinformation. Twitter users saw friction in action around the election, when the site encouraged users to read articles before retweeting them. While there is clear evidence that these strategies work, they also modestly reduce user engagement, and so social media companies have been reluctant to employ them. Targeted pressure from the Biden administration can help change this calculation, making a small dip in engagement seem like the easier option.

Biden, who is inheriting an antitrust investigation of Facebook by the Federal Trade Commission and a lawsuit from the Department of Justice against Google, will certainly have the attention of the major platforms. Congress may yet pass legislation to rein in Big Tech, and so technology companies may be eager to show their willingness to cooperate with the new administration. Finally, public opinion does matter: Consumer boycotts, employee organizing, and the ability to attract computer science graduates as new employees all affect how these companies can function.

The U.S. is in the midst of an informational crisis made all the more dire by the coronavirus pandemic and the weakening of democracy. While internet platforms are not solely responsible for this plight, they have the capacity to improve the information ecosystem through their websites. Through the lens of the systemic duty of care, President Biden should make it clear that they have an obligation to do so.