If platforms and policymakers are to devise effective solutions to the proliferation of fabricated news stories online, they must first understand why such material spreads in the first place. From misinformation around the COVID-19 pandemic to disinformation about the 2016 “Brexit” vote in the United Kingdom, fabricated or highly misleading news, colloquially known as “fake news,” has emerged as a major societal concern. Yet a good understanding of why such material spreads has so far remained elusive. Elite actors often create and spread fabricated news for financial or political gain and rely on bot networks for initial promotion. But mounting evidence suggests that laypeople are instrumental in spreading this material.
These findings give rise to a question we examine in a recent study: Why do some ordinary people spread fake news while others do not? The answer to this question has important practical implications, as solutions to the spread of fake news rest on assumptions about the root cause of the problem. The use of fact-checking efforts to reduce the proliferation of fake news, for example, rests on the assumption that citizens want to believe and share true information but need help to weed out falsehoods. If citizens are sharing news on social media for other reasons, there is good reason to believe counter-measures such as this will be less effective.
By examining the Twitter activity of a large sample of U.S. users, we found that the sharing of false news has less to do with ignorance than with partisan political affiliation and the news available to partisans for use in denigrating their opponents. While Republicans are more likely to share fake news than Democrats, the sharing of such material is a bipartisan phenomenon. What differs is the set of news sources available to partisans on either side of the political spectrum. In a highly polarized political climate, Democrats and Republicans alike search for material with which to denigrate their political opponents, and in this search, Republicans are forced to seek out the fake-news extreme in order to confirm views that are increasingly out of step with the mainstream media. Seen from this perspective, the spread of fake news is not an isolated phenomenon but a symptom of our polarized societies—complicating the search for policy solutions.
During her speech before the World Economic Forum’s virtual meeting in Davos in January, European Commission President Ursula von der Leyen invited the United States to join Europe in writing a new set of rules for the internet: “Together, we could create a digital economy rulebook that is valid worldwide. It goes from data protection and privacy to the security of critical infrastructure. A body of rules based on our values: human rights and pluralism, inclusion, and protection of privacy.”
This invitation to collaboration comes at a time of remarkable diversity in how states are approaching internet governance. Beijing is advancing new internet standards to replace the global, open, interoperable ones. Moscow is continuing to clamp down on the web through a combination of online and offline coercive measures. India is advancing a data-protection framework with large carve-outs for state data collection against the backdrop of the Modi government’s repressive internet shutdowns. A Brazilian official who authored the country’s data localization proposal recently called data flows abroad a violation of the country’s sovereignty. In Europe, the previous watchword of “digital sovereignty” may be giving way to talk of “strategic sovereignty” in the digital sphere, but the underlying premise remains the same: creating an internet environment where European values proliferate. In the United States, the Biden administration is grappling with how to reinvigorate U.S. global engagement on technology while simultaneously managing new regulatory proposals for American tech giants.
This fractured policy landscape has prompted hope that Europe and a United States led by President Joe Biden might collaborate to facilitate a more consistent and predictable approach to internet governance, one that seeks to uphold the fundamental values of the internet. But the United States and the European Union are not as aligned on this question as some might claim. American internet governance has been described as everything from a privatized model to a hands-off-the-internet approach. In the EU, by contrast, varying understandings of “sovereignty” online both reflect and shape the political contexts in which member states, historically far more willing than the United States to embrace regulation, are designing their internet governance models.
This divide matters for EU-U.S. technology cooperation, as recent shifts in the EU toward digital self-determination heighten the potential divergence between Washington and Brussels in internet policymaking. How the visions of the internet in the United States and the EU play out in the coming years, particularly under the Biden administration, will matter greatly for shaping the future of the internet, its freedom, and its openness.
Four months after the assault on the U.S. Capitol that prompted Facebook and a slew of other platforms to ban then-President Donald Trump, the Facebook Oversight Board this week delivered its long-awaited ruling on whether the former president can return to the platform. Its decision was a complicated one, faulting the company for imposing an indefinite suspension that wasn’t in its rulebook and ordering the platform to decide on a time-bound penalty. In a special edition of Lawfare‘s Arbiters of Truth, a miniseries about the online information ecosystem, Editor in Chief Benjamin Wittes speaks with Evelyn Douek, Quinta Jurecic and Lawfare Deputy Managing Editor Jacob Schulz about the ruling and what comes next.
At the heart of the health-care, decisionmaking, and supply-chain problems exposed by SARS-CoV-2 (COVID-19) is an unquenchable thirst for data—data to inform health professionals caring for COVID-19 patients and data to guide the decisions of policymakers and the public. Such data is available and used today in dashboards, surveillance systems, disease forecasting models, and at the point of care. However, anyone who works with or relies upon it would acknowledge that the methods for collecting, sharing, and transforming that data into actionable information are incomplete and fragile, lack standardization, and are vulnerable to cyber threats and disinformation. In many cases, such data is late, incomplete, and error-prone.
We—the health community—must strengthen our ability to leverage data in managing the health of individuals and populations. We have to do this in order to predict, prevent, detect, and respond to health threats and achieve global health security. We needed this ability before the pandemic; it became imperative during the pandemic; and we assuredly need it after this pandemic—to better protect lives during the next one. The health industry must apply systems engineering principles and best practices to the development of a comprehensive data ecosystem—a holistic, requirements-driven, risk-based approach in contrast to today’s reductionist and siloed approach to health data. The goal is to establish a health-data ecosystem with security and privacy requirements designed in from the start that continuously and efficiently collects and distributes timely, accurate, and comprehensive data among interdependent entities spanning all levels of society, leaving the world better prepared to tackle the next health crisis.
On this week’s edition of Arbiters of Truth, Lawfare‘s podcast miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic interview Fady Khoury and Rabea Eghbariah, two lawyers involved in the legal challenge of Israel’s so-called “Cyber Unit.” That governmental body actively engages with online platforms and requests content be taken down. Such bodies, known as “internet referral units,” are spreading around the world, pose novel challenges to free speech, and represent a key tool in attempts by governments to assert greater authority over online speech.
Throughout the pandemic, social media’s role in shaping public-health discourse has been a source of frequent debate. But given data limitations and methodological challenges, assessing its impact has been difficult.
The U.S. Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration’s (FDA) recent decision to pause administration of Johnson & Johnson’s Janssen COVID-19 vaccine provided a rare opportunity to observe how important new information travels through social media platforms and changes the tenor of online conversations. According to conventional wisdom, as news of the pause spread across social media, it should have driven greater engagement around vaccine-related content and spurred more negative discussion of vaccines.
A preliminary assessment of more than two million vaccine-related posts across Facebook, Instagram, Reddit, and Twitter reveals little evidence in support of this claim. Although the announcement registered on social media platforms, its effect on both the tenor of vaccine discourse and interest in vaccine-related posts was either short-lived or largely inconsequential. While the pause in J&J vaccinations appears to have negatively impacted public opinion regarding the vaccine, that shift likely occurred through more traditional media pathways or private channels.
On this week’s episode of Lawfare‘s Arbiters of Truth, a podcast miniseries on the online information ecosystem, Evelyn Douek and Quinta Jurecic interview Sean Li, the former head of trust and safety at Discord, the audio and text chat platform. Audio-based social media is experiencing massive growth but poses novel problems for the teams charged with keeping harmful content off the platform. As increasing numbers of social media platforms launch or prepare audio products, what should their content moderation teams be anticipating?
When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they never said or did.
This technology poses a particular threat to marginalized communities. If deepfakes cause society to move away from the current “seeing is believing” paradigm for video footage, that shift may negatively impact individuals whose stories society is already less likely to believe. The proliferation of video recording technology has fueled a reckoning with police violence in the United States, captured by bystanders and body cameras. But in a world of pervasive, compelling deepfakes, the burden of proving a video’s authenticity may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools meant to increase trust in videos are in development, but these technologies, though well-intentioned, could end up being used to discredit already marginalized voices.
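Many of the provenance tools in development rest on a simple idea: record a cryptographic fingerprint of the footage at the moment of capture, so that any later edit is detectable. The sketch below is a minimal illustration of that idea, not any particular vendor’s system; the `fingerprint` function and the sample bytes are hypothetical.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Return a SHA-256 digest of the raw video bytes.

    A capture device could log this digest (ideally in a signed,
    timestamped record) when filming; any subsequent edit to the
    file changes the digest.
    """
    return hashlib.sha256(video_bytes).hexdigest()

# Hypothetical footage and the digest logged at capture time.
original = b"...raw video frames..."
recorded_digest = fingerprint(original)

# Later, a viewer re-hashes the copy they received.
edited = original + b" tampered"
assert fingerprint(original) == recorded_digest   # untouched copy verifies
assert fingerprint(edited) != recorded_digest     # any edit is detectable
```

Note the limitation this illustrates: the scheme proves only that a file is unchanged since its digest was logged, not that the logged footage was genuine in the first place—which is why such tools can cut against unverified bystander video.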
Around the world, governments are rushing to develop so-called “vaccine passports.” The state of New York, the United Kingdom, and the European Union have all announced initiatives aimed at enabling people with proof of vaccination to engage in some activities that had been prohibited by COVID-19 restrictions. While each initiative has its idiosyncrasies, they are generally attempts to certify that a person has been vaccinated for COVID, through a range of objects broadly referred to as “passports.” This can be as simple as a paper record but typically refers to an electronic record that can be used to verify that a person received a vaccine.
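At a technical level, an electronic vaccination record of this kind is typically a small signed credential: an issuer attaches a cryptographic tag so that verifiers can detect forgery or tampering. The sketch below illustrates the shape of such a scheme using a shared-key HMAC for brevity; real deployments would use public-key signatures so verifiers need not hold the issuer’s secret. The key, field names, and record contents are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; real systems would use asymmetric keys.
ISSUER_KEY = b"demo-health-authority-key"

def issue(record: dict) -> dict:
    """Attach an HMAC tag over a canonical encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify(credential: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(credential["record"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])

cred = issue({"name": "Jane Doe", "vaccine": "COVID-19", "doses": 2})
assert verify(cred)                    # authentic record verifies
cred["record"]["doses"] = 3
assert not verify(cred)                # altered record fails
```

As the surrounding discussion suggests, the hard questions are not in this verification step but in the governance around it: who may demand the credential, and what they may do with the answer.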
But in the rush to roll out passports and allow their citizens to return to some semblance of pre-pandemic life, governments are failing to establish how to govern systems for verifying vaccination status and how to resolve the disputes that will inevitably arise from them. Rather than focusing on technical methods for verifying vaccination status, governments should be working to establish clear guidance on how vaccine status information can and can’t be used in the first place; who can compel its disclosure; under what conditions it can be used to restrict a person’s rights; and how to resolve conflicts over its use.
Without well-defined policies on how public institutions will allow vaccination to affect access to services and resources, clear articulations of private discretion to use vaccination status in ways that impact people’s fundamental rights, and a system for resolving disputes arising out of abuse of these systems, few people will have confidence or trust in the equity of the system. Without clarity on our rights, or how we can enforce them when they’re violated, it’s hard for the public to have anything other than concern about digital systems used to verify immunization.
On this week’s edition of Lawfare‘s Arbiters of Truth miniseries about online information ecosystems, Evelyn Douek and Quinta Jurecic speak with Jameel Jaffer and Ramya Krishnan of the Knight First Amendment Institute about their lawsuit seeking an answer to the question of whether the president violates the First Amendment by blocking Twitter users.