
A shopper with a debit card and laptop on a bed. (Photo by Liza Summer on Pexels.com)

Following the Jan. 6 assault on the U.S. Capitol, social media companies came under intense scrutiny for their role in incubating the mob attack. The CEOs of Facebook, Google, and Twitter were hauled before Congress to testify, and congressional critics of the companies cited the assault as the latest reason why they should be stripped of their liability protections. But amid this focus on the role of online platforms in fueling conspiracy theories about a stolen election, e-commerce platforms such as Amazon, Etsy, and eBay have escaped scrutiny. While Etsy banned QAnon-related merchandise in October and Amazon and eBay made the same commitment in late January, the success of these efforts, much like similar efforts by social media platforms, has been mixed at best.

E-commerce sites present novel challenges to tech companies, policymakers, and researchers concerned with the proliferation of mis- and disinformation online. The complex ecosystem of online commerce makes it comparatively easy for individuals to sell merchandise related to conspiracy theories—QAnon shirts, supplements, vaccine-skeptic books—and difficult for companies to root out bad actors. The issue is exacerbated by the fusion of conspiratorial content with lifestyle branding on social media platforms and e-commerce sites. While social media platforms have made progress in reducing the amount of mis- and disinformation online, e-commerce platforms remain comparatively laissez-faire, continuing to prop up the conspiracy economy.

The e-commerce misinformation challenge

Recent research suggests that serious mis- and disinformation problems persist on e-commerce platforms. In a study published earlier this year, Prerna Juneja and Tanushree Mitra, scholars at the University of Washington, searched nearly four dozen terms related to vaccine misinformation on Amazon. Their queries produced 36,000 search results and more than 16,000 recommendations. Of these search results, 10.47% (nearly 5,000 unique products) contained misinformation. Users who clicked on products containing misinformation, or who added these products to their carts, then received recommendations for misinformative products on their "Amazon homepages, product page recommendations, and pre-purchase recommendations." A separate research consortium identified 20 books questioning the origin or nature of the COVID-19 pandemic for sale on Amazon. As recently as last month, searches on Amazon revealed that QAnon-related merchandise is still available for purchase on the platform. An Amazon search for the QAnon slogan "wwg1wga" (an abbreviation of "Where we go one, we go all") turned up a shirt reading "Question Everything," masks with "wwg1wga" printed on them, and "I took the Red Pill" stickers. The same query on eBay produced a list of products unrelated to QAnon, except for a pair of blue "wwg1wwa" wristbands. On Etsy, the phrase produced zero results.

While quantifying the value of the market for disinformation is difficult, it is clearly a major source of income for conspiracy theorists, hate groups, and disinformation media moguls. According to a July 2020 report by the Center for Countering Digital Hate (CCDH), the online audience for anti-vaccination content may be generating nearly $1 billion in revenue for social media companies and reaching an audience as large as 58 million people. A joint report published in October by the Institute for Strategic Dialogue (ISD) and the Global Disinformation Index (GDI) found that online retail is one of the most common funding sources for hate groups. The conspiracy theorist Alex Jones has made a fortune selling supplements and other merchandise on his own website, as well as on e-commerce platforms. As of this month, Infowars supplements were still available for sale on Amazon. These examples, though anecdotal, suggest just how valuable and how nebulous the economy for conspiratorial merchandise is.

These mixed search results, which could likely be repeated on any number of online retailers like Teespring, Zazzle, or Poshmark, demonstrate the unique difficulties of trying to scrub QAnon from e-commerce sites. Setting aside the preponderance of QAnon and other conspiratorial material still readily available on Amazon, retailers such as Etsy and eBay have for the most part successfully removed merchandise directly referencing QAnon. But QAnon sellers are clever, rebranding their goods with seemingly benign phrases. Last year, the QAnon community began circulating the hashtag #savethechildren (sometimes #saveourchildren), a reference to the community's belief that high-profile Democrats and celebrities are secretly involved in child sex trafficking, to evade algorithmic filters on social media sites. The hashtag has the benefit of sharing the name of a prominent international charity, and QAnon vendors on e-commerce sites use the phrase to hide conspiratorial content in plain sight. Search Etsy for the term, and many items appear. Are they connected to QAnon? Figuring that out requires a bit of digging.

Consider, for example, the seller WalkTheLineDesignUS, who sells a digital download of a design for a "saveourchildren" t-shirt. While the item appears benign enough, its description contains the following: "***Please be informed that the Clintons own the hashtag #SaveOurChildren, these are the people we need to be saving them from and that is down right infuriating!" When searching "saveourchildren," WalkTheLineDesignUS's design appears alongside other items bearing similar logos and graphics. While most of these items contain no overt references to QAnon or any other conspiracy theory, are their aesthetic similarities enough to connect them to QAnon? If not, how can we identify them as problematic content? This is why scrubbing e-commerce platforms is so difficult. Oblique references, multiple text fields, and a diverse array of products render algorithmic identification imprecise and ineffective. Systematic identification of problematic content on e-commerce sites requires item-level analysis conducted by human teams.
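To see concretely why simple automated filters fall short, consider a minimal sketch. The blocklist, listings, and function below are hypothetical illustrations, not any platform's actual moderation code; the point is only that a filter tuned to overt slogans passes over rebranded or obliquely worded listings.

```python
# Hypothetical, minimal keyword filter -- not any platform's real
# moderation system. It flags listings whose text contains a known
# slogan, and misses anything that has been rebranded.
BLOCKLIST = {"qanon", "wwg1wga"}

def is_flagged(title: str, description: str) -> bool:
    """Flag a listing if any blocklisted term appears in its text fields."""
    text = f"{title} {description}".lower()
    return any(term in text for term in BLOCKLIST)

listings = [
    ("WWG1WGA shirt", "Where we go one we go all"),   # overt slogan: caught
    ("saveourchildren tee", "Question everything!"),  # rebranded: missed
    ("Red Pill sticker", "For those who are awake"),  # oblique: missed
]

for title, description in listings:
    status = "flagged" if is_flagged(title, description) else "passes"
    print(f"{title}: {status}")
```

Catching the second and third listings requires context that the text alone does not provide, which is why item-level human review remains necessary.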

However, like the major social media platforms, e-commerce firms are moving toward AI and machine learning to identify problematic content. In its 2020 transparency report, Etsy notes that 80 percent of the 4 million flags for problematic content the company received came from internal tools. (The remaining 20 percent came from users.) According to the report, the company plans to expand its digital content-moderation tools, including the use of "auto-suppression, image recognition, and the ability to suppress listings geographically based on local requirements." In an April blog post, the company committed to building "a dedicated trust and safety machine learning engineering team and exploring computer vision technology, with the goal of using powerful algorithms to drive improvements in the precision of automated risk detection." While these commitments are laudable, they raise questions about how effective AI-driven approaches to content moderation can be when cracking down on ambiguous branding and rapidly shifting aesthetics in conspiratorial content.

As long as conspiratorial, disinformative content persists on e-commerce platforms, these companies' recommendation systems are likely to push such material to their users. As detailed in a recent study by the Institute for Strategic Dialogue, Amazon recommends products in three main ways: by displaying items that customers who viewed or bought a particular item also viewed or bought; by engaging in what is called "item-to-item collaborative filtering," which uses an algorithm to infer relationships between products based on user preferences; and by recommending products through auto-complete search functions. For the search term "vaccines," ISD found that Amazon's auto-complete suggests phrases such as "vaccines are dangerous" and "vaccines are the biggest medical fraud in history." Together, these recommendations encourage users who show even a slight interest in conspiracy-oriented material to become immersed in a commercialized, conspiratorial environment.
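For readers unfamiliar with the technique, the following is a minimal sketch of item-to-item collaborative filtering. The toy data and cosine-similarity measure are illustrative assumptions, not a description of Amazon's actual system; the key property it demonstrates is that items bought or viewed by the same users come to recommend one another regardless of their content.

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are products.
# A 1 means the user viewed or bought the product. Purely illustrative.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

def item_similarities(m):
    """Cosine similarity between every pair of item columns."""
    norms = np.linalg.norm(m, axis=0)
    sims = (m.T @ m) / np.outer(norms, norms)
    np.fill_diagonal(sims, 0.0)  # an item should not recommend itself
    return sims

def recommend(item, sims, k=2):
    """Return the k items most strongly co-purchased with `item`."""
    return np.argsort(sims[item])[::-1][:k].tolist()

sims = item_similarities(interactions)
print(recommend(1, sims))  # products most similar to product 1
```

Because similarity is computed purely from co-occurrence in user histories, a conspiracy-themed book and an unrelated supplement bought by the same customers will reinforce one another in recommendations; nothing in the algorithm inspects what the products actually are.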

Recommendation systems allow conspiratorial worldviews to cross-pollinate. For example, users viewing the Kindle edition of THE HAMMER is the Key to the Coup "The Political Crime of the Century": How Obama, Brennan, Clapper, and the CIA spied on President Trump, General Flynn … and everyone else will find recommendations for books about the New World Order, health conspiracies, and the Illuminati. Site features such as product reviews can transport a budding conspiracist to a larger, unregulated economic space. Curious users might click a link in a customer review of a book containing vaccine misinformation that leads to a blog with its own online shop selling QAnon content banned on Amazon. In a recent interview, the researcher Marc Tuters described the review spaces on Amazon as "a kind of social media in its own right."

The interconnected nature of the conspiracy economy creates new mis- and disinformation problems, and nowhere is this process of economic interpenetration more evident than on Instagram, where influencer models, celebrities, and so-called "mommy bloggers" sell lifestyle products alongside QAnon ideas. Over the course of the last year, the Instagram influencer Kim Cohen (@hofitkimcohen, 158,000 followers) began posting QAnon content. Cohen easily monetized her Instagram presence, with links to her blog, online store, and YouTube page. The influencer Rose Henges (@roseuncharted, 156,000 followers) uses her Instagram page to link to her Amazon storefront, where she recommends Miller's Review of Critical Vaccine Studies, a book flagged in Juneja and Mitra's study for containing vaccine misinformation.

Deplatforming these bad actors does not appear to be particularly successful in pushing them to the fringes of the web. Last year, Instagram deleted the account of the influencer Rebecca Pfeiffer (formerly @luvbec, 160,000 followers) after she began promoting QAnon-related content. She is now back on Instagram under the handle @luvbecstyle, with nearly 13,000 followers and a line in her bio announcing that Instagram deleted her previous account.

These anecdotal accounts illustrate two important features of the mis- and disinformation economy. First, the universe of e-commerce is deeply interconnected. Social media influencers, recommendation systems, personal websites, and third-party sites act as the underlying infrastructure that allows the misinformation and disinformation industry to flourish. Second, these influencers rarely peddle misinformation and disinformation openly through their online storefronts, anti-vaccination books being a notable exception. Instead, they market seemingly benign lifestyle products using calm, natural aesthetics, while espousing conspiratorial views in their Instagram stories. Enmeshing violent, hateful conspiratorial rhetoric in a peaceful, soft-palette web environment waters that rhetoric down, naturalizing it and making it seem a mundane facet of domestic bliss. "The original function of influencers was to be more relatable than mainstream media," the scholar Sophie Bishop told the Atlantic. "They're supposed to be presenting something that's more authentic or more trustworthy or more embedded in reality." By combining this mode of communication with problematic content, these influencers effectively launder "disinformation and dangerous ideas."

Taking on disinformation in e-commerce

Addressing the commercial aspects of QAnon and other forms of mis- and disinformation, especially COVID-19-related material, requires a robust response. A full sketch of that response is beyond the scope of this piece, but two broad recommendations stand out.  

The first is a heightened focus on research and transparency. Good policy rests on high-quality information, yet when it comes to e-commerce and misinformation, such information remains in short supply. Policymakers and the tech sector should therefore invest in more academic research. Studies like those by Juneja and Mitra and Infodemic need to be repeated on other sites; online marketplaces won't get a handle on the issue without greater insight into the nature and extent of the problem. Policymakers and regulators should push for greater transparency for the same reason. In particular, there needs to be increased transparency into the content-moderation practices of online retailers, including how human teams are deployed to review item-level content and the volume of material removed. Likewise, there also needs to be greater transparency into e-commerce recommendation systems, especially those that cross-pollinate between different conspiratorial communities.

The second is a greater emphasis on the broader ecosystem in which e-commerce systems operate. In part, that means recognizing that deplatforming bad actors in the e-commerce environment does not simply push them to the edge of the internet; it is all too easy to rebrand and resurface in the dense, multiplatform world of commodified conspiracy. It also means recognizing the linkages between online and offline behavior. While e-commerce sites do not possess the organizational affordances needed to mobilize violent mobs, they do have offline effects. Durable consumer products often become part of the everyday lives of consumers, which is particularly problematic when goods associated with hateful, violent, and conspiratorial ideologies masquerade as benign lifestyle products. Finally, an ecosystem approach also means recognizing the unique role of Amazon within that ecosystem. While this article has focused on a variety of e-commerce sites, it would be foolish to conflate Etsy or eBay with Amazon. As the 10,000-pound behemoth in the room, Amazon holds the key to e-commerce reform. While Amazon has banned a fair amount of content in the aftermath of 2020, hateful misinformation still appears on the site with alarming frequency, such as a recent anti-vaccination t-shirt featuring a yellow star similar to the one Jews were forced to wear during the Holocaust.

To be sure, no magic bullet exists that will fully resolve the challenge of mis- and disinformation on online platforms. But the broad recommendations above at least offer a path toward progress. And given the real-world impacts of the conspiracy economy, inaction is no longer an option. 

Patrick Jones is a doctoral candidate in the School of Journalism and Communication at the University of Oregon.

Amazon, Facebook, Google, and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 


A worker is reflected in a 300-millimeter wafer in the clean room of a Bosch semiconductor factory on June 1, 2021. (Robert Michael/dpa-Zentralbild/dpa)

Though born in the United States, the semiconductor industry has long since gone global. As semiconductors now make up a key part of a wide variety of critical goods, the chipmaking industry has become a central field of geopolitical competition. On this episode of MITRE Engenuity's Circuit Talk podcast, Pavneet Singh, a nonresident fellow in the John L. Thornton China Center at Brookings, and Nadia Schadlow, a senior fellow at the Hudson Institute, sit down with Willy Shih, a professor at Harvard Business School, to understand how capital markets have shaped the industry and how chip design has been separated from manufacturing.

Read More

A French soldier operates a laptop computer while seated.
Paratrooper soldiers from the 8th Marine Infantry Parachute Regiment train near Castres, southern France. (Fred Marie / Hans Lucas via Reuters Connect)

“If you think any of these systems are going to work as expected in wartime, you’re fooling yourself.”

That was Bruce's response at a conference hosted by U.S. Transportation Command in 2017, after he learned that the command's computerized logistics systems were mostly unclassified and connected to the internet. That may be necessary to keep in touch with civilian companies like FedEx in peacetime or when fighting terrorists or insurgents. But in a new era of facing off with China or Russia, it is dangerously complacent.

Any 21st-century war will include cyber operations. Weapons and support systems will be successfully attacked. Rifles and pistols won't work properly. Drones will be hijacked midair. Boats won't sail, or will be misdirected. Hospitals won't function. Equipment and supplies will arrive late or not at all.

Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning.

Read More

The U.S. Capitol building is seen behind fencing set up after the Jan. 6 assault. (REUTERS/Evelyn Hockstein)

When it comes to mitigating online harms, the U.S. Congress is at least united on one point: There is a need for greater transparency from tech companies. But amid debate over how to reform the liability protections of Section 230 of the Communications Decency Act, the exact shape of proposals to mandate transparency remains uncertain at best.

While “transparency” means different things to different people, it speaks to the desire among lawmakers and researchers for more information about how social media platforms work internally. On the one hand, the desire to impose transparency requirements runs the risk of becoming a catch-all solution for online harms. On the other, if lawmakers are ever to arrive at “wise legal solutions” for these harms, they will need better data to diagnose them correctly, as my Stanford colleague Daphne Keller has argued.

In order for platform transparency to be meaningful, scholars argue, these companies need to be specific about the type of information they disclose. This means platforms cannot just increase the amount of information made public; they also need to communicate that information to stakeholders in a way that empowers them to hold platforms to account. Currently, tech platforms oversee themselves and are not legally obligated to disclose how they regulate their own domain. Without mandatory regulation, we are left with self-regulatory efforts, which have no teeth. Congress is considering transparency requirements in bills such as the Platform Accountability and Consumer Transparency Act (PACT), which would require platforms to publish transparency reports, and the Online Consumer Protection Act (OCPA), which would require platforms to disclose their content moderation policies. These proposals, as Mike Masnick and others have argued, embrace social media platforms' model of transparency, but rather than improve it they add further restrictions that may be more harmful than helpful.

A well-formulated transparency regime might provide a measure of oversight that is currently sorely lacking, but if lawmakers are to craft such a regime, they need to first understand that transparency reports as they are currently structured with aggregated statistics aren’t helpful; that community standards aren’t static rules but need space to evolve and change; and that any transparency regulation will have important privacy, speech, and incentive tradeoffs that will need to be taken into consideration.

Read More

A member of the All India Student Federation teaches farmers about social media as part of demonstrations against proposed reforms to Indian agriculture laws. (Pradeep Gaur / SOPA Images/Sipa via Reuters Connect)

Every Thursday, the TechStream newsletter brings you the latest from Brookings' TechStream and news and analysis about the world of technology. To sign up and get this newsletter delivered to your inbox, click here.


The stand-off between the Indian government and Western technology companies over online speech sharply escalated this week. On Monday, Indian police raided the offices of Twitter after the company flagged a government spokesman’s tweets for sharing forged material. On Wednesday, WhatsApp sued the Indian government in an attempt to block new internet regulations that threaten to undermine the security and privacy of Indian internet users.

This week’s events come against the backdrop of an attempt by the Modi government to expand its ability to control online speech and block dissent as it struggles to contain a massive outbreak of COVID-19. New regulations announced earlier this year that came into force on Wednesday—and which WhatsApp is now challenging in court—require online platforms to promptly respond to government takedown requests of content deemed inappropriate, to appoint in-country representatives, and to be able to trace the so-called “first originator” of content on a platform.

Read More

A screen grab of a promotional video from the Chinese artificial intelligence company SenseTime's facial-recognition product. (Mehdi Chebil / Hans Lucas via Reuters Connect)

As an emerging technology, artificial intelligence is pushing regulatory and social boundaries in every corner of the globe. The pace of these changes will stress the ability of public governing institutions at all levels to respond effectively. Their traditional toolkit, the creation or modification of regulations (also known as "hard law"), requires ample time and bureaucratic procedure to function properly. As a result, governments are unable to swiftly address the issues created by AI. An alternative means of managing these effects is "soft law," defined as a program that creates substantial expectations that are not directly enforceable by the government. As soft law grows in popularity as a tool to govern AI systems, it is imperative that organizations gain a better understanding of current deployments and best practices—a goal we aim to facilitate with the launch of a new database documenting these tools.

Read More

People walk past an electronic board showing currency exchange rates at a securities firm in Tokyo. (James Matsumoto / SOPA Images/Sipa via Reuters Connect)

Despite the economic damage wrought by the novel coronavirus over the past year, an analysis published by The Economist in December 2020 argues that the COVID-19 pandemic may have made a productivity boom more likely because "new technologies are clearly able to do more than has generally been asked of them." This would be welcome news to observers who have scratched their heads over why supposedly innovative technologies like cloud computing and artificial intelligence have struggled to materially affect topline productivity growth numbers or the rate of overall GDP growth.

Read More

Marjon Blondeel, AI R&D developer, looks at a robot equipped with artificial intelligence at the AI Xperience Center at the Vrije Universiteit Brussel in Brussels, Belgium, on February 19, 2020. (REUTERS/Yves Herman)

Much of artificial intelligence, and particularly deep learning, is plagued by the "black box problem." While we may know the inputs and outputs of a model, in many cases we do not know what happens in between. AI developers make choices about how to design the model and the learning environment, but they typically do not determine the value of specific parameters or how an answer is reached. The lack of understanding about how an AI system works, in some cases even by the people who developed it, is one of the reasons AI raises novel safety, ethical, and legal considerations, and why oversight and governance are especially important. Black box deep learning models are vulnerable to adversarial attacks and prone to racial, gender, and other demographic biases. Opacity is especially problematic in high-stakes settings such as health care, lending, and criminal justice, where significant harms have already been reported.

Explainable AI (XAI) is often offered as the answer to the black box problem and is broadly defined as "machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI." Around the world, explainability has been referenced as a guiding principle for AI development, including in Europe's General Data Protection Regulation. Explainable AI has also been a major research focus of the Defense Advanced Research Projects Agency (DARPA) since 2016. However, after years of research and application, the XAI field has generally struggled to realize the goals of understandable, trustworthy, and controllable AI in practice.

This gap stems largely from divergent conceptions of what explainability is expected to achieve and unequal prioritization of various stakeholder objectives. Studies of XAI in practice reveal that engineering priorities are generally placed ahead of other considerations, with explainability largely failing to meet the needs of users, external stakeholders, and impacted communities. By improving clarity about the diversity of XAI objectives, AI organizations and standards bodies can make explicit choices about what they are optimizing and why. AI developers can be held accountable for providing meaningful explanations and mitigating risks—to the organization, to users, and to society at large.

Read More

Diane Feist holds up a sign that says "stop the steal" during a pro-Trump protest outside Oakes Farms Seed to Table in North Naples on Wednesday, January 6, 2021. (USA TODAY NETWORK via Reuters Connect)

If platforms and policymakers are to devise effective solutions to the proliferation of fabricated news stories online, they must first establish an understanding of why such material spreads in the first place. From misinformation around the COVID-19 pandemic to disinformation about the 2016 "Brexit" vote in Great Britain, fabricated or highly misleading news, colloquially known as "fake news," has emerged as a major societal concern. But a good understanding of why such material spreads has so far remained somewhat elusive. Elite actors often create and spread fabricated news for financial or political gain and rely on bot networks for initial promotion. But mounting evidence suggests that ordinary people are instrumental in spreading this material.

These findings give rise to a question we examine in a recent study: Why do some ordinary people spread fake news while others do not?  The answer to this question has important practical implications, as solutions to the spread of fake news rest on assumptions about the root cause of the problem. The use of fact-checking efforts to reduce the proliferation of fake news, for example, rests on the assumption that citizens want to believe and share true information but need help to weed out falsehoods. If citizens are sharing news on social media for other reasons, there is good reason to believe counter-measures such as this will be less effective.

By examining the Twitter activity of a large sample of U.S. users, we found that the sharing of false news has less to do with ignorance than with partisan political affiliation and the news available to partisans for use in denigrating their opponents. While Republicans are more likely to share fake news than Democrats, the sharing of such material is a bipartisan phenomenon. What differs is the news sources available to partisans on either side of the political spectrum. In a highly polarized political climate, Democrats and Republicans both search for material with which to denigrate their political opponent, and in this search, Republicans are forced to seek out the fake-news extreme in order to confirm views that are increasingly out of step with the mainstream media. Seen from this perspective, the spread of fake news is not an endogenous phenomenon but a symptom of our polarized societies—complicating our search for policy solutions.

Read More

U.S. President Joe Biden (on screen) attends a virtual EU Leaders' Summit chaired by President of the European Council Charles Michel (L), in Brussels, Belgium on March 25, 2021. (EU COUNCIL / Pool / Hans Lucas via Reuters Connect)

During her speech before the World Economic Forum’s virtual meeting in Davos in January, European Commission President Ursula von der Leyen invited the United States to join Europe in writing a new set of rules for the internet: “Together, we could create a digital economy rulebook that is valid worldwide. It goes from data protection and privacy to the security of critical infrastructure. A body of rules based on our values: human rights and pluralism, inclusion, and protection of privacy.”

This invitation to collaboration comes at a time of remarkable diversity in how states are approaching internet governance. Beijing is advancing new internet standards to replace the global, open, interoperable ones. Moscow is continuing to clamp down on the web through a combination of online and offline coercive measures. India is advancing a data-protection framework with large carve-outs for state data collection against the backdrop of the Modi government’s repressive internet shutdowns. A Brazilian official who authored the country’s data localization proposal recently called data flows abroad a violation of the country’s sovereignty. In Europe, the previous watchword of “digital sovereignty” may be giving way to talk of “strategic sovereignty” in the digital sphere, but the underlying premise remains the same: creating an internet environment where European values proliferate. In the United States, the Biden administration is grappling with how to reinvigorate U.S. global engagement on technology while simultaneously managing new regulatory proposals for American tech giants.

This fractured policy landscape has prompted hope that Europe and a United States led by President Joe Biden might collaborate to facilitate a more consistent and predictable approach to internet governance, one that seeks to uphold the fundamental values of the internet. But the United States and the European Union are not as aligned on this question as some might claim. American internet governance has been described as everything from a privatized model to a hands-off-the-internet approach. In the EU, by contrast, varying understandings of "sovereignty" online both reflect and shape the different political contexts in which member states are designing their internet governance models, and European governments have historically been far more willing to embrace regulation than the United States.

This divide matters for EU-U.S. technology cooperation, as recent shifts in the EU toward digital self-determination heighten the potential for divergence between Washington and Brussels in internet policymaking. How the visions of the internet in the United States and the EU play out in the coming years, particularly under the Biden administration, will matter greatly for shaping the future of the internet, its freedom, and its openness.

Read More