Commentary

An agenda for US-EU cooperation on Big Tech regulation

Brussels, Belgium, June 15, 2021: European Council President Charles Michel and European Commission President Ursula von der Leyen meet U.S. President Joe Biden prior to the EU-US summit at the European Council. Photograph by Valeria Mongelli / Hans Lucas.

During President Joe Biden’s first six months in office, his administration has made a priority of revitalizing American alliances and intensifying scrutiny of the technology industry. In Europe, policymakers are also examining the influence of tech companies. These efforts on both sides of the Atlantic crystallized in June with the formation of the EU-US Trade and Technology Council (TTC), which Biden announced at a summit alongside his EU counterpart, European Commission President Ursula von der Leyen. This new body represents an opportunity for policymakers in the United States and Europe to strengthen efforts to improve the online information ecosystem. With politicians and antitrust investigators in both Washington and Brussels scrutinizing the market power of major tech companies, the TTC gives officials a venue to ensure their respective efforts stay aligned.

The TTC should be a catalyst for policy that works to govern the myriad information and content-related problems online and a venue for policymakers to answer pressing questions about how to regulate online ecosystems. Democracies will fail in their mandate to produce both safe and open communication spaces if they do not confront questions about what makes some types of content manipulation unacceptable while others, including influence operations carried out in democratic states, are embraced. The TTC ought to serve as a springboard for resolving these problems and broader concerns around informational manipulation, digital hate, and influence operations.

In 2016, Russia opened the world’s eyes to just how easy it is to use informational flows on social media platforms to undermine democratic politics. Since then, we’ve seen modest momentum in policy work to curb the panoply of information operations carried out by both state and non-state actors. On the U.S. side, both partisan and procedural gridlock have resulted in a lack of relevant federal legislation. Meanwhile, California, Colorado, and Virginia have passed their own comprehensive data privacy laws. This state-by-state fragmentation of policy may pose problems should Washington finally decide to act. The EU has fared comparatively better, but it still has much work to do. In the absence of national reforms, platforms have stepped up their efforts to discover information operations on their systems. But while these companies are growing increasingly adept at uncovering such operations, much work remains to safeguard the integrity of online platforms. We need governmental policy that mandates transparency and continued action in this space.

On the upside, we are starting to see the beginnings of serious tech policy reform in both the United States and the EU. In the last month or so, the U.S. House of Representatives has begun to develop a semblance of direction in its efforts to regulate Big Tech, introducing five legislative proposals aimed at curbing market power and corporate consolidation in the industry. The EU, by comparison, has taken a more robust approach to regulating the online information space, with the European Commission putting forward a number of promising policy proposals. The Digital Services Act (DSA) aims to rid platforms of illegal online content, such as hate speech or incitement to violence, by increasing platform liability for failure to remove such content, requiring more regular reporting and auditing, and improving cooperation among service providers, government, and civil society. The EU’s AI Act categorizes artificial intelligence systems according to the risk they pose to users and proposes to prohibit, among others, systems used for user manipulation. Later in 2021, legislation on transparency in political advertising will be introduced, and the strengthened final form of the Code of Practice on Disinformation will be released.

Though European states are taking the initiative, there is still a need to tie their efforts to those in the United States. Given that Facebook, Google, Microsoft, Amazon, and Apple are all U.S.-based, laws passed in the United States, coordinated with efforts in the EU, are likely to have greater impact in correcting the myriad informational problems that currently exist online. But as transatlantic democracies come together to build a more democratic internet, they need clear, systematic, and solution-oriented principles to guide their policymaking. Too often, the debate around these issues returns to stale proposals. For instance, it is time that discussions about confronting disinformation move beyond everyone’s least favorite 26-word section of the Communications Decency Act: Section 230. Policy action confronting digital informational problems must transcend any one law while considering both the functionality of the technology and the firms at the root of such issues. Moreover, lawmakers and companies practicing self-regulation have to consider underlying, associated, and long-term social problems. They should ask: How are the policies we create to address problems like disinformation online simultaneously working to repair our social fabric?

Recent progress in digital regulation has been accompanied by a range of setbacks. In recent weeks, a federal judge in the United States dismissed two antitrust lawsuits against Facebook because the complaints relied on outdated definitions of monopoly. Germany’s Network Enforcement Act (NetzDG), which seeks to combat hate speech and other illegal content online, has been repeatedly assailed by detractors who argue it was rushed and consequently flawed. Most of a similar law in France has been deemed unconstitutional by the country’s high court. While the strengths of the EU’s DSA are grounded in what the Electronic Frontier Foundation called “a bold, evidence based” approach that prioritizes systematization of policy over event-driven panic, it still faces problems of scale, efficacy, applicability, and legality. Its success depends heavily upon broad access to data from social media companies, something firms have struggled to facilitate for outside researchers.

The failures of laws in France, the fragmented policies of various U.S. states, and the lack of action at the federal level in the United States all point to a shortage of basic principles, grounded in empirical research, guiding these policies. Rushed and untested approaches hurt not only the digital platforms but also users and the quality of the information space itself. Given these considerations, it is crucial that the TTC set tangible principles aimed at guiding multi-sector, cross-platform efforts to manage the digital information ecosystem.

In collaboration with GLOBSEC’s Alliance for Healthy Infosphere and several other organizations and individuals from both Europe and North America, we have crafted a set of standards aimed at building a healthy online information space. They are grounded in research and past regulatory experience and speak to issues spanning the transatlantic information environment. These simple standards include transparency, accountability (for regulators, digital platforms, media, and civil society alike), freedom (for users to make choices about their own data), and proportionality (what is illegal offline should be illegal online).

Many of the principles we list are not new. Our intention, rather, is to bring them together in one place in order to present them cohesively to the policy and technology communities. Too often, legislators and firms have been derailed by a myopic focus on one or another of these issues and have failed to take a more systematic approach, or, in D.C., to pass relevant laws full stop. It is crucial that new regulations in the digital information space be holistic rather than piecemeal. The TTC presents an opportunity to provide such a broad approach. Its multinational stature could enable it to rise above the partisan politics of any one country, while its underlying goal of rebuilding transatlantic alliances could make any recommendations it issues an easier sell in national legislatures.

With the political climate receptive to regulatory efforts on both sides of the Atlantic, the TTC is the initiative currently best positioned to deliver real policy change. But the diversity of the states involved creates opportunities for conflict that could hamper results. Agreeing on a basic set of principles to guide the TTC’s work and decision-making before its working groups launch would help limit such conflicts. Transatlantic principles that consider and combine positions from both sides of the Atlantic would address this need.

Improving policy requires greater transparency from digital platforms regarding what is happening in the online information spaces they operate. Collectively, policymakers and tech firms know too little—or take too narrow a view—to make informed decisions. U.S. congressional hearings with Big Tech executives and unsuccessful attempts to regulate disinformation illustrate that few in government properly understand the problem. In the rare moments they convene around these issues, this lack of comprehension makes them even less able to agree on what needs to happen.

The transparency offered by companies, meanwhile, is difficult to evaluate. Facebook, for example, claims its AI detects 95% of the hate speech that is removed from the platform. But do we know how much it misses? Is it more successful in the United States than elsewhere? Are there specific languages or cultural contexts it has problems with? While some platforms’ extant transparency reports are helpful, reporting and identification of hate speech and incitement to violence, for example, should be done with independent oversight and not via organizations that the platforms themselves create and (however tangentially) control.
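
To see why that statistic answers a different question than the one the public cares about, consider a back-of-the-envelope sketch in Python. All numbers here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical figures illustrating why a high "proactive rate" does not
# reveal how much hate speech a platform misses entirely.

removed_total = 1_000_000   # posts removed as hate speech (assumed)
removed_by_ai = 950_000     # of those, flagged by AI before any user report

proactive_rate = removed_by_ai / removed_total
print(f"Proactive rate: {proactive_rate:.0%}")  # 95% -- the kind of figure reported

# The proactive rate is computed only over content that was removed.
# Suppose (again hypothetically) half as much again was never caught at all:
never_removed = 500_000

recall = removed_total / (removed_total + never_removed)
print(f"Share of all hate speech actually removed: {recall:.0%}")  # 67%
```

In other words, the reported 95% is a ratio over removed content only; without an independent estimate of how much hate speech is never caught, the miss rate remains unknowable.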

The TTC’s next step should be to map the digital communication space to provide a comprehensive picture of the scope of information manipulation, hate speech, and other harmful content across the transatlantic area. This mapping should focus on several parameters: the nature of the manipulative content (i.e., whether it is organic or inorganic), its volume, the speed at which it spreads, who spreads it (including super-spreaders and cooperation hubs), the role of algorithms in spreading content further (or curbing it), and, finally, the subsequent actions undertaken by platforms. In short, we must understand what is being spread, who is spreading it, and how they are doing so.
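
To make these parameters concrete, here is a minimal sketch of what a single record in such a mapping dataset might look like. This is purely illustrative; the field names and types are our own assumptions, not a proposed standard:

```python
from dataclasses import dataclass, field
from enum import Enum

class Origin(Enum):
    """Nature of the manipulative content."""
    ORGANIC = "organic"        # grassroots, uncoordinated spread
    INORGANIC = "inorganic"    # coordinated or automated amplification

@dataclass
class ManipulationRecord:
    """One observed instance of manipulative content, per the parameters above."""
    origin: Origin                             # organic vs. inorganic
    volume: int                                # how much of it is circulating
    spread_rate: float                         # e.g., shares per hour at peak
    spreaders: list[str] = field(default_factory=list)         # incl. super-spreaders
    coordination_hubs: list[str] = field(default_factory=list)  # accounts coordinating spread
    algorithmically_amplified: bool = False    # did recommendation systems boost it?
    platform_action: str = "none"              # e.g., "labeled", "downranked", "removed"
```

Aggregated across platforms, records like these would supply the what, who, and how picture the mapping calls for.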

Platforms ought to be willing and eager participants in this endeavor in order to make concrete their recent moves toward transparency and action regarding influence operations and other harmful content. We must more clearly understand what is being done in response to these issues, and companies should be mandated to publicly document their efforts. The mechanism to put this into practice can be borrowed from the EU Code of Practice on Disinformation mentioned above. Its co-regulatory principle depends on close cooperation between platforms, governments, and other vetted stakeholders, such as research institutions. These institutions would be best positioned to carry out the mapping project, working together with platforms to ensure objectivity and to protect sensitive platform data from becoming public.

This type of monitoring must go hand-in-hand with independent audits of platforms’ efforts to manage disinformation, conducted by third-party experts rather than partisan or tech-created entities. These focused examinations should pay specific attention to implementation, effectiveness, and how effectiveness is determined in every country in which the platforms operate. Researchers with deep cultural understanding of particular locales should oversee the development of country-specific measures, efficacy tests, and mechanisms for sharing results.

TTC-initiated mapping and audits would facilitate more informed decision-making at both the regional and national levels and help North American and EU allies be more proactive in confronting digital efforts to undermine democracies. If the countries and platforms involved contributed to a joint fund to pay for the monitoring and audits, it would be a small price for the improvements these measures could bring to the transatlantic information space.

The problems we face in the digital information space are complex. The steps we take to tackle them should follow any other problem-solving strategy: first map, then understand, then address. At the same time, we must shift from a reactive to a more proactive approach. The digital space is a constantly evolving environment, and policy solutions must remain flexible enough to respond accordingly. Basic policy-enforced mechanisms for monitoring and mapping the information space, and for measuring the effectiveness of steps taken against problems like disinformation, will give us a more comprehensive picture of the scale of the problems we face and provide solid ground for transatlantic cooperation.

Samuel Woolley is the director of the propaganda research program at the Center for Media Engagement, the research director of disinformation work at the Good Systems initiative, and an assistant professor in the School of Journalism—all at the University of Texas at Austin.
Dominika Hajdu is a research fellow and the program manager of the Strategic Communication Program at GLOBSEC.