Rather than just playing defense, how can democracies go on offense against disinformation? That’s a question Alina Polyakova, the CEO of the Center for European Policy Analysis, and Daniel Fried, the Weiser Family Distinguished Fellow at the Atlantic Council, try to answer in their new paper, “Democratic Offense Against Disinformation.” On this episode of Lawfare’s Arbiters of Truth series on platforms and disinformation, Quinta Jurecic sits down with Polyakova and Fried to consider what it would mean for democracies to take the initiative in combating foreign purveyors of disinformation.
Beginning in the 1990s and accelerating after China’s accession to the World Trade Organization in 2001, many companies globalized their sourcing and production and embraced lean manufacturing techniques to reduce costs. As supply chains moved abroad, global trade jumped from 39% of global GDP in 1990 to 58% in 2019. But this move toward globalization exposed companies to a plethora of supply chain risks, such as extreme weather events, labor disputes, cyberattacks, and supplier disruptions. Growing awareness of these risks slowed globalization—a phenomenon sometimes called “slowbalisation”—and between its 2008 peak and today, global trade as a percentage of GDP shrank from 61% to 58%. The COVID-19 pandemic and the economic crisis accompanying it have only accelerated these trends and revealed additional supply chain risks.
COVID-19 is not the first epidemic to disrupt supply chains—SARS, measles, swine flu, Ebola, and avian flu all resulted in business interruptions—but none of these epidemics disrupted global trade and domestic supply chains as much as COVID-19. The ongoing pandemic has highlighted structural problems in global supply chains. The concentration of manufacturing of essential medical goods and equipment in China has revealed what some regard as a dangerous over-reliance on a single country for products critical to national health and economies. Surging customer demand for some goods (healthcare products and equipment, groceries, and household products), which often shifted geographically from hotspot to hotspot, and dramatic decreases in demand for other goods (essentially everything non-healthcare) exposed the inability of supply chains to quickly shift production and logistics in response.
These stresses revealed the fragility of the modern supply chain and demand a reset in the design of supply chain networks to improve resilience and agility. Companies and governments alike are realizing that efficiency cannot be the sole criterion around which supply chains are designed when it comes at the expense of resilience. Now more than ever, a new paradigm of competitive resilience is needed if companies are to redesign their supply chains for the long haul rather than revert to their pre-pandemic practices.
With ridesharing services making up a crucial part of U.S. transportation infrastructure, American policymakers face important questions about how to best write rules for the industry. On Oct. 23, Sanjay Patnaik, the director of the Center on Regulation and Markets and the Bernard L. Schwartz Chair in Economic Policy Development in Economic Studies at Brookings, sat down with Loni Mahanta, the vice president of policy development and research at Lyft, to discuss the shape of the industry, the sharing economy, and data and safety issues.
As Chinese companies like TikTok gain access to U.S. markets and ever more data on American citizens, many observers have argued that federal privacy legislation has become a national-security imperative. Yet concerns about China and national security are only two of several reasons for the United States to enact such legislation. When it comes to strengthening privacy, digital trade, and U.S. national security, it’s important to recognize what privacy legislation would and would not accomplish on its own—and why additional steps are needed.
A federal privacy law would provide consumers with overdue protections and establish a more consistent framework for the U.S. government to answer difficult questions about the American relationship with China. But such a law is only a first step toward advancing U.S. security and addressing differences between the United States and other countries—particularly America’s European allies—in their approaches to data governance. To improve U.S. data security while retaining the openness required for innovation and competitive strength, the Biden administration will also need to prioritize reforming cybersecurity liability, bolstering U.S. responses to malicious cyber activity, and revising certain surveillance procedures to address the concerns of American allies and trading partners.
Online violent extremist material represents a major challenge to our digital public sphere, both for the threat it poses to an open internet and for its role in inciting further violence. To combat the presence of such material on their platforms, internet companies banded together to form the Global Internet Forum to Counter Terrorism (GIFCT), and on this week’s episode of Lawfare’s Arbiters of Truth series on platforms and disinformation, Evelyn Douek and Quinta Jurecic speak with the organization’s executive director, Nicholas Rasmussen. The GIFCT works to facilitate efforts across internet platforms to prevent the spread of terrorist and extremist material, but that work comes with thorny questions: how best to balance free-speech concerns with restrictions on content, and how to address the accountability problems associated with the organization’s work.
Ahead of the U.S. election and in anticipation of a flood of misinformation around the vote, Alex Stamos, who directs the Stanford Internet Observatory, helped set up the Election Integrity Partnership to detect and mitigate election-related misinformation. This week on Lawfare’s Arbiters of Truth series on disinformation, Stamos talks with Evelyn Douek and Quinta Jurecic about what he and his team observed during the election and how the information ecosystem coped with massive amounts of mis- and disinformation.
In the digital economy, platforms require us to rethink the economics of exchange. Platforms such as Uber and Airbnb and the app stores run by Apple and Google don’t provide their customers with any tangible good. Rather, they create marketplaces for consumers and businesses to exchange goods. In building hugely successful platforms, these companies have built massive communities in which apps are bought and sold, rides are hailed, and apartments are rented out.
Successful platforms also create points of control. Take Google’s Android operating system, which has allowed the company to dominate the smartphone software industry. On the one hand, the software is entirely open source: Anyone can review it and write apps for it. But to achieve effective distribution to the full ecosystem of phones running Android, apps have to go through Google’s app store review. By controlling Android and the app store, Google sets the standards for how the ecosystem works and what apps appear in it.
As platforms continue to grow, control over the trade in goods and services is shifting from countries to digital platforms. And as trade, labor, and money grow increasingly digitized and are exchanged on platforms, countries need to rethink their positions in the global flow of these goods. If they are to gain a competitive advantage, countries need to increasingly pursue a platform strategy.
No country is doing this as effectively as China, which in recent years has set up a concerted country-as-a-platform strategy, aggressively exporting its digital infrastructure, playing a critical role in the development of technical standards, and developing unique points of control in the digital economy. Much like Google established itself as a dominant player in the smartphone ecosystem, China is attempting to do the same in an increasingly digital geopolitical landscape. Understanding this dynamic will be key to a future Biden administration getting the U.S. relationship with China right.
On June 24, the New York Times reported the frightening story of Detroit resident Robert Julian-Borchak Williams. Williams, who is African American, lives in the wealthy Detroit suburb of Farmington Hills and was contacted in January by the Detroit Police Department and told to turn himself in. After ignoring what he assumed was a prank, Williams was arrested by two police officers in front of his wife and two young daughters as he arrived home from work. He was released on bail thirty hours after being detained, once it became clear the police had arrested the wrong man.
As the Times put it, Williams’s case is noteworthy because it may be the first known example of an American wrongfully arrested on the basis of a flawed match from a facial recognition algorithm. Williams’s story brings facial recognition technologies (FRT) squarely into the ongoing conversation in the United States around racial injustice. In May of this year, Stanford’s Institute for Human-Centered Artificial Intelligence convened a workshop to discuss emerging questions about the performance of facial recognition technologies. Although the workshop was held before the nationwide upheaval sparked by the killing of George Floyd, the issues covered are central to the ongoing reckoning with systemic inequities, discrimination, and technology.
Facial recognition technologies have grown in sophistication and adoption across American society: Consumers now use facial recognition tech to unlock their smartphones and cars, retailers use these systems for targeted advertising and to monitor stores for shoplifters, and law enforcement agencies turn to them to identify suspects. But as the popularity of facial recognition tech has grown, significant anxieties around its use have emerged—including declining expectations of privacy, worries about the surveillance of public spaces, and algorithmic bias perpetuating systemic injustices. In the wake of the public demonstrations denouncing the deaths of George Floyd, Breonna Taylor, and Ahmaud Arbery, Amazon, Microsoft, and IBM all announced they would pause their facial recognition work for law enforcement agencies. Given the potential for facial recognition algorithms to perpetuate racial bias, we applaud these moves. But the ongoing conversation around racial injustice also requires a more sustained focus on the use of these systems.
To that end, we want to describe actionable steps that regulators at the federal, state, or local level (or private actors who deploy or use FRT) can take to build an evaluative framework that ensures facial recognition algorithms are not misused. Technologies that work in controlled lab settings may not work as well under real-world conditions, and this gap has both a data dimension and a human dimension. The former entails what we call “domain shift”: models perform one way in development settings and another way in end-user applications. The latter refers to differences in how the output of an FRT model is interpreted across the institutions using the technology, which we refer to as “institutional shift.”
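The idea of domain shift can be illustrated with a toy sketch in Python. The example below is not drawn from any real FRT system; the score distributions and the threshold are invented for illustration. A decision threshold that works well on well-separated development data degrades when the deployment data distribution shifts (for instance, due to different cameras, lighting, or demographics):

```python
# Toy illustration of "domain shift": a decision threshold tuned on
# development data degrades when the deployment distribution differs.
# All numbers are synthetic; this is not a real face recognition model.
import random

random.seed(0)

def make_data(n, pos_mean, neg_mean, spread):
    """Generate (match_score, label) pairs from two Gaussians:
    label 1 = true match, label 0 = non-match."""
    data = [(random.gauss(pos_mean, spread), 1) for _ in range(n)]
    data += [(random.gauss(neg_mean, spread), 0) for _ in range(n)]
    return data

def accuracy(data, threshold):
    """Fraction of pairs classified correctly by score > threshold."""
    return sum((score > threshold) == bool(label) for score, label in data) / len(data)

# Development setting: match and non-match scores are well separated.
dev = make_data(1000, pos_mean=2.0, neg_mean=-2.0, spread=1.0)
# Deployment setting: the score distributions have drifted closer together,
# so the same threshold makes many more errors.
deploy = make_data(1000, pos_mean=0.5, neg_mean=-0.5, spread=1.0)

threshold = 0.0
print(f"dev accuracy:    {accuracy(dev, threshold):.2f}")
print(f"deploy accuracy: {accuracy(deploy, threshold):.2f}")
```

The same fixed threshold that looks nearly perfect in development performs far worse in deployment, which is why validation on data resembling the actual operating environment, rather than lab benchmarks alone, matters for the evaluative framework described above.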
Policymakers can ensure that responsible protocols are in place to validate that facial recognition technology works as billed and to inform decisions about whether and how to use FRT. In building a framework for responsible testing and development, policymakers should give regulators stronger auditing authority and leverage the procurement process to prevent facial recognition applications from evolving in ways that would harm the broader public.
No history of China and the internet would be complete without reference to what in retrospect must be one of history’s poorest metaphors. In March 2000, President Bill Clinton famously noted that “the internet has changed America” and that it would likely do the same to China regardless of the “Great Firewall” it had built: “There’s no question China has been trying to crack down on the internet. Good luck! That’s sort of like trying to nail Jell-O to the wall.”
With the benefit of hindsight, it is clear that Clinton failed to appreciate the real intent of Beijing’s approach to internet governance. The Chinese Communist Party’s objective in controlling public opinion is not to nail it to the wall but rather, like Jell-O, to mold it. Ever since the bloody crackdown on pro-democracy demonstrations in 1989, the CCP has conceived of information control as a process of “guiding public opinion”: applying the vast apparatus of the propaganda department and the party-state press to mold the individual’s sense of truth and thereby maintain the stability of the regime. And ever since the dawn of the internet, the Party has safeguarded this domestic project of “guidance” by restricting access to the global internet through the “Great Firewall,” a vast system of human and technical controls.
Today, twenty years after Clinton’s quip, the CCP’s grip on information is perhaps more assured than it has been at any time in the reform era. Thanks to new tools of control and surveillance, in addition to the reconsolidation of media and internet oversight, the Chinese internet has proved an exceptionally moldable medium. This experiment in technology-empowered social and political molding has been so successful that the Party now appears to be experimenting with technologies to allow Chinese internet users to access the web beyond the Great Firewall—while maintaining key features of the Party’s censorship regime.