On June 24 the New York Times reported the frightful story of Detroit resident Robert Julian-Borchak Williams. Williams, who is African American, lives in the wealthy Detroit suburb of Farmington Hills and was contacted in January by the Detroit Police Department, which told him to turn himself in. After ignoring what he assumed was a prank, Williams was arrested by two police officers in front of his wife and two young daughters as he arrived home from work. Thirty hours after being detained, Williams was released on bail after it became clear the police had arrested the wrong man.
As the Times put it, Williams’s case is noteworthy because it may be the first known example of an American wrongfully arrested on the basis of a flawed match from a facial recognition algorithm. Williams’s story brings facial recognition technologies (FRT) squarely into the ongoing conversation in the United States around racial injustice. In May of this year, Stanford’s Institute for Human-Centered Artificial Intelligence convened a workshop to discuss emerging questions about the performance of facial recognition technologies. Although the workshop was held before the nationwide upheaval sparked by the killing of George Floyd, the issues covered are central to the ongoing reckoning with systemic inequities, discrimination, and technology.
Facial recognition technologies have grown in sophistication and adoption across American society: Consumers now use facial recognition tech to unlock their smartphones and cars, retailers use these systems for targeted advertising and to monitor stores for shoplifters, and law enforcement agencies turn to them to identify suspects. But as the popularity of facial recognition tech has grown, significant anxieties around its use have emerged—including declining expectations of privacy, worries about the surveillance of public spaces, and algorithmic bias perpetuating systemic injustices. In the wake of the public demonstrations denouncing the deaths of George Floyd, Breonna Taylor, and Ahmaud Arbery, Amazon, Microsoft, and IBM all announced they would pause their facial recognition work for law enforcement agencies. Given the potential for facial recognition algorithms to perpetuate racial bias, we applaud these moves. But the ongoing conversation around racial injustice also requires a more sustained focus on the use of these systems.
To that end, we want to describe actionable steps that regulators at the federal, state, or local level (or private actors who deploy or use FRT) can take to build an evaluative framework that ensures that facial recognition algorithms are not misused. Technologies that work in controlled lab settings may not perform as well under real-world conditions, and this gap has both a data dimension and a human dimension. The former entails what we call “domain shift,” namely when models perform one way in development settings and another way in end-user applications. The latter refers to differences in how the output of an FRT model is interpreted across institutions using the technology, which we refer to as “institutional shift.”
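The data dimension of this gap can be made concrete with a small illustration. The sketch below (not drawn from the article; the data and model are entirely synthetic) trains a simple classifier on one distribution and then evaluates it on a shifted one, mimicking how an FRT system validated on vendor-curated development data can degrade when deployed on populations or imaging conditions it never saw:

```python
# Illustrative sketch of "domain shift": a model that looks accurate on
# development-style data degrades when the deployment distribution drifts.
# All data here is synthetic; this is a toy stand-in for an FRT pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes in 2-D; `shift` translates both classes,
    standing in for a deployment distribution that differs from the
    development one (new cameras, lighting, demographics, etc.)."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Development" data: what the vendor trains and validates on.
X_dev, y_dev = make_data(500)
model = LogisticRegression().fit(X_dev, y_dev)

# In-domain evaluation looks strong...
X_test, y_test = make_data(500)
acc_dev = model.score(X_test, y_test)

# ...but on a shifted deployment distribution, accuracy degrades,
# even though nothing about the model itself has changed.
X_shifted, y_shifted = make_data(500, shift=-1.5)
acc_deploy = model.score(X_shifted, y_shifted)

print(f"in-domain accuracy: {acc_dev:.2f}")
print(f"shifted accuracy:   {acc_deploy:.2f}")
```

The point of the toy example is that a single development-set accuracy number can substantially overstate deployed performance, which is why an evaluative framework would need testing on data representative of the actual deployment context.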
Policymakers can ensure that responsible protocols are in place to validate that facial recognition technology works as billed and to inform decisions about whether and how to use FRT. In building a framework for responsible testing and development, policymakers should empower regulators to use stronger auditing authority and the procurement process to prevent facial recognition applications from evolving in ways that would be harmful to the broader public.
Online violent extremist material represents a major challenge to our digital public sphere, both for the threat it poses to an open internet and for its role in inciting further violence. To combat the presence of such material on their platforms, internet companies banded together to form the Global Internet Forum to Counter Terrorism, and on this week’s episode of Lawfare’s Arbiters of Truth series on platforms and disinformation, Evelyn Douek and Quinta Jurecic speak with the organization’s executive director, Nicholas Rasmussen. The GIFCT works to facilitate efforts across internet platforms to prevent the spread of terrorist and extremist material, but that work comes with thorny questions: how best to balance free-speech concerns with restrictions on content, and how to address the accountability problems associated with its work.
Ahead of the U.S. election and in anticipation of a flood of misinformation around the vote, Alex Stamos, who directs the Stanford Internet Observatory, helped set up the Election Integrity Partnership to detect and mitigate election-related misinformation. This week on Lawfare’s Arbiters of Truth series on disinformation, Stamos talks with Evelyn Douek and Quinta Jurecic about what he and his team observed during the election and how the information ecosystem coped with massive amounts of mis- and disinformation.
In the digital economy, platforms require us to rethink the economics of exchange. Platforms such as Uber and Airbnb and the app stores run by Apple and Google don’t provide their customers with any tangible good. Rather, they create marketplaces for consumers and businesses to exchange goods. In building hugely successful platforms, these companies have built massive communities in which apps are bought and sold, rides are hailed, and apartments are rented out.
Successful platforms also create points of control. Take Google’s Android operating system, which has allowed the company to dominate the smartphone software industry. On the one hand, the software is entirely open source: Anyone can review it and write apps for it. But to achieve effective distribution to the full ecosystem of phones running Android, apps have to go through Google’s app store review. By controlling Android and the app store, Google sets the standards for how the ecosystem works and what apps appear in it.
As platforms continue to grow, control over the trade in goods and services is shifting from countries to digital platforms. And as trade, labor, and money grow increasingly digitized and are exchanged on platforms, countries need to rethink their positions in the global flow of these goods. If they are to gain a competitive advantage, countries need to increasingly pursue a platform strategy.
No country is doing this as effectively as China, which in recent years has set up a concerted country-as-a-platform strategy, aggressively exporting its digital infrastructure, playing a critical role in the development of technical standards, and developing unique points of control in the digital economy. Much like Google established itself as a dominant player in the smartphone ecosystem, China is attempting to do the same in an increasingly digital geopolitical landscape. Understanding this dynamic will be key to a future Biden administration getting the U.S. relationship with China right.
No history of China and the internet would be complete without reference to what in retrospect must be one of history’s poorest metaphors. In March 2000, President Bill Clinton famously noted that “the internet has changed America” and that it would likely do the same to China regardless of the “Great Firewall” it had built: “There’s no question China has been trying to crack down on the internet. Good luck! That’s sort of like trying to nail Jell-O to the wall.”
With the benefit of hindsight, Clinton failed to appreciate the real intent of Beijing’s approach to internet governance. The Chinese Communist Party’s objective in controlling public opinion is not to nail it to the wall but rather, like Jell-O, to mold it. Ever since the bloody crackdown on pro-democracy demonstrations in 1989, the CCP has conceived of information control as a process of “guiding public opinion,” of applying the vast apparatus of the propaganda department and the party-state press to mold the individual’s sense of truth, thereby maintaining the stability of the regime. And ever since the dawn of the internet, the Party has safeguarded the domestic project of “guidance” by restricting access to the global internet by means of the “Great Firewall,” a vast system of human and technical controls.
Today, twenty years after Clinton’s quip, the CCP’s grip on information is perhaps more assured than it has been at any time in the reform era. Thanks to new tools of control and surveillance, in addition to the reconsolidation of media and internet oversight, the Chinese internet has proved an exceptionally moldable medium. This experiment in technology-empowered social and political molding has been so successful that the Party now appears to be experimenting with technologies to allow Chinese internet users to access the web beyond the Great Firewall—while maintaining key features of the Party’s censorship regime.
In recent years, researchers have developed medical robots and chatbots to monitor vulnerable elders and assist with some basic tasks. Artificial intelligence-driven therapy apps aid some mentally ill individuals; drug ordering systems help doctors avoid dangerous interactions between different prescriptions; and assistive devices make surgery more precise and safer—at least when the technology works as intended. And these are just a few examples of technological change in medicine.
The gradual embrace of AI in medicine also raises a critical liability question for the medical profession: Who should be responsible when these devices fail? Getting this liability question right will be critically important not only for protecting patients’ rights, but also for setting proper incentives in the political economy of innovation and the medical labor market.
Every week, the TechStream newsletter brings you the latest from Brookings’ TechStream and news and analysis about the world of technology. To sign up and get this newsletter delivered to your inbox, click here.
Widespread misinformation around the U.S. election
With no immediate winner to be declared in the U.S. presidential election, the coming days and weeks will not only present a major test for American democracy, but also a significant challenge to the election officials and internet platforms struggling to contend with misinformation—including false claims by President Trump and his surrogates about rampant voter fraud.
The good news is that tech sector and government officials appear better prepared for misinformation than they were four years ago. Twitter, Facebook, and YouTube all rolled out at least some measures to label or remove patently false information about the election, and the impact of foreign influence operations appears to have been minimal.
The bad news is that the White House is now the biggest vector for creating and amplifying electoral misinformation. With vote counting still under way on Wednesday, Trump took to Twitter to baselessly claim that the election was being stolen. Although Trump’s premature declaration of victory and claims of voter fraud aren’t a surprise—he signaled well ahead of Election Day that he planned to make them—his online supporters are likely to spread those claims widely online, setting up a major challenge for the platforms.
Since May 2020, Operation Warp Speed has been charged with producing and delivering 300 million doses of a COVID-19 vaccine by January 2021. While the scientific and technical challenges are daunting, the public-health challenge may be equally so. In the last two decades, anti-vaccination groups disseminating misinformation about vaccines have proliferated online, far outstripping the reach of pro-vaccine groups. Viruses that had become rare, such as measles, have now experienced outbreaks because of declining vaccination rates. The anti-vaccination movement is thus already poised to sabotage uptake of a COVID-19 vaccine.
And now, as with all aspects of COVID-19, politics has crept into the vaccine conversation in ways that threaten to derail public confidence in a potential treatment key to halting the pandemic. Without trust in the efficacy of a vaccine, it is far less likely that society will reach herd immunity through vaccination.
So how can politicians convince large swathes of the American public to take a vaccine once it becomes available? The answer may be counterintuitive, but simple: Keep mum, and let the scientists and public-health experts share the facts with the American people.