
Tomorrow’s tech policy conversations today

Visitors check their phones behind the screen advertising facial recognition software during the Global Mobile Internet Conference (GMIC) at the National Convention Center in Beijing, China, April 27, 2018. REUTERS/Damir Sagolj

On June 24, the New York Times reported the frightful story of Detroit resident Robert Julian-Borchak Williams. Williams, who is African American, lives in the wealthy Detroit suburb of Farmington Hills and was contacted in January by the Detroit Police Department, which told him to turn himself in. After ignoring what he assumed was a prank, Williams was arrested by two police officers in front of his wife and two young daughters as he arrived home from work. Thirty hours after being detained, Williams was released on bail after it became clear the police had arrested the wrong man.

As the Times put it, Williams’s case is noteworthy because it may be the first known example of an American wrongfully arrested on the basis of a flawed match from a facial recognition algorithm. Williams’s story brings facial recognition technologies (FRT) squarely into the ongoing conversation in the United States around racial injustice. In May of this year, Stanford’s Institute for Human-Centered Artificial Intelligence convened a workshop to discuss emerging questions about the performance of facial recognition technologies. Although the workshop was held before the nationwide upheaval sparked by the killing of George Floyd, the issues covered are central to the ongoing reckoning with systemic inequities, discrimination, and technology.

Facial recognition technologies have grown in sophistication and adoption across American society: Consumers now use facial recognition tech to unlock their smartphones and cars, retailers use these systems for targeted advertising and to monitor stores for shoplifters, and law enforcement agencies turn to them to identify suspects. But as the popularity of facial recognition tech has grown, significant anxieties around its use have emerged—including declining expectations of privacy, worries about the surveillance of public spaces, and algorithmic bias perpetuating systemic injustices. In the wake of the public demonstrations denouncing the deaths of George Floyd, Breonna Taylor, and Ahmaud Arbery, Amazon, Microsoft, and IBM all announced they would pause their facial recognition work for law enforcement agencies. Given the potential for facial recognition algorithms to perpetuate racial bias, we applaud these moves. But the ongoing conversation around racial injustice also requires a more sustained focus on the use of these systems. 

To that end, we want to describe actionable steps that regulators at the federal, state, or local level (or private actors who deploy or use FRT) can take to build an evaluative framework that ensures facial recognition algorithms are not misused. Technologies that work in controlled lab settings may not work as well under real-world conditions, a problem with both a data dimension and a human dimension. The former involves what we call “domain shift”: models may perform one way in development settings and another way in end-user applications. The latter refers to differences in how the output of an FRT model is interpreted across the institutions using the technology, which we call “institutional shift.”
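To make domain shift concrete, here is a minimal sketch of what checking for it could look like: evaluate the same model on its development benchmark and on data drawn from the deployment environment, then compare the results. Everything in the sketch is hypothetical; the stub model, the datasets, and the 2% tolerance are illustrative placeholders, not part of any real FRT system.

```python
# Minimal sketch of a domain-shift check: evaluate one face-matching
# model on (image_a, image_b, same_person) pairs from two sources and
# compare accuracy. All names and data here are hypothetical.

def evaluate(model, pairs):
    """Fraction of pairs where the model's match decision equals ground truth."""
    correct = sum(1 for a, b, same in pairs if model(a, b) == same)
    return correct / len(pairs)

def domain_shift_report(model, dev_pairs, field_pairs, max_gap=0.02):
    """Flag the model if field accuracy falls more than max_gap below dev accuracy."""
    dev_acc = evaluate(model, dev_pairs)
    field_acc = evaluate(model, field_pairs)
    gap = dev_acc - field_acc
    flag = "  <- gap exceeds tolerance; investigate before deployment" if gap > max_gap else ""
    print(f"dev accuracy:   {dev_acc:.3f}")
    print(f"field accuracy: {field_acc:.3f}")
    print(f"gap:            {gap:+.3f}{flag}")

if __name__ == "__main__":
    def always_match(a, b):  # toy stand-in for a real face-matching model
        return True
    dev_pairs = [("a1", "a2", True), ("b1", "b2", True), ("c1", "c2", False)]
    field_pairs = [("d1", "d2", False), ("e1", "e2", False), ("f1", "f2", True)]
    domain_shift_report(always_match, dev_pairs, field_pairs)
```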

Policymakers can ensure that responsible protocols are in place to validate that facial recognition technology works as billed and to inform decisions about whether and how to use FRT. In building a framework for responsible testing and development, policymakers should give regulators stronger auditing authority and use the procurement process to prevent facial recognition applications from evolving in ways that would harm the broader public.
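One hypothetical building block of such an audit could be requiring error rates to be reported separately for each demographic group rather than as a single aggregate, since aggregate accuracy can hide large disparities between groups. The sketch below is illustrative only; the groups, records, and the 1% tolerance are our assumptions, not a real audit protocol.

```python
# Hedged sketch of a disaggregated audit: compute the false match rate
# (non-matching pairs the system wrongly accepted) per demographic group.
# Groups, records, and the 1% tolerance below are hypothetical.
from collections import defaultdict

def false_match_rates(results):
    """results: iterable of (group, system_said_match, actually_same_person)."""
    trials = defaultdict(int)  # non-matching pairs seen, per group
    errors = defaultdict(int)  # of those, how many the system accepted
    for group, predicted, actual in results:
        if not actual:  # only genuinely non-matching pairs can yield a false match
            trials[group] += 1
            errors[group] += int(predicted)
    return {g: errors[g] / trials[g] for g in trials}

audit = false_match_rates([
    ("group_a", True, False),   # a false match
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
])
for group, fmr in sorted(audit.items()):
    flag = "  <- above 1% tolerance" if fmr > 0.01 else ""
    print(f"{group}: false match rate {fmr:.1%}{flag}")
```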

An unloaded Twitter website is displayed in front of an Islamic State flag in this photo illustration in Zenica, Bosnia and Herzegovina, February 3, 2016. REUTERS/Dado Ruvic

Online violent extremist material represents a major challenge to our digital public sphere, both for the threat it poses to an open internet and for its role in inciting further violence. To combat the presence of such material on their platforms, internet companies banded together to form the Global Internet Forum to Counter Terrorism (GIFCT), and on this week’s episode of Lawfare’s Arbiters of Truth series on platforms and disinformation, Evelyn Douek and Quinta Jurecic speak with the organization’s executive director, Nicholas Rasmussen. The GIFCT works to facilitate efforts across internet platforms to prevent the spread of terrorist and extremist material, but that work comes with thorny questions: how best to balance free-speech concerns with restrictions on content, and how to address the accountability problems associated with its work.


Trump supporters cheer as they march along Atlantic Avenue during a “Stop the Steal” rally in Delray Beach, Florida, on November 14, 2020. (Greg Lovett / The Palm Beach Post)

Ahead of the U.S. election and in anticipation of a flood of misinformation around the vote, Alex Stamos, who directs the Stanford Internet Observatory, helped set up the Election Integrity Partnership to detect and mitigate election-related misinformation. This week on Lawfare’s Arbiters of Truth series on disinformation, Stamos talks with Evelyn Douek and Quinta Jurecic about what he and his team observed during the election and how the information ecosystem coped with massive amounts of mis- and disinformation.


People visit a booth during the Huawei Connect conference in Shanghai, China, September 23, 2020. REUTERS/Aly Song

In the digital economy, platforms require us to rethink the economics of exchange. Platforms such as Uber and Airbnb and the app stores run by Apple and Google don’t provide their customers with any tangible good. Rather, they create marketplaces for consumers and businesses to exchange goods. In doing so, these companies have built massive communities in which apps are bought and sold, rides are hailed, and apartments are rented out.

Successful platforms also create points of control. Take Google’s Android operating system, which has allowed the company to dominate the smartphone software industry. On the one hand, the core software is open source: Anyone can review it and write apps for it. On the other, to achieve effective distribution across the full ecosystem of phones running Android, apps have to go through Google’s app store review. By controlling Android and the app store, Google sets the standards for how the ecosystem works and which apps appear in it.

As platforms continue to grow, control over the trade in goods and services is shifting from countries to digital platforms. And as trade, labor, and money grow increasingly digitized and are exchanged on platforms, countries need to rethink their positions in the global flow of these goods. To gain a competitive advantage, countries increasingly need to pursue platform strategies of their own.

No country is doing this as effectively as China, which in recent years has set up a concerted country-as-a-platform strategy, aggressively exporting its digital infrastructure, playing a critical role in the development of technical standards, and developing unique points of control in the digital economy. Much like Google established itself as a dominant player in the smartphone ecosystem, China is attempting to do the same in an increasingly digital geopolitical landscape. Understanding this dynamic will be key to a future Biden administration getting the U.S. relationship with China right.



Chinese policemen of the China Network Management of the Public Security check an internet cafe in Beijing, China, February 22, 2012.

No history of China and the internet would be complete without reference to what in retrospect must be one of history’s poorest metaphors. In March 2000, President Bill Clinton famously noted that “the internet has changed America” and that it would likely do the same to China regardless of the “Great Firewall” it had built: “There’s no question China has been trying to crack down on the internet. Good luck! That’s sort of like trying to nail Jell-O to the wall.”

With the benefit of hindsight, it is clear that Clinton failed to appreciate the real intent of Beijing’s approach to internet governance. The Chinese Communist Party’s objective in controlling public opinion is not to nail it to the wall but rather, like Jell-O, to mold it. Ever since the bloody crackdown on pro-democracy demonstrations in 1989, the CCP has conceived of information control as a process of “guiding public opinion,” of applying the vast apparatus of the propaganda department and the party-state press to mold the individual’s sense of truth, thereby maintaining the stability of the regime. And ever since the dawn of the internet, the Party has safeguarded the domestic project of “guidance” by restricting access to the global internet by means of the “Great Firewall,” a vast system of human and technical controls.

Today, twenty years after Clinton’s quip, the CCP’s grip on information is perhaps more assured than it has been at any time in the reform era. Thanks to new tools of control and surveillance, in addition to the reconsolidation of media and internet oversight, the Chinese internet has proved an exceptionally moldable medium. This experiment in technology-empowered social and political molding has been so successful that the Party now appears to be experimenting with technologies to allow Chinese internet users to access the web beyond the Great Firewall—while maintaining key features of the Party’s censorship regime.


A robot helping medical teams treat patients suffering from the coronavirus disease (COVID-19) is pictured in a corridor of the Circolo hospital in Varese, Italy, April 1, 2020. REUTERS/Flavio Lo Scalzo

In recent years, researchers have developed medical robots and chatbots to monitor vulnerable elders and assist with some basic tasks. Artificial intelligence-driven therapy apps aid some mentally ill individuals; drug ordering systems help doctors avoid dangerous interactions between different prescriptions; and assistive devices make surgery more precise and safer—at least when the technology works as intended. And these are just a few examples of technological change in medicine.

The gradual embrace of AI in medicine also raises a critical liability question for the medical profession: Who should be responsible when these devices fail? Getting this liability question right will be critically important not only for protecting patients’ rights, but also for providing proper incentives in the political economy of innovation and the medical labor market.


Journalists wait for news after early results in the 2020 U.S. presidential election at the White House in Washington, U.S., November 4, 2020. REUTERS/Tom Brenner

Every week, the TechStream newsletter brings you the latest from Brookings’ TechStream: news and analysis about the world of technology. To sign up and get this newsletter delivered to your inbox, click here.

Widespread misinformation around the U.S. election

With no immediate winner to be declared in the U.S. presidential election, the coming days and weeks will not only present a major test for American democracy, but also a significant challenge to the election officials and internet platforms struggling to contend with misinformation—including false claims by President Trump and his surrogates about rampant voter fraud.

The good news is that the tech sector and government officials appear better prepared for misinformation than they were four years ago. Twitter, Facebook, and YouTube all rolled out at least some measures to label or remove patently false information about the election, and the impact of foreign influence operations appears to have been minimal.

The bad news is that the White House is now the biggest vector for creating and amplifying electoral misinformation. With vote counting still under way on Wednesday, Trump took to Twitter to baselessly claim that the election was being stolen. Although Trump’s premature declaration of victory and claims of voter fraud aren’t a surprise—he signaled well ahead of Election Day that he planned to make them—his online supporters are likely to spread those claims widely online, setting up a major challenge for the platforms.



A man works in a laboratory of the Chinese vaccine maker Sinovac Biotech, which is developing an experimental COVID-19 vaccine, during a government-organized media tour in Beijing, China, September 24, 2020. REUTERS/Thomas Peter

Since May 2020, Operation Warp Speed has been charged with producing and delivering 300 million doses of a COVID-19 vaccine by January 2021. While the scientific and technical challenges are daunting, the public-health challenge may be equally so. In the last two decades, anti-vaccination groups disseminating misinformation about vaccines have proliferated online, far outstripping the reach of pro-vaccine groups. Diseases that had become rare, such as measles, have experienced new outbreaks because of declining vaccination rates. The anti-vaccination movement was already poised to sabotage COVID-19 vaccine uptake.

And now, as with all aspects of COVID-19, politics has crept into the vaccine conversation in ways that threaten to derail public confidence in a potential treatment key to halting the pandemic. Without trust in the efficacy of a vaccine, it is far less likely that society will reach herd immunity through vaccination. 
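To see why uptake matters quantitatively, a standard epidemiological rule of thumb puts the herd immunity threshold at 1 - 1/R0, and dividing that threshold by vaccine efficacy gives the coverage actually required. The toy calculation below is a sketch under simple assumptions; the R0 and efficacy values are illustrative placeholders, not figures from this article.

```python
# Toy herd-immunity arithmetic under a simple homogeneous-mixing model:
# threshold = 1 - 1/R0; required coverage ~ threshold / vaccine_efficacy.
# The R0 and efficacy values are illustrative assumptions only.
for r0 in (2.5, 3.0):
    threshold = 1 - 1 / r0
    for efficacy in (0.95, 0.70):
        coverage = threshold / efficacy
        print(f"R0={r0}: threshold {threshold:.0%}, "
              f"efficacy {efficacy:.0%} -> required coverage {coverage:.0%}")
```

The takeaway from the arithmetic: the less effective the vaccine, the higher the share of the public that must accept it, which is why eroding trust is so dangerous.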

So how can politicians convince large swathes of the American public to take a vaccine once it becomes available? The answer may be counterintuitive, but simple: Keep mum, and let the scientists and public-health experts share the facts with the American people.
