5 questions policymakers should ask about facial recognition, law enforcement, and algorithmic bias

Photo caption: A crime analyst with the Detroit Police Department’s Crime Intel Unit, which conducts counterterrorism threat assessments, monitors feeds from surveillance cameras positioned around Detroit, Aug. 9, 2019. (Detroit Free Press)
Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.

In the futuristic 2002 film “Minority Report,” law enforcement uses predictive technology, including artificial intelligence (AI) for risk assessments, to arrest possible murderers before they commit their crimes. When a police officer himself becomes one of the accused future murderers, he goes on the run from the Department of Justice to prove that the technology is flawed. If you think this fictional film could become true, you should keep reading—because art has already become reality.

Both the promise and the peril of emerging technologies are upon us. For law enforcement, AI offers advancements over previous methods to deter and solve crime, improve efficiency, reduce racial disparities, and potentially save human and financial capital. Currently, law enforcement technologies that use machine learning algorithms include smart street lights, hot spot policing, facial recognition, drone surveillance, geofencing, risk assessments, and the use of social media data to stop gang violence. Massachusetts State Police are now deploying robotic dogs in the field, drawing comparisons to the dystopian 1987 film “RoboCop.” The functions of law enforcement and technology are also changing rapidly. In 2017, I conducted a study showing substantial support among civilians and police officers for body-worn cameras, which were viewed as a panacea for improving police-community relations. Now, AI can supposedly determine who will commit crime, predict where crime will occur, and identify potential threats—claims that amplify its perceived viability relative to body-worn cameras.

Tech companies are already creating commercial AI products with little oversight or regulation. Furthermore, they often solicit law enforcement agencies to participate in pilot studies and test trials in exchange for use of the new technology. Since many law enforcement agencies are financially strapped, the opportunity to try a shiny new toy is enticing. The United Kingdom deployed passport-recognition software despite knowing that it did not work well for people with darker or lighter skin tones. However, this is not just happening in Europe. As this paper demonstrates, the U.S. is riddled with these issues. In trying to compete with China, India, and other countries on criminal justice technology, the U.S. is compromising peak performance and potentially putting all Americans at risk.

“In trying to compete with … other countries on criminal justice technology, the U.S. is compromising peak performance and potentially putting all Americans at risk.”

But policymakers can intervene in constructive ways. Meaningful safeguards need to be put in place to protect people, and it could be argued that AI’s potential to determine who is arrested by law enforcement, incarcerated, and released from prison sits at the top of the list of reasons for caution. What do policymakers need to know about AI deployment in law enforcement, and what are the central questions to ask as regulations and safeguards are implemented?

Facial recognition in policing

One main public concern around law enforcement’s use of AI and other emerging technologies is facial recognition. As of 2016, over half of American adults’ faces were part of facial recognition databases accessible to law enforcement. Yet not everyone is worried about this deployment. Over 50% of people trust police use of facial recognition, and nearly 75% believe that facial recognition accurately identifies people. There are, however, important demographic differences. About 60% of white respondents trust police use of facial recognition, compared to slightly over 40% of Black respondents. Age matters too, though perhaps not in the direction one might expect: people under 30 are less trusting of facial recognition in policing than people over 65. Young adults’ skepticism may stem from their greater knowledge of AI’s capability to manipulate real video footage and alter what a person appears to be saying and doing.

“As of 2016, over half of the faces of American adults were part of facial recognition databases accessible to law enforcement.”

Here is the big conundrum though: Only 36% of adults believe that facial recognition is being used responsibly by private companies, which are often selling facial recognition systems to law enforcement.

Though public opinion is split on the use of facial recognition for policing, research indicates that facial recognition suffers from algorithmic bias. The National Institute of Standards and Technology (NIST) released a paper showing that facial recognition produced lower accuracy rates for women compared to men, and for Black individuals relative to white individuals. One study showed that Black women’s gender was misclassified over 33% of the time. In 2019, Amazon’s facial recognition software, Rekognition, incorrectly labeled professional athletes in Boston as criminals. The software also incorrectly labeled one in five California lawmakers as criminals. The New York City Police Department (NYPD) has manipulated blurry surveillance images, substituting photos of look-alike actors such as Woody Harrelson, to generate clearer matches on potential suspects.
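
As a rough illustration of how such disparities are measured, the sketch below computes misidentification rates by demographic group from a labeled evaluation set. The group labels, record layout, and sample data are hypothetical; a real audit, like NIST’s, would use large curated photo corpora.

```python
# Minimal sketch: measuring error-rate disparities across demographic groups.
# The records are hypothetical; a real audit would use a large labeled corpus.

from collections import defaultdict

# Each record: (demographic_group, was_the_identification_correct)
evaluation_results = [
    ("white_men", True), ("white_men", True), ("white_men", False),
    ("black_women", True), ("black_women", False), ("black_women", False),
    # ... thousands more records in a real evaluation
]

def error_rates_by_group(results):
    """Return the misidentification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in error_rates_by_group(evaluation_results).items():
    print(f"{group}: {rate:.0%} misidentified")
```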

For these and other reasons, San Francisco has banned the use of facial recognition by police; Oakland and parts of Massachusetts soon followed suit. Conversely, cities like Detroit and Chicago have used facial recognition software with little oversight for the past few years. New regulations enacted in Detroit in 2019 restrict the use of facial recognition to still photographs related to violent crimes and home invasions. Though law enforcement in the city is applauding the ability to continue using facial recognition, members of the civilian oversight committee claim that the technology is a form of “techno-racism” in a predominantly Black city with a history of police brutality and problematic police-community relations. A key concern, as mentioned previously, is that law enforcement is relying on unreliable technology that misclassifies city residents.

“While it is important that law enforcement have the opportunity to experiment with new technologies, AI should not help make decisions in criminal cases until the technology improves its accuracy.”

While it is important that law enforcement have the opportunity to experiment with new technologies, AI should not help make decisions in criminal cases until the technology improves its accuracy. There should be a moratorium on full-scale implementation while the data from pilot studies are analyzed (potentially by a third party, such as a university research center or firm) to compare policing outcomes under existing methods with outcomes under AI-assisted methods.
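
As a sketch of what such third-party analysis could look like, the code below compares one hypothetical pilot metric, case-clearance rates, under existing methods versus AI-assisted methods using a two-proportion z-test. The metric and all counts are invented for illustration.

```python
# Sketch of one piece of a third-party pilot evaluation: compare a policing
# outcome (here, a hypothetical case-clearance rate) under existing methods vs.
# AI-assisted methods with a two-proportion z-test. All counts are invented.

from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# AI-assisted pilot: 210 of 600 cases cleared; existing methods: 180 of 600.
z, p = two_proportion_z_test(210, 600, 180, 600)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")  # a small p suggests a real difference
```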

Surveillance with AI

In addition to facial recognition, there are other forms of AI deployment that policymakers should be mindful of. In its pursuit of becoming the first true smart city, San Diego deployed smart street lights in 2016. Pitched as a way to reduce energy consumption, the lights carry sensors that law enforcement now uses to monitor pedestrian, vehicle, and parking traffic, record video of the area, and solve crimes. For many city residents, the lights themselves are not the issue. Rather, the larger problem is the police’s use of the lights without the consent of the public. Three city council members have asked for a moratorium on using the lights.

San Diego is also in the process of deploying a military-grade drone over the city in 2020 that could conduct surveillance and gather intelligence comparable to the reconnaissance capabilities of the armed forces. In 2018, Rutherford County, Tennessee, became the first locality in the country to obtain federal approval to fly drones over people. County employees say that drones can be used to combat a landfill crisis, assess storm damage, watch the crowd at a white nationalist rally, and track fugitives. On the East Coast, the NYPD has used drones at a series of marches and parades, including the Pride March and the Puerto Rican Day Parade. In this regard, the sci-fi show “Black Mirror” may prove predictive—its “Hated in the Nation” episode featured advanced facial recognition and swarms of tiny, drone-like bees with lethal capabilities.

Several IT business executives in countries including the United States, France, Australia, and Canada are extremely concerned about the use of AI for autonomous weapons. Some, including members of the United Nations, outright oppose using technologies in this way. Some political and business leaders, academics, and nonprofits argue that fully autonomous weapons will actually lead to more conflict, inequality, and a greater potential for war. Interestingly, people seem to have fewer problems with the technology itself and more with the lack of regulation, transparency, privacy, and consent around it.

In 2017, law enforcement used geofencing to monitor anti-racism activists at the University of North Carolina-Chapel Hill who were protesting a Confederate statue known as “Silent Sam.” Geofencing establishes a virtual perimeter that allows for the monitoring of cell phone data, the collection of social media data (like the locations of people who tweet at a march or protest), and the collection of website information that companies use to target location-based ads for services and products. Many wondered why geofencing was used to monitor the social media activity of these anti-racism protesters but was not implemented to monitor white supremacists. The disparate deployment of these technologies elicits collective memories about how police force is used on marginalized communities. Considering the Carpenter v. United States Supreme Court decision, others questioned the legality of using geofencing in that situation. People have similar concerns about hot spot policing—in which law enforcement targets specific geographic areas where crime may be more concentrated—and wonder whether it is simply racial profiling under a predictive policing name.
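
To make the idea of a “virtual perimeter” concrete, here is a minimal sketch that checks whether a device’s reported coordinates fall inside a circular geofence using the haversine distance. The center point (placed near the UNC-Chapel Hill campus purely for illustration) and the 200-meter radius are arbitrary examples, not details of any actual deployment.

```python
# Illustrative sketch of a geofence: a circular "virtual perimeter" defined by a
# center point and a radius, with reported device locations checked against it.
# The center and radius below are arbitrary examples, not any real deployment.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (latitude, longitude) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(point, center, radius_m):
    """True if `point` (lat, lon) lies within `radius_m` meters of `center`."""
    return haversine_m(*point, *center) <= radius_m

perimeter_center = (35.9132, -79.0558)  # hypothetical protest site
print(inside_geofence((35.9140, -79.0560), perimeter_center, radius_m=200))  # True
```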

“The disparate deployment of these technologies elicits collective memories about how police force is used on marginalized communities.”

Police departments are also gaining access to private homeowners’ cameras when a crime is committed in the area. The most prominent doorbell-camera company, Ring, has entered into video-sharing partnerships with 400 police departments across the United States. Though police should have the ability to access a wide variety of resources to solve crimes, there should be a more regulated and transparent way to access video and data from private residences. For example, homeowners should be able to view the footage that was accessed from their homes and know how long the footage will be stored.
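
One hedged sketch of what that transparency could look like in practice: an access log that homeowners can query, with an automatic retention cutoff. The agency name, field layout, and 30-day retention window below are assumptions made for illustration, not features of any existing program.

```python
# Hypothetical sketch of the transparency described above: an access log that a
# homeowner can query, plus an automatic retention cutoff. The agency name,
# fields, and 30-day window are illustrative assumptions, not any real program.

from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed policy: shared footage must be deleted after 30 days

access_log = [
    {"clip_id": "frontdoor-0412",
     "agency": "Anytown Police Department",
     "accessed_at": datetime(2020, 1, 6, tzinfo=timezone.utc),
     "purpose": "burglary investigation"},
]

def homeowner_report(log, now=None):
    """Yield one line per access event, with the date the copy must be deleted."""
    now = now or datetime.now(timezone.utc)
    for entry in log:
        delete_by = entry["accessed_at"] + timedelta(days=RETENTION_DAYS)
        status = "should be deleted" if delete_by < now else "may still be retained"
        yield (f"{entry['clip_id']}: accessed by {entry['agency']} "
               f"({entry['purpose']}); delete by {delete_by.date()} -- {status}")

for line in homeowner_report(access_log):
    print(line)
```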

In line with “Minority Report,” some courts use algorithms to make risk assessments about recidivism before releasing people from prison. Chicago’s Strategic Subject List and Crime and Victimization Risk Model have used AI software to predict who might commit more crime after being arrested. A 2020 report from Chicago’s Office of Inspector General found that the software was unreliable and the quality of the data was poor. In a city like Chicago, these findings are even more relevant given the level of racism discovered within the courts.

The assumption seems to be that technologies using algorithms are better at identifying threats and potential criminals. Some research using machine learning for bail release shows how AI can be used to reduce recidivism, crime, and racial disparities in sentencing. At the same time, algorithms can actually replicate prejudicial decisions that occur in social life, as appears to have happened in Chicago. One key point is that omitting race as a model attribute, as Chicago’s software does, may lead to more bias than including it. Without more regulation and safeguards, the variability of these outcomes will continue.
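
To see why a “race-blind” score can still be biased, consider the toy simulation below: true risk is identical across two groups, but one group’s recorded prior arrests are inflated by heavier policing. A score that thresholds the raw record flags that group far more often; a score that uses group membership to account for the inflation does not. All numbers are invented for illustration and do not model Chicago’s actual software.

```python
# Toy simulation (all numbers invented) of why a "race-blind" risk score can be
# more biased than one that accounts for race. True underlying risk is identical
# across two groups, but group B's recorded prior arrests are inflated by heavier
# policing. A score that thresholds the raw record flags group B far more often;
# an adjusted score that uses group membership to subtract the known inflation
# treats the groups the same.

import random

random.seed(0)
ENFORCEMENT_BIAS = {"A": 0.0, "B": 1.5}  # assumed extra recorded arrests for group B

def flag_rates(group, n=10_000, threshold=3.0):
    flagged_blind = flagged_adjusted = 0
    for _ in range(n):
        true_risk = random.gauss(2.0, 1.0)              # same distribution for both groups
        recorded = true_risk + ENFORCEMENT_BIAS[group]  # what the data actually show
        flagged_blind += recorded > threshold           # "race-blind" score
        flagged_adjusted += (recorded - ENFORCEMENT_BIAS[group]) > threshold  # race-aware
    return flagged_blind / n, flagged_adjusted / n

for group in ("A", "B"):
    blind, adjusted = flag_rates(group)
    print(f"group {group}: flagged {blind:.0%} race-blind vs. {adjusted:.0%} race-aware")
```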

“[O]mitting race as a model attribute … may lead to more bias than including it.”

Given these use cases, what should policymakers be focused on when it comes to fair use and deployment of AI systems?

Questions policymakers must ask of law enforcement and tech companies

Policymakers have a difficult job in trying to figure out how to regulate a constantly changing technology market with algorithms that use machine learning to build on themselves in real time. While policymakers may not have all of the answers or expertise to make the best decisions to balance free enterprise, transparency, privacy, and regulation, knowing the best questions to ask may be an important move in the interim. I list five below.

1) Has the community been informed and had the opportunity to ask questions and give suggestions?

A lack of transparency breaches community trust and makes technological advancement more difficult. Therefore, the answer to this question must go beyond a response that simply mentions a town hall or church visit. The public needs to be informed before the rollout of new technologies via local media, direct notices, and disclosures through the mail, email, or social media. The public should have the right to ask questions and give suggestions about technology deployed near their homes. The Public Oversight of Surveillance Technology (POST) Act in New York City is a viable model for what this might look like. A website compiled by graduate students at Harvard University’s Kennedy School of Government to inform the public about AI use in New York City should also signal to law enforcement agencies and city governments the importance of partnering with academics.

2) What safeguards are put in place to ensure that the technology is being used properly and is working as intended?

This question is about regulation. The Facial Recognition Technology Warrant Act is a good step forward and highlights the importance of bipartisan support. Under the legislation, law enforcement that wants to use facial recognition surveillance for more than 72 hours would have to obtain a judge’s approval, and that approval would be limited to 30 days. Drawing on judges’ reports, the director of the Administrative Office of the United States Courts would have to report annually to the Senate and House Judiciary Committees on the number of court orders issued, the offenses associated with those orders, how frequently the technology was used, the number of people observed, and the number of people misidentified by the technology. While this bill is commendable, potential loopholes exist. For example, law enforcement could repeatedly operate within the 72-hour window, or claim that a facial recognition search is a matter of national security, a condition that seems to override the bill.
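
As a purely illustrative reading of that rule (not the bill’s actual text), the snippet below checks whether a surveillance window fits an assumed 72-hour warrantless limit and a 30-day cap on a court order.

```python
# Hypothetical compliance check for the kind of rule described above: sustained
# facial recognition surveillance beyond 72 hours requires a court order, and an
# order covers at most 30 days. An illustrative reading, not the bill's text.

from datetime import datetime, timedelta

WARRANTLESS_LIMIT = timedelta(hours=72)   # assumed: no court order needed below this
ORDER_LIMIT = timedelta(days=30)          # assumed: a single order covers at most 30 days

def surveillance_is_compliant(start, end, order_issued=None):
    """Return True if the surveillance window fits the 72-hour / 30-day rule."""
    if end - start <= WARRANTLESS_LIMIT:
        return True                       # short enough to run without an order
    if order_issued is None:
        return False                      # beyond 72 hours requires a court order
    return end - order_issued <= ORDER_LIMIT

start = datetime(2020, 3, 1)
print(surveillance_is_compliant(start, start + timedelta(days=10)))                      # False
print(surveillance_is_compliant(start, start + timedelta(days=10), order_issued=start))  # True
```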

3) How will you guard against biases in your technology?

This question is about prejudicial data and discriminatory outcomes. The Algorithmic Accountability Act and the Justice in Forensic Algorithms Act of 2019 aim to help with this process by requiring companies to assess their algorithms for biased outcomes. In many regards, these acts ask companies to perform the equivalent of racial impact statements. Racial impact statements are tools that help lawmakers evaluate whether proposed legislation may have embedded disparities before a bill is passed, and they help policymakers clarify and develop specific language to remove potential biases.

Using representative sampling techniques, companies should have to produce statistically significant results, in line with academic standards, showing that their algorithms predict similarly for people across social identities such as race, gender, and the intersection of the two. Companies and law enforcement agencies can partner with university research centers and think tanks to carry out these research tasks. NIST could convene an advisory panel to review such reports, similar to the way academic journals use editorial boards and external reviewers to verify new research. It should also be a priority to guard against studies that do not properly include minority groups. Furthermore, more diverse companies often mean more diverse ideas, which in turn may lead to more demographic data collection—something that can effectively reduce biases and disparities. Silicon Valley has much room to grow in this area. If these standards do not become normative, technology in law enforcement will have some of the same biases and flaws seen in medicine and other fields.
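
As one illustration of what “predict similarly across social identities” could look like as evidence, the sketch below reports accuracy with 95% confidence intervals for intersectional groups. The counts are invented; a real submission would come from a representative, adequately sized sample for each group.

```python
# Hypothetical sketch of the evidence a reviewer could require: accuracy with a
# 95% confidence interval for each intersectional group. The counts are invented.

from math import sqrt

# group -> (correct predictions, total evaluated)
results = {
    "white men":   (4700, 5000),
    "white women": (4550, 5000),
    "Black men":   (4350, 5000),
    "Black women": (4050, 5000),
}

def accuracy_with_ci(correct, total, z=1.96):
    """Accuracy and a 95% normal-approximation confidence interval."""
    p = correct / total
    margin = z * sqrt(p * (1 - p) / total)
    return p, p - margin, p + margin

for group, (correct, total) in results.items():
    p, low, high = accuracy_with_ci(correct, total)
    print(f"{group}: {p:.1%} accurate (95% CI {low:.1%} to {high:.1%})")
```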

4) How will your technology move beyond consent to include privacy protections?

Privacy protection is important. Body-worn cameras teach us about consent: officers have a prompt they use to inform a person of interest that they are being video and audio recorded. For facial recognition, we must move beyond consent. This question is about how law enforcement will ensure that the use of facial recognition software and social media data does not collect meta- and micro-level information that further violates people’s privacy. As machine learning algorithms build, they link similarly defined data, and at times it is unclear what those outcomes will look like; safeguards are needed to regulate this. People should also know how long their data will be stored and what their data are used for. Still, posting signs in public and private places stating that facial recognition software is in use would be helpful.

5) Are there ways to use AI for law enforcement training?

This question is about pushing companies to think about ways to improve the efficiency of law enforcement training. The Lab for Applied Social Science Research, where I serve as executive director, works with major corporations and government entities to provide virtual reality training using advanced algorithms that give law enforcement officers a more realistic experience of encounters they have in the field. Companies like Flikshop are starting to use algorithms to provide job training to people who are incarcerated. These pursuits help to reduce bias in policing, improve police-community relations, and reduce recidivism.

Overall, technology can make policing more efficient—but efficiency does not necessarily mean fairness or lack of bias. Machine learning algorithms are far outpacing the public’s understanding of these technologies. Policymakers have to think through not only regulation but also penalties for law enforcement agencies and companies that violate safeguards. If the past is any guide, bias will continue to play a huge role in social outcomes unless the implementation of these technologies in law enforcement settings is drastically decelerated. Legislators and stakeholders should focus on smart policies that help make our society safer and ensure that privacy, consent, and transparency are equitable for everyone.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon provides general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.