Highlights: Addressing fairness in the context of artificial intelligence

A man walks past a poster simulating facial recognition software at the Security China 2018 exhibition on public safety and security in Beijing, China, October 24, 2018. (REUTERS/Thomas Peter)

When society uses artificial intelligence (AI) to help make judgments about individuals, fairness and equity are critical considerations. On Nov. 12, Brookings Fellow Nicol Turner-Lee sat down with Solon Barocas of Cornell University, Natasha Duarte of the Center for Democracy & Technology, and Karl Ricanek of the University of North Carolina Wilmington to discuss artificial intelligence in the context of societal bias, technological testing, and the legal system.

Artificial intelligence is an element of many everyday services and applications, including electronic devices, online search engines, and social media platforms. In most cases, AI provides positive utility for consumers—such as when machines automatically detect credit card fraud or help doctors assess health care risks. However, in a smaller share of cases, such as when AI helps inform decisions on credit limits or mortgage lending, the technology has a greater potential to amplify historical biases.

Facial analysis and facial detection

Policing is another area where artificial intelligence has seen heightened debate—especially when facial recognition technologies are employed. When it comes to facial recognition and policing, there are two major points of contention: the accuracy of these technologies and the potential for misuse. The first problem is that facial recognition algorithms could reflect biased input data, which means that their accuracy rates may vary across racial and demographic groups. The second challenge is that individuals can use facial recognition products in ways other than their intended use—meaning that even if these products receive high accuracy ratings in lab testing, any misapplication in real-life police work could wrongly incriminate members of historically marginalized groups.
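
To make the accuracy concern concrete, the sketch below shows one way an auditor might compare error rates across demographic groups on a labeled face-verification benchmark. The group labels, records, and function names are hypothetical and purely illustrative; real audits, such as NIST's vendor tests, rely on far larger datasets and more careful protocols.

```python
# Illustrative sketch only: not any vendor's actual evaluation pipeline.
# Given labeled face-verification results, compare error rates across
# demographic groups to see whether accuracy differs between them.
from collections import defaultdict

# Hypothetical evaluation records: (group, same_person, predicted_match)
records = [
    ("group_a", False, True),   # impostor pair wrongly matched (false match)
    ("group_a", True, True),    # genuine pair correctly matched
    ("group_b", False, False),  # impostor pair correctly rejected
    ("group_b", True, False),   # genuine pair wrongly rejected (false non-match)
    # ... a real benchmark would contain many thousands of comparisons
]

def error_rates_by_group(records):
    """Return per-group false match and false non-match rates."""
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for group, same_person, predicted_match in records:
        c = counts[group]
        if same_person:
            c["gen"] += 1                        # genuine comparison
            c["fnm"] += int(not predicted_match)
        else:
            c["imp"] += 1                        # impostor comparison
            c["fm"] += int(predicted_match)
    return {
        group: {
            "false_match_rate": c["fm"] / c["imp"] if c["imp"] else None,
            "false_non_match_rate": c["fnm"] / c["gen"] if c["gen"] else None,
        }
        for group, c in counts.items()
    }

print(error_rates_by_group(records))
```

Reporting error rates separately by group, rather than as a single overall accuracy number, is what makes such disparities visible in the first place.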

Technologists have sharpened this debate by drawing a distinction between facial detection and facial analysis. Facial detection describes the act of identifying and matching faces in a database—along the lines of what is traditionally known as “facial recognition.” Facial analysis goes further to assess physical features such as nose shape (or “facial attributes”) and emotions (or “affective computing”). In particular, facial analysis has raised civil rights and equity concerns: an algorithm may correctly determine that somebody is angry or scared but might incorrectly guess why.

Legal uncertainties

When considering algorithmic bias, an important legal question is whether an AI product causes a disproportionate disadvantage, or “disparate impact,” on protected groups of individuals. However, plaintiffs face substantial challenges in bringing anti-discrimination lawsuits over AI systems. First, disparate impact is difficult to detect; second, it is difficult to prove. Plaintiffs often bear the burden of gathering evidence of discrimination, a challenging endeavor for an individual, since demonstrating disparate impact often requires aggregate data from a large pool of people.
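
As an illustration of why aggregate data matters here, the short sketch below computes the selection-rate ratios behind the “four-fifths” rule of thumb that U.S. enforcement agencies sometimes use to screen for disparate impact. The groups and numbers are invented for the example, and the 0.8 threshold is a screening heuristic, not a legal test.

```python
# Illustrative sketch of the "four-fifths" rule of thumb for disparate impact.
# Each group's selection rate is compared to the most-favored group's rate;
# a ratio below 0.8 is a common screening heuristic, not a legal finding.

def selection_rates(outcomes):
    """outcomes maps group name -> (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical aggregate outcomes from an automated screening tool.
outcomes = {"group_a": (60, 100), "group_b": (30, 100)}
print(disparate_impact_ratios(outcomes))  # group_b ratio = 0.5, below the 0.8 benchmark
```

Note that even this simple calculation requires outcome data across an entire applicant pool, which individual plaintiffs rarely have.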

Because algorithmic bias is largely untested in court, many legal questions remain about the application of current anti-discrimination laws to AI products. For example, under Title VII of the 1964 Civil Rights Act, private employers can contest disparate impact claims by demonstrating that their practices are a “business necessity.” However, what constitutes a “business necessity” in the context of automated software? Should a statistical correlation be enough to assert disparate impact by an automated system? And how, in the context of algorithmic bias, can a plaintiff feasibly identify and prove disparate impact?

Defining the goals

Algorithmic bias is a multi-layered problem that requires a multi-layered solution, which may include accountability mechanisms, industry self-regulation, civil rights litigation, or original legislation. Earlier this year, Sen. Ron Wyden (D-OR), Sen. Cory Booker (D-NJ), and Rep. Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act, which would require companies to conduct algorithmic risk assessments but allow them to choose whether to publicize the results. In addition, Rep. Mark Takano (D-CA) introduced the Justice in Forensic Algorithms Act, which addresses the transparency of algorithms in criminal court cases.

However, this multi-layered solution may require stakeholders to first address a more fundamental question: what problem are we trying to solve? For example, to some individuals, the possibility of inaccuracy is the biggest challenge when using AI in criminal justice. But to others, there are certain use cases where AI does not belong, such as in the criminal justice or national security contexts, regardless of whether it is accurate. Or, as Barocas describes these competing goals, “when the systems work well, they’re Orwellian, and when they work poorly, they’re Kafkaesque.”
