Commentary

Introducing the AI Equity Lab and the path toward more inclusive tech

Nicol Turner Lee and Renée Cummings

February 29, 2024


  • Black History Month 2024 comes to a close at a time when rapid advances in AI technology pose undetermined risks to all aspects of our society.
  • The potential of generative AI to distribute misinformation at scale and disenfranchise voters looms large over the upcoming presidential election, and widespread use of facial recognition technology by law enforcement continues to lead to the false arrests of Black people.
  • The AI Equity Lab is a convening platform that takes a full view of the socio-technical contexts in which evolving and emerging technologies are designed and deployed, and that promotes interdisciplinary and diverse cooperation and collaboration. It’s also about unveiling AI’s other hidden figures and experts.
Blue illuminated AI processor on a motherboard. (IMAGO/Westlight via Reuters Connect)

On August 28, 1963, Dr. Martin Luther King, Jr. delivered his historic “I Have a Dream” speech during the March on Washington for Jobs and Freedom. More than 250,000 people attended the gathering and took in both his words and his vision of a better democracy, one that provides Black Americans with unfettered access to civil rights and the unimagined possibilities that come with equal opportunities. The calls to action in Dr. King’s speech, and his distinguished acts of service before his tragic death, were meant to ensure that Black Americans would be beneficiaries of American democracy. Yet his proclamations are even more relevant today, as racist vitriol, misinformation, attacks on diversity, equity, and inclusion, and the banning of books by distinguished Black authors threaten, rather than advance, past cultural experiences and future existences. One must wonder, as we often do, what Dr. King would say and feel about how emerging and evolving technologies, like artificial intelligence (AI) and more advanced generative models, are encroaching on individual and community freedoms. For example, some AI tools are aiding and abetting voter disenfranchisement by deceiving vulnerable voters with voluminous misinformation, while others are eroding the ability of Black people to compete effectively and exercise agency over predictive determinations that assess their eligibility for credit, housing, employment, health care, and fair justice.

As the quest for civil rights and equal opportunities remains a high priority for new and emerging leaders struggling for racial and economic justice, what Dr. King and his peers said then should matter now to the developers and industries who build, license, and distribute AI models. From a public interest perspective, technology should be a tool that helps solve a myriad of social problems, including conditions spurred by structural discrimination. Yet today’s AI, if left unchecked, loosely interrogated, and unregulated, may very well become a tool that further traumatizes Black communities and weaponizes their experiences and aspirations.

Black communities are being traumatized by technology

Researchers have pointed to several use cases where much more must be done to ensure appropriate and culturally relevant applications of AI, particularly when the data that train computational models inform critical, quality-of-life decisions for online users. More importantly, AI without full online access can be compared to voting rights without the required identification: being connected to the internet also matters.

AI technologies, when improperly fitted to the real-life experiences of Black communities, can also impose harsh penalties with both reputational and financial consequences for individuals and their communities. Consider the use of facial recognition technologies (FRT) by federal, state, and local law enforcement agencies, including border protection. The technology’s shortcomings when applied to diverse faces are widely known; they contribute to the misidentification of Black people with darker complexions and are compounded when photos are captured under less-than-ideal conditions. Yet law enforcement agencies continue to use FRT for both mass community surveillance and criminal investigations, often without any guidelines or training (even when FRT matches are inadmissible as evidence at trial). In February 2023, Porcha Woodruff, a Black woman from Detroit, was arrested by Detroit police in front of her small children on suspicion of a recent robbery and carjacking. At the time, Woodruff was eight months pregnant; the suspect captured in the FRT system was not, yet a dated photo of Woodruff was returned as a match. Woodruff is currently suing the city of Detroit for wrongful arrest, detainment, reputational damage, and emotional distress. One can also imagine that she is dealing with the distress of her children, who had to witness and experience the aftermath of that traumatic encounter with police. What makes her allegations even more troubling is that she was the third person in Detroit falsely arrested because of FRT, and one of six Black defendants across the United States at the time.

FRT is one of many AI applications that have treated Black people, and other people of color, differently. That is because these models are inherently biased and layered on top of existing systems of oppression. Because Black and Hispanic people are overrepresented in criminal databases due to racially motivated policing, they are more likely to drive the patterns and trends that predictive models learn. And even when race or other federally protected characteristics are excluded from a computational model, inferred correlations, as described by authors Michael Kearns and Aaron Roth, serve as proxies, creating backdoors for algorithmic discrimination. Harvard researcher Latanya Sweeney contributed groundbreaking work suggesting that Black- or Hispanic-sounding names were more likely to receive predatory ad surveillance and targeting.
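To illustrate how a proxy can recreate an excluded protected attribute, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration: the synthetic “neighborhood” feature, the 90% correlation strength, and the pass-through decision rule are assumptions, not a description of any real system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never shown to the model).
protected = rng.integers(0, 2, size=n)

# A nominally "neutral" feature, e.g., a neighborhood code, that
# matches the protected attribute 90% of the time (an assumed
# correlation standing in for historical segregation).
neighborhood = np.where(rng.random(n) < 0.9, protected, 1 - protected)

# A toy decision rule that uses only the "neutral" feature.
decision = neighborhood

# Decision rates still diverge sharply across protected groups.
print(f"positive rate, group 0: {decision[protected == 0].mean():.2f}")
print(f"positive rate, group 1: {decision[protected == 1].mean():.2f}")
```

Even though the protected attribute never enters the decision rule, the two groups receive positive decisions at roughly 10% and 90% rates, which is the proxy backdoor described above.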

Whether AI biases show up in FRT or in more widely used applications in financial services, human resources, or health care, it is important that these breakdowns be deeply interrogated and tested for vulnerabilities in equity and access to equal opportunities. That is why the Brookings Center for Technology Innovation is embarking on a substantial project to identify and mitigate the discriminatory risks of AI, from the onset of the design phase through deployment and across the life span of autonomous models and algorithms.

The launch of the AI Equity Lab at Brookings

The AI Equity Lab is a convening platform that takes a full view of the socio-technical contexts in which evolving and emerging technologies are designed and deployed, and that promotes interdisciplinary and diverse cooperation and collaboration. It’s also about unveiling AI’s other hidden figures and experts.

Who is at the table matters from the very beginning, both to ensure that AI does not commoditize Black people and other marginalized groups, and to ensure that design ecosystems fully consider the past histories and current lived experiences of these groups, whether federally protected or intersectional in their identities.

For example, the fractured criminal justice system and its related processes are part of a much broader discussion on criminal justice reform that must not be discounted as FRT is applied in these contexts. But without a fuller suite of experts with subject matter expertise, as well as empathy for those affected, it will be impossible to envision and build more equitable models. Including experts in civil and human rights, alongside ethicists and others who make up the social conscience of AI, is both timely and urgent, particularly as congressional debates and White House executive actions intensify around guidance and guardrails.

That is why a new approach to identifying and mitigating the racial, economic, and political consequences of AI requires more transformational and inclusive thinking, as well as a new set of developers and implementers.

What to expect from the AI Equity Lab

In the next few months, the AI Equity Lab will gather experts in six high-risk areas (criminal justice, education, employment, health care, housing, and voting rights) to create and design a more equitable AI future. From these convenings, we’ll uncover experts who have been hidden from such conversations.

The findings and the ongoing conversation around the AI Equity Lab will be showcased on the AI Equity Lab website at Brookings, including the “Hidden Figures Repository,” which will share the names and work of leaders in AI who are committed to anti-racist and nondiscriminatory technical systems and applications.

Continuing the legacy of Dr. King

So, what would Dr. King say about this period of AI and more advanced generative models, which teeter on the edge of erasing or homogenizing the experiences of more vulnerable populations? It’s still unclear. He could suggest that his dream has become a nightmare, or he could inspire technologists to work in tandem with civil society to develop more responsible and inclusive AI. Dr. King might also encourage Congress to take the same kinds of actions President Johnson took to codify voting rights, public accommodations, and other civil rights protections into law, to avoid the upending of these and other achievements. And as with the Civil Rights Movement, new leaders could emerge as AI Freedom Fighters. The space is wide open for new ideas around equity and the preservation of civil and human rights.

For more information on the AI Equity Lab, please visit the AI Equity Lab website at Brookings. The Lab is co-directed by CTI Fellow Nicol Turner Lee and Renée Cummings.
