The private and public sectors are increasingly turning to artificial intelligence systems and machine learning algorithms to automate decision-making processes. As a result, algorithms are becoming more sophisticated and pervasive tools in society. But what happens when algorithmic decision-making falls short of our expectations? When machines treat similarly situated people differently, some algorithms risk amplifying human biases in areas such as creditworthiness, employability, and criminal sentencing. Because public policies alone may not be sufficient to identify, mitigate, and remedy these harms, a credible framework is needed to reduce unequal treatment and avoid disparate impacts on protected groups.
On May 22, the Center for Technology Innovation at Brookings hosted a discussion on algorithmic bias featuring expert speakers. The event opened with remarks from former hedge-fund quant, mathematician, and author Cathy O'Neil, whose acclaimed book, "Weapons of Math Destruction," outlines the consequences of opaque, black-box algorithms. Following her remarks, a panel discussed a newly released Brookings paper on algorithmic bias detection and mitigation. The paper offers government, technology, and industry leaders a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies, all of which promote the fair and ethical deployment of these technologies.
After the discussion, speakers answered questions from the audience.