
New Brookings report highlights potential sources of bias in algorithms

People work on their computers during a weekend Hackathon event in San Francisco, California, U.S., July 16, 2016. (REUTERS/Gabrielle Lurie)

The increased use of algorithms to automate decisions promises to speed up operations for private companies and governments alike. However, it also raises concerns about how algorithms arrive at their decisions, especially decisions with negative consequences for individuals or society as a whole. Many algorithms draw inferences from existing data, and they are susceptible to magnifying existing biases when that data is unrepresentative. To explore these issues, the Center for Technology Innovation hosted an event on May 22 at the Brookings Institution. The event featured a keynote speech by Cathy O’Neil, author of the book “Weapons of Math Destruction,” and a panel discussion with the authors of a new Brookings report titled “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms.”

O’Neil’s keynote speech laid out some of the individual and societal consequences of automated decisionmaking. Because algorithms are based on mathematical and statistical principles, they appear objective to many of the organizations that use them. However, a human must write the code, deciding which inputs are relevant and what success looks like. O’Neil captured this contrast between subjective judgment and the appearance of objectivity in her observation that “algorithms are opinions embedded in math.”

CTI Director Darrell West moderated the discussion with report co-authors Nicol Turner Lee, a CTI fellow; Paul Resnick, a professor at the University of Michigan’s School of Information; and Genie Barton, a member of the International Association of Privacy Professionals’ Research Advisory Board. The report outlines a number of instances of algorithmic bias, from recruiting tools that favor male job candidates based on prior hires to online ads that showed credit card offers with higher interest rates to African Americans. These outcomes result from training data that reflects historical racial and gender inequality; if left unaddressed, algorithms trained on such data may perpetuate those inequalities in the future.

Assessing data inputs for algorithms will not eliminate all cases of bias, which can occur even without the collection of data on protected characteristics. For example, zip codes can serve as proxies for income and race, and height as a proxy for gender. It is important in these cases to determine whether the outputs of algorithmic decisionmaking match the intentions of the organization using it. To detect bias, developers of algorithms can run simulations using different groups to check if there are unequal outcomes. However, detecting bias often requires collecting data on protected attributes to determine if an algorithm treats groups unequally. Ignoring protected attributes also ignores potential harm to these groups, whether intended or not.
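The report and panelists describe this kind of audit at a conceptual level; as a rough illustration only, the minimal sketch below shows one common way such a check can be run in practice: compare an algorithm’s rate of favorable decisions across groups and flag large gaps. The data, group labels, and disparity threshold are all hypothetical, and real audits would use more careful statistical tests.

```python
# Minimal, hypothetical sketch of a group-outcome comparison.
# Not code from the Brookings report; data and threshold are illustrative.
from collections import defaultdict


def outcome_rates_by_group(decisions):
    """decisions: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}


def flag_disparity(rates, max_gap=0.2):
    """Return the gap between highest and lowest group rates, and whether it exceeds max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > max_gap


if __name__ == "__main__":
    # Hypothetical audit data: simulated applicants from two groups run through the same model.
    simulated = (
        [("group_a", True)] * 70 + [("group_a", False)] * 30
        + [("group_b", True)] * 45 + [("group_b", False)] * 55
    )
    rates = outcome_rates_by_group(simulated)
    gap, flagged = flag_disparity(rates)
    print(rates, f"gap={gap:.2f}", "disparity flagged" if flagged else "within threshold")
```

Note that running a comparison like this requires knowing each record’s group membership, which is the tension the panelists highlighted: detecting unequal treatment often depends on collecting the very protected attributes an organization might prefer to ignore.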

The panelists concluded their discussion with the paper’s recommendations. Bias impact statements, like the one modeled in the report, pose a list of questions about who will be affected by an algorithm’s design and deployment. Regulatory sandboxes allow companies to experiment with different solutions without fear of repercussions, while safe harbors clarify which activities comply with existing discrimination laws. At a human level, hiring diverse teams to design algorithms can help anticipate how bias might affect different populations. Finally, improving public literacy about how algorithms arrive at decisions can help those affected recognize when bias has occurred.

Automated decisionmaking can identify and mitigate entrenched biases, or it can worsen them by making biased decisions at a much greater scale. Going forward, businesses and governments alike should consider how their use of algorithms contributes to or alleviates historical inequalities. The new report offers recommendations so that organizations can anticipate harmful effects, identify them where they already exist, and prevent future instances.
