Should the government play a role in reducing algorithmic bias?

Friday, March 12, 2021

9:00 am - 10:15 am EST

Online only


On March 12, the Center for Technology Innovation hosted a panel discussion on how governments can address the problem of bias in algorithms. Alex Engler, a Brookings Rubenstein Fellow, asked about the elusive definition of fairness—and how to get there in artificial intelligence and other algorithms. Experts Ghazi Ahamat, Lara Macdonald, and Adrian Weller from the U.K.-based Centre for Data Ethics and Innovation, and Nicol Turner Lee, director of the Brookings Center for Technology Innovation, offered their answers, thoughts, and suggestions.

Algorithmic bias is not an imaginary or theoretical notion. In industry and government, algorithms that provision healthcare, assist with hiring, direct police, and determine creditworthiness have exhibited differential treatment and disparate impact on similarly situated people, places, and objects. In recent years, models of human language have been shown to manifest bias against women and people with disabilities, and some prominent speech recognition and facial recognition algorithms are markedly less accurate for African Americans. Even when unintended, algorithmic bias can arise from reliance on unrepresentative training data, from prejudiced historical data, or from a failure to address statistical biases. In some cases, algorithmic bias is not only unethical but could also result in illegal discrimination. Yet the trajectory of this emerging technology need not be negative. Government will play a key role in reducing algorithmic bias, both as a consumer and as a regulator, and in harnessing algorithms as a means for efficiency and positive change.

Government’s role as a user

As consumers of algorithms, governments have large market power and control over many important algorithmic use cases. In this role, governments can set standards, provide guidance, and highlight good practices to reduce algorithmic bias. The public sector should gain the public’s confidence in algorithmic use cases before deployment and disclose the use of algorithms in decisions that significantly affect individuals. In the U.K., all agencies have a non-delegable duty to document anticipated and potential algorithmic discrimination prior to use. Like every policy decision, the use of algorithms should be evaluated with impact assessments. Real-world performance across demographic groups should be analyzed, since algorithms and inferences about data can lead to unexpected and unintended outcomes. Public sector use should balance locality-specific needs with a consistent national approach. As a best practice, public sector applications should collect data about race, sexual orientation, and similar categories where possible to monitor for bias, as U.K. law explicitly allows.
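To make the monitoring recommendation concrete, here is a minimal sketch of how an agency might compare a deployed model’s real-world performance across demographic groups. The column names ("group", "label", "prediction") and the choice of accuracy and positive-decision rate as metrics are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: per-group performance monitoring for a deployed model.
# Assumes a log of decisions with a demographic group label, the observed
# outcome ("label"), and the model's decision ("prediction").
import pandas as pd

def performance_by_group(df: pd.DataFrame,
                         group_col: str = "group",
                         label_col: str = "label",
                         pred_col: str = "prediction") -> pd.DataFrame:
    """Report sample size, accuracy, and positive-decision rate per group."""
    return (
        df.assign(correct=(df[label_col] == df[pred_col]).astype(float))
          .groupby(group_col)
          .agg(n=(pred_col, "size"),
               accuracy=("correct", "mean"),
               positive_rate=(pred_col, "mean"))
    )

# Toy example: two demographic groups with diverging decision rates.
decisions = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   0],
    "prediction": [1,   0,   1,   1,   1,   0],
})
print(performance_by_group(decisions))
```

Routinely publishing a table like this for each protected characteristic is one way an agency could document, before and after deployment, whether an algorithm’s outcomes diverge across groups.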

Criminal justice algorithms have large impacts on individuals’ lives, and governments should take care to get them right. Policing algorithms are not in widespread use in the U.K., but they are being considered. Governments must remember that “predictive policing predicts policing, not crime.” For instance, in the U.S., Black and white individuals use drugs at approximately the same rates, but Black individuals are far more likely to be arrested and convicted. Algorithms trained on data from past police actions likewise predict policing, not crime, and may be biased. However, these algorithms can be adjusted; “bias in” does not always mean “bias out.” As governments roll out risk assessments to predict, among other things, recidivism rates of defendants in criminal trials, judges must understand what information risk assessments use and how they process it, so they can place appropriate weight on the outputs. Some localities in the U.S. have banned certain criminal justice algorithms, particularly law-enforcement facial recognition. While understandable, this approach disregards possible efficiencies and ignores the prevalence of comparable technology in industry. Instead, governments should try to set guardrails that reduce bias in criminal justice algorithms.

Government’s role as a regulator

In the U.K., regulators should adapt existing frameworks to incentivize ethical algorithms. Antidiscrimination as a legal requirement has a strong basis in the U.K. Equality Act of 2010, the Human Rights Act of 1998, and sector-specific antidiscrimination laws, such as those in finance and policing. The U.S. likewise has non-discrimination, civil rights, and sectoral laws that must be updated and connected to the digital world. U.K. data protection law, including the U.K. General Data Protection Regulation (GDPR), allows people to opt out of fully automated decisions, though some fine-tuning is needed. Unlike the U.K., the U.S. does not have a comprehensive federal privacy law and therefore has no comparable baseline for basic digital rights such as opting out of automated decisionmaking. Finally, algorithms should disclose their errors so that human oversight can accompany them in appropriate contexts.

The process of checking algorithmic bias will be a cycle of identifying algorithmic requirements, researching how to build compliant algorithms, and then enforcing those requirements. Nascent compliance and risk assurance tools can emulate those of other industries: for compliance, governments can use verification, audit, certification, and accreditation tools; for risk, impact assessments, audits, and ongoing testing. Requirements will vary by industry and context; for instance, in environments with high uncertainty and variation, some disparate impact may be tolerable, but in others it must be rigorously avoided. Guidelines must combine principles with precise thresholds. Algorithms need a basic fairness test (of which there are many mutually incompatible options) to avoid extreme disparate impact. Guiding principles can prevent discrimination in important contexts that may be overlooked by assessments of disparate impact. To develop and enforce regulations, governments must also develop their technical expertise.
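As one illustration of what a basic fairness test with a precise threshold might look like, the sketch below computes a disparate impact ratio (the ratio of favorable-outcome rates between a protected group and a reference group) and flags values below a threshold loosely modeled on the U.S. “four-fifths” rule. The field names, the 0.8 threshold, and the choice of metric are assumptions for illustration; as noted above, many mutually incompatible fairness definitions exist, and regulators would set context-specific requirements.

```python
# Minimal sketch: a disparate impact check on decision records.
# Each record has a demographic group and a binary favorable-outcome flag.
from collections import defaultdict

def disparate_impact_ratio(records, protected_group, reference_group,
                           group_key="group", outcome_key="approved"):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        favorable[record[group_key]] += int(bool(record[outcome_key]))
    protected_rate = favorable[protected_group] / totals[protected_group]
    reference_rate = favorable[reference_group] / totals[reference_group]
    return protected_rate / reference_rate

# Toy example: group B receives favorable outcomes at half the rate of group A.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
ratio = disparate_impact_ratio(records, protected_group="B", reference_group="A")
print(f"Disparate impact ratio: {ratio:.2f}",
      "-> flag for review" if ratio < 0.8 else "-> within the assumed threshold")
```

A screening test like this catches only extreme disparities; as the panel noted, it would sit alongside guiding principles and context-specific review rather than serve as a complete definition of fairness.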

Algorithms may exist in environments that are already unfair, and governments must consider working with companies (and similar organizations) to move forward productively in these scenarios. When a company tries to increase fairness but still ends up with a biased algorithm, reverting to human decisionmaking may not solve the problem. As a concrete example, Amazon trained a hiring algorithm on historical, human-generated hiring data and ended up with an algorithm biased against women. The algorithm was biased, but so were the human decision-makers whose choices it learned from. Creating an ethical algorithm is often easier than mitigating human biases. Society must decide what role algorithms should play in correcting societal inequity, and emerging research in AI fairness can help reach that goal.

In the past, the pace of technological development has been prioritized over fairness; now it is time to shift that balance and take steps to reduce algorithmic bias. Deploying algorithms without establishing guardrails, appropriate training, and procedures, and without considering the political context, causes problems.

To learn more, explore literature from the Centre for Data Ethics and Innovation, including the report that inspired many of today’s conversation topics, and the Brookings Artificial Intelligence and Emerging Technologies Initiative.

Event Recap By
Hattie Pimentel, Research Intern, Center for Technology Innovation, Brookings Institution
