The ethical algorithm

A conversation with authors Michael Kearns and Aaron Roth

Tuesday, January 14, 2020

3:00 pm - 4:00 pm EST

Brookings Institution
Saul/Zilkha Auditorium

1775 Massachusetts Avenue N.W.
Washington, DC
20036

On January 14, Michael Kearns and Aaron Roth, authors of “The Ethical Algorithm: The Science of Socially Aware Algorithm Design,” joined Brookings scholar Nicol Turner-Lee to discuss the role, limits, and challenges of implementing ethical algorithms.

Defining ethical algorithms

From education to employment, algorithms are increasingly augmenting human decisionmaking in important sectors. Despite widespread implementation to streamline processes, reduce human prejudice, and cut costs, algorithms are far from neutral. In fact, algorithmic bias can lead to systematically discriminatory outcomes that have significant impacts on people’s lives. Under most circumstances, algorithmic bias is an unintentional side effect of machine learning: training these algorithms involves collecting and analyzing enormous quantities of historical data to inform decisionmaking and optimization, and any historical biases embedded in that data can be absorbed and reproduced.

The emerging science of ethical algorithm design seeks to provide mathematical definitions of fairness and privacy and to encode those definitions directly into algorithms to correct for existing biases. Because many competing definitions of fairness and privacy exist, ethical algorithm design takes a preventative approach: researchers first identify harmful algorithmic behaviors, and then design and implement algorithms that avoid those behaviors.

Value tradeoff

While ethical algorithms could represent a critical step forward in correcting a flawed feedback loop, technology alone cannot solve difficult social issues. Remedying entrenched social biases and defining concepts such as fairness and privacy sit firmly within the human domain, as does carefully balancing accuracy against fairness within algorithms.

Bias occurs when errors are disproportionately concentrated in one group, such as an algorithm that falsely rejects employment applications from a minority group at a higher rate than it rejects those from background groups. When this occurs, developers can constrain the false rejection rates to be equal across groups, or to fall within a certain margin of one another, so that the minority group is rejected at the same or a similar rate as background groups. However, the tighter the margin, the greater the cost elsewhere: fairness improves for the disproportionately affected group, but the model’s overall accuracy may be reduced. Quantitatively defining an acceptable balance between accuracy and fairness is a critical component of ethical algorithm design, and the right balance may differ by application, requiring an iterative process.
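To make the tradeoff concrete, here is a minimal sketch of how such a fairness check might be coded. The data, group labels, and 5% margin below are purely illustrative and not drawn from the event.

```python
import numpy as np

def false_rejection_rate(y_true, y_pred):
    """Fraction of truly qualified applicants (y_true == 1) whom the model rejects (y_pred == 0)."""
    qualified = y_true == 1
    return float(np.mean(y_pred[qualified] == 0))

def rates_within_margin(y_true, y_pred, group, margin=0.05):
    """Check whether per-group false rejection rates fall within `margin` of one another."""
    rates = {g: false_rejection_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()) <= margin, rates

# Hypothetical toy data: every applicant is qualified (y_true == 1),
# but the model rejects group "B" far more often than group "A".
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

ok, rates = rates_within_margin(y_true, y_pred, group)
print(rates)  # {'A': 0.25, 'B': 0.75}
print(ok)     # False: the fairness constraint is violated
```

Tightening the margin toward zero forces the rates together, but retraining under a tighter constraint typically sacrifices some overall accuracy, which is exactly the tradeoff the panelists described.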

Differential privacy

In many ways, data privacy can act as a model for ethical algorithm development. Traditional methods of data privacy involve anonymizing data through the removal of certain data points. However, such information can often be de-anonymized by combining multiple datasets. As an alternative to anonymization, Kearns and Roth discussed the concept of “differential privacy,” which adds random noise to datasets to conceal individual pieces of information. This process helps protect the confidentiality of individual records, but it also raises questions about the tradeoff between privacy and accuracy.
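A standard construction for this random noise is the Laplace mechanism. Below is a minimal sketch for a counting query; the survey data and epsilon value are hypothetical, chosen only to illustrate the idea discussed at the event.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Release an epsilon-differentially private count of records satisfying `predicate`.

    A counting query has sensitivity 1 (adding or removing any single record
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many respondents reported income over $100,000?
incomes = [42_000, 105_000, 88_000, 130_000, 99_000, 250_000]
print(private_count(incomes, lambda x: x > 100_000))
# The true answer is 3; the released answer is 3 plus noise of scale 2,
# so no single respondent's presence can be confidently inferred.
```

A smaller epsilon means more noise and stronger privacy but a less accurate answer, which is the accuracy question the authors raised.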

In the nearly 15 years since its inception, differential privacy has been adopted by a wide range of institutions, including the U.S. Census Bureau for the upcoming 2020 census. Weighing an encoded, quantitative definition of privacy against other considerations, including the value of the dataset to downstream users (particularly important for census data), sets a useful precedent for ethical algorithm design.
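For reference, the quantitative definition in question, due to Cynthia Dwork and coauthors, can be stated precisely: a randomized mechanism M is ε-differentially private if, for every pair of datasets D and D′ that differ in a single person's record, and for every set of possible outputs S,

\[
\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S].
\]

The smaller ε is, the less any single person's data can shift the output distribution, and the stronger the privacy guarantee, at the cost of noisier, less accurate releases.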

Technology regulation

Kearns and Roth call for an ideological shift from controlling algorithmic inputs to focusing on desired outcomes. For example, omitting race as a variable in lending decisions does not guarantee unbiased outcomes, because proxies for race, such as ZIP codes, can produce similarly biased results. In an era of data abundance, simply controlling inputs is insufficient to address discrimination. However, encoding concepts such as fairness and privacy can require complicated tradeoffs that are difficult to define.

While technology companies and researchers could explore ethical algorithm design to efficiently catch algorithmic bias at its source, regulatory enforcement also plays an important role in ensuring compliance with federal anti-discrimination laws. Currently, technology regulators are ill-equipped for the task of monitoring and auditing technology companies. Kearns and Roth diagnose two main reasons and prescribe potential solutions. First, the resource gap between technology regulators and companies has grown substantially in recent years. To create a level playing field, regulators should increase their in-house technical expertise and capacity. This may involve compositional and organizational changes, or the creation of new regulatory agencies.

Second, technology regulators should pivot away from traditional regulatory methodologies to align more closely with industry realities. For example, regulators have traditionally evaluated mergers based on the maturity of the markets and sectors involved. Technology companies, however, may pursue mergers not to branch into a new market, but to access and integrate new consumers, data, and information into one combined entity.

Algorithms represent a technological breakthrough in efficient, accurate, and expedient decisionmaking, but if left unchecked can result in discriminatory outcomes that undermine social values. The lesson of ethical algorithm design is clear: with great algorithmic power comes great human responsibility.

Event Recap By
Lia Newman, Research Intern, Center for Technology Innovation
