Ethical algorithm design should guide technology regulation

A woman interacts with 'Alter', a machine body with a human-like face and hands that learns through interacting with the surrounding world. Alter was created by roboticist Hiroshi Ishiguro and is on display at the 'AI: More Than Human' exhibition at the Barbican Centre in London. The major new exhibition explores the relationship between humans and artificial intelligence.
Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.

Society expects people to respect certain social values when they are entrusted with making important decisions. They should make judgments fairly. They should respect the privacy of the people whose information they are privy to. They should be transparent about their deliberative process.

But increasingly, algorithms and the automation of certain processes are being incorporated into important decision-making pipelines. Human resources departments now routinely use statistical models trained via machine learning to guide hiring and compensation decisions. Lenders increasingly use algorithms to estimate credit risk. And a number of state and local governments now use machine learning to inform bail and parole decisions, and to guide police deployments. Society must continue to demand that important decisions be fair, private, and transparent even as they become increasingly automated.

“Society must continue to demand that important decisions be fair, private, and transparent even as they become increasingly automated.”

Nearly every week, a new report of algorithmic misbehavior emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered and matched to patient faces and names.

In none of these cases was the root cause malicious intent or obvious negligence on the part of the programmers and scientists who built and deployed these models. Rather, algorithmic bias was an unanticipated consequence of following the standard methodology of machine learning: specifying some objective (usually a proxy for accuracy or profit) and algorithmically searching for the model that maximizes that objective using colossal amounts of data. This methodology produces exceedingly accurate models—as measured by the narrow objective the designer chooses—but those models often have unintended and undesirable side effects. The necessary solution is twofold: a way to systematically discover “bad behavior” by algorithms before it can cause harm at scale, and a rigorous methodology to correct it.

Many algorithmic behaviors that we might consider “antisocial” can be detected via appropriate auditing—for example, explicitly probing the behavior of consumer-facing services such as Google search results or Facebook advertising, and quantitatively measuring outcomes like gender discrimination in a controlled experiment. But to date, such audits have been conducted primarily in an ad-hoc, one-off manner, usually by academics or journalists, and often in violation of the terms of service of the companies they are auditing.

“[M]ore systematic, ongoing, and legal ways of auditing algorithms are needed.”

We propose that more systematic, ongoing, and legal ways of auditing algorithms are needed. Regulating algorithms is different and more complicated than regulating human decision-making. It should be based on what we have come to call ethical algorithm design, which is now being conducted by a community of hundreds of researchers. Ethical algorithm design begins with a precise understanding of what kinds of behaviors we want algorithms to avoid (so that we know what to audit for), and proceeds to design and deploy algorithms that avoid those behaviors (so that auditing does not simply become a game of whack-a-mole).

Let us discuss two examples. The first comes from the field of algorithmic privacy and has already started to make the transition from academic research to real technology used in large-scale deployments. The second comes from the field of algorithmic fairness, which is in a nascent stage (perhaps 15 years behind algorithmic privacy), but is well-positioned to make fast progress.

Data privacy: Advancing to a better solution

Corporate and institutional data privacy practices unfortunately rely on heuristic and largely discredited notions of “anonymizing” or “de-identifying” private data. The basic hope is that, by removing names, social security numbers, or other unique identifiers from sensitive datasets, those datasets become safe for wider release (for instance, to the medical research community). The fundamental flaw with such notions is that they treat the dataset in question as if it were the only one in the world, leaving it highly vulnerable to “de-anonymization” attacks that combine multiple sources of data.

The first high-profile example of such an attack was conducted in the mid-1990s by Latanya Sweeney, who combined allegedly anonymized medical records released by the state of Massachusetts with publicly available voter registration data to uniquely identify the medical record of then-governor William Weld—which she mailed to his office for dramatic effect. As in this example, anonymization techniques often fail because of the wealth of hard-to-anticipate extra information that is out there in the world, ready to be cross-referenced by a clever attacker.

“[A]nonymization techniques often fail because of the wealth of hard-to-anticipate extra information that is out there in the world, ready to be cross-referenced by a clever attacker.”

The breakthrough that turned the field of data privacy into a rigorous science occurred in 2006, when a team of mathematical computer scientists introduced the concept of differential privacy. What distinguished differential privacy from previous approaches is that it specified a precise yet extremely general definition of the term “privacy”: specifically, that no outside observer (regardless of what extra information they might have) should be able to determine better than random guessing whether any particular individual’s data was used to construct a data release. This implies that the observer cannot infer any properties of that individual’s data that are idiosyncratic to them.
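
For readers who want the formal statement behind this description, the standard definition (introduced in the paper cited in footnote 5) can be written as follows; the parameter ε quantifies the privacy guarantee, with smaller values meaning stronger privacy:

```latex
% A randomized algorithm M is \varepsilon-differentially private if, for every pair
% of datasets D and D' that differ in the data of a single individual, and for every
% set S of possible outputs,
\Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \cdot \Pr[\, M(D') \in S \,].
% Smaller values of \varepsilon make the two distributions harder to distinguish,
% which corresponds to a stronger privacy guarantee for each individual.
```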

Broadly speaking, differential privacy is achieved by carefully adding noise or randomness to data or computations in a way that obscures individual data points while still providing useful estimates of statistical quantities. For example, to privately release the average of a set of employee salaries, we first compute the exact average, but then add a carefully calibrated random number to it before release. Given enough data, the noisy version is still accurate, but it reveals very little about any particular employee’s salary.
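
To make the salary example a bit more concrete, here is a minimal sketch of the Laplace mechanism, one standard way to achieve differential privacy for a numerical average. The salary bounds, the choice of epsilon, and the function name are our own illustrative assumptions rather than anything prescribed by the article:

```python
import numpy as np

def private_mean(salaries, epsilon, lower=0.0, upper=500_000.0):
    """Release a differentially private estimate of the mean salary.

    Laplace mechanism sketch: clip each salary into [lower, upper] so that one
    person's data can change the mean by at most (upper - lower) / n, then add
    Laplace noise scaled to that sensitivity divided by epsilon. The bounds and
    epsilon are illustrative choices, not values taken from the article.
    """
    data = np.clip(np.asarray(salaries, dtype=float), lower, upper)
    n = len(data)
    sensitivity = (upper - lower) / n          # max influence of any one salary
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise

# With enough employees, the noisy mean stays close to the true mean,
# yet any single salary has little influence on the released number.
salaries = np.random.default_rng(0).uniform(40_000, 200_000, size=10_000)
print(round(salaries.mean()), round(private_mean(salaries, epsilon=0.5)))
```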

The introduction of differential privacy sparked more than a decade of algorithmic research determining how to use data that is subject to privacy guarantees, and what the trade-offs are between the accuracy of estimates and privacy guarantees. In recent years, differential privacy has become mature enough for serious deployment. There are large-scale implementations in the tech industry by Google, Apple, and other companies. But the true “moonshot” application of the technology is just around the corner. The U.S. Census Bureau will apply the protections of differential privacy to all statistics released as part of the 2020 census. Here, the trade-offs are more than hypothetical, and census officials who are obligated to protect privacy are engaged in a vigorous debate with the downstream users of census data about how exactly to balance privacy and accuracy. Differential privacy (correctly) takes no position on how this balance should be chosen, but it provides a precise language in which to focus the debate.

Algorithmic fairness: A work in progress

In contrast to differential privacy, the study of algorithmic fairness is relatively nascent. There is no agreement on a single definition, and indeed, it is known that several appealing and reasonable measures of algorithmic fairness are in mathematical conflict with one another. It is thus already known that the study of algorithmic fairness will necessarily be nuanced and complex—practitioners will need to think about trade-offs not only between fairness and accuracy, but also between different notions of fairness.

Nevertheless, the field is off to a promising start. It is possible to quantify different kinds of harms that an algorithm can cause, such as denying a loan to a creditworthy applicant or overestimating an inmate’s risk of recidivism. One can then demand that such harms not fall disproportionately on one group (defined, for example, by race or gender) rather than another. Recent research has developed algorithms that can enforce such demands even on relatively refined subgroups defined by combinations of protected attributes such as race, gender, age, income, and disability status. For example, developers can enforce constraints demanding that the rate of false loan rejections for disabled Hispanic women over age 55 be no higher than the false rejection rate for the overall population. Such methods can provide progressively stronger fairness guarantees should stakeholders deem them necessary.
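
To illustrate what checking one such constraint might involve, here is a small, hypothetical auditing sketch. The data, column names, and the particular subgroup check are invented for illustration; this is an audit-style check of historical decisions, not the learning algorithm developed in the research cited in the footnotes:

```python
import pandas as pd

def false_rejection_rate(df):
    """Fraction of creditworthy applicants (those who in fact repaid) who were denied."""
    creditworthy = df[df["repaid"] == 1]
    return (creditworthy["approved"] == 0).mean()

def subgroup_gap(df, mask):
    """Difference between a subgroup's false rejection rate and the overall population's."""
    return false_rejection_rate(df[mask]) - false_rejection_rate(df)

# Hypothetical audit data: one row per past applicant, with the lender's decision
# ("approved"), the eventual outcome ("repaid"), and demographic attributes.
applicants = pd.DataFrame({
    "approved":   [1, 0, 1, 0, 1, 0, 1, 1],
    "repaid":     [1, 1, 1, 1, 0, 1, 1, 1],
    "disability": [1, 1, 0, 0, 1, 1, 0, 0],
    "ethnicity":  ["hispanic"] * 4 + ["other"] * 4,
    "gender":     ["f", "f", "m", "f", "f", "f", "m", "m"],
    "age":        [62, 58, 40, 70, 61, 57, 33, 45],
})

# The refined subgroup from the example above: disabled Hispanic women over age 55.
subgroup = (
    (applicants["disability"] == 1)
    & (applicants["ethnicity"] == "hispanic")
    & (applicants["gender"] == "f")
    & (applicants["age"] > 55)
)
print("false-rejection gap:", subgroup_gap(applicants, subgroup))
```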

Equally important is the fact that one can audit algorithms and predictive models for such harm imbalances. For example, a stark difference in false positive rates between black and white inmates in a recidivism prediction algorithm known as COMPAS was the subject of a well-publicized 2016 ProPublica article. Note that checking that an algorithm has similar false positive rates across two populations does not require the code of the algorithm; all that is necessary is black-box experimentation allowing us to compute a small number of averages.
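
A black-box audit of this kind might look roughly like the following sketch, in which `predict` stands in for query access to the model under audit and each record carries a known outcome; the function names and data layout are our own assumptions, not those of COMPAS or ProPublica:

```python
from typing import Callable, Sequence

def false_positive_rate(predict: Callable[[dict], int],
                        records: Sequence[dict]) -> float:
    """Among people who did NOT reoffend, how often does the model flag them as high risk?

    `predict` is treated as a black box: we only observe its outputs, never its code.
    """
    negatives = [r for r in records if r["reoffended"] == 0]
    flagged = sum(predict(r) for r in negatives)
    return flagged / len(negatives)

def audit_fpr_gap(predict, records, group_key="race"):
    """Report the false positive rate separately for each group."""
    groups = {r[group_key] for r in records}
    return {
        g: false_positive_rate(predict, [r for r in records if r[group_key] == g])
        for g in groups
    }

# Usage (hypothetical): `risk_model` would be whatever black-box scoring function the
# auditor can query, and `holdout_records` a sample of cases with known outcomes.
# gaps = audit_fpr_gap(risk_model, holdout_records)
# A large spread between groups is the kind of disparity ProPublica reported.
```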

Algorithmic approaches to technology regulation

To summarize, there are now operational definitions of algorithmic privacy and fairness, some understanding of how to design algorithms that satisfy those definitions, and methods to audit whether a given algorithm or model violates them (and by how much). We believe this emerging science of ethical algorithm design invites reconsideration of how large technology companies and their products and services are regulated.

“We believe this emerging science of ethical algorithm design invites reconsideration of how large technology companies and their products and services are regulated.”

The current technology regulatory framework is largely reactive. Consider the Federal Trade Commission’s (FTC) recent $5 billion fine against Facebook for data privacy violations, currently under review by a federal judge. While widely hailed as a harbinger of a newly aggressive regulatory era, the fine was, in fact, a response to violations of a previous 2011 consent agreement. Those violations were uncovered by The New York Times and The Observer of London, not by the FTC itself. And like the earlier agreement, the recent settlement contains virtually no technical mechanisms for enforcement, only human and organizational ones, such as new corporate procedures and the creation of oversight committees. This cycle is typical of U.S. technology regulation: The damage is discovered after the fact, not as it occurs; a monetary fine is levied and new guidelines are imposed; but the regulator has no ongoing, real-time mechanism for verifying that things have actually changed or improved.

An alternative approach is to enable tech regulators to be proactive in their enforcement and investigations. If there really is gender bias in the credit limits granted for Apple’s new credit card (as has been alleged anecdotally), it could be discovered by regulators in a controlled, confidential, and automated experiment with black-box access to the underlying model. If there is racial bias in Google search results or housing ads on Facebook, regulator-side algorithms making carefully designed queries to those platforms could conceivably discover and measure it on a sustained basis.
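
One simple form such regulator-side probing could take is a matched-pair audit: query the model with profiles that are identical except for the attribute of interest and compare the outcomes. The sketch below is purely illustrative; a deployed credit model may not accept gender as an explicit input at all, in which case a real audit would compare otherwise-similar applicants rather than flipping a single field:

```python
import statistics
from typing import Callable

def paired_gender_audit(credit_limit: Callable[[dict], float],
                        base_profiles: list[dict]) -> float:
    """Matched-pair probe of a black-box credit model.

    For each base profile we query the model twice, identical except for the
    gender field, and average the difference in granted credit limits.
    A value far from zero suggests the attribute (or something the model
    infers from it) is moving the decision.
    """
    diffs = []
    for profile in base_profiles:
        as_male = {**profile, "gender": "m"}
        as_female = {**profile, "gender": "f"}
        diffs.append(credit_limit(as_male) - credit_limit(as_female))
    return statistics.mean(diffs)

# Usage (hypothetical): `credit_limit` stands in for regulator-side, black-box access
# to the deployed model; `sampled_profiles` would be realistic synthetic applicants.
# gap = paired_gender_audit(credit_limit, sampled_profiles)
```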

Let us anticipate and address some objections to such proposals, from the perspectives of both tech companies and their regulators. Large technology companies typically protest calls to make their algorithms, models, or data more openly accessible, on the grounds that doing so would severely compromise their intellectual property. Google’s search and Facebook’s News Feed algorithms, as well as the details of their advertising platforms and predictive models, are claimed to be the “secret sauce” that drives the well-earned competitive advantages that such companies enjoy. Tech giants might argue that allowing unfettered, automated access to such proprietary resources would permit reverse engineering by competitors, as well as “gaming” by rogue actors on both the user and advertising sides.

We agree, which is why we do not propose such access for everyone—only for the appropriate regulators, and only for permitted legal and regulatory purposes. There is some precedent for such arrangements in the much more heavily regulated finance industry. The Securities and Exchange Commission, Commodity Futures Trading Commission, and the sector’s self-regulatory organization, FINRA, have direct and timely access to tremendously sensitive and granular trading data, which allows them to identify prices, volumes, and counterparties. Such data permits these agencies, for example, to infer the portfolios of large investors and the underlying strategies and algorithms of the most proprietary hedge funds. It also allows the agencies to monitor for insider trading and illegal market behaviors such as “spoofing.” And of course, these regulators are not permitted to use this data for nonregulatory purposes, such as starting a competing hedge fund. Similar restrictions would bind tech regulators.

A more technical objection is that algorithmic auditing cannot identify and fix all potential regulatory problems, and that what we refer to here as “algorithms” and “models” are often complex, interacting systems that might cross organizational or even corporate boundaries. For instance, a recent study demonstrated bias in Google search results toward showing STEM job advertisements to men more frequently than women, but at least part of the cause was the willingness of advertisers to pay more for female clicks. In this instance, the blame cannot be placed exclusively or even primarily on Google’s underlying algorithms. But we believe that auditing and measuring such bias is still an important regulatory goal, since its discovery is the first step toward understanding and solutions—even if there may not be simple fixes.

An objection or observation from the regulatory side is that the agencies are currently ill-equipped to engage in an algorithmic arms race with their subjects. Queries and experiments must be designed carefully and scientifically, A/B testing must become a standard tool, and deep understanding and practical experience in AI and machine learning will be prerequisites. While some of the agencies are fortunate to have significant quantitative expertise (for example, in the form of economics Ph.D.s who are prepared to consider theoretical questions about markets and competition), there are few leaders or staffers whose training is in artificial intelligence, computer science, mathematics, and statistics—in other words, the areas of expertise of the companies they oversee. A nontrivial change in the composition of these agencies would be necessary.

“Regulators have been playing catch-up with their tech subjects for a couple of decades now, and the gap is getting wider.”

We would argue, however, that there is no viable alternative. The sooner these changes begin, the better for society as a whole. Regulators have been playing catch-up with their tech subjects for a couple of decades now, and the gap is getting wider. Legal and policy changes are required as well. For instance, in matters of acquisitions and mergers, tech regulators are often forced to view transactions through the lens of whether a given market is “nascent” or “mature.” The fluidity of technology, and of the data that powers it, often renders such distinctions quaint at best and debilitating at worst. Tech giants often view an acquisition not from the perspective of what “market” it lies in, but of what new source of consumer, advertising, logistics, or other data it will provide. They view their various products and services (search, advertising, browser, maps, email, etc. in the case of Google; shopping, advertising, video, Alexa, etc. in the case of Amazon) not as silos in separate markets, but as parts of a single whole with integrated technology, data, and strategy. The longer regulators are forced to decompose the world in ways that are at odds with industry reality, the bigger the gap between regulators and their subjects becomes.

Decision-making driven by machine learning—because of its speed and scale, and because of the unanticipated side effects of its behavior—requires a new regulatory approach. It must be guided by the emerging science of ethical algorithm design, which can both shed light on the specific social properties we want from algorithms and give us guidance on how to audit and enforce these properties. Existing or new regulatory agencies must be able to automatically audit algorithms at scale. This will require sea changes at the organizational level, but it is already feasible at the scientific level.

Michael Kearns and Aaron Roth are authors of “The Ethical Algorithm: The Science of Socially Aware Algorithm Design,” a new book on how to embed human principles into machine code without halting the advance of data-driven scientific exploration.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon, Apple, Facebook, and Google provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.


Footnotes
    1. Ziad Obermeyer, Brian Powers, Christine Vogeli, Sendhil Mullainathan, “Dissecting racial bias in an algorithm used to manage the health of populations,” Science Vol. 366, Issue 6464, pp. 447-453, Oct. 25, 2019. DOI: 10.1126/science.aax2342
    2. Christopher G. Schwarz, et al., “Identification of Anonymous MRI Research Participants with Face-Recognition Software,” The New England Journal of Medicine, 2019; 381:1684-1686, Oct. 24, 2019. DOI: 10.1056/NEJMc1908881
    3. Michael Kearns and Aaron Roth, “The Ethical Algorithm,” Oxford University Press, Nov. 1, 2019.
    4. Latanya Sweeney, “Weaving Technology and Policy Together to Maintain Confidentiality,” The Journal of Law, Medicine & Ethics, Vol. 25, Issue 2-3, June 1, 1997. https://doi.org/10.1111/j.1748-720X.1997.tb01885.x
    5. Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith, “Calibrating Noise to Sensitivity in Private Data Analysis,” Journal of Privacy and Confidentiality, May 30, 2017. https://doi.org/10.29012/jpc.v7i3.405
    6. Ulfar Erlingsson, Vasyl Pihur, Aleksandra Korolova, “RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response,” CCS ’14: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 1054-1067, November 2014. https://doi.org/10.1145/2660267.2660348
    7. “Learning with Privacy at Scale,” Apple Machine Learning Journal, Vol. 1, Issue 8, December 2017. https://machinelearning.apple.com/2017/12/06/learning-with-privacy-at-scale.html
    8. John M. Abowd, “The U.S. Census Bureau Adopts Differential Privacy,” KDD ’18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, p. 2867, July 2018. https://doi.org/10.1145/3219819.3226070
    9. Census data is used to distribute federal funds to local communities, and is used by social scientists to study demographic trends.
    10. Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Proceedings of Innovations in Theoretical Computer Science, 2017. arXiv:1609.05807
    11. For example, equalizing the rate at which a bank initiates loans across demographic groups will generally be incompatible with equalizing the “false negative” rate (i.e., the rate at which creditworthy applicants are denied loans across groups), which in turn will generally be incompatible with equalizing the positive predictive value of lending decisions (i.e., the rate at which people granted loans avoid default) across groups.
    12. Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu, “Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness,” Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2564-2572, 2018. http://proceedings.mlr.press/v80/kearns18a.html