
Why AI is just automation

Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.

Executive Summary

Work long performed by human decision-makers or organizations increasingly happens via computerized automation. This shift creates new gaps within the governance structures that manage the correctness, fairness, and power dynamics of important decision processes. When existing governance systems are unequipped to handle the speed, scale, and sophistication of these new automated systems, any biased, unintended, or incorrect outcomes can go unnoticed or be difficult to correct even when observed. In this article, I examine what drives these gaps by focusing on why the nature of artificial intelligence (AI) creates inherent, fundamental barriers to governance and accountability within systems that rely on automation. The design of automation in systems is not only a technical question for engineers and implementers, but a governance question for policymakers and requirements holders. System governance must acknowledge and respond to the tendency of AI to create and reinforce inequality, and automated systems should be built to support human values as a strategy for limiting harms such as bias, the reduction of individual agency, or the inability to redress harmful outcomes.


It starts with the software

Software can interrupt existing governance processes, confusing the application of existing policy while obscuring the machinery of decision-making processes. As a result, both those affected and responsible for oversight often find automated systems opaque or risky. This is true in the private sector, where software mediates everything from hiring decisions, to how customers are treated (or even whether customers are taken on), to the flow of information in social media and advertising systems, and beyond. Here, traditional governance approaches for verifying and validating decision-making—audits, reviews by senior decision-makers, and compliance among them—are often stymied by the speed, scale, or even the sheer complexity of software systems. Such opacity is a choice, and one which comforts existing power structures—reinforcing the narrative that technology is inscrutable, when in fact the opposite is true. Every real-world piece of technology represents the choice by a decision-maker to imbue a tool or process with authority and fitness for purpose consistent with their remit.

Though social networks obfuscate answers to questions like “why was a user shown a particular post or advertisement?” in the complexity of ranking systems and matching algorithms, the behavior of these systems has been approved and intended by someone. Ask a technology executive at such a company whether the next version of the timeline or the ad placement algorithm will decrease user interactions or the company’s revenue, and you will be treated to a learned explication of how the system functions, all the ways it can be understood, and how its behaviors can be predicted, monitored, tuned, and controlled. If even the most complex automation is amenable to understanding, why does it give rise to so many valid concerns, and why do these systems so often fail to measure up to their goals?

Gaps between the goals intended for an automated system and the outcomes realized can reinforce existing power asymmetries and biases that directly harm people while leaving insufficient paths either to legal remedies or to practical redress in the form of corrected outcomes. Whether it is algorithms disproportionately denying kidney transplants to Black patients or the use of surveillance technologies such as facial recognition at a citywide scale when people of color experience higher error rates in detection, identification, or classification of activities, harms from these systems are not hypothetical future evils, but tangible and urgent problems that must be managed now.

Disparities in the design and execution of AI models

No matter how complex the AI system is or how it is implemented, problems arise any time a process is automated. Clearly defining automation, or the related concept of “artificial intelligence,” is difficult. But the simplest, yet most useful, understanding of automation is the relocation of a task traditionally performed by a human to a piece of technology. At a fundamental level, moving a function to a machine requires defining the operation of that function using a pre-set rule. That rule can be operated by the functionaries in a bureaucracy, the gears and cogs of a machine, or the code in a computer; regardless of the medium, the rule defines how real-world situations lead to realized outcomes. Indeed, the very word “computer” originally referred to teams of people organized to perform repetitive mathematical computations, often for military or navigational uses. Only later did the word come to mean the electronic machines that replaced these workers.

“[B]y their rule-driven nature, automated systems often fail to account for differences among individuals and populations, averaging the full richness of a problem into an unwavering precept.”

Automation can also take the form of mathematical rules. For example, simple point-based scoring systems are used in a variety of applications, including to automate credit decisions, rate recidivism risk, and make clinical medical decisions such as prioritizing vaccine administration in the COVID-19 pandemic response. But by their rule-driven nature, automated systems often fail to account for differences among individuals and populations, averaging the full richness of a problem into an unwavering precept. Two examples are in loan decisions and, more recently, in vaccination distribution.

Loan decisions

Consider, for example, the use of a credit or underwriting score to establish eligibility for a loan or other financial product. Replacing the judgment of a banker with a creditworthiness score and a cutoff threshold is an example of automation. The decision once made by a banker (whether to grant a loan or not) is replaced by the computation of data-derived numbers. Comparing the attributes of prior customers to a current applicant reveals patterns, whether that comparison happens through a banker’s experience or tools like credit scores and default risk prediction models. These attributes include customer payoff behaviors—whether existing customers are current on their payments is something the bank knows, while a new applicant’s propensity to repay a loan is unknown and must be guessed. The patterns provide a rule—a model of expected behavior—by which such guesses can be made repeatable.
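The kind of rule at work here can be made concrete. Below is a minimal sketch of a point-based scorecard with a cutoff threshold; the attributes, point values, and cutoff are invented for illustration and are not drawn from any real lender.

```python
# Illustrative point-based credit rule: attribute values map to points, and a
# fixed cutoff turns the total into an approve/deny outcome. All attributes,
# point values, and the threshold are hypothetical.

SCORECARD = {
    "payment_history": {"none": 0, "some": 20, "consistent": 45},
    "years_of_credit_history": {"<2": 5, "2-7": 15, ">7": 25},
    "debt_to_income": {"high": 0, "moderate": 15, "low": 30},
}
CUTOFF = 60  # applicants at or above this total are approved

def score(applicant: dict) -> int:
    """Sum the points earned for each reported attribute."""
    return sum(SCORECARD[attr][value] for attr, value in applicant.items())

def decide(applicant: dict) -> str:
    """The banker's judgment, reduced to a rule: total points versus a cutoff."""
    return "approve" if score(applicant) >= CUTOFF else "deny"

applicant = {
    "payment_history": "some",
    "years_of_credit_history": "<2",  # thin credit file is penalized by the rule
    "debt_to_income": "low",
}
print(score(applicant), decide(applicant))  # 55 deny
```

The rule is perfectly reproducible, but it has no concept of an exception: the thin-file applicant above is denied even though a banker exercising judgment might have approved them.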

Yet, these models may not be created to reproduce the decisions of either the bankers or the underwriters. Rather, they often aim to optimize the bank’s profits, an objective that can override both the business logic understood by the humans in any specific function and broader socially desirable goals, such as equitable access to credit or the avoidance of discriminatory practices (e.g., steering and redlining). And while an appropriately empowered banker can make a judgment call by granting a loan to a seemingly good applicant who doesn’t meet the bank’s formal criteria, thresholds and rules disallow such exceptions. In practice, however, such flexibility often advantages the well-connected existing customers who seem “appropriate” as targets of the banker’s discretion, adding subjectivity that reinforces bias inherent in the lending system.

Automation, which was originally introduced to make decisions more consistent and reproducible, has moved the authority of these decisions away from individual bankers who know and interact with clients to the bank’s underwriting department where each lending packet is judged based only on its contents and the bank’s policies. In other words, AI has imposed clear rules that can be applied more quickly and at a larger scale than bankers can develop relationships to evaluate trustworthiness or suitability for lending. Decisions can be evaluated relative to the rules rather than a banker’s subjective judgment, potentially providing a way to reduce bias in the credit evaluation process.

But rules come with a downside: Those who do not resemble prior successful customers will be underserved by their lower (or unavailable) credit scores, whether they be long-shot entrepreneurs or people whose backgrounds differ systematically from those served previously (such as members of historically disadvantaged populations). Marginalized groups identified by legally protected categories, such as race and gender, have a less favorable distribution of credit (and other) scores when they have sufficient traditional background to be scored at all. They have also historically been subjected to stereotypes about their supposed collective lack of creditworthiness.

COVID-19 vaccine distribution

As another example, the Stanford Hospital System in early 2021 introduced a “medical algorithm” to allocate its scarce doses of COVID-19 vaccines. This formula assigned points to eligible recipients in order to prioritize them based on the formula’s calculation of their risk from the pandemic. But the formula heavily privileged risk of death once infected over risk of infection, thereby prioritizing senior medical staff, faculty, and administrators ahead of younger front-line nurses and doctors. This led to only five of the hospital’s 1,300 residents being assigned a vaccine regimen in the first cohort to be administered. Even when this fact was raised with the team charged with allocating vaccine doses, a review of prioritization status added only two additional residents to the first cohort while retaining hundreds of senior staff working from home with little or no direct contact with infected patients.
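The Stanford formula itself is not reproduced here, so the following is a purely hypothetical sketch of how such a failure can arise: when a point formula weights death-risk factors (here represented only by age) far more heavily than exposure-risk factors, senior staff working from home can outrank young front-line residents.

```python
# Hypothetical allocation formula (not Stanford's actual algorithm): the total
# is dominated by age-linked risk of death once infected, while exposure to
# infected patients contributes comparatively little.

def priority_score(age: int, patient_facing_shifts_per_week: int) -> float:
    death_risk_points = age * 1.0                            # heavily weighted
    exposure_points = patient_facing_shifts_per_week * 0.5   # lightly weighted
    return death_risk_points + exposure_points

# A 68-year-old administrator working remotely outranks a 29-year-old resident
# on the COVID ward, despite having no direct patient contact.
print(priority_score(age=68, patient_facing_shifts_per_week=0))  # 68.0
print(priority_score(age=29, patient_facing_shifts_per_week=5))  # 31.5
```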

Rules in bureaucracies and software-based automation

Automation always reduces the core of the task being pursued to a set of rules, where facts about the state of the world directly determine an outcome. Even when complex machine-learning tools are employed to discover a complicated decision rule from data, decisions are well described as a deterministic mapping of inputs to outputs—a rule. Such rules may be more complex than humans can create, as they turn sound waves into transcribed text or image pixels into object or face identifications. Nonetheless, they are still rules. They can also change over time based on updates learned from interactions with the world.
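To make this concrete, here is a minimal sketch (assuming the scikit-learn library and invented toy data) in which a decision rule is discovered from examples rather than written by hand. Once trained, the model is still a deterministic mapping from inputs to outputs, and it changes only when retrained.

```python
# A learned model is still a rule: after training, it is a fixed mapping from
# input features to an output label.
from sklearn.tree import DecisionTreeClassifier

# Toy, invented training data: [income_in_thousands, years_at_job] -> repaid?
X = [[20, 1], [35, 4], [50, 2], [80, 10], [25, 0], [90, 6]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The "discovered" rule behaves like hand-written if/else logic: the same
# input always yields the same output until the model is retrained.
applicant = [[40, 3]]
print(model.predict(applicant))  # identical result on every call
print(model.predict(applicant))
```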

Bureaucracies are operated by teams of humans following rules, running functions like hiring, determining benefits eligibility, or investigating and prosecuting crimes. Criticisms of bureaucracies often prefigure and echo concerns about other forms of automation: Bureaucracies are indifferent to the specifics of cases, which might include mitigating circumstances or high-stakes decisions. Bureaucracies hew to their rules when an exception would make more sense to an unconstrained decision-maker. Bureaucracies also create structural benefits for those who set the rules, know what the rules are, or can afford an advocate who can make the rules apply most favorably. The advantages of bureaucracies carry over to automation: in both cases, prescribed rules enable decisions at greater speed and scale by clarifying the criteria for specific outcomes. Software is particularly adept at operating rule-based systems, as essentially any set of rules can be encoded into a program and then run faster and at a larger scale than a human or even a team of humans could manage, as the sketch below illustrates.
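As a small illustration of that point, the sketch below encodes a hypothetical benefits-eligibility rule, the kind a clerk might check case by case, and applies it to a batch of cases at once; the criteria and thresholds are invented.

```python
# Hypothetical benefits-eligibility rule, encoded once and applied to an
# arbitrary number of cases far faster than a team of clerks could manage.

def eligible(case: dict) -> bool:
    # The same written criteria a clerk would check, expressed as code.
    return (case["household_income"] <= 30_000
            and case["dependents"] >= 1
            and case["resident"])

cases = [
    {"household_income": 24_000, "dependents": 2, "resident": True},
    {"household_income": 52_000, "dependents": 1, "resident": True},
    {"household_income": 24_000, "dependents": 0, "resident": True},
]
print([eligible(c) for c in cases])  # [True, False, False]
```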

To continue this analogy, the governance structures used to control the actions of bureaucracies are also the tools most suited to mitigating risks and harms in automated systems. These include: transparency about what rules are in force and how they were applied in particular cases; aggregated bias measures of whether the system works as well for underprivileged groups as for their privileged counterparts; paths to resolve situations where rules need new exceptions or don’t foresee a situation that has actually occurred; and oversight and accountability mechanisms that make individuals responsible when the system as a whole fails.
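One of these tools, the aggregated bias measure, can itself be computed mechanically from a system's decision records. The sketch below uses invented group labels and outcomes to compare favorable-outcome rates across groups, a demographic-parity-style check; a real audit would draw on richer data and multiple metrics.

```python
# Sketch of an aggregated bias measure: compare the rate of favorable outcomes
# the automated system produces for each group. The records below are invented.
from collections import defaultdict

decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "deny"),
    ("group_b", "approve"), ("group_b", "deny"), ("group_b", "deny"),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += (outcome == "approve")

rates = {group: approvals[group] / totals[group] for group in totals}
print(rates)                                      # approx. group_a: 0.67, group_b: 0.33
print(max(rates.values()) - min(rates.values()))  # the disparity an audit would flag
```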

“Without adaptation, traditional tools for bureaucratic transparency and accountability translate poorly for software-based automation.”

In bureaucracies, errors can be made subject to challenge, exception, and rectification—a human can realize that the rule’s application is inappropriate and fix the issue or escalate the case to an appropriate and capable authority. At worst, additional structures such as ombudspersons, internal and external oversight bodies, or third-party arbiters can serve this function, but at a much higher cost to individual participants. But the analogy is imperfect: Without adaptation, traditional tools for bureaucratic transparency and accountability translate poorly for software-based automation.

Too often, the promise of new and automated versions of manual processes leads to a rush to deploy solutions without regard to how the new order of operations will affect the full range of stakeholders or how a new tool will integrate into systems and organizations. The mere existence of technology becomes a solution in search of a problem, with management setting implementation goals like “use AI” or “make decisions in a data-driven way” without first establishing a problem to which these tools may productively be applied. Worse still, automation is often cited as the cause of a system’s behavior when outcomes instead reflect the system’s structure or choices made by the system’s controllers. “The computer decided” is its own special fallacy—humans determine what automation decides but might fail to determine whether and where decisions should be automated at all.

Yet, as we see in the Stanford vaccine example, automation can fail drastically as compared to human-made decisions. And blaming such failures on the rules themselves launders responsibility that should lie squarely on the shoulders of the decision-makers who either determined the rules or accepted them for use.

Who decides which rules apply?

Reorganizing decisions around rules carries risks—rules might not apply perfectly in every case, yielding new errors that systems must deal with. Reorienting processes around rules also increases the power of the rule makers and reduces the agency of those to whom the rules are applied. Further, when rule-driven decisions are mechanized, the machine does only what it is built to do (apply the rule). It has little capacity to recognize or rectify errors, making mistakes more pronounced and harder to manage. This is exacerbated by the forces which drive the transition to rules-oriented decision-making—the need to make decisions faster and at a larger scale (i.e., the scale of an entire city, an entire country, or the internet). Failures can propagate quickly, and the risk of harming many people is much greater.

Technologists generally react to the limitations of rules by making the set of rules more complex and attempting to deal with every available case. Rules defined through AI or by the sophisticated analysis of data using machine learning or other methods in data science are an example of this. Rather than having humans explicitly decide a rule and encode it into a software tool, the rule is discovered by examining patterns in data instead of being deduced from the knowledge of a programmer (although the result is still a rule).

Yet, this too leads to problems. Proponents of automation often argue that rules make decisions more “objective” since their predictability implies that human discretion has been removed from the process. However, humans decide what rules are enforced, which rules apply in which cases, and what fact patterns are presented to rule-based decision processes. The drive toward “data-driven” decision-making often relies on claims that such decisions will be objective reflections of reality rather than the choices they are. Complex rules are also more difficult to understand, reason about, and apply than simple ones. Yet complex rules retain all the inflexibility and downsides of simple rules while acquiring new problems. Errors may be fewer, but they will be less intuitive when they happen and harder to challenge for those seeking redress. And although complexity provides a veneer of objectivity, it obscures the truth that the design of the system is itself a discretionary choice.

“[A]lthough complexity provides a veneer of objectivity, it obscures the truth that the design of the system is itself a discretionary choice.”

Conversely, policymakers and others involved in the interpretive operationalization of non-rule-based decision guidance often punt their work to technologists, observing that it must be possible to develop tools that will sidestep or simplify hard decisions. Yet, those decisions remain, and choices are already operationalized in existing systems in many cases. No lawyer would argue that the law is a purely rule-driven decision structure rigidly determining when crimes have been committed or even when a contract is breached. It is clear that competent representation in the legal system affects the extent to which that system serves the broader interest of justice: Lawyers can argue over which laws or previous court decisions should apply, what facts should be considered in that application, and whether a novel application is proper. The law admits more flexible desiderata for decisions that are nonetheless binding, such as legal standards and moral and legal principles. For example, the legal test for whether using copyrighted information constitutes infringement is a standard, not a rule; principles provide more general but still-binding guidance, such as the principle that no one should profit from their misdeeds.

Automated decisions should be both solution-oriented and co-developed

Processes should be automated when doing so solves a real problem or produces outcomes of genuine value, not merely because the technology exists. Technology application is best poised for success when it is driven by a problem, rather than a solution. Because deep understanding of the problem is often far removed from deep understanding of the available technologies, no single function is well positioned to develop automated systems along with the attendant policies, controls, and governance mechanisms. The result is an impasse of outsourced responsibility in which no one function can resolve the risks of automation.

“Technology application is best poised for success when it is driven by a problem, rather than a solution.”

To realize the full benefits of automation, governments and technologists must co-develop governance structures to manage the new capabilities of computer-driven systems. The resulting sociotechnical control structures, or assemblages of technical design, implementation, documentation, and assurance along with policy, management, operations, maintenance, redress, and enforcement, bring humans into the system as active participants, not just passive recipients of technology. Viewing the full context and content of an application as a single system—including affected humans as well as the decision-makers who control the structure and function of the system—is the best way to determine which interventions and limitations will reduce harm. Building sociotechnical control structures requires an understanding of both the systems they control and the social and organizational context in which they operate. Thus, to be properly governed, automated systems must be co-developed with their governance structures, each designed to support the other.

What prevents such co-development? First, the fact that no single function has clear ownership of the problem means that successful projects are generally driven by visionaries who work across functions and disciplines to transcend existing paradigms and reshape entire systems. Yet relying on visionaries demands a sufficient supply of qualified people willing to assume the mantle of project champion. Although capacity building can help here, ultimately the solution lies elsewhere.

Co-development of automation and its governance also depends on bridging the gap between the flexibility of requirements and the rigidity of rule-driven tools, such as software. Thus, automation alone cannot be the solution to most complex problems. Rather, what must be co-designed are governance structures and entire systems.

Errors must come with a path for redress; flexibility demands the introduction of paths for escalation and discretion; oversight must guarantee and encourage correct operation by auditing records of how the system operated. In these ways, the governance of automation looks very much like the governance of bureaucracies. Thus, we can draw inspiration from high-functioning sociotechnical governance structures when looking to establish governance for socially important applications of automation. Such high-functioning governance is needed for determinations of important social benefits or opportunities (health-care access, welfare benefits, jobs, etc.); surveillance, law enforcement, and the administration of justice; and safety-critical applications such as health care, vehicle operation, and military applications. These high-functioning governance structures can be found already in many safety-critical applications: aviation safety systems, space launches, the operation of industrial and utility plants, and safety in the health-care system. Adapting these structures to automated versions of the controlled system is a necessary transition already underway in many cases.

Conclusion

Automation provides an opportunity for better governance. While it might be difficult to discover what a human decision-maker is thinking or why a bureaucracy took a particular action, automated systems are driven by rules and those rules adjudicate concrete fact patterns into specific outcomes in predictable ways. The relationship between input fact patterns, decisional rules, and outputs of a component or outcomes in a system can be perfectly recorded for later review. Just as automation enables speed and scale for decisions, it can also enable a complete form of oversight, transparency, and enforcement. Realizing this vision requires developers to accept that merely embodying processes in rules, as automation does, is insufficient, and our view must turn to entire systems and attendant sociotechnical control structures.
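As a closing illustration of what such recording might look like, the sketch below wraps a toy decision rule so that every invocation logs the input facts, the rule version, and the outcome for later review; all names, fields, and thresholds are illustrative.

```python
# Sketch: wrap a decision rule so every invocation leaves an auditable record
# relating the input fact pattern, the rule version applied, and the outcome.
import json
import time

AUDIT_LOG = []               # in practice: an append-only, tamper-evident store
RULE_VERSION = "2024-06-v3"  # illustrative version identifier

def decide(facts: dict) -> str:
    outcome = "approve" if facts.get("score", 0) >= 60 else "deny"
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "rule_version": RULE_VERSION,
        "facts": facts,
        "outcome": outcome,
    })
    return outcome

decide({"score": 72})
decide({"score": 41})
print(json.dumps(AUDIT_LOG, indent=2))  # a reviewable trace of every decision
```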


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The opinions expressed in this paper are solely those of the author in their personal capacity and do not reflect the views of Brookings or any of their previous or current employers.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Footnotes

    1. Kroll, Joshua A., Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016): 633.
    2. Kroll, Joshua A. “The Fallacy of Inscrutability.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (2018): 20180084.
    3. Grier, David Alan. When Computers Were Human. Princeton University Press, 2013.
    4. Dixon, Pam, and Robert Gellman. “The Scoring of America.” World Privacy Forum, 2014.
    5. Brauneis, Robert, and Ellen P. Goodman. “Algorithmic Transparency for the Smart City.” Yale Journal of Law & Technology 20 (2018): 103.
    6. Ohm, Paul. “Breaking Felten’s Third Law: How Not to Fix the Internet.” Denver University Law Review DU Process 87 (2010): 50 (symposium).
    7. Leveson, Nancy G. Engineering a Safer World: Systems Thinking Applied to Safety. The MIT Press, 2016.