
Why the government must help shape the future of AI

Editor's note:

This report is part of “A Blueprint for the Future of AI,” a series from the Brookings Institution that analyzes the new challenges and potential policy solutions introduced by artificial intelligence and other emerging technologies.

Rapid advances in artificial intelligence (AI) are raising serious ethical concerns. For many workers who have not seen significant wage growth in decades, AI represents a potential threat to the jobs on which they depend, and its potential interaction with the effects of globalization is alarming. Thoughtful observers worry about its capacity to intensify concentrations of public and private power, increase information asymmetries, and diminish transparency—all at the expense of citizens. In these circumstances, the significance of individual consent—one of the hallmarks of a free society—is called into question.

The developers of AI in the private sector are aware of these issues, and they have begun to develop codes to regulate their own activities. For example, Microsoft has laid out six principles for the AI systems it is creating: fairness, safety and reliability, privacy, inclusion, transparency, and accountability. Each of these principles, in turn, will need to be specified and applied to a range of cases. Google has done the same thing through its “Responsible Development of AI” process. Many other companies are considering ethics codes designed to guide their corporate decision-making.

But it is not a simple matter to apply these principles to artificial intelligence. Take fairness, as an example. It will require systematic efforts to ensure that the data from which AI programs can “learn” is representative of the relevant population. It will also require the capacity to distinguish between algorithm-driven decisions based on statistical regularities and individual determinations. Local bankers often make loan decisions relying on their knowledge of the character of individual borrowers, many of whom might not qualify for loans if they had to comply with the standards of regional and national financial institutions.
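To make this concrete, the sketch below is a purely hypothetical illustration of one such systematic effort: it compares the demographic composition of a training set with the population it is supposed to represent and flags groups that fall well short. The group labels and counts are invented for the example, not drawn from any real lending dataset.

```python
# Hypothetical sketch: compare a training set's group shares with the
# population it is meant to represent, and flag large gaps.
# Group labels and counts are illustrative assumptions, not real data.

population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
training_counts = {"group_a": 8200, "group_b": 1400, "group_c": 400}

total = sum(training_counts.values())

for group, pop_share in population_share.items():
    train_share = training_counts[group] / total
    gap = train_share - pop_share
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: population {pop_share:.0%}, training {train_share:.0%} ({flag})")
```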

Some kinds of fairness cannot be reduced to rules; not even the combination of AI learning and rich, statistically representative data sets can exhaust this important norm. As the bank lending example shows, the universal application of probabilistic generalizations can generate its own kind of unfairness when it excludes individuals who don’t measure up on paper but can meet the performance standards the generalizations were intended to represent.

Other principles raise different problems. What does transparency mean as applied to autonomous systems whose creators cannot predict what these systems will do as they learn from new data, including feedback from previous conclusions? What does privacy mean when these systems can not only monitor individuals but prompt (or as the behavioral economists would say, nudge) new actions based on statistical inferences from their past behavior?

Beyond these unavoidable questions are even larger issues. How far can self-regulation go in the private sector? Under what circumstances should the public sector step in? And when it does, what are the relevant ethical principles? This policy brief will explore these issues using three case studies: facial recognition systems, self-driving vehicles, and lethal autonomous weapons. I use these examples to illustrate the ethical challenges of AI and the need to clarify our thinking in this area.

Facial recognition

In 1791, the British philosopher and social reformer Jeremy Bentham published a proposal for a new prison he called the Panopticon, designed to allow a single guard to watch all the inmates without being visible to them. The idea was considered fanciful, and Bentham died embittered that the government had failed to accept it. But it became a potent symbol for the dystopian prospect of universal surveillance.

Today, with the development of computer-assisted facial recognition, this prospect has become all too real. The Chinese government has the capacity to track the movements of many individuals living under its jurisdiction. For societies that cherish liberty and privacy, this new capability raises deep ethical challenges—for the experts that create it, for the businesses that sell it, and for the governments that must decide how to use and regulate it.

In July of this year, Brad Smith, the president of Microsoft, responded with an urgent plea. On the one hand, he said, some emerging uses are positive: imagine the police being able to locate a kidnapped child by recognizing her as she is being hustled down the street by her abductor, or being able to single out a terrorist from the crowd at a sporting event. But other potential applications are chilling, he added: “Imagine a government tracking you everywhere . . . without your permission or knowledge. Imagine a database of everyone who attended a political rally, [an activity] that constitutes the very essence of free speech.”

“Facial recognition raises a critical question,” he insists: “What role do we want this type of technology to play in everyday society?” Critics of surveillance sometimes invoke a norm, the “reasonable expectation of privacy.” But the idea of privacy does not fit comfortably with inherently observable activities in spaces such as public streets. And individuals who seek and benefit from publicity can hardly complain if they are recognized in public places.

A more appropriate norm, I suggest, is a reasonable expectation of anonymity. If we are going about our business in a lawful way, public authorities should not use facial recognition systems to identify and track us without a justification weighty enough to override the presumption against doing so, and this process should be regulated by law. Identification of specific individuals should require the equivalent of a search warrant, which for most purposes is authorized only for probable cause. Mere suspicion is not enough.

If a crime has been committed, the presumption shifts toward a generic search for the perpetrators. Facial recognition systems may be used, for example, to identify individuals fleeing the scene of a bank robbery. Some may turn out to be innocent victims fearing for their lives; others may be the robbers themselves. A similar standard—with a broader catchment area—governs the response to a terrorist attack. Because in such instances there are reasons to fear a conspiracy extending beyond the individuals who carried out the attack, the use of facial recognition systems would be legitimate well outside the scene of the crime, as would monitoring the usual suspects. Sometimes the Casablanca standard makes sense.

These issues would be urgent even if the technology were perfect, but it is far from that. Recent studies show that, as of now, it works better for men than for women and for people of lighter complexion than for people of color. The danger of false positives driven by systematic bias cannot be ignored.
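One way to make that concern measurable, sketched below with invented figures rather than results from any actual study or system, is to compute a matching system’s false positive rate separately for each demographic group and examine the spread.

```python
# Hypothetical sketch: false positive rate of a face-matching system,
# broken out by demographic group. All figures are invented.
# A false positive = the system declares a match when there is none.

trials = {
    # group: (false matches, non-matching comparisons attempted)
    "lighter-skinned men":   (12, 10_000),
    "lighter-skinned women": (35, 10_000),
    "darker-skinned men":    (60, 10_000),
    "darker-skinned women":  (110, 10_000),
}

for group, (false_matches, comparisons) in trials.items():
    fpr = false_matches / comparisons
    print(f"{group}: false positive rate {fpr:.2%}")

# A large spread across groups is the systematic bias the text warns about:
# the same match threshold exposes some groups to far more mistaken identifications.
```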

This risk matters because most of us tend to give great weight to technological innovations. Sometimes—with regard to DNA testing, for example—this deference makes sense, but often it doesn’t. As a historical illustration, phrenology was once widely used by both prosecutors and defense attorneys in criminal trials. Until it can be demonstrated that facial recognition systems are more accurate and less biased than human eyewitnesses, their use for legal and other official purposes is suspect, because the evidence they generate is too likely to enjoy excessive credibility.

Self-driving vehicles

You’re driving at the speed limit on a two-lane suburban road when a ball bounces into the street with a child in hot pursuit. There’s an oncoming car in the other lane, and there’s not enough time to stop short of the child. What do you do?

If you’re a normal human being, you want to do everything possible to avoid hitting the child, but not at the cost of your own life. Your options include swerving left across the lane going in the opposite direction, minimizing the chance of hitting the child but at the risk of being sideswiped if you don’t clear that lane in time; or swerving right, increasing the chances of hitting the child if your momentum carries you too far forward.

This assessment assumes that the other car will remain in its lane while braking. But it would be reasonable for the other driver to fear that the child might continue to run across the road. To minimize the risk to the child, this driver swerves right, increasing the chances of a collision if the other driver swerves left. The optimal strategy depends on the interaction between the drivers.

Now vary the example slightly: assume that you are the parent of the child who runs into the street. In this circumstance, you may well choose to risk sacrificing your own life to save your child. If so, the optimal strategy is the left swerve, whatever the risks of colliding with the oncoming car.

Vary the example again: you’re driving with your child buckled into a car seat in the rear of your car when a child you don’t know runs into the street. Are you morally required to be neutral between the life of your child and that of a stranger? If not, the optimal strategy is the right swerve.

One more example: you’re driving with your child when two other children run into the street. Do numbers affect the moral judgment? And even if they do, can they outweigh the special responsibility you have for your own child?

I have multiplied these examples to underscore the kinds of challenges facing the designers of autonomous vehicles. First, because interactions between and among vehicles matter, either government or an industry consortium must establish a protocol across models and makers that governs interactions in the widest possible range of cases. One possibility would be a system of receivers and transmitters in every vehicle that instantly communicates responses to problematic situations and permits real-time coordination between vehicles.
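By way of illustration only, the sketch below shows what a shared coordination rule might look like; the message fields, priority scheme, and tie-breaking logic are assumptions made for the sake of the example, not an existing industry standard.

```python
# Hypothetical sketch of a vehicle-to-vehicle coordination rule.
# Field names and the tie-breaking rule are illustrative assumptions,
# not an existing protocol.

from dataclasses import dataclass

@dataclass
class IntentMessage:
    vehicle_id: str
    maneuver: str          # "swerve_left", "swerve_right", or "brake"
    priority: int          # lower number = more vulnerable road users at immediate risk

def resolve(a: IntentMessage, b: IntentMessage) -> dict:
    """Return a coordinated plan when two broadcast intents conflict."""
    conflicting = {a.maneuver, b.maneuver} == {"swerve_left", "swerve_right"}
    if not conflicting:
        return {a.vehicle_id: a.maneuver, b.vehicle_id: b.maneuver}
    # Deterministic shared rule: the vehicle reporting the higher-priority risk
    # keeps its maneuver; the other brakes and holds its lane.
    keeper, yielder = (a, b) if a.priority <= b.priority else (b, a)
    return {keeper.vehicle_id: keeper.maneuver, yielder.vehicle_id: "brake"}

plan = resolve(
    IntentMessage("car_1", "swerve_left", priority=1),   # child in its path
    IntentMessage("car_2", "swerve_right", priority=2),  # oncoming vehicle
)
print(plan)  # {'car_1': 'swerve_left', 'car_2': 'brake'}
```

The point of the sketch is the second requirement in the text: whatever the rule is, it must be shared and deterministic across makers, so that two vehicles facing the same emergency never select conflicting maneuvers.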

Second, programming decisions necessarily will encode answers to the ethical choices I’ve posed. These answers must be explicit, not tacit. And car makers alone are not entitled to make these decisions, which instead must reflect public discussion and debate.

Third, because specific circumstances matter, autonomous vehicles must be able to receive and deploy relevant information to the greatest extent possible. If society decides that numbers or special relationships make an ethical difference, then the vehicle’s control system must be aware of them. This may require the installation of sensors and even facial recognition devices as elements of the autonomous driving package.

Finally, the deployment of autonomous vehicles will tilt the balance of liability away from car owners toward their manufacturers. Individuals cannot be held responsible for programming defects—or decisions—which they have no ability to diagnose. As Bryant Walker Smith has argued, the existing body of product liability law could easily be adjusted to accommodate the issues autonomous vehicles will raise. This must be done before these vehicles are deployed, as part of the regulatory process that governs their introduction. It would be burdensome and unfair to force individual plaintiffs to go to court to establish the standards that government has the responsibility to lay down in advance.

Autonomous weapons

As the laws of war have long recognized, the decision to deprive other human beings of life raises the gravest ethical questions and warrants the greatest degree of care. When human beings interact with technology as they make these decisions, new issues arise.

Consider the following case. In 1988, the U.S.S. Vincennes, a guided missile cruiser operating in the Persian Gulf, shot down an Iranian passenger jet, killing all 290 people on board. The plane’s course, speed, and radio signal all indicated, correctly, that it was a civilian aircraft. But the ship’s Aegis system, which had been programmed to target Soviet bombers, misidentified it. Despite the evidence from standard indicators, not one of the 18-member Aegis crew was willing to challenge the computer, and so they authorized the firing of the missile that brought down the Iranian plane. The result was a human tragedy that damaged the reputation of the U.S. military and drove the already poisonous relationship between the U.S. and Iranian governments to a new low.

The Aegis system was not fully autonomous, of course. The Vincennes’ commanding officer had the ultimate responsibility to authorize the strike. But this case highlights the undue deference we tend to give the technology we create, even when the evidence of our senses contradicts it. Investing technology with the power to act in specific cases without human review and authorization would only heighten the danger.

Many current procedures recognize this risk. For example, not only do human operators direct unmanned drones, but the Obama White House and the Department of Defense also created an elaborate protocol to guide decisions about strikes on specific targets. The basic laws of war—distinction, proportionality, non-combatant immunity, etc.—were observed. The target had to be identified accurately beyond a reasonable doubt. In addition, domestic laws and norms had to be weighed—for example, the fact that Anwar al-Awlaqi, who influenced young people to become terrorists, was a U.S. citizen. Decision-makers had to balance the facts and circumstances of each case to reach an all-things-considered judgment.

There is a difference between lethal weapons directed against human beings and weapons directed against non-human targets such as unmanned missiles. But this distinction does not fully resolve the ethical issue, because the target must be accurately identified as unmanned. A fully autonomous anti-missile system must determine that an incoming object is a missile rather than (say) a friendly aircraft. As the Vincennes episode shows, the ability of a specific technology to do this cannot be taken for granted and must be demonstrated with high probability before non-lethal autonomous systems are deployed.
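A minimal sketch of the kind of safeguard this implies, with a confidence threshold and class labels chosen purely for illustration: the system acts autonomously only when its identification of the object as an unmanned missile clears a demonstrably high bar, and otherwise defers to a human operator.

```python
# Hypothetical sketch: a confidence gate before autonomous engagement.
# The 0.999 threshold and the class labels are illustrative assumptions.

def engagement_decision(class_probabilities, threshold=0.999):
    """Engage autonomously only if the object is almost certainly an unmanned missile."""
    p_missile = class_probabilities.get("unmanned_missile", 0.0)
    return "engage" if p_missile >= threshold else "refer_to_human_operator"

# An ambiguous track, like the one the Vincennes faced, falls below the bar:
print(engagement_decision({"unmanned_missile": 0.62, "civilian_aircraft": 0.38}))
# -> refer_to_human_operator
```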

Four major reservations have been raised against the deployment of fully autonomous lethal weapons. The first is the broad claim that “machine programming will never reach the point of satisfying the fundamental ethical and legal principles required to field a lawful autonomous lethal weapon.” This would be true if human decisions in complex cases turn out to be non-algorithmic, as more than one moral theory suggests. If the weighing and balancing of often-competing factors, empirical and moral, occurs within a framework of rules but is not determined by them, then all-things-considered judgments would be irreducibly case-specific. If so, even programs capable of learning from feedback and other evidence would never fully replace human decision-making and, as Kenneth Anderson and Matthew Waxman put it, no autonomous system could ever pass an “ethical Turing Test.”

To the extent that this is an empirical question, it is safe to say that we are far from knowing the answer. Until we do, we should refrain from deploying fully autonomous weapons and ensure that a human being remains in ultimate command.

For some critics of these systems, however, the ultimate issue is not empirical but moral. It is per se wrong, they argue, to remove human beings from the decision. We are more than rational calculators. Our ability to experience pleasure and pain, to understand the sentiments of others, to feel empathy and compassion—these are features of our humanity that we must bring to bear on our practical judgments if they are to be adequate to the full range of moral claims. We have no reason to believe that any man-made system will ever share these aspects of our inner life. If not, it is a moral mistake to delegate life and death decisions to such a system.

Anderson and Waxman’s rejection of this moral argument is instructive but not dispositive. It is true, as they say, that we probably will turn over more and more functions with life and death implications to autonomous machines as their capacities increase and that our basic notions about decision-making will evolve accordingly. The correct moral question is not whether machines are just the same as humans but whether they can meet the appropriate standards of conduct—for lethal autonomous weapons, the laws of war, not some abstract moral theory. “What matters morally,” they conclude, “is the ability consistently to behave in a certain way and to a specified level of performance. The ‘package’ it comes in, machine or human, is not the deepest moral principle.”

To bolster their case, Anderson and Waxman offer examples of activities—automated robotic surgery and self-driving vehicles—where the concept of attaining a “specified level of performance” makes intuitive sense. If this technology attains better results in delicate operations ranging from brain and prostate surgery to reattaching severed limbs, then using it is certainly defensible. At some point, not using it might come to be considered malpractice. After all, measures such as post-operative survival, complications, and recovery of function are objective. And considering the bedside manner of many surgeons, their patients might prefer a mute machine.

As we have seen, self-driving cars raise complex moral issues. Nonetheless, as with surgical techniques, we can measure their performance against widely accepted standards. If these vehicles get into fewer accidents and kill or injure fewer people than their human-operated counterparts, the prima facie case for permitting or even preferring them would be strong. At some point, drivers with safety records worse than that of autonomous vehicles might be required to undergo further training or even to relinquish the wheel.
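As a purely hypothetical illustration of such a comparison, with every figure invented, the widely accepted standard might be expressed as injury-causing crashes per million miles driven for each fleet.

```python
# Hypothetical sketch: compare crash rates per million miles driven.
# All figures are invented for illustration.

human_crashes, human_miles = 4_800, 1_200e6           # human-driven baseline
autonomous_crashes, autonomous_miles = 210, 90e6      # autonomous fleet

human_rate = human_crashes / (human_miles / 1e6)
autonomous_rate = autonomous_crashes / (autonomous_miles / 1e6)

print(f"Human-driven: {human_rate:.2f} crashes per million miles")
print(f"Autonomous:   {autonomous_rate:.2f} crashes per million miles")
print("Prima facie case for permitting autonomous vehicles"
      if autonomous_rate < human_rate else "No safety advantage shown")
```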

These examples raise a philosophical question that goes back to Plato’s “Republic”: is it correct to regard all human activities as technical skills? Plato offers powerful arguments for a negative answer. Moral agency may involve certain skills, such as the capacity to reason well, but it is more than the ensemble of these skills. In this respect, the practice of law is more like morality than surgery.

The application of broad legal principles to specific cases is far from a mechanical process and involves more than logical deduction. Often it involves choices between analogies: if the application of the law to cases A and B is settled, is the disputed case C more like A or like B? Non-algorithmic insight may be the best way of making such choices, even when it cannot persuade those who see things differently. And the capacity for seeing things clearly often requires the ability to feel as well as think. The distinctiveness of human agency is built into our understanding of moral conduct.

A third argument against autonomous weapons is that they weaken systems of accountability by defusing responsibility. If mistakes occur that violate the laws of war or widely held moral norms, who is responsible—soldiers on the battlefield, commanders who chose to deploy the weapon, the designer who programmed it, the law-makers who authorized and funded it? All of the above, or none of the above? Singling out any link in this chain seems unfair. But if everyone shares responsibility in theory, no one will be held responsible in practice.

Anderson and Waxman worry that focusing on individual accountability will end up blocking the development of systems that reduce actual harms to soldiers and civilians. And besides, they argue, the laws of armed conflict are enforced principally against state actors, not individuals. Analogies to criminal law are at best misleading.

The counterargument is that prohibitions against the most egregious violations of war conventions have been enforced against individuals, to great effect. The Nuremberg trials held individuals responsible for specific decisions, not the entire German nation. The German people decided for themselves to accept a measure of collective responsibility, but as a moral, not a legal, matter. The U.S. military holds individuals on the front line and up the chain of command responsible for acts of commission, and for acts of omission when the duty to act was clear or when the results of inaction were reasonably foreseeable. The military’s practice reflects not only moral intuitions about the nature of responsibility but also a pragmatic judgment of how best to deter unwanted activities.

Consider the analogy of the corporations that paid large fines for misconduct during the run-up to the Great Recession. It did not escape citizens’ attention that the executives who authorized and presided over the culpable behavior did not pay an individual price: most kept their jobs, and many of them actually received generous bonuses. If the law had made it clear in advance that they would be held personally responsible, they would have had a powerful incentive to stay on the right side of what was, after all, a pretty bright line.

The final main argument against lethal autonomous weapons is that reducing the risks faced by human soldiers weakens an important disincentive to the use of armed force. Anderson and Waxman dismiss this claim on the ground that it treats soldiers as mere means to pressure political leaders. If this tactic fails and conflict ensues, soldiers will die whose lives could have been spared if autonomous weapons had been deployed.

This argument is not without force, but it cuts both ways. In the wake of the Vietnam War, the United States abandoned the military draft in favor of the All-Volunteer Force (AVF). If we had adopted an AVF in the 1950s, the war in Vietnam might have lasted as long as the war in Afghanistan—seventeen years, with no end in sight. A war fought by draftees is sustainable if the American people remain united in its support, as they did throughout World War II. But when substantial portions of the people come to question a war’s practicality or morality, controversies about the draft will put pressure on civilian leaders to change course. Many people believe (I’m one of them) that this direct nexus between war and the people’s will is good for democracy. When the armed forces become remote from the experience of ordinary citizens and their elected representatives, leaders can afford to downplay the absence of popular authorization for the use of force.

Although there are many ethical reasons to proceed cautiously with the deployment of lethal autonomous weapons, there are practical considerations on the other side that may prove decisive. As retired USAF Gen. Charles Dunlap pointed out during a recent Brookings panel, the United States is not acting in a vacuum. We have adversaries, and they get a vote. If they rush to deploy these weapons, we may have no choice but to respond.

There may be a compromise that makes sense, all things considered. The arguments in favor of installing and using Israel’s Iron Dome are compelling, mostly because the system is entirely defensive. It kills missiles, not people, except by rare accident. Failing to develop the autonomous weapons that can protect our armed forces against those of an adversary makes no sense from either a moral or military point of view.

The line between offense and defense is not so clear, of course. A standard objection to anti-ballistic missile systems is that they may encourage the nations that deploy them to believe that they can undertake offensive actions with relative impunity. Still, leaders—especially in democracies—will have a hard time explaining their failure to take feasible steps to protect their armed forces and civilian populations. Appeals to abstruse deterrence theories will fall flat. If North Korea demonstrates the capacity to stage a ballistic missile attack on the United States, then accelerating the development of an effective ABM system is not optional. Because promoting the public’s safety and security is the first duty of political leaders, failing to do so is a breach of their moral compact with the people.

Over the next decade, events will determine the extent to which the responsibility to defend the American people drives the development of defensive autonomous weapons. We will also find out the extent to which ethical reservations about the development and deployment of these weapons for offensive purposes shape the next phase of our security strategy.

Conclusion

In another essay, Darrell M. West discusses the norms and practices corporations can use to help guide their development of new AI technologies. It is an impressive list, the adoption of which would reflect a high level of ethical self-awareness among some of America’s largest and most important companies.

Self-regulation is a necessary component of a system of ethical guidance for AI, but the case studies discussed in this paper suggest that it will not be sufficient. National defense is a quintessentially public function, and the decision to deploy AI-directed weapons must be made through an accountable political process. Facial recognition systems raise policy issues that cannot be relegated to the private sector. Which uses of these systems breach norms of privacy and anonymity? What is their evidentiary status in criminal trials? Does their deployment constitute an unacceptable concentration of power, whether in the private or the public sector? It is conceivable that the makers of autonomous vehicles and their associated guidance systems might adopt voluntary standards. But even here, history suggests that agreements of this sort will need a public backstop to be effective.

A thoughtful private-sector leader concurs. While acknowledging the progress the private sector has made toward developing principles and practices of self-regulation, Brad Smith regards this effort as inherently limited. “If there are concerns about how a technology will be deployed more broadly across society,” he declares, “the only way to regulate this broad use is for government to do so.”

In support of this conclusion, Microsoft’s president offers three reasons. First, in a democratic republic, self-regulation addressing matters of broad public concern is an inadequate substitute for laws ratified by the people’s elected representatives.

Second, competitive dynamics are likely to undermine self-regulatory regimes. Even if some companies adhere to voluntary standards, the problem will remain if others refuse to go along or break ranks, which they will have a powerful incentive to do. Only the force of law can provide a level playing field so that the practices of ethical actors are not nullified by the self-interest of others.

Third, there are many markets—autos, air safety, foods, and pharmaceutical products, among others—where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. “A world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards,” Smith insists.

In the past, our society has allowed new technologies to diffuse widely without adequate ethical guidance or public regulation. As revelations about the misuse of social media proliferate, it has become apparent that the consequences of this neglect are anything but benign. If the private and public sectors can work together, each making its own contribution to an ethically aware system of regulation for AI, we have an opportunity to avoid past mistakes and build a better future.

 


Footnotes
    1. Microsoft, The Future Computed: Artificial Intelligence and its Role in Society, 2018.
    2. Google, “Responsible Development of AI,” 2018.
    3. Brad Smith, “Facial recognition technology: The need for public regulation and corporate responsibility,” https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/
    4. Bryant Walker Smith, “Automated Driving and Product Liability,” 2017 Mich. St. L. Rev. 1.
    5. Summary based on Shane Harris, “Out of the Loop: The Human-Free Future of Unmanned Aerial Vehicles,” Koret-Taube Task Force on National Security and Law, 2012.
    6. See Kenneth Anderson and Matthew C. Waxman, “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can,” American University Washington College of Law Research Paper No. 2013-11, for discussion of this argument and the three that follow.
    7. For a forceful statement of this position, see Peter Asaro, “On banning autonomous weapons systems: human rights, automation, and the dehumanization of lethal decision-making,” International Review of the Red Cross 94:886 (Summer 2012).
    8. Darrell M. West, “How to Address AI Ethical Dilemmas,” Brookings Institution paper, September 13, 2018.