How can international law regulate autonomous weapons?


Artificial intelligence (AI) and machine learning are rapidly entering the arena of modern warfare. This trend presents highly complex challenges for policymakers, lawyers, scientists, ethicists, and military planners, as well as for society at large.

Some militaries are already far advanced in automating everything from personnel systems and equipment maintenance to the deployment of surveillance drones and robots. Some states have even deployed defensive systems (like Israel’s Iron Dome) that can intercept incoming missiles or torpedoes faster than a human could react. These weapons came online after extensive review of their conformity with longstanding principles of the law of armed conflict, including international humanitarian law. Those principles include the ability to hold individuals and states accountable for actions that violate norms of civilian protection and human rights.

Newer capabilities in the pipeline, like the U.S. Defense Department’s Project Maven, seek to apply computer algorithms based on “biologically inspired neural networks” to quickly identify objects of interest to warfighters and analysts within the mass of incoming data. Applying such machine learning techniques to warfare has prompted an outcry from over 3,000 employees of Google, which partners with the Department of Defense on the project.
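To make concrete what this kind of object-identification step looks like, here is a minimal, illustrative sketch in Python. It uses a generic, publicly available detector from the open-source torchvision library and a hypothetical input file name; Project Maven’s actual models, data, and pipeline are not public, and nothing here reflects them.

```python
# Illustrative sketch only: a generic, publicly available object detector applied to a
# single image, to show the kind of "flag objects of interest" step described above.
# Assumes a recent torch/torchvision install; "frame.jpg" is a hypothetical input frame.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load a pretrained detector (trained on everyday COCO categories, not military data).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("frame.jpg"), torch.float)  # hypothetical image file

with torch.no_grad():
    detections = model([image])[0]  # dict of boxes, labels, and confidence scores

# Keep only detections the model is reasonably confident about.
CONFIDENCE_THRESHOLD = 0.6
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score >= CONFIDENCE_THRESHOLD:
        print(f"class={label.item()} score={score.item():.2f} box={box.tolist()}")
```

The scores such a detector emits are probabilities, not certainties, which is one reason human analysts remain in the loop to confirm or discard what the software flags.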

These latest trends are intensifying an international debate on the development of weapons systems that could have fully autonomous capability to target and deploy lethal force—in other words, to target and attack in a dynamic environment without human control. The question for many legal and ethical experts is whether and how such fully autonomous weapons systems can comply with the rules of international humanitarian law and human rights law. This was the subject of the fifth annual Justice Stephen Breyer lecture on international law, held at Brookings on April 5 in partnership with the Municipality of The Hague and the Embassy of The Netherlands.

Regulating the Next Arms Race

The prospect of developing fully autonomous weapons is no longer a matter of science fiction and is already fueling a new global arms race. President Putin famously told Russian students last September that “whoever becomes the leader in this sphere [of artificial intelligence] will become the ruler of the world.” China is racing ahead with an announced pledge to invest $150 billion in the next few years to ensure it becomes the world’s leading “innovation centre for AI” by 2030. The United States, still the largest incubator for AI technology, has identified defending its public-private “National Security Innovation Base (NSIB)” from intellectual property theft as a national security priority.

As private industry, academia, and government experts accelerate their efforts to maintain the United States’ competitive advantage in science and technology, further weaponization of AI is inevitable. A range of important voices, however, is calling for a more cautious approach, including an outright ban on weapons that would be too far removed from human control. These include leading scientists and technologists like Elon Musk of Tesla and Mustafa Suleyman of Google DeepMind. They are joined by a global coalition of nongovernmental organizations arguing for a binding international treaty banning the development of such weapons.

Others suggest that a more measured, incremental approach under existing rules of international law should suffice to ensure humans remain in the decisionmaking loop of any use of these weapons, from design through deployment and operation.

At the heart of this debate is the concept that these highly automated systems must remain under “meaningful human control” to comply with humanitarian legal requirements such as distinction, proportionality, and precautions in attack to protect civilians. Where along this spectrum should responsibility for errors of design and use lie: with 1) the software engineers writing the code that tells a weapons system when and against whom to target an attack, 2) the operators in the field who carry out such attacks, or 3) the commanders who supervise them? How can testing and verification of increasingly autonomous weapons be handled in a way that creates enough transparency, and some level of confidence, to reach international agreements that avoid worst-case scenarios of mutual destruction?

Beyond the legal questions, experts in this field are grappling with a host of operational problems that bear directly on responsibility for legal and ethical design. First, military commanders and personnel must know whether an automated weapon system is reliable and predictable in its relevant functions. Machine learning, by its nature, cannot guarantee how an advanced autonomous system will behave when it encounters a new situation, including how it will interact with other highly autonomous systems. Second, machines tasked with distinguishing combatants from civilians must overcome inherent biases in how visual and audio recognition systems perform in real time. Third, whether computers can not only collect data but also analyze and interpret them correctly remains an open question.
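The reliability point above can be illustrated with a deliberately toy sketch: a model trained under one set of conditions can remain highly confident while becoming no better than chance once those conditions shift. The data below is synthetic and the classifier generic (scikit-learn); the example makes no claim about any real sensor or weapons system.

```python
# Illustrative sketch only: a toy demonstration that a model can be confidently wrong
# when the world it was trained on shifts. Synthetic data; no real system is modeled.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Training conditions: two well-separated clusters the model learns to tell apart.
X_train, y_train = make_blobs(n_samples=1000, centers=[(-2, 0), (2, 0)],
                              cluster_std=0.5, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# "New situation": the same two classes, but the whole scene has drifted
# (different lighting, terrain, or sensor angle, crudely modeled as a shift).
X_shifted, y_shifted = make_blobs(n_samples=1000, centers=[(2, 0), (6, 0)],
                                  cluster_std=0.5, random_state=0)

accuracy = (model.predict(X_shifted) == y_shifted).mean()
mean_confidence = model.predict_proba(X_shifted).max(axis=1).mean()

print(f"accuracy on shifted data:  {accuracy:.2f}")        # roughly 0.5 -- no better than chance
print(f"mean reported confidence: {mean_confidence:.2f}")  # stays close to 1.0
```

No simple confidence threshold fixes this: the model has no way of knowing that the world has changed, which is why testing, verification, and human judgment carry so much weight in this domain.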

The creation of distributed “systems of systems” connected through remote cloud computing further complicates how to assign responsibility for attacks that go awry. Given the commercial availability of sophisticated technology at relatively low cost, the ease of hacking, deception, and other countermeasures by state and non-state actors is another major concern. Ultimately, as AI is deployed to maximize the advantage of speed against comparably equipped militaries, we may enter a new era of “hyperwar,” in which keeping humans in the loop creates more rather than fewer vulnerabilities to the ultimate warfighting aim.

Can Governments Build Consensus in Geneva?

This week, governmental experts and officials are meeting in Geneva under the auspices of the Convention on Certain Conventional Weapons (CCW) to continue the search for consensus on next steps in regulating this emerging class of autonomous weapons. Unfortunately, intensifying geopolitical competition makes this harder. Many experts believe states like China and Russia would never abide by treaty obligations regulating AI, and that the United States should therefore aim instead for soft norms that would, at best, deter rather than outright prohibit the weaponization of AI.

Professor Mary Ellen O’Connell, who delivered the keynote Breyer lecture, argued strongly that AI is driving a new revolution in military and civilian affairs that makes it nearly impossible to delimit the battlefield and, consequently, to determine when the laws of armed conflict apply. Such a hybrid situation demands the application of binding customary international law against actions that would violate “the principles of humanity and the dictates of public conscience,” otherwise known as the Martens Clause. Charles Dunlap, former deputy judge advocate general of the Air Force, argued for a more incremental approach that would lead toward new protocols under the CCW for testing and evaluating new autonomous weaponry, consistent with the weapons-review obligations of Article 36 of Additional Protocol I. Jeroen van den Hoven, professor of ethics and technology at Delft University of Technology, offered a European perspective in which ethical and legal precepts would be baked into autonomous weapons throughout the design and development stages.

Given the rapid pace of technological development, in which substantive rules regulating specific innovations may become outdated before they can take effect, the priority should be to strengthen ongoing processes of review, testing, verification, inspection, transparency, and confidence-building. Many of these elements have proven effective in other fields of arms control. Formal and informal talks should proceed quickly at bilateral and multilateral levels to determine whether mutual strategic restraint could help avert unintended, worst-case scenarios and serve the interests of the key players.

But will this be enough? China’s position in the emerging race for technological superiority is particularly worrisome. It is strategically engaged in procuring the intellectual know-how to make leaps in AI, both overtly (through growing investments in early-stage technology companies in the United States) and covertly (through years of intellectual property theft and industrial espionage). Moreover, it is using its mass collection of data on its citizens’ behavior to develop and deploy new techniques of audio and video recognition that could be used to control both internal and external enemies.

This “authoritarian advantage” puts democracies’ more deliberative and public processes of decisionmaking to the test. To forestall an adversary’s first-mover advantage, like-minded states need to move much faster to adapt current rules of international law and arms control, and develop new ones, to constrain an all-out AI arms race.