Can existing laws cope with the AI revolution?

A line of Lexus SUVs equipped with Google self-driving sensors awaits test riders during a media preview of Google's prototype autonomous vehicles in Mountain View, California, September 29, 2015. REUTERS/Elijah Nouvelage

As artificial intelligence spreads throughout society, policymakers face a critical question: Will they need to pass new laws to govern AI, or will updating existing regulations suffice? A recently completed study suggests that, for now, the latter is likely to be the case and that policymakers may address most of this technology’s legal and societal challenges by adapting regulations already on the books.

When a technology surpasses the ability of a law to govern a behavior, scholars refer to the resulting phenomenon as a “regulatory gap.” In such cases, the law fails to adequately account for the issues brought about by a technology. Consider, for instance, that in vitro fertilization allows for children to have distinct birth and biological mothers. The technology’s introduction caused a regulatory gap that forced policymakers to consider and manage new rights and responsibilities.

Given the increasing proliferation of AI, I recently carried out a systematic review of AI-driven regulatory gaps. My review sampled the academic literature on AI in the hard and social sciences and found fifty existing or future regulatory gaps caused by this technology’s applications and methods in the United States. Drawing on an adapted version of Lyria Bennett Moses’s framework, I then characterized each regulatory gap according to one of four categories: novelty, obsolescence, targeting, and uncertainty.

Significantly, of the regulatory gaps identified, only 12 percent represent novel challenges that compel government action through the creation or adaptation of regulation. Another 20 percent are cases in which AI has made or will make regulations obsolete. A further 26 percent are problems of targeting, in which regulations either are inappropriately applied to AI or miss cases to which they should apply. The largest group, at 42 percent, consists of gaps of uncertainty, in which a new technology is difficult to classify, creating a lack of clarity about how existing regulations apply.

Novelty. In cases of novel regulatory gaps, a technology creates behavior that requires bespoke government action; 12 percent of the identified gaps fall in this category. One example involves the Food and Drug Administration’s (FDA) standard for certifying the safety of high-risk medical devices as it applies to healthcare algorithms, also called black-box medicine.

AI-enabled analysis of x-rays, to take one example, holds great promise for improving the diagnosis of a wide variety of conditions. But when a black-box algorithm continually learns how to diagnose a condition from the characteristics of each particular patient, it becomes difficult, if not impossible, to conduct a randomized controlled trial, the method the FDA has historically used to validate medical devices. In addition, the agency’s validation procedures were not designed to test the safety of technologies that use massive flows of data to update their findings on a daily basis. Resolving this gap will likely require the FDA to adapt its standards for certifying this technology’s safety.

Obsolescence. In cases of obsolete regulatory gaps, a technology renders a regulation irrelevant or unenforceable, and 20 percent of the identified gaps fall in this category.

One example of how AI can render current laws obsolete involves insurance incentives for safe drivers. In 1988, California mandated a 20 percent discount on car insurance for drivers with a record of safe driving. But autonomous vehicles may complicate this incentive structure. If vehicles with autonomous capabilities come to account for a significant portion of cars on California roads, the incentive will no longer serve its purpose, since their owners could boast a safe driving record without having “driven” a single mile.

What was once an effective way of improving driver behavior is no longer relevant when the driving is done by computer algorithms that do not respond to the incentive of lower insurance premiums.

Targeting. Of the regulatory gaps identified, 26 percent are targeting gaps, which fall into two classes: over-inclusion and under-inclusion. Over-inclusion occurs when a regulation applies to a technology even though doing so does not advance the regulation’s goal. Under-inclusion is the opposite: including the technology would further the regulation’s goal, but it falls outside the regulation’s scope.

Consider, for example, how drunk driving laws over-include drivers of autonomous vehicles by failing to distinguish between a vehicle’s autonomous and non-autonomous capabilities. In Colorado, if an individual under the influence of alcohol uses a fully autonomous vehicle to get home and that vehicle malfunctions, resulting in a fatality, the “driver” may be held accountable for the death and judged to have been impaired at the time of the accident. Even though the person was not controlling the vehicle, Colorado law appears to make no distinction between the autonomous and human decision-making that preceded the accident.

Uncertainty. Uncertain regulatory gaps arise when a new technology is difficult to classify; they constitute 42 percent of the gaps identified.

Autonomous weapons provide a stark example of the difficulty of applying current laws to new AI systems. In 2012, the U.S. Department of Defense defined what it considers an autonomous weapon system in Directive 3000.09. Scholars argue that several systems in the country’s inventory qualify as autonomous under the government’s definition, but the government’s official position is that such weapons do not yet exist, which has opened a debate about the characteristics a weapon must have to qualify as autonomous. This lack of clarity about what constitutes an autonomous weapon has hampered multilateral efforts to control their use.

The way forward

One of this study’s main findings is that few, if any, of AI’s challenges to policy require an overhaul of government. Even for gaps labeled as novel, where policymakers must provide bespoke attention, adaptations to existing policies can close the gap. An implication of this work is that the status quo of U.S. policy is relatively well suited to withstand the social challenges generated by AI. Based on this systematic review, it appears that the methods and applications of AI will not outpace the policymaking process to the point of requiring completely new approaches to the administration of government.

Regulating any technology is a process comparable to estimating the rules a society will need in an unknown version of the future. In many respects, AI represents a new paradigm, one in which governments must think about protecting society from breakthroughs that accomplish amazing feats. This research offers stakeholders reassurance that, despite AI’s remarkable capabilities, major changes to policy paradigms are not required to protect their constituents. But future applications and methods of this technology may tell a different story.

Dr. Carlos Ignacio Gutierrez is a Governance of Artificial Intelligence Fellow at the Sandra Day O’Connor College of Law.