Drones have revolutionized warfare and may soon transform civilian life, too. The machines have already been introduced into U.S. skies, patrolling the Mexican border and assisting with other law enforcement efforts. And Congress has voted to further expand the use of drones at home, directing the Federal Aviation Administration to relax restrictions on domestic drones by 2015.
But today’s drones are merely the “Model T” of robot technology. Today’s drones do not think, decide, and act on their own. In engineering speak, they are merely “automated.” Tomorrow’s drones are expected to leap from automation to “autonomy.” The difficult policy questions raised by today’s automated drones will seem pedestrian compared to the ones created by tomorrow’s technologies.
Today, humans are still very much “in the loop.” Humans generally decide when to launch a drone, where it should fly, and whether it should take action against a suspect. But as drones develop greater autonomy, humans will increasingly be “out of the loop.” Human operators will not be necessary to decide when a drone (or perhaps a swarm of microscopic drones) takes off, where it goes, and how it acts.
Regulations for today’s airborne drones should be crafted with an eye toward tomorrow’s technologies. Policymakers must better understand how the next generation of autonomous systems will differ from today’s merely automated machines. As we discuss, language useful to the policymaking process has already been developed in the same places as drones themselves — research and engineering laboratories across the country and around the globe. We introduce this vocabulary here to explain how tomorrow’s drones will differ and to suggest possible approaches to regulation.
Autonomy is no longer solely a feature of humans. Whether it is a desirable quality for machines will be among the most important policy questions of the coming years.