One might say that “autonomous” vehicles are already in operation at most
airports. Pilots take care of takeoff and landing, but on the less challenging
portion of the trip, for which people have a harder time maintaining alertness,
an autopilot is often in charge unless something unusual happens. Japanese
subway systems also run on standard timetables, with sensors that detect
people standing in the doors. In both systems, humans retain the option to
take over, reset, or override machine decisions, so ultimately these are shared
autonomy systems with a sliding scale of human and machine decision-making.

I have heard people say that one of the reasons that flying robots have
become so popular is that they have so little to collide with in the sky. In
other words, machine perception is far from perfect, but if a system stays
far away from the ground, there are few obstacles and it can rely minimally
on human backup. But what happens in the case of autonomous cars
navigating obstacle-rich environments surrounded by cars full of people?
Such environments highlight the importance of design considerations that
enable—and regulatory policies that require—such vehicles to learn, follow,
and communicate the rules of the road in a socially appropriate and effective
manner.

Autonomous cars have made rapid inroads over the past few years. Their
immediate benefits would include safety and convenience for the human
passenger; imagine not having to worry about finding a parking space while
running an errand, because the car can park itself. Their habitual use would
also drive infrastructural changes: parking lots could sit farther from venues,
and traffic rules might be more universally followed. In some ways,
semi-autonomous systems offer policymakers a clear shortcut on liability
considerations: by keeping a human in the loop, fault becomes easier to assign
when bad decision-making leads to an accident.
What will become increasingly tricky, however, is shifting the distribution
of decision-making, such that the vehicle is in charge not just of the working
mechanics (extracting energy from its fuel, translating steering-wheel motion
into wheel angle) but also of the driving itself (deciding when to accelerate,
or who goes first at an intersection). We already have cars
with anti-lock brakes, cruise control, and distance-sensing features. The next
generation of intelligent automobiles will shift the ratio of shared autonomy
from human-centric to robot-centric. Passengers or a human conductor
will provide higher-level directives (choosing the destination, requesting a
different route, or asking the car to stop abruptly when someone notices a
restaurant they might like to try).

Vehicle technology should be designed to empower people. It might be
tempting to look at the rising statistics of accidents due to texting while
driving and ban humans from driving altogether, but partnering with
technology might provide a better solution. If the human driver is distracted,
a robotic system might be able to smooth trajectories and maintain safety, much
like a surgical robot can remove the tremors of a human hand. The car could
use pattern recognition techniques or even a built-in breathalyzer to detect
inebriation with high probability and make sure the passengers inside make
it home safely. Even without full driving autonomy, the car might simply
disable itself until the driver is in a better state to drive.

Additional safety considerations include the accuracy and failure modes of
vehicle perception systems, which must meet high standards to ensure that
automated systems actually supply an overall benefit. Companies
manufacturing the vehicles should be regulated as with any consumer
technology, but customers may control local variables. If the passengers are
in a rush, will they be able to turn up the car’s aggression, creeping out into
the intersection even though that Honda Civic probably arrived a second or so
earlier?
With a user tweaking local variables or reprogramming certain driving
strategies, who the technology’s creator really is may become a bit of a moving
target. Accidents will happen sometimes, and although cars cannot be sued in
court, a manufacturer can. Thus, policymakers must rethink liability concerns
with an eye toward safety and societal benefit.

Another interesting consideration is the social interface between autonomous
cars and cars with human drivers. Particular robotic driving styles might
win readier human acceptance than others. Would passengers be upset
if their cars insisted on following the posted speed limits instead of driving
at the prevailing speed of traffic? If we are sharing the road, would we want
them to be servile, always giving right of way to human drivers? These are
considerations we can evaluate directly using the tools of Social Robotics,
testing systems with varied behavioral characteristics with human participants.
Sometimes these studies have counterintuitive results: what was meant as
polite hesitance might be interpreted as a lack of confidence, causing other
drivers to question the autonomous car’s capabilities and feel less safe in its
vicinity. Traffic enforcement
for autonomous cars could be harsher in order to send a message to the
manufacturers, or more lenient because officers assume transgressions were
the result of computational errors, rather than intentional violations.

As the examples above highlight, when we put robots in shared human
environments, social attributions become relevant to the robot’s welcome and
effectiveness because of what they communicate and the reactions they evoke.
Pedestrians frequently make eye contact with drivers before crossing a road.
An autonomous car should be able to signal its awareness too, whether by
flashing its lights or with some additional interface for social communications.
A robot should also be able to communicate with recognizable motion
patterns. If a driver cannot understand that a robotic vehicle wants to pass
her on the highway, she might not shift lanes. If the autonomous vehicle
rides too close behind someone, that person might get angry and try to
block its passage by traveling alongside a vehicle in the next lane. Such driver
behavior in isolation might seem irrational or overly emotional, but it actually
reflects known social rules and frameworks that machines will need to at least
approximate before they can hope to successfully share our roads.