Commentary

In the Loop? Armed Robots and the Future of War

Peter W. Singer, Former Brookings Expert; Strategist and Senior Fellow, New America

January 28, 2009

Something big is going on in the history of war, and maybe even humanity itself. The U.S. military went into Iraq with just a handful of drones in the air and zero unmanned systems on the ground, none of them armed. Today, there are over 5,300 drones in the U.S. inventory and roughly another 12,000 unmanned systems on the ground. And these are just the first generation, the Model T Fords compared to what is already in the prototype stage. This is what is happening now. Peering forward, one Air Force lieutenant general forecast that “given the growth trends, it is not unreasonable to postulate future conflicts involving tens of thousands.”

For my book Wired for War, I spent the last several years trying to capture this historic moment, as robots begin to move into the fighting of our human wars. The book features stories and anecdotes about everyone from robotics scientists and the science fiction writers who inspire them to 19-year-old drone pilots and the Iraqi insurgents they are fighting. The hope wasn’t just to take the reader on a journey to meet this new generation of warriors, both human and machine, but also to explore the fascinating, and sometimes frightening, political, economic, legal, and ethical questions that our society had better start facing about how our wars will be fought and who will fight them. In other words, “What happens when science fiction becomes battlefield reality?”

Despite all the enthusiasm in military circles for the next generation of unmanned vehicles, ships, and planes, there is one question that people are generally reluctant to talk about. It is the equivalent of Lord Voldemort in Harry Potter, the issue That-Must-Not-Be-Discussed. What happens to the human role in war as we arm ever more intelligent, more capable, and increasingly more autonomous robots?


A Fact of Life

When this issue comes up, both specialists and military folks tend to either change the subject or speak in absolutes. “People will always want humans in the loop,” says Eliot Cohen, a noted military expert who served in the State Department under President George W. Bush. An Air Force captain similarly writes in his service’s professional journal, “In some cases, the potential exists to remove the man from harm’s way. Does this mean there will no longer be a man in the loop? No. Does this mean that brave men and women will no longer face death in combat? No. There will always be a need for the intrepid souls to fling their bodies across the sky.”

As Noah Shachtman, editor of Wired magazine’s military reporting, explains, people speak in such absolute terms and use the phrase that “man will always stay in the loop” so often that it sounds more like brainwashing than actual analysis. “Their mantra is a bit like the line they repeat again and again in the movie The Manchurian Candidate: ‘Sergeant Shaw is the kindest, bravest, warmest, most wonderful human being.’” But he laughs that the constant repetition of these claims is pretty understandable. “It helps keep people calm that this isn’t the Terminators.” More seriously, he explains, “The core competency in the military is essentially shooting and blowing up things. So, no one is eager to say outsource that to a bunch of machines.”

So how are we to weigh this issue if we do treat it seriously? First, all the rhetoric ignores the reality that humans started moving out of “the loop” of war long before robots made their way onto battlefields. As far back as World War II, the Norden bombsight handled calculations of height, speed, and trajectory too complex for a human alone, automatically deciding when to drop a bomb from a B-17. By the time of the first Gulf War, Captain Doug Fries, a radar navigator, could write this description of what it was like to bomb Iraq in his B-52: “The navigation computer opened the bomb bay doors and dropped the weapons into the dark.”
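To give a sense of what the bombsight was automating, here is a deliberately oversimplified sketch, in Python, of the release-point problem. It ignores air resistance, wind, and everything else the real Norden corrected for, and the altitude and speed in the example are illustrative numbers, not historical ones.

    import math

    def release_distance_m(altitude_m: float, ground_speed_mps: float, g: float = 9.81) -> float:
        """Horizontal distance short of the target at which a bomb must be released,
        in a no-wind, no-drag world (the real Norden corrected for both)."""
        fall_time_s = math.sqrt(2.0 * altitude_m / g)  # time for the bomb to fall to the ground
        return ground_speed_mps * fall_time_s          # forward travel of the bomb during that fall

    # A bomber at roughly 6,000 m altitude and 70 m/s ground speed would have to release
    # about 2.4 km before the target, with only seconds to get it right.
    print(round(release_distance_m(6000, 70)))  # ~2448

Even this toy version is not something a crew under fire could work out by hand in the moment, which is the point.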

The trend toward growing computer autonomy has also been in place at sea since the Aegis computer system was introduced in the 1980s. Designed to defend U.S. Navy ships against missile and plane attacks, the system operates in four modes (sketched in code below):

  1. Semi-Automatic, in which humans work with the system to judge when and at what to shoot;
  2. Automatic Special, in which human controllers set the priorities, such as telling the system to destroy bombers before fighter jets, but the computer decides how to do it;
  3. Automatic, in which data goes to human operators in command but the system works without them; and
  4. Casualty, in which the system just does what it calculates is best to keep the ship from being hit.
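To make the differences between these modes concrete, here is a minimal sketch in Python of the escalating levels of autonomy. It assumes nothing about the real Aegis software; the mode names, the engages function, and its parameters are purely illustrative.

    from enum import Enum, auto
    from typing import Optional

    class AegisMode(Enum):
        """Illustrative labels for the four modes described above."""
        SEMI_AUTOMATIC = auto()     # humans work with the system to judge when and at what to shoot
        AUTOMATIC_SPECIAL = auto()  # humans set the priorities; the computer decides how to meet them
        AUTOMATIC = auto()          # data flows to the human operators, but the system works without them
        CASUALTY = auto()           # the system simply does what it calculates is best to save the ship

    def engages(mode: AegisMode, computer_recommends_fire: bool,
                human_input: Optional[bool]) -> bool:
        """Return True if this toy model would fire; human_input of None means the operator said nothing."""
        if not computer_recommends_fire:
            return False
        if mode is AegisMode.SEMI_AUTOMATIC:
            return human_input is True       # an affirmative human judgment is required
        return human_input is not False      # elsewhere the human role shrinks to a veto

    # In Automatic mode, an un-exercised veto lets the machine's judgment stand.
    print(engages(AegisMode.AUTOMATIC, computer_recommends_fire=True, human_input=None))  # True

The only mode that demands a positive human decision is the first; in the others, the human contribution reduces to not saying no.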

Arguments With the Machine

Humans can override the Aegis system in any of its modes, but experience shows this is often beside the point, sometimes with tragic consequences.

The most notable of these came on July 3, 1988, when the U.S.S. Vincennes was patrolling in the Persian Gulf. The ship had been nicknamed “Robo-cruiser,” both because of the new Aegis radar system it was carrying and because its captain had a reputation for being overly aggressive. That day, the Vincennes’s radars spotted Iran Air Flight 655, an Airbus passenger jet. The jet was on a consistent course and speed and was broadcasting a radar and radio signal that showed it to be civilian. The automated Aegis system, though, had been designed for managing battles against attacking Soviet bombers in the open North Atlantic, not for dealing with skies crowded with civilian aircraft like those over the Gulf. The computer system registered the plane with an icon on the screen that made it seem to be an Iranian F-14 fighter (a plane half the size), and hence an “Assumed Enemy.”

Even though the hard data were telling the human crew that the plane wasn’t a fighter jet, they trusted what the computer was telling them more. Aegis was in Semi-Automatic mode, giving it the least amount of autonomy. But not one of the 18 sailors and officers on the command crew was willing to challenge the computer’s wisdom. They authorized it to fire. (That they even had the authority to do so without seeking permission from more senior officers in the fleet, as any other ship would have had to, was only because the Navy had greater confidence in Aegis than in a human-crewed ship without it.) Only after the fact did the crew members realize that they had accidentally shot down an airliner, killing all 290 passengers and crew, including 66 children.

The tragedy of Flight 655 was no isolated incident. Indeed, much the same scenario was repeated just a few years ago, when U.S. Patriot missile batteries accidentally shot down two allied planes during the Iraq invasion of 2003. The Patriot systems classified the aircraft as Iraqi rockets, and there were only a few seconds to make a decision, so machine judgment trumped any human decision. In both of these cases, the human power “in the loop” was actually only veto power, and even that was a power that military personnel were unwilling to use against the quicker (and what they viewed as superior) judgment of a computer.

The point is not that the Matrix or Cylons are taking over, but rather that a redefinition of what it means to have humans “in the loop” of decision-making in war has long been under way, with the authority and autonomy of machines ever expanding. As we move towards more unmanned systems, there are myriad pressures to give war-bots greater and greater autonomy. The first is simply the push to make more capable and more intelligent robots. But as psychologist and artificial intelligence expert Robert Epstein notes, this comes with a built-in paradox.

“The irony is that the military will want it [a robot] to be able to learn, react, etc., in order for it to do its mission well. But they won’t want it to be too creative, just like with soldiers. But once you reach a space where it is really capable, how do you limit them? To be honest, I don’t think we can.”

Human Super-Vision?

Simple military expediency also widens the loop. To achieve any sort of personnel savings from using unmanned systems, one human operator has to be able to “supervise” (as opposed to control) a larger number of robots. For example, the Army’s long-term Future Combat Systems (FCS) plan calls for having two humans jointly supervise a team of 10 land robots. In this scenario, the humans would delegate tasks to increasingly autonomous robots, but the robots would still need human permission to fire weapons.

Researchers are finding, however, that humans have a hard time controlling multiple units at once (imagine playing five different video games at the same time). Even having human operators control two UAVs at a time rather than one reduces performance levels by an average of 50%. As a NATO study concluded, the goal of having one operator control multiple vehicles is “currently, at best, very ambitious, and, at worst, improbable to achieve.”

And this is with systems that aren’t shooting or being shot at. As one Pentagon-funded report noted, “Even if the tactical commander is aware of the location of all his units, the combat is so fluid and fast-paced that it is very difficult to control them.” So, a push is made to give even more autonomy to the machine.

Then there is the fact that an enemy is involved. If the robots aren’t going to fire unless a remote operator authorizes them to, then any foe need only disrupt that communication. The loop is the vulnerability to cut. Military officers respond to this problem by saying that, while they don’t like the idea of taking humans out of the loop, there has to be an exception, a backup plan for when communications are cut and the robot is “fighting blind.” So another exception is then made to the once absolute concept of the loop.

Even if the communications link is not broken, there are combat situations in which there is not enough time for the human operator to react, even if the enemy is not operating at digital speed. The C-RAM system deployed in Iraq and Afghanistan already shoots down incoming rockets and mortar rounds without human command, while a number of robot makers have added “counter-sniper” capabilities to their machines, enabling them to automatically track and target with a laser beam any enemy that shoots at them. But those precious seconds while the human decides whether or not to fire back could let the enemy get away. So, as one U.S. military officer observes, there is nothing technical to prevent one from rigging the machine to shoot something more lethal than light. “If you can automatically hit it with a laser range finder, you can hit it with a bullet.”

This creates a powerful argument for another exception to the rule that humans must always be “in the loop”: giving robots the ability to fire back on their own. This kind of autonomy is generally seen as more palatable than other types. “People tend to feel a little bit differently about the counterpunch than the punch,” notes Noah Shachtman. As Gordon Johnson of the Pentagon’s Joint Forces Command explains, such autonomy soon comes to be viewed as not only logical but quite attractive.

“Anyone who would shoot at our forces would die. Before he can drop that weapon and run, he’s probably already dead. Well now, these cowards in Baghdad would have to pay with blood and guts every time they shoot at one of our folks. The costs of poker went up significantly. The enemy, are they going to give up blood and guts to kill machines? I’m guessing not.”

Each exception, however, pushes one further and further from an absolute and instead down a slippery slope. And at each step, once robots “establish a track record of reliability in finding the right targets and employing weapons properly,” says John Tirpak, editor of Air Force Magazine, the “machines will be trusted.”

The reality is that the human location “in the loop” is already becoming, as retired Army colonel Thomas Adams notes, that of “a supervisor who serves in a fail-safe capacity in the event of a system malfunction.” Even then, he thinks the speed, confusion, and information overload of modern-day war will soon move the whole process outside of “human space.” He describes how the coming weapons “will be too fast, too small, too numerous, and will create an environment too complex for humans to direct.” As Adams concludes, the various new technologies “are rapidly taking us to a place where we may not want to go, but probably are unable to avoid.”

The final irony is that for all the claims by military, political, and science leaders that “humans will always be in the loop,” as far back as 2004 the U.S. Army was carrying out research on armed ground robots which found that “instituting a ‘quickdraw’ response made them much more effective than an unarmed variation that had to call for fires from other assets.” Similarly, a 2006 study by the Defense Safety Working Group, a body in the Office of the Secretary of Defense, discussed how the concerns over potential killer robots could be allayed by giving “armed autonomous systems” permission to “shoot to destroy hostile weapons systems but not suspected combatants.” That is, they could shoot at tanks and jeeps, just not the people in them. By 2007, the U.S. Army had solicited proposals for a system that could carry out “[f]ully autonomous engagement without human intervention.” The next year, the U.S. Navy circulated research on a “Concept for the Operation of Armed Autonomous Systems on the Battlefield.”

Perhaps most telling is a report that the Joint Forces Command drew up in 2005, which suggested autonomous robots on the battlefield will be the norm within 20 years. Its title was somewhat amusing, given the official line one usually hears on the issue of ensuring absolute human control of armed robots: “Unmanned Effects: Taking the Human Out of the Loop.”

So, despite what one article called “all the lip service paid to keeping a human in the loop,” the cold, hard, metallic reality is that autonomous armed robots are coming to war. They simply make too much sense to the people who matter. A Special Operations Forces officer put it this way:

“That’s exactly the kind of thing that scares the shit out of me. . . . But we are on the pathway already. It’s inevitable.”