America has a new corps of warriors fighting on its behalf. They have saved thousands of lives, defusing hundreds of bombs and IEDs in Iraq and taking out scores of terrorist leaders hiding in Afghanistan and Pakistan, including most recently two key Al Qaeda leaders. But they also have no pulse and have killed at least three people who the United States thought were Osama bin Laden, but who later turned out not to be.
Science fiction is coming true on our battlefields. The U.S. military went into Iraq with just a handful of robotic drones in the air and zero unmanned systems on the ground, none of them armed. Today there are more than 5,300 drones in the U.S. inventory and about another 12,000 on the ground. These are just the first generation, the Model T Fords, compared with what is coming next. And, yes, the tech industry term “killer application” doesn’t just describe what iPods did to the music industry. The prototypes of the next generation of unmanned systems don’t just pack a lethal armory of missiles, rockets and machine guns; they also make more and more of their own decisions, such as taking out targets on their own.
In all the tumult over a new president and a crashing economy, it is easy to miss that something big is going on today in the overall history of war, and even humanity.
A robotics revolution is at hand. But it is not the type in which you need worry about the governor of California showing up at your door a la the Terminator. Instead, when historians look back at this period, they will likely conclude that we are living at the start of the greatest revolution in warfare since the introduction of atomic bombs, maybe even bigger. Our new, unmanned systems don’t just affect the “how” of fighting wars, they are also starting to change the “who” of the fighting at the most fundamental level.
But while robots are proving to be valuable in moving scores of U.S. soldiers out of danger, this revolution is not turning out to be the easy, clean triumph of technology over humanity. The age-old fog of war isn’t being lifted by technology, as the acolytes of former Defense Secretary Donald H. Rumsfeld once argued would happen. We are gaining amazing capabilities but also experiencing new confusions and facing complex dilemmas.
For instance, the U.S. has carried out 38 officially reported drone strikes into Pakistan over just the last five months, seeking suspected terrorist camps and hiding sites. It is a remarkable capability that can destroy a hidden, implacable enemy without putting U.S. soldiers’ lives at risk. But the unmanned strikes across a state border also have created an issue that Pakistani Prime Minister Yousaf Raza Gillani describes as the biggest problem in relations between our countries, highlighted by the strikes last week that killed a reported 22 people.
In the long term, robots even affect the very human “war of ideas” so crucial to winning the fight against radical movements. What is the message we are sending with our “unmanning” of war, compared to how it is being received by people around the world?
Some people, such as one senior State Department official I interviewed, believe that it all “plays to our strength. The thing that scares people is our technology.” But when you speak with people in Lebanon, for instance, many share the feelings of a leading news editor there who described the growing use of unmanned systems as “just another sign of coldhearted, cruel Israelis and Americans, who are also cowards because they send out machines to fight us. … They don’t want to fight us like real men. … So we just have to kill a few of their soldiers to defeat them.” Or, as one American military analyst put it, “The optics of the situation could look really freaking bad. It makes us look like the ‘evil Empire’ [from ‘Star Wars’] and the other guys like the ‘Rebel Alliance,’ defending themselves versus robot invaders.”
But the diplomatic issues of robotics are even more complex. How do 20th century international laws of war apportion accountability with our 21st century technologies? Who is held accountable when a machine mistakenly hits the wrong target? The commander, the programmer, the inventor? And, again, these attacks today are with the early versions of unmanned systems. What of the next generation being developed? Of course, one scientist working for the Pentagon whom I interviewed answered that he could see no legal or ethical problem unless the machine kills the wrong people repeatedly. “Then it is just a product-recall issue.”
Our wars remain driven by human failings, and even the most sophisticated fighting machines aren’t going to replace humans any time soon. But that doesn’t mean we will be able to avoid the science-reality of technology such as warrior-robots or the science-fiction-like dilemmas they will bring to our battlefields. The future is already upon us.