When RoboCop Replaces Private Jackson

Peter W. Singer
Former Brookings Expert; Strategist and Senior Fellow, New America

January 8, 2012

Editor’s Note: In an interview with the Canadian International Council, Peter Singer discusses the future of robotic warfare and how this tech-focused shift will affect rules and moral considerations on the battlefield.

CANADIAN INTERNATIONAL COUNCIL (CIC): Are robots replacing soldiers?

PETER SINGER: Humans have been waging war in various forms for 5,000 years, so I don’t see that changing. There are studies that show that maybe we are seeing lower levels of conflict, or conflict moving from the inter-state to the domestic level, or however you want to put it. But war has been a reality of human existence from the very start, and I don’t see that fundamentally changing, whether it’s war using sticks and stones or war using Predators and PackBots.

I don’t think the human role in war is disappearing, even with this new advanced technology. War is still caused by human failings, or human greed, or human anger, or human hubris … There’s always a human cause. How we decide to utilize these war systems is still a decision made by humans for human reasons. So yes, we’ve carried out more than 300 airstrikes in Pakistan using drones. But the drones didn’t decide on their own – people did.

Finally, even though we’re seeing more and more of this new technology used, it’s not an exact replacement of humans. There are certain things humans are good at and certain things that robots are turning out to be better at, but there’s not an exact overlap. For the most part, the plan moving forward seems to be teaming humans and robots together. When you look at the military development programs in robotics, you see that a lot of the ideas parallel the relationship of the policeman and the police dog: Each on its own is not as good as the two are together. That seems to be informing our plan to build new robotics systems. There will still be humans in the battle space instructing robots to do things, but the robots won’t just blindly follow – they will have the autonomy to react. That said, they won’t be completely autonomous – they won’t call their own plays.

CIC: How is this shift affecting the moral considerations that, throughout history, have limited the outbreak and potential devastation of warfare?

SINGER: Our technologies are evolving at a faster pace than our human institutions are reacting to them. And that’s actually nothing new. There have always been technologies that have come along that we haven’t had the law and ethics to figure out, that we haven’t known how best to regulate and/or use, and that we haven’t had the political institutions to help us understand in terms of the impact they were having on what we were deciding to do within our governments. There is a history to this: This has happened with technologies before.

Consider strategic bombing. The first vision we have of someone using a flying machine in war actually comes out of science fiction. H.G. Wells was one of the first writers to talk about military airplanes. But then they became a reality. In the First World War, they were unarmed – they were just used for observation. Then people started to say, “Well, I can see the other side, and I want to do something about it.” So we ad-hoc armed them. After that, we started to specially design them to be armed – bomber planes, fighter planes, and the like. From that we got a whole host of new military doctrine questions – about how we could best use this new machine in war, but also about strategic, political, ethical, and legal issues that resonate beyond.

Everything changed. For instance, it used to be that the “home front” was the place behind the battle lines. The people there supported the war, but they weren’t involved in the war because they couldn’t be targeted. The development of military aircraft technology allowed people to target the home front – to take people in the home front and move them into the battle space. That raised ethical-legal questions about what you could target that no one had really explored before. It also raised strategic questions: Does having the option of strategic bombing make war more or less likely?

These debates were very intense in the 1920s and 1930s, and many of them are still not resolved. And, of course, the debates change not only because of what people decide is legal, but also depending on the capabilities of the technology. Our expectations of certain obligations in strategic bombing have changed greatly. During the Second World War, it took an average of 108 bomber missions to get one bomb to hit the intended target. As a result, we accepted a broader notion of collateral damage and civilian casualties than we would now. These days, one Predator can hit multiple targets with laser precision. Thus, when just a couple of people are killed accidentally, we consider it a tragedy. If the same number of casualties had been lost during the Second World War, we would have considered it an unimaginable success.

Expanding the battle space means that we can be at war from 7,000 miles away – we can fly a plane over Afghanistan while sitting in Nevada. Right now, what the western nations are wrestling with is that we are not the only users of unmanned systems, so how we utilize them is creating precedents that we may or may not be happy with in the future.

And it’s not just an ethical question – it actually has strategic impact (not just in disputes between nations, but also in how our own institutions understand when and where we go to war). Thus, the debate over how to utilize drones is actually a debate that needs to be had not just in international law, but also within our governments. Our executive branches have started to argue that they don’t need congressional or parliamentary approval to use force as long as there are no humans going into harm’s way. That was, for example, how the Obama administration justified its position this summer on the Libya operation. For the last part of it, we did not have human pilots going into harm’s way (we pulled out of that role after April 3), so we didn’t need congressional authorization. But even though we didn’t have human pilots going into harm’s way after April 3, we were still carrying out the kinetic part of the war. Our Predators actually struck 146 targets that summer.

So we have a new reality. For the last 5,000 years, the idea of engaging in combat and putting people at risk were one and the same. Now, we have a technology that disentangles the two. But our political system has not faced that reality.
