The British aircraft manufacturer BAE Systems promotes its Taranis drone with a video that plays up the drama: images of the swept-wing stealth aircraft flitting through the clouds, ominous background music and thunder, men in chemical suits amid futuristic control rooms.
Its mission is multifaceted, the website claims: conducting sustained surveillance, marking targets, gathering intelligence, deterring adversaries and carrying out strikes in hostile territory.
And, the manufacturer notes, in large type: “Controlled by a human operator,” alongside a photo of the man who was at the controls as the stealth drone made its inaugural test flight.
In the world of high-tech robotics, the idea that a human operator would be considered a selling point seems anachronistic. But a growing movement of diplomats, arms control campaigners and international humanitarian law experts has begun pressing the United Nations to move now to ban what they fear is the next step in mankind’s pursuit of ways to destroy his fellow man: killer robots that can be programmed in advance to recognize a target, then pull the trigger on their own without any human intervention.
It wouldn’t take much, said Thomas Nash, director of the London-based advocacy group Article 36, to turn the next generation of Taranis aircraft into autonomous killers with the addition of some software.
“All that would be needed is to add an algorithm to fire missiles on its own at targets recognized. We’re not far away from this,” Nash said. The same might be said, he added, of the X-47B, a stealth drone being developed in the United States by Northrop Grumman.
There’s no indication that such a program is being contemplated for drone aircraft, which have been used for more than a decade to take out al Qaida figures in the unreachable backwaters of Pakistan, Afghanistan and Yemen – always with pilots, sometimes thousands of miles away, making the final decisions on whether to fire.
Still, the prospect of a machine that can make those judgments on its own makes arms control specialists nervous. Who would be held accountable for an attack gone awry when no human being pulled the trigger?
“Existing international humanitarian law was created and is based on human judgments,” said Stephen Goose, executive director of the arms division of Human Rights Watch, the international advocacy group. “It certainly gave no consideration to the fact that it might be applied to a weapons system that starts making life-and-death decisions on its own.”
That concern is the driving force of a push within the United Nations for a global accord that would ban or restrict the use of lethal autonomous weapons systems, perhaps as soon as 2018. Unlike nuclear or chemical and biological weapons, which became the subject of restrictions only after they’d claimed tens of thousands of lives, killer robots are the subject of prohibition efforts before they become reality.
“We need to get a declaration that they will not be developed,” said Nash, a former New Zealand diplomat who also led the global campaign that resulted in the 2008 Dublin Convention on Cluster Munitions, which prohibits the production, use and stockpiling of cluster bombs – explosives that scattered small anti-personnel “bomblets” across a wide area. The weapons were developed to deny a battlefield to enemy soldiers by making it dangerous to enter an area, but the bomblets often lay unexploded until hapless civilians, children or farm animals wandered by and triggered them.
Despite the global ban, human rights groups often charge that cluster bombs are still in use, most recently by Saudi Arabia in Yemen.
A recent five-day gathering in Geneva under U.N. auspices to discuss restrictions on killer robots showed surprising unanimity among world diplomats over doing something to prevent the creation of fully autonomous weapons systems. But as the experience with cluster bombs, or with the world’s ban on land mines, shows, unanimity of thought doesn’t always lead to unanimity of action.
Even so, diplomats came away from the meeting optimistic. Even the representatives of China and Russia seemed to favor meaningful human control of killer robots, according to diplomats who took part in the informal discussions.
Among the countries whose statements seemed to favor a ban were major U.S. allies such as Japan, South Korea and Germany. Similar views were expressed by Austria, Ireland, Mexico, Pakistan, Switzerland, New Zealand, Ecuador, Cuba, Ghana and the influential International Committee of the Red Cross.
“The fact the discussions were not marked by ideological or political confrontations gives one hope that formal negotiations could be launched next year,” said an arms control ambassador, who spoke only on the condition of anonymity because of the sensitivity of the topic.
A Western European ambassador, who also asked not to be identified, was more cautious: “It’s still too early to say whether talks will be launched or not.”
Still, there seemed to be near-unanimity about the importance of heading off the idea of killer robots before they can be developed and deployed.
“We are wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns,” said Ahn Yongjip, South Korea’s arms control representative, in remarks typical of many.
But some, notably the United States and Israel, were unwilling to rule out autonomous weapons systems entirely.
“It remains our view that it is premature to try and determine where these discussions might or should lead,” Michael D. Meier, the head of the U.S. delegation, said in a statement to the Convention on Certain Conventional Weapons, the international body that sponsored the discussion.
“The United States has a process in place, applicable to all weapon systems, which is designed to ensure weapons operate safely, reliably and are understood by their human operators,” he said. Additionally, he noted that a U.S. Defense Department directive has established “a framework for how the United States would consider proposals to develop lethal autonomous weapon systems.”
In a similar vein, Israel’s envoy Eitan Levon argued that nations had “to maintain an open mind regarding both potential risks as well as possible positive capabilities of future” lethal autonomous weapons systems.
Still, Goose, who co-founded the Campaign to Stop Killer Robots in 2013, said he took heart in the discussions that had taken place in Geneva. Next, he said, the 120 nations that have signed on to the Convention on Certain Conventional Weapons, which restricts the use of land mines, booby traps and incendiary weapons and requires the clearance of unexploded ordnance after a war, need to decide at their annual meeting in November whether they want to begin formal talks on prohibiting killer robots.
Among the considerations, Goose said, is whether a pre-emptive prohibition makes sense and what it means to have “meaningful human control over targeting and kill decisions.”
“If you require a meaningful human control it’s the same thing as calling for a prohibition,” he said.
Even the United States, which wants to leave the door open to acquiring killer robots, makes it clear that it isn’t developing such weapons, Goose said, a sign that nations are convinced that “life and death decisions on the battlefield should never be ceded to machines.”
There’s a precedent for pre-emptively banning weapons. In 1995, the Convention on Certain Conventional Weapons prohibited blinding laser weapons, something that had been only the stuff of science fiction.
Experts also said concerns about human rights issues argued for a pre-emptive ban.
“The question of machines determining whether people will live or die during war is the main issue to be addressed,” said Christof Heyns, the U.N.’s independent expert on extrajudicial executions. “A human being in the sights of a fully autonomous machine is reduced to being an object – being merely a target. A world where the function of pulling the trigger is delegated to machines is a world without hope.”
Amnesty International, the international human rights group, argues that a killer robot couldn’t comply with international law, including the requirement to distinguish between combatants and civilians and to evaluate the proportionality of an attack.
Jody Williams, a Vermont resident who won the Nobel Peace Prize in 1997 for her work for a global ban on land mines, has taken a prominent role in the Campaign to Stop Killer Robots.
“We reject the notion that killer robots are inevitable,” she said during the conference in Geneva. “They are only inevitable if those . . . who oppose lethal weapons without meaningful human control are willing to roll over and allow the not necessarily inevitable to become a deadly and terrifying reality.”
She pointed out that the current generation of drones is “operated by human beings in real time.”
“Remove the human operation of them, they become killer robots,” she said. “Pretty clear difference as far as I can see.”