‘Killer robots’ to be debated at UN
Autonomous killer robots do not currently exist but advances in technology are bringing them closer to reality
Killer robots will be debated during an informal meeting of experts at the United Nations in Geneva.
Two robotics experts, Prof Ronald Arkin and Prof Noel Sharkey, will debate the efficacy and necessity of killer robots.
The meeting will be held during the UN Convention on Certain Conventional Weapons (CCW).
A report on the discussion will be presented to the CCW meeting in November.
This will be the first time that the issue of killer robots, or lethal autonomous weapons systems, will be addressed within the CCW.
Autonomous kill function
Killer robots are fully autonomous weapons that can select and engage targets without any human intervention. They do not currently exist, but advances in technology are bringing them closer to reality.
Those in favour of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.
However, those who oppose their use believe they are a threat to humanity and any autonomous "kill functions" should be banned.
"Autonomous weapons systems cannot be guaranteed to predictably comply with international law," Prof Sharkey told the BBC. "Nations aren’t talking to each other about this, which poses a big risk to humanity."
Prof Sharkey is a member and co-founder of the Campaign to Stop Killer Robots and chairman of the International Committee for Robot Arms Control.
Side events at the CCW will be hosted by the Campaign to Stop Killer Robots.
Automation of warfare
Prof Arkin from the Georgia Institute of Technology told the BBC he hoped killer robots would be able to significantly reduce non-combatant casualties but feared they would be rushed into battle before this was accomplished.
"I support a moratorium until that end is achieved, but I do not support a ban at this time," said Prof Arkin.
He went on to state that killer robots may be better able to determine when not to engage a target than humans, "and could potentially exercise greater care in so doing".
Prof Sharkey is less optimistic. "I’m concerned about the full automation of warfare," he says.
Drones are not on the agenda, as they do not yet operate completely autonomously, although there are signs this may change in the near future.
The UK successfully tested the Taranis, an unmanned combat aircraft, in Australia this year, and America's Defense Advanced Research Projects Agency (Darpa) has made advances with the Crusher, an unmanned ground combat vehicle, since 2006.
The MoD has claimed in the past that it currently has no intention of developing systems that operate without human intervention.
On 21 November 2012 the United States Defense Department issued a directive that, "requires a human being to be ‘in-the-loop’ when decisions are made about using lethal force," according to Human Rights Watch.
The meeting of experts will be chaired by French ambassador Jean-Hugues Simon-Michel from 13 to 16 May 2014.
Robot warriors: Lethal machines coming of age
By Jonathan Marcus, BBC Diplomatic Correspondent
The X-47B unmanned aircraft – it could in future fire without human intervention
The era of drone wars is already upon us. The era of robot wars could be fast approaching.
Already there are unmanned aircraft demonstrators like the arrow-head shaped X-47B that can pretty well fly a mission by themselves, with no involvement from a ground-based "pilot".
There are missile systems like the Patriot that can identify and engage targets automatically.
And from here it is not such a jump to a fully-fledged armed robot warrior, a development with huge implications for the way we conduct and even conceive of war-fighting.
On a carpet in a laboratory at the Georgia Institute of Technology in Atlanta, Professor Henrik Christensen’s robots are hunting for insurgents. They look like cake-stands on wheels as they scuttle about.
Christensen and his team at Georgia Tech are working on a project funded by the defence company BAE Systems.
Their aim is to create unmanned vehicles programmed to map an enemy hideout, allowing human soldiers to get vital information about a building from a safe distance.
"These robots will basically spread out," says Christensen, "they’ll go through the environment and map out what it looks like, so that by the time you have humans entering the building you have a lot of intelligence about what’s happening there."
The emphasis in this project is reconnaissance and intelligence gathering. But the scientific literature has raised the possibility of armed robots, programmed to behave like locusts or other insects that will swarm together in clouds as enemy targets appear on the battlefield. Each member of the robotic swarm could carry a small warhead or use its kinetic energy to attack a target.
Christensen is developing robots that can survey enemy hideouts
Peter W Singer, an expert in the future of warfare at the Brookings Institution in Washington DC, says that the arrival on the battlefield of the robot warrior raises profound questions.
"Every so often in history, you get a technology that comes along that’s a game changer," he says. "They’re things like gunpowder, they’re things like the machine gun, the atomic bomb, the computer… and robotics is one of those."
"When we say it can be a game changer", he says, "it means that it affects everything from the tactics that people use on the ground, to the doctrine, how we organise our forces, to bigger questions of politics, law, ethics, when and where we go to war."
Jody Williams, the American who won the Nobel Peace Prize in 1997 for her work leading the campaign to ban anti-personnel landmines, insists that the autonomous systems currently under development will, in due course, be able to unleash lethal force.
Williams stresses that value-free terms such as "autonomous weapons systems" should be abandoned.
"We prefer to call them killer robots," she says, defining them as "weapons that are lethal, weapons that on their own can kill, and there would be no human being involved in the decision-making process. When I first learnt about this," she says, "I was honestly horrified — the mere thought that human beings would set about creating machines that they can set loose to kill other human beings, I find repulsive."
It is an emotive topic.
But Professor Ronald Arkin from the Georgia Institute of Technology takes a different view.
The turtlebot could reconnoitre a battlesite
He has put forward the concept of a weapons system controlled by a so-called "ethical governor".
It would have no human being physically pulling the trigger but would be programmed to comply with the international laws of war and rules of engagement.
"Everyone raises their arms and says, 'Oh, evil robots, oh, killer robots'," he notes, "but we have killer soldiers out there. Atrocities continue and they have continued since the beginning of warfare."
His answer is simple: "We need to put technology to use to address the issues of reducing non-combatant casualties in the battle-space".
He believes that "the judicious application of ethical robotic systems can indeed accomplish that, if we are foolish enough as a nation, as a world, to persist in warfare."
Arkin is no arms lobbyist and he has clearly thought about the issues.
There is also another aspect to this debate that should encourage caution. At present, the US is one of the technological leaders in this field but, as Singer says, this situation will not last forever.
"The reality is that besides the United States there are 76 countries with military robotics programmes right now," he says.
"This is a rapidly proliferating technology with relatively low barriers to entry.
"You can, for a couple of hundred dollars, purchase a small drone that a couple of years ago was limited to militaries. This can’t be a situation that you interpret through an American lens. It’s of global concern."
Just as drone technology is spreading fast, making the debates about targeted killings of much wider relevance — so too robotics technology will spread, raising questions about how these weapons may be used or should be controlled.
The prospect of totally autonomous weapons technology – so-called "human-out-of-the-loop" systems – is still some way off. But Nobel Prize winner Jody Williams is not waiting for them to arrive.
She plans to launch an international campaign to outlaw further research on robotic weapons, aiming for "a complete prohibition of robots that have the ability to kill".
"If they are allowed to continue to research, develop and ultimately use them, the entire face of warfare will be changed forever in an absolutely terrifying fashion."
Arkin takes a different view of the ethical arguments.
He says that to ban such robots outright, without doing the research to understand whether they can lower non-combatant casualties, is to do "a disservice to those who are, unfortunately, slaughtered in warfare by human soldiers".
‘Killer robots’: MP Nia Griffith calls for world ban
A debate on ‘lethal autonomous robotics’ is held by MPs on Monday
A Welsh Labour MP has called on the UK government to support a moratorium on the use and development of so-called "killer robots".
Llanelli MP Nia Griffith said the "frightening technology" of Lethal Autonomous Robotics (LARs) had to be stopped.
She said: "That is one step further than a drone which at least has some kind of [human] control over it."
Supporters say the "lethal autonomous robots" could save soldiers’ lives.
Ms Griffith is raising her concerns about the weapons in a House of Commons debate on Monday.
The robots are machines programmed in advance to take out people or targets, which – unlike drones – operate autonomously on the battlefield.
They are being developed in the US, UK and Israel but have not yet been used.
Ms Griffith, who is vice-chair of the All Party Parliamentary Group on weapons and the protection of civilians, said they raise serious moral questions about how we wage war.
She told the BBC’s Sunday Politics Wales programme she was disappointed that the UK government had not signed up to a recent UN report calling for a moratorium on the use and development of LARs.
She said: "This is extremely frightening technology and we know how quickly this type of technology is being developed.
"There’s a lot of secrecy about it and we need an international agreement about it.
"For example, blinding-lasers were banned before they came into use and that is the type of ban we need to be looking at.
"The US has already introduced a moratorium and quite clearly you have to work at an international level on this and the UK has to work with other countries to get a ban worldwide."
Liberal Democrat AM Peter Black told the programme he was also concerned about LARs.
He said: "It’s quite frightening, it makes you think of the Terminator movies with Arnold Schwarzenegger and I don’t want that kind of world."
UN mulls ethics of ‘killer robots’
Campaigners against "killer robots" – some of whom lobbied the UK parliament on 23 April – say their use would be morally unacceptable
So-called killer robots are due to be discussed at the UN Human Rights Council, meeting in Geneva.
A report presented to the meeting will call for a moratorium on their use while the ethical questions they raise are debated.
The robots are machines programmed in advance to take out people or targets, which – unlike drones – operate autonomously on the battlefield.
They are being developed by the US, UK and Israel, but have not yet been used.
Supporters say the "lethal autonomous robots", as they are technically known, could save lives, by reducing the number of soldiers on the battlefield.
But human rights groups argue they raise serious moral questions about how we wage war, reports the BBC’s Imogen Foulkes in Geneva.
They include: Who takes the final decision to kill? Can a robot really distinguish between a military target and civilians?
Is a ban needed to prevent robots that can decide when to kill? – first broadcast 23 April 2013
If there are serious civilian casualties, they ask, who is to be held responsible? After all, a robot cannot be prosecuted for a war crime.
"The traditional approach is that there is a warrior, and there is a weapon," says Christof Heyns, the UN expert examining their use, "but what we now see is that the weapon becomes the warrior, the weapon takes the decision itself."
The moratorium called for by the UN report is not the complete ban human rights groups want, but it will give time to answer some of those questions, our correspondent says.
Campaigners call for international ban on ‘killer robots’
By Stuart Hughes, BBC News
The Taranis is an experimental unmanned combat aircraft designed to attack targets without a pilot in the cockpit
A pre-emptive ban is needed to halt the production of weapons capable of attacking targets without any human intervention, a new campaign has urged.
Jody Williams, from the Campaign to Stop Killer Robots, told the BBC such weapons, which do not yet exist, would be regarded as "repulsive".
But some scientists argue existing laws are sufficient to regulate their use, should they become a reality.
The UK government has said it has no plans to develop such technology.
Weapons with a degree of autonomy, including Unmanned Aerial Vehicles – commonly known as drones – are already widely used on the battlefield.
Such weapons are described as "human-in-the-loop" systems because they can only select targets and deliver lethal force with a human command.
But organisers of the Campaign to Stop Killer Robots – a global effort being launched on Tuesday – say advances in robotic technology mean it is only a matter of time before fully autonomous "human-out-of-the-loop" systems – capable of firing on their own – are developed.
They argue that giving machines the power over who lives and dies in war would be an unacceptable application of technology, and would pose a fundamental challenge to international human rights and humanitarian laws.
Estimates vary over how long it could be before such weapons are available, but the group says a new treaty is needed to pre-emptively outlaw their development, production and use.
Is a ban needed to prevent robots that can decide when to kill?
Campaign leader Ms Williams, who won a Nobel Peace Prize in 1997 for her work in bringing about a ban on anti-personnel landmines, told BBC News: "As people learn about our campaign, they will flock to it.
"The public conscience is horrified to learn about this possible advance in weapons systems. People don’t want killer robots out there.
"Normal human beings find it repulsive."
But some experts have questioned the need for a ban, arguing instead for an open debate about the legal and ethical implications of such weapons.
Roboticist Professor Ronald Arkin, from the Georgia Institute of Technology in the US, told the BBC: "The most important thing from my point of view is that we do not rush these systems into the battlefield.
"A moratorium as opposed to ban – where we say, ‘we’re not going to do this until we can do it right’ – makes far more sense to me than simply crying out, ‘ban the killer robots’.
"Why should we do that now?"
Recent statements by UK and US governments suggest a reluctance to take human beings fully "out-of-the-loop" in warfare.
In March, Lord Astor of Hever – the UK’s parliamentary under secretary of state for defence – said the Ministry of Defence "currently has no intention of developing systems that operate without human intervention".
And a directive issued by the US Department of Defense in November 2012 stated that all weapons with a degree of autonomy "shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force".