It may have seemed like an obscure United Nations conclave, but a meeting this week in Geneva was closely watched by experts in artificial intelligence, military strategy, disarmament and humanitarian law.
The reason for the interest? Killer robots – drones, weapons and bombs that decide on their own, with artificial brains, to attack and kill – and what should be done, if anything, to regulate or ban them.
Once the domain of science fiction films like the “Terminator” and “RoboCop” series, killer robots, more technically known as lethal autonomous weapons systems, have been developed and tested at an accelerated pace with little oversight. Some prototypes have even been used in actual conflicts.
The evolution of these machines is seen as a potentially seismic event in warfare, similar to the invention of gunpowder and nuclear bombs.
This year, for the first time, a majority of the 125 nations that belong to an agreement called the Convention on Certain Conventional Weapons have said they want restrictions on killer robots. But they have encountered opposition from members developing these weapons, notably the United States and Russia.
The group’s conference ended on Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell far short of what was needed.
What is the Convention on Certain Conventional Weapons?
Sometimes known as the Inhumane Weapons Convention, it is a framework of rules that prohibit or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary explosives, blinding lasers and booby traps that do not distinguish between combatants and civilians. The convention contains no provisions concerning killer robots.
What exactly are killer robots?
Opinions differ on an exact definition, but they are widely viewed as weapons that make decisions with little or no human involvement. Rapid improvements in robotics, AI, and image recognition make such weaponry possible.
The drones that the United States has used extensively in Afghanistan, Iraq, and elsewhere are not considered robots because they are operated remotely by people, who choose targets and decide whether or not to fire.
Why are they considered attractive?
For war planners, the weapons hold the promise of keeping soldiers out of harm’s way and making decisions faster than humans can, giving more battlefield responsibility to autonomous systems such as pilotless drones and driverless tanks that independently decide when to strike.
What are the objections?
Critics argue that it is morally repugnant to attribute lethal decision-making to machines, regardless of technological sophistication. How does a machine differentiate an adult from a child, a fighter with a bazooka from a civilian with a broom, a hostile fighter from a wounded or surrendering soldier?
“Fundamentally, autonomous weapon systems raise ethical concerns for society about substituting human decisions about life and death with sensor, software and machine processes,” Peter Maurer, the president of the International Committee of the Red Cross and an outspoken opponent of killer robots, told the Geneva conference.
Ahead of the conference, Human Rights Watch and the International Human Rights Clinic at Harvard Law School called for steps towards a legally binding agreement that requires human oversight at all times.
“Robots lack the compassion, empathy, mercy and judgment necessary to treat humans humanely, and they cannot understand the inherent value of human life,” the groups argued in a briefing paper supporting their recommendations.
Others said autonomous weapons, rather than reducing the risk of war, could do the opposite – providing antagonists with means to inflict damage that minimizes the risk to their own soldiers.
“Mass-produced killer robots could lower the threshold of war by removing humans from the kill chain and unleashing machines that could engage a human target without any human at the controls,” said Phil Twyford, New Zealand’s disarmament minister.
Why was the Geneva conference important?
The conference was widely viewed by disarmament experts as the best opportunity to date to find ways to regulate, or even ban, the use of killer robots under the convention.
It was the culmination of years of discussions by a group of experts who had been invited to identify challenges and possible approaches to reducing threats from killer robots. But the experts could not even agree on basic questions.
What are the opponents saying about a new treaty?
Some, like Russia, insist that any decision on boundaries must be unanimous – in effect giving opponents a veto.
The United States argues that existing international laws are sufficient and that a ban on autonomous weapons technology would be premature. The chief US delegate to the conference, Joshua Dorosin, proposed a non-binding “code of conduct” for the use of killer robots, an idea disarmament advocates have dismissed as a delaying tactic.
The US military has invested heavily in AI, working with the largest defense contractors, including Lockheed Martin, Boeing, Raytheon, and Northrop Grumman. The work has included projects to develop long-range missiles that detect moving targets based on radio frequency, swarm drones capable of identifying and attacking a target, and automated missile defense systems, according to research carried out by opponents of the weapons systems.
The complexity and varied uses of AI make it more difficult to regulate than nuclear weapons or landmines, said Maaike Verbruggen, an expert on emerging military security technologies at the Center for Security, Diplomacy and Strategy in Brussels. She said the lack of transparency about what different countries are building has created “fear and concern” among militaries that feel they must keep up.
“It is very difficult to get an idea of what another country is doing,” said Verbruggen, who is preparing a doctorate on the subject. “There is a lot of uncertainty and it drives military innovation.”
Franz-Stefan Gady, a researcher at the International Institute for Strategic Studies, said that “the arms race for autonomous weapons systems is already underway and won’t be called off any time soon.”
Is there a conflict in the defense establishment over killer robots?
Yes. Even as the technology becomes more advanced, there has been a reluctance to use autonomous weapons in combat due to fear of mistakes, Gady said.
“Can military commanders trust the judgment of autonomous weapons systems? Here the answer at the moment is clearly ‘no,’ and will remain so for the foreseeable future,” he said.
The autonomous weapons debate has spilled over into Silicon Valley. In 2018, Google said it would not renew a contract with the Pentagon after thousands of its employees signed a letter protesting the company’s work on a program that uses AI to interpret images that could be used to choose drone targets. The company also created new ethical guidelines prohibiting the use of its technology for weapons and surveillance.
Others believe that the United States does not go far enough to compete with its rivals.
In October, former Air Force software chief Nicolas Chaillan told the Financial Times that he had resigned because of what he saw as lagging technological progress in the US military, especially in the use of AI. He said policymakers were being held back by debates over ethics while countries like China pressed ahead.
Where have autonomous weapons been used?
There aren’t many verified examples on the battlefield, but critics point to a few incidents that show the potential of the technology.
In March, UN investigators said a “lethal autonomous weapons system” had been used by government-backed forces in Libya against militia fighters. A drone called Kargu-2, made by a Turkish defense contractor, tracked and attacked the fighters as they fled a rocket attack, according to the report, which did not say whether humans controlled the drones.
In the 2020 Nagorno-Karabakh war, Azerbaijan fought Armenia with attack drones and missiles that hover in the air until they detect a signal from an assigned target.
What is happening now?
Many disarmament advocates said the conference outcome had hardened their determination to push for a new treaty in the coming years, similar to those banning landmines and cluster munitions.
Daan Kayser, an autonomous weapons expert at PAX, a Dutch-based peace advocacy group, said the conference’s failure to agree even to negotiate on killer robots was “a really clear signal that the CCW is not up to the task.”
Noel Sharkey, an AI expert and chairman of the International Committee for Robot Arms Control, said the meeting demonstrated that a new treaty was preferable to continued deliberations under the convention.
“There was a sense of urgency in the room,” he said, “that if there’s no movement, we’re not prepared to stay on this treadmill.”