The Guardian view on killer robots: on the loose | Editorial
Lethal autonomous weapons are a reality, but the campaign to prevent their use is ours to win
The first meeting of the UN-backed group of experts, intended to begin work towards a ban on lethal autonomous weapons, was supposed to wrap up at the end of last week. But only days before it was due to start, the meeting was cancelled; funding shortfalls were blamed, though a lack of will feels the more likely explanation. Alarmed by the delay, on the day the meeting should have begun more than 100 of those most closely involved in developing the artificial intelligence on which such weapons would rely, led by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, published an open letter of bleak warning: killer robots amount to a third revolution in warfare, the sequel to gunpowder and nuclear weapons. They are right. The only thing more frightening than a machine that cannot decide for itself whom to kill is one that can.
But the technology is out there, within reach of scientists backed by the billions of dollars that the Pentagon’s Defense Advanced Research Projects Agency, or Darpa, has poured into the development of AI, and certainly matched by other, less transparent regimes. Some semi-autonomous weaponry is already deployed, such as the sentry system guarding the ceasefire line between North and South Korea. The process that critics such as the campaigning group Article 36 call the “bureaucratising” of weapons, in which targets are defined according to an explicit hierarchy, is already under way.