Elon Musk and other technology experts sent a letter Monday urging the United Nations to ban the development and use of "killer robots."
A group of 116 specialists from 26 countries signed the open letter, addressed to the U.N. Convention on Certain Conventional Weapons (CCW).
"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter warned.
The letter added that fully autonomous weapons could eventually end up in the hands of terrorists. "Once this Pandora's box is opened, it will be hard to close," it warned.
The group of experts applauded the U.N. for establishing a Group of Governmental Experts (GGE) on lethal autonomous weapon systems. The GGE's first meeting, originally scheduled for Aug. 21, was pushed back to November because several states had not made their financial contributions to the effort.
Mary Wareham with Human Rights Watch and the Campaign to Stop Killer Robots said the precursors to these fully autonomous weapons are already out there.
“Drones are perhaps the most vivid example that people know, but the big difference between an armed drone and a killer robot is the drones still have the pilot, who is steering the system and deciding who and when to fire it," Wareham explained.
The U.S. Navy's semi-autonomous X-47B, for instance, was designed to "perform standard missions like aerial refueling and operate seamlessly with manned aircraft as part of the Carrier Air Wing," according to Northrop Grumman, the defense technology company that developed it. But Wareham said what the X-47B and other such systems could be developed into is what concerns her.
"We’re principally concerned with ensuring we retain human control over the selection of targets and the use of force,” she added.
Monday's letter to the U.N. warned that fully autonomous weapons would "permit armed conflict to be fought at a scale greater than ever."
Wareham said one of the biggest concerns is whether fully autonomous weapons would be able to distinguish between a civilian and a combatant.
Beyond that, many technical questions still remain. The International Committee for Robot Arms Control, according to Wareham, has been looking into things that could go wrong "from hacking and spoofing to what happens when your enemy gets hold of it and copies it. They talk about what would happen when two different autonomous weapon systems meet on the battlefield if they've been designed by different people in different ways. That could lead to unintended consequences and basically a spiraling of conflict where you cannot get them to stop."
Musk has previously warned that the risks associated with artificial intelligence are more of a threat than nuclear war with North Korea.
"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated," Musk tweeted. "AI should be too."
Ultimately, Wareham said this all boils down to the "ethical or moral question" of whether we're willing to allow machines to take a human life on the battlefield or in policing.