
Killer robots: coming to a battlefield near you


WASHINGTON (Circa) -- Robots are reaching the point where they can pull the trigger on humans all by themselves, and it's redefining warfare as we know it.

What happens when a robot makes the choice to kill a human being? What happens if it makes a mistake? Better yet, what will happen when these kinds of weapons inevitably hit the battlefield in significant numbers on all sides?

These are the questions security experts, military officials, and governments across the globe are trying to answer, and they don't have much time. Rapid advances in weapons technology, robotics, and artificial intelligence have already led to the creation and deployment of semi-autonomous systems which are given some latitude in making decisions on when to engage the enemy.

Take the Phalanx Close-in Weapon System, for example. Affectionately known as "R2-D2" due to its resemblance to the eponymous droid, the Phalanx is armed with a powerful 20mm Vulcan cannon that can spit 6,000 rounds a minute at enemy missiles, rockets, and artillery autonomously, using its radar systems and an algorithm of prescribed criteria. Its remarkable speed and accuracy make it far more effective than a human operator, and it serves as the last line of defense for many U.S. Navy ships. But even though it is a defensive weapon, accidents involving the Phalanx system have led to casualties.

ATLANTIC OCEAN (April 29, 2018) The close-in weapons system (CIWS) fires from the fantail aboard the Wasp-class amphibious assault ship USS Kearsarge (LHD 3) during a live-fire exercise while in transit to Fleet Week Port Everglades, April 29, 2018. Fleet Week Port Everglades remains the signature event for Broward Navy Days. The organization plans welcoming events and shore activities for visiting Navy and Coast Guard ships and supports the activities of the U.S. Navy Southern Command and Coast Guard Station Fort Lauderdale. (U.S. Navy photo by Mass Communication Specialist Seaman J. Keith Wilson/Released)

The stakes are even higher when it comes to autonomous offensive systems, and the mere potential of their existence is already disrupting the commonly accepted rules of war.

"Does this domain that exists in a very narrow, tactical sense now, for these automated defensive systems, does that begin to expand?" asked Paul Scharre, the author of 'Army of None', a book on autonomous warfare, in an interview with Circa. "Do we see that same concept being used in an offensive setting or in a setting where humans aren't supervising the operation of the weapons systems as they are today?"

Defense companies are already experimenting with the offensive concept. Take the Israeli RAMBOW, for example. Capable of driving itself along predefined routes and navigating obstacles automatically, RAMBOW is one of the most advanced unmanned ground vehicles (UGVs) in existence. It's also capable of mounting a machine gun, though Scharre said Israeli officials told him that a human is responsible for pulling the trigger, if needed.

But the Harpy, another Israeli weapon, has removed the human element from the equation. After being launched into the air, the small drone is tasked with finding enemy radars and blowing them up all by itself. It's a pretty specific task, but it's not difficult to imagine a situation where it could be used in other applications.


The pace of technological advancement in robotics is reaching a point where the weapons may be on the field before questions about them are even answered.

"There's obvious concerns about whether the machine decision maker would be as accurate as the human," said Julian Sanchez, a senior fellow with the CATO Institute who specializes in technology and security, in an interview. "But I think there are also reasonable concerns about what happens to our sense of responsibility about the mistakes when they don't feel like anyone's fault."

Scharre, a former Army Ranger who served in Afghanistan, framed the ethical question using an example from his own experience. While on patrol with his unit, Scharre and his fellow soldiers posted up on a hill outside of a town and noticed a young girl herding some goats nearby. She was behaving strangely, and they eventually realized she was relaying their position to Taliban fighters. Scharre and his team were already exposed on the barren hill, putting them at risk if they were discovered by the enemy. Under the currently accepted laws of war, the Rangers could have shot the girl, explained Scharre. Technically speaking, she was acting as an enemy combatant by aiding the Taliban fighters, and since there is no definitive rule on age in the laws of armed conflict, she was a legitimate target.

Scharre and his team were able to take care of the Taliban threat without firing on the girl, but what would an automated weapon have done? She was a legitimate target, after all. Scharre said he and his comrades discussed their decision after the fact and were certain they had made the right call in not firing on the girl, even if they technically could have. After all, they hadn't signed up to shoot children, and doing so would hardly have reflected what the U.S. was there to do and could have jeopardized relations with local Afghans.

A machine may not have taken such nuances into account, which raises more than a few moral questions.

"What is an acceptable civilian casualty rate when there is no human being that is responsible for a mistake?" noted Sanchez. "When a mistake is just a statistic that shows up on a piece of paper, does that make you more willing to accept higher miss rates? I worry about that."

He's not the only one. Several non-governmental organizations have suggested banning so-called "killer robots" outright, due to concerns about what could go wrong.

"In our campaign’s view any measures less than new international law will not be to be effective, binding, or lasting," said Mary Wareham, a spokesperson with the Campaign to Stop Killer Robots, in a statement in April. "States must express their firm determination to avoid dehumanizing the use of force by moving to negotiate new international law now, without further delay."

A group of demonstrators raise awareness for the Campaign to Stop Killer Robots. Source: courtesy of Campaign to Stop Killer Robots

Artificial intelligence researcher Stuart Russell illustrated the point in a seven-minute fictional video depicting a world where a defense contractor has invented a new, miniature drone that carries a small amount of explosives. A Silicon-Valley-esque presenter tells the crowd that his invention could be used with ultimate precision to take out adversaries with minimal collateral damage. The crowd marvels and applauds approvingly as he demonstrates the wondrous machine. But as the technology proliferates, a shadowy terrorist group gets hold of a batch of the drones, which are then programmed to kill university students. The movie ends in a bloody scene, as a student is killed while video-chatting with his mother.

Whether or not such a scenario is likely remains to be seen, but Scharre and other experts note that banning autonomous weapons may not be the best course of action. International law can be difficult to navigate, and as Scharre noted, many states are still trying to figure out what even constitutes an autonomous weapon. Furthermore, he noted, a new law might not even be necessary.

"You have a number of countries who have said we have these things called the laws of war," said Scharre. "And of all the things that you are worried about are already prohibited under the laws of war. If you want to build a weapon that is indiscriminate, you can't do that."

Among those countries are the U.S. and Russia, two of the foremost leaders in robotic weapons.

There is also the possibility that autonomous offensive weapons could avoid the mistakes humans make. While the idea of a robot making the decision to kill a human being may be discomforting, computer precision could cut down on the human error we have seen in strikes where a human is responsible for pulling the trigger.

"I don't think we should be in a hurry to roll this stuff out, but you have to allow for the possibility that the error rate would be lower," said Sanchez.

Editor's note: The "Slaughterbots" video was released by the Future of Life Institute, not the Campaign to Stop Killer Robots. A previous version was incorrect on this point.
