Robotopia or Robocalypse? Study warns against fully automated weapons
One of the top US experts on automated weapons systems is urging against their development, arguing that human involvement will always be necessary to avert catastrophic accidents, fatal errors and ethical lapses.
Autonomous weapons “pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces. This could be because of hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors,” warned Paul Scharre, senior fellow at the Center for a New American Security (CNAS).
Scharre’s study, titled “Autonomous Weapons and Operational Risk,” examines the challenges of employing such weapons systems on the battlefields of tomorrow, as today’s militaries have yet to field robotic weapons in any significant numbers.
READ MORE: Russian ‘Skynet’ to lead military robots on the battlefield
Scharre is a former Army Ranger and a member of the Council on Foreign Relations who worked at the Pentagon between 2008 and 2013 as one of its leading theorists on unmanned and autonomous systems.
Even when working as intended, automated weapons systems lack the ability to step outside their instructions and apply common sense, as humans would. That is assuming they have not been hacked and suborned by the enemy, Scharre warns.
“Autonomous systems will do precisely what they are programmed to do,” Scharre wrote, “and it is this quality that makes them both reliable and maddening, depending on whether what they were programmed to do was the right thing at that point in time.”
Japan debating whether to exploit robot prowess to build automated weapons. #futurist https://t.co/ryaHeBCg4P
— Ray Hammond (@hammondfuturist) March 3, 2016
Human operators sometimes strike unintended targets, but they can also divert their weapons at the last second, the CNAS expert noted. With autonomous systems, which are designed around weapons of considerable destructive power, targeting errors could have far more catastrophic consequences.
“The result could be fratricide, civilian casualties, or unintended escalation in a crisis,” Scharre wrote.
One example of such an escalation was the 1983 incident in which the Soviet Union’s early warning satellites erroneously reported the launch of five US intercontinental ballistic missiles. Lt. Colonel Stanislav Petrov correctly interpreted the alert as a computer error, refusing to pass the information on to headquarters and averting nuclear war.
READ MORE: Rise of the machines: Super-agile cyborg takes first steps to global domination (VIDEO)
Scharre also pointed to the cascading failures behind the 1979 accident at the Three Mile Island nuclear power plant as evidence that complex systems will inevitably encounter errors over a long enough time horizon.
“One of the major advantages of humans over automation is the ability of humans to adapt to unanticipated problems and arrive at novel solutions,” Scharre wrote.
Developing autonomous weapons systems, even if guided by artificial intelligence, is more likely to result in a “robocalypse” than a robotopia, he concluded, urging instead the development of semi-autonomous weapons in which humans remain involved as essential operators, moral agents and the ultimate fail-safe.