DoD’s new guidelines call for ‘ethical use’ of military AI but stop short of calling for a ban

3 Nov, 2019 10:02

The Pentagon’s advisory body has released a set of “AI ethics” guidelines for the “responsible” use of autonomous weapons, but will it stop the emergence of soulless killing machines?

Around the world, artificial intelligence is steadily becoming a factor that will define future warfare. As the militaries of major powers explore its advantages, scholars and some tech leaders are sounding the alarm: they warn that AI installed on weapons systems could eventually learn to wage hostilities on its own or make independent decisions in combat.

The US Department of Defense (DoD) is seeking to alleviate these fears. The Defense Innovation Board (DIB), the Pentagon’s advisory organization, has released a set of five principles for military AI, which the board approved unanimously.

Although some mainstream US media lauded the release, it is surprisingly short on specifics and fails to set many of the limits that opponents of military AI have been fighting for.


According to the Pentagon, AI must be reliable enough to fulfill its programmed functions and be “traceable,” so that it could potentially be audited by outside observers. Humans should remain responsible for AI development, deployment and the outcomes of its use. The AI should also be free from any “unintended bias,” such as racism or sexism, unless it is required in a military environment.

Another principle requires an AI system to be able to analyze its own actions and stop itself as soon as it detects that it could cause unnecessary harm. There should also be an option to hand control over to a human operator.

"The DIB's recommendations will help enhance the DoD's commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the US military's strong history of applying rigorous testing and fielding standards for technology innovations,” said Air Force Lt. Gen. John N.T. ‘Jack’ Shanahan, the director of the Joint Artificial Intelligence Center.


It remains to be seen whether the report will change any of the Pentagon’s actual plans, as the department believes AI will help the US military beat near-peer opponents such as China and Russia.

The US military’s record with non-AI technology is also far from flawless. Its lethal drone strikes – a dimension of warfare that relies on modern technology and remote control – have long been criticized for killing scores of innocent civilians. Drone operators themselves have described their work as traumatizing.

What could happen if the US military decided to transfer control of these deadly weapons from humans to soulless machines?

