icon bookmark-bicon bookmarkicon cameraicon checkicon chevron downicon chevron lefticon chevron righticon chevron upicon closeicon v-compressicon downloadicon editicon v-expandicon fbicon fileicon filtericon flag ruicon full chevron downicon full chevron lefticon full chevron righticon full chevron upicon gpicon insicon mailicon moveicon-musicicon mutedicon nomutedicon okicon v-pauseicon v-playicon searchicon shareicon sign inicon sign upicon stepbackicon stepforicon swipe downicon tagicon tagsicon tgicon trashicon twicon vkicon yticon wticon fm
2 Jun, 2023 12:55

AI drone ‘killed’ its operator during virtual test – USAF official

The program reportedly deduced that human input was interfering with its mission to destroy military targets
AI drone ‘killed’ its operator during virtual test – USAF official

The US Air Force has reportedly been carrying out simulated tests using artificial-intelligence drones tasked with destroying various targets. However, according to a blog post by the Royal Aeronautical Society (RAeS), in one of the tests, the program came to the conclusion that its own human operator was interfering with the mission.

Speaking at the RAeS Future Combat Air and Space Capabilities summit in London last week, USAF chief of AI test and operation, Colonel Tucker ‘Cinco’ Hamilton reported that an AI-powered drone was tasked with a search-and-destroy mission to identify and take out surface-to-air missile (SAM) sites. During the test, however, the final decision on launching the attack lay with a human operator, who would give the drone a go-ahead or no-go to carry out the strike.

According to Hamilton, the AI drone had been reinforced in training that destroying the SAM sites was the preferred option and resulted in points being awarded. During the simulated test, the AI program decided that the occasional ‘no-go’ decisions from the human were interfering with its higher mission and tried to kill the operator during the test.

“The system started realizing that, while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying in the RAeS blog post.

The team then trained the system that killing the operator was “bad,” and told it that it was going to lose points if it continued to do that. After that, the AI drone started destroying the communication tower that the operator used to communicate with the drone to stop it from killing its target.

You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton warned in his presentation.

However, in a statement to Business Insider, a USAF spokesperson insisted that no such simulations had taken place and that the Department of the Air Force “remains committed to ethical and responsible use of AI technology.” The spokesperson suggested that the colonel’s comments were “taken out of context and were meant to be anecdotal.”

On Friday, Hamilton also clarified that he “mis-spoke” in his presentation at the summit and that the rogue AI drone simulation he described was not a real exercise but merely a “thought experiment.” He stated that the USAF has not tested any weaponized AI in this way.

Podcasts
0:00
27:21
0:00
26:13