7 Apr, 2018 13:06

'Disaster for humanity': Experts to RT on joint AI project by Google & Pentagon

Hundreds of Google employees are up in arms over the company's partnership with the Pentagon in AI technology, fearing it may be used for war. Experts told RT the "questionable" alliance could result in "disaster for humanity."

Google employees wrote a letter to the company's CEO, Sundar Pichai, calling on the US tech giant to immediately pull out of a controversial program that many fear could be used for warfare.

"We believe that Google should not be in the business of war," the letter obtained by The New York Times and published earlier this week stated. 

Gizmodo broke the news about Google's partnership with the US Department of Defense (DoD) last month, adding that Project Maven, whose stated mission is to "accelerate DoD's integration of big data and machine learning," was established in April 2017. The project will see Google developing AI surveillance to help the US military scrutinize video footage captured by US government drones "to detect vehicles and other objects, track their motions, and provide results to the Department of Defense."
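
Neither Google nor the Pentagon has published Maven's code, but the capability described, picking out and following moving objects in aerial video, is a standard computer-vision task. The short Python sketch below shows the general idea using simple background subtraction from the open-source OpenCV library; the input file name is a placeholder, and the whole example is an illustration of this class of technique, not Maven's actual system.

import cv2

# Illustrative sketch only: a generic moving-object detector built on the
# open-source OpenCV library. Nothing here reflects Project Maven's actual
# models, data, or code, none of which are public.
cap = cv2.VideoCapture("aerial_footage.mp4")  # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of footage
    mask = subtractor.apply(frame)  # foreground mask: pixels that changed
    mask = cv2.medianBlur(mask, 5)  # suppress sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 400:  # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()

Production systems of this kind replace the background-subtraction step with trained neural-network detectors, which is where the "big data and machine learning" integration the project describes would come in.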

Google claims the technology is human-friendly, actually designed to "save lives" and "scoped to be for non-offensive purposes." But Noel Sharkey, emeritus professor of AI at Sheffield University, told RT that the fears of Google employees "are correct."

The Maven program "is all about bringing AI to the immediate conflict zone," he argued, adding that Google may simply be too naïve about how its technology will really be used.

"Once you start working with the military, you have no control over what they use your product for, and that's very worrying," Professor Sharkey said.

He cautioned that while drones currently have human operators, who are at least "looking at the target, engaging with the target and trying to calculate its legitimacy," things could take a drastic turn.

"If Google's imagery is very good, they will stop using that operator, allow robots to go out on their own, find their own targets and kill them without human intervention. And this is a disaster for humanity."

And there is another concern here – privacy.

"Google is a global company and is working for the Pentagon now, and the Pentagon is the United States. For me, in Britain, it means it's a foreign power. How far will they slide into bed with the Pentagon?" Sharkey said.

"Google own most of our data, and I don't want the Pentagon having my data."

The US Department of Defense spent a whopping $7.4 billion on AI-related areas last year, according to the Wall Street Journal.

The experts who spoke to RT say the million-dollar question is whether "this is going to lead to saving lives, or is it going to lead to more use of the technology, more drone strikes, more countries engaging in this use of the technology?"

It's really "questionable," Dr. Mark Gubrud, a physicist and arms control researcher at the University of North Carolina, told RT.

"It's very exciting to see a movement arise among Google employees of concern about their company's contribution in the world's drift towards autonomous weapons, killer robots"

According to The Intercept, Google is busy developing technology that will allow drone analysts to "interpret the vast image data vacuumed up from the military's fleet of 1,100 drones to better target bomb strikes against the Islamic State."

This April marks five years since the launch of the Campaign to Stop Killer Robots. Its supporters object to "permitting machines to determine who or what to target on the battlefield," pointing to numerous problems, both ethical and legal.

"Bold action is needed before technology races ahead and it's too late to preemptively ban weapons systems that would make life and death decisions on the battlefield," Steve Goose, arms division director at Human Rights Watch, and co-founder of the Campaign to Stop Killer Robots, said in a statement in November.
