5 Feb, 2025 17:59

Google removes ban on weaponizing AI

In a reversal of previous policies, the tech giant will now permit the technology to be used for developing arms and surveillance tools

Google has significantly revised its artificial intelligence principles, removing earlier restrictions on using the technology for developing weaponry and surveillance tools. The update, announced on Tuesday, alters the company’s prior stance against applications that could cause “overall harm.”  

In 2018, Google established a set of AI principles in response to criticism over its involvement in military endeavors, such as a US Department of Defense project that involved the use of AI to process data and identify targets for combat operations. The original guidelines explicitly stated that Google would not design or deploy AI for use in weapons or technologies that cause or directly facilitate injury to people, or for surveillance that violates internationally accepted norms.  

The latest version of Google’s AI principles, however, has scrubbed these points. Instead, Google DeepMind CEO Demis Hassabis and senior executive for technology and society James Manyika have published a new list of the tech giant’s “core tenets” regarding the use of AI. These include a focus on innovation and collaboration and a statement that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”  

Margaret Mitchell, who had previously co-led Google’s ethical AI team, told Bloomberg the removal of the ‘harm’ clause may suggest that the company will now work on “deploying technology directly that can kill people.”  

According to The Washington Post, the tech giant has collaborated with the Israeli military since the early weeks of the Gaza war, competing with Amazon to provide artificial intelligence services. Shortly after the October 2023 Hamas attack on Israel, Google’s cloud division worked to grant the Israel Defense Forces access to AI tools, despite the company’s public assertions of limiting involvement to civilian government ministries, the paper reported last month, citing internal company documents.  

Google’s policy reversal comes amid continued concerns over the dangers AI poses to humanity. Geoffrey Hinton, a pioneering figure in AI and recipient of the 2024 Nobel Prize in Physics, warned late last year that the technology could lead to human extinction within the next three decades, putting the probability at up to 20%.  

Hinton has warned that AI systems could eventually surpass human intelligence, escape human control, and cause catastrophic harm to humanity. He has urged that significant resources be allocated to AI safety and the ethical use of the technology, and that proactive safeguards be developed.