24 Feb, 2015 17:48

Capitalist forces could create ‘uncontrollable’ artificial intelligence – scientist

Artificial intelligence (AI) needs to develop human emotion if humanity is to avoid the potential existential threat posed by machines capable of consciousness, a leading scientist has warned.

Computers that are “human-like” will be capable of empathy and moral reasoning, thereby reducing the risk of AI turning against humanity, he said.

Murray Shanahan, professor of cognitive robotics at Imperial College London, cautioned against “capitalist forces” developing AI without any sense of morality, arguing it could lead to potentially “uncontrollable military technologies.”

Shanahan’s comments follow warnings from leading scientists and entrepreneurs, including Stephen Hawking, Bill Gates, and Tesla Motors CEO Elon Musk.

READ MORE: Stephen Hawking: Artificial Intelligence could spell end of human race

Gates admitted last month that he doesn’t “understand why some people are not concerned” by the threat of AI.

Speaking to the Centre for the Study of Existential Risk at the University of Cambridge last week, Shanahan argued that AI development faces two options.

Either a potentially dangerous AI is developed – with no moral reasoning and based on ruthless optimization processes – or scientists develop AI based on human brains, borrowing from our psychology and even neurology.

“Right now my vote is for option two, in the hope that it will lead to a form of harmonious co-existence [with humanity],” Shanahan said.

AI based on the human brain would not be possible without first mapping the organ – a task the Human Connectome Project (HCP) is undertaking and aims to complete by late 2015.

However, once the map is complete, it could take years to analyze all the data gathered.

READ MORE: Bill Gates on AI doomsday: ‘I don’t understand why we aren’t concerned’

Experts disagree as to how long it will be before AI is successfully developed – or if it is even possible.

Estimates range from 15 to 100 years from now, with Shanahan believing that by the year 2100, AI will be “increasingly likely but still not certain.”

Whether the technology is helpful or harmful to humans depends on which of Shanahan’s two options becomes the driving force behind its development.

There is a fear that current economic and political systems are leading to the development of option one – a machine with no moral reasoning.

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

— Elon Musk (@elonmusk) August 3, 2014

“Capitalist forces will drive incentive to produce ruthless maximization processes. With this there is the temptation to develop risky things,” Shanahan said.

For Shanahan, these risky things include AI that could rig elections, subvert markets, or become dangerous military technology.

“Within the military sphere governments will build these things just in case the others do it, so it’s a very difficult process to stop,” he added.

Shanahan’s comments echo fears expressed by Gates and Musk last year, both of whom, he said, were influenced by Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies.”

READ MORE: Elon Musk donates $10mn to stop AI from turning against humans

In the book, Bostrom – a professor of philosophy at Oxford University – argues that if machine brains surpass humans in intelligence, they could eventually replace us as the dominant species on Earth.

“As the fate of the gorillas now depends more on us humans than on the gorillas themselves,” Bostrom writes, “so the fate of our species then would come to depend on the actions of the machine superintelligence.”

After reading Bostrom’s book, Musk warned that the threat posed by AI could be greater than that of nuclear weapons. In January he donated $10 million to the Future of Life Institute to fund a global research program aimed at keeping AI beneficial to humanity.
