27 Oct, 2014 19:08

‘Summoning the devil’: Elon Musk warns against artificial intelligence

Elon Musk, the chief executive of Tesla, has warned of the danger of artificial intelligence, saying that it is the biggest existential threat facing humanity.

Musk, speaking at the Massachusetts Institute of Technology (MIT) Aeronautics and Astronautics Department’s Centennial Symposium, said that in developing artificial intelligence (AI) “we are summoning the demon.”

Fiction, in films such as The Terminator and The Matrix, has for many years dramatized the perils of AI, depicting technology that comes to dominate and manipulate the human minds that created it.

“In all those stories where there’s a guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out,” he said.

Asked whether AI was anywhere close to being a reality, Musk replied that he thought we were already at the stage where some regulatory oversight was needed.

“I’m increasingly inclined to think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish,” he said.

The technology magnate, inventor and investor, who heads Tesla, SolarCity and SpaceX, warned in August that AI could be more dangerous than nuclear weapons.

READ MORE: Elon Musk: Artificial intelligence will be ‘more dangerous than nukes’

Musk is no stranger to the power of technology. In 2002, when he launched SpaceX, some doubted his ability to make it a success; ten years on, it became the first private company to launch a vehicle into space and bring it back to Earth, and it now has a major contract with NASA.

But Musk does not appear to believe that space exploration alone will change the future of humanity.

“It’s cool to send one mission to Mars, but that’s not what will change the future for humanity. What matters is being able to establish a self-sustaining civilization on Mars, and I don’t see anything being done but SpaceX. I don’t see anyone else even trying,” he said.

READ MORE: ‘F**k Earth!’ Elon Musk wants to send million people to Mars to ensure humanity’s survival

But Musk himself has invested in companies developing AI, he says, “to keep an eye on them.”

“I wanted to see how artificial intelligence was developing. Are companies taking the right safety precautions?” he told CNN.

Musk is not the only one worried about AI. A group of scholars from Oxford University wrote in a blog post last year that “when a machine is ‘wrong’, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could. Simple algorithms should be extremely predictable, but can make bizarre decisions in ‘unusual’ circumstances.”

Dr. Stuart Armstrong, from the Future of Humanity Institute at Oxford University, also warned that AI may have other damaging implications such as uncontrolled mass surveillance and mass unemployment as machines and computers replace humans.

To a certain extent the AI train has already left the station: AI is already used in financial trading, as depicted in Robert Harris’s novel The Fear Index, and in video gaming. Darktrace is an AI program that uses advanced mathematics to manage the risk of cyber-attacks by detecting abnormal behavior within organizations.
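To give a sense of what “detecting abnormal behavior” can mean in practice, the sketch below shows one very simple statistical approach: learn a baseline from recent measurements and flag anything that deviates from it by more than a few standard deviations. This is only an illustration of the general idea, not Darktrace’s actual (proprietary) technique; the find_anomalies function, the 3.0 threshold and the sample traffic figures are all assumptions made for this example.

    # Minimal sketch of baseline-and-deviation anomaly detection.
    # NOT Darktrace's actual algorithm; the threshold and sample data
    # are invented purely for illustration.
    from statistics import mean, stdev

    def find_anomalies(values, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        baseline = mean(values)
        spread = stdev(values)
        if spread == 0:
            return []  # no variation, so nothing stands out
        return [v for v in values if abs(v - baseline) / spread > threshold]

    # Hypothetical hourly counts of outbound connections from one workstation.
    traffic = [42, 38, 45, 40, 44, 39, 41, 43, 40, 37, 44, 410]

    print(find_anomalies(traffic))  # [410] -- the sudden spike is flagged

Real systems of this kind work on far richer data and models, but the principle is the same: activity that departs sharply from an organization’s normal pattern gets flagged for attention.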
