'Who's in control?' Scientists gather to discuss AI doomsday scenarios

2 Mar, 2017 17:47

Artificial intelligence has the capability to transform the world - but not necessarily for the better. A group of scientists gathered to discuss doomsday scenarios, addressing the possibility that AI could become a serious threat.

The event, 'Great Debate: The Future of Artificial Intelligence - Who's in Control?', took place at Arizona State University (ASU) over the weekend.

"Like any new technology, artificial intelligence holds great promise to help humans shape their future, and it also holds great danger in that it could eventually lead to the rise of machines over humanity, according to some futurists. So which course will it be for AI and what can be done now to help shape its trajectory?" ASU wrote in a press release. 

The Saturday gathering included a panel featuring Eric Horvitz, managing director of Microsoft Research's Redmond lab, Skype co-founder Jaan Tallinn, and ASU physicist Lawrence Krauss. It was partly funded by Tallinn and Tesla's Elon Musk, according to Bloomberg.

The event also featured 'doomsday games,' in which around 40 scientists, cybersecurity experts, and policy specialists were organized into teams of attackers and defenders, the news outlet reported.

Participants were asked to submit entries for possible worst-case scenarios caused by AI. The scenarios had to be realistic, based on current or plausibly near-term technologies, and limited to events that could feasibly happen between five and 25 years in the future.

Scenarios ranged from stock market manipulation to global warfare. Others included technology being used to sway elections, or tricking a self-driving car into reading a "stop" sign as a "yield" sign (a sketch of that style of attack follows below).
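The stop-sign scenario alludes to what machine learning researchers call adversarial examples: small, carefully chosen changes to an input that flip a classifier's output. The article does not say how such an attack would be mounted, but a minimal sketch of one well-known technique, the fast gradient sign method, is shown below; the model, image tensor, and epsilon value are illustrative assumptions, not anything demonstrated at the event.

```python
# Hypothetical illustration of a fast-gradient-sign-method (FGSM) perturbation,
# the kind of technique that can make an image classifier mislabel a road sign.
# `model`, `image`, and `epsilon` are stand-ins, not details from the event.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,       # batched tensor, e.g. shape (1, 3, H, W)
                 true_label: torch.Tensor,  # e.g. the index of the "stop" class
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` nudged to raise the model's classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

In practice, such a perturbation can be small enough that the altered sign still looks normal to a human observer, which is what makes the scenario plausible.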

Those with "winning" doomsday scenarios were asked to help lead panels tasked with devising countermeasures.

Horvitz said it was necessary to "think through possible outcomes in more detail than we have before and think about how we'd deal with them," noting that there are "rough edges and potential downsides" to AI.

While some of the proposed solutions from the 'defenders' team seemed viable, others were apparently lacking, according to John Launchbury, who directs one of the offices at the US Defense Advanced Research Projects Agency (DARPA).

One of the failed responses concerned how to combat a cyber weapon designed to conceal itself and evade all attempts to dismantle it.

Despite the somewhat unnerving content of the event, Krauss said the purpose was "not to generate fear for the future because AI can be a marvelous boon for humankind... but fortune favors the prepared mind, and looking realistically at where AI is now and where it might go is part of this..."

He added that even situations which we may now fear as "cataclysmic" may actually "turn out to be just fine."

Launchbury said he hopes the presence of policy figures among the participants will spur concrete steps such as agreements on rules of engagement for cyberwar, automated weapons, and robotic troops.

The gathering comes just four months after acclaimed physicist Stephen Hawking warned that robots could become the worst thing ever to happen to humanity, stating that they could develop "powerful autonomous weapons" or new methods to "oppress the many."

Hawking, along with Musk and Apple co-founder Steve Wozniak, signed an open letter in 2015 warning that weaponized AI, in particular, would be a huge mistake.

“Artificial Intelligence (AI) technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” they wrote at the time.