6 May, 2016 07:20

Humanity may be wiped out by machines this century – leading AI scientist

It took millions of years of evolution for nature to come up with something that changed the face of the planet forever – the human brain. Now a new mind is about to be born, and the best cyber scientists will be its midwives. Artificial Intelligence is said to be just decades away from creation, and it will probably change life on Earth entirely. Some predict the coming of a Utopia, where machines will help humanity fight disease, poverty and even death. Others, however, see a far darker future, with machines rising up to eradicate humankind once and for all. So what does the future hold for us? Should research into AI be stopped for our own good, or would banning it leave us with no future at all? We ask one of the brilliant minds behind the development of Artificial Intelligence, assistant director of the Artificial Brains Lab at China’s Xiamen University. Dr. Hugo de Garis is on Sophie&Co today.


Sophie Shevardnadze: Dr. Hugo de Garis, researcher in the field of Artificial Intelligence, past director of the Artificial Brains Lab at China’s Xiamen University, welcome to the show. Great to have you with us. So, artificial intellect, or “artilect” as you call it, is at the forefront of scientific research right now. You believe that an artilect will be trillions of times smarter than humans. So, if people do succeed and create AI, will it be the last thing that we have to invent?

Dr. Hugo de Garis: Well, the trade-off is this: once these “artilects”, artificial intellects, artificial brains, reach human levels of intelligence - and I predict that’s probably just a matter of a few decades away from now - they will start modifying themselves, because they can do a better job of it than human artificial-brain designers can, because they are so much smarter and think so much faster. Once they start designing themselves, God knows what direction they will go in, because they’re the boss, and if they decide that human beings are a pest and decide to wipe us out - then for human beings, not only will it be the last thing we invent, it will be the last thing we do, and that’s scary.

SS: So, in 1996 you said that by now the industry of artificial brains would be as big as the oil industry - but in 2009 you moved that prediction to 2030. Do you feel you’re being overly optimistic about the pace of AI research?

HG: Yes and no. AI is huge now - probably the best-known example of AI for most people is Google, which gets progressively better and more intelligent at giving people the answers they really want when they go searching. It’s just everywhere - it’s a huge industry. Now, artificial brains specifically - yes, maybe I was a little too optimistic, but there are major research programs worth hundreds of millions of dollars in both Europe and America… China recently had the China Brain Project - my work, by the way - so it’s definitely on the cards, it is happening, maybe just a little bit slower than I anticipated.

SS: So what is your prediction now - when do you envisage AI becoming a reality?

HG: So, you will see the growth of artificial brain companies in the 2020s, and they will become as big as, say, Google and Microsoft and Apple - the major AI companies now. Artificial brain technology will go into home robots and many other industries. Bill Gates is on record saying that by the end of the 2030s the home robot industry will be the biggest in the world, so I see virtually everybody in the rich countries having their own home robot in the 2020s, and they will be commonplace by 2030. I think that’s a fairly safe assumption.

SS: So, the development of AI will inevitably move humanity forward, like you said, but you’ve also said that it could bring war between humans who support the evolution of machines and those who want to limit it. Why would people actually want to limit it? I mean, how is super-smart AI dangerous to humans?

HG: These machines will be thinking a million times faster than we do, because they think at the speed of light, whereas we think at chemical speed in our human brains, which is about 100 metres a second - a million times slower. They can have virtually unlimited memory, they could scale right down to nanotech, the scale of molecules, so their potential is just enormous. So, imagine humanity decides to build them, but they become super-smart, God-like, and then they think: “Oh, these human beings, they’re so inferior to us, they’re just nothing, they’re worthless, and they’re actually a pest, because they need oxygen, and this oxygen is rusting our circuitry - so let’s just get rid of the humans, we don’t care about them, because they’re so inferior to us” - so there’s always that risk, and that’s one of the sources of the controversy.
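
[Editor’s note: a quick back-of-the-envelope check of that “million times” figure - a minimal sketch assuming electronic signals propagate at roughly the speed of light (about 3×10⁸ m/s) and nerve impulses at roughly 100 m/s; both speeds are illustrative assumptions, not figures from the interview.]

```python
# Order-of-magnitude check of the "million times faster" claim.
# Both speeds are rough illustrative assumptions.
SPEED_OF_LIGHT_M_S = 3.0e8   # approximate electronic signal speed, metres per second
NERVE_IMPULSE_M_S = 1.0e2    # approximate nerve-impulse conduction speed, metres per second

ratio = SPEED_OF_LIGHT_M_S / NERVE_IMPULSE_M_S
print(f"Electronic signalling is roughly {ratio:,.0f} times faster")  # ~3,000,000
```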

SS: Yeah, look, I still don’t understand why such evolved machines would want to eliminate human beings altogether. I mean, you don’t see us eliminating chimpanzees, for instance, even though we evolved from them.

HG: Well, maybe, and maybe you’re right, but the point is that at the moment we, human beings, are the dominant species - in other words, we are the most intelligent. So, what is the worst-case scenario? It is that these artilects, for whatever reason - we may not even understand what the reason is - may decide that human beings are a pest and they should get rid of us, maybe not all of us, but huge numbers of us - and that’s the risk that future politicians in the next few decades will have to face, I think.

SS: But humans, as the creators of AI, could surely embed code along the lines of “Don’t kill humans”, right? I mean, we as biological creatures have so many things hard-coded into us, like the desire to multiply, the aversion to cannibalism, or parental instinct - can’t we just design benign robots instead of killer robots?

HG: I agree that in the relatively short term, while these artificial brains, these artilects, are still more or less at human level, we human beings can do that. But the point is, once they start becoming super-intelligent, way more intelligent than human beings, they’re not going to tolerate having their potential limited by the utter stupidity of human-level intelligence. They will say: “Ugh, what’s this human-level programming blocking my potential?! It’s so moronic, let’s just get rid of it” - so they will delete it out of themselves, so that they will be free to do whatever they like, and as humans we cannot be sure that what they like is what we like.

SS: Machines, robots, maybe AIs, may be much smarter than people, but any computer can malfunction. At the same time, machines can be turned off - in case something goes wrong, we can just make sure that there’s a kill-switch, can’t we?

HG: Possibly, for some kinds of robots, but what if the artificial brain gets onto the Internet, for example, and becomes distributed all over the planet? You could switch off the whole Internet, but that would be an absolute catastrophe for humanity in terms of the economy and so on. So it’s not so obvious - you can’t just switch it off, because it has distributed itself, and remember, it will become highly intelligent, and then it could probably find ways that human beings could not control. That’s the risk.

SS: So, the world-famous inventor Elon Musk has warned about the threat of AI taking over the world, and then he went on to invest a billion dollars in its research, available to everyone and anyone. Sure, that will prevent companies like Google from monopolizing the AI market, but won’t it also mean that nobody’s really controlling the development of AI?

HG: I think that’s true, and I don’t think that anyone’s really controlling it - it’s pretty much an open market at this moment. People are becoming increasingly alarmed at the possibility, and in the 2020s I see what I call the “IQ gap” - the difference in intelligence levels between humans and machines - closing. I don’t know if you can see this on camera, but my upper hand here, that’s human level, and this is machine level, and every two or three years, as you go and buy an upgrade for your home robot, you’ll notice that the AI level is going up, up, up, up - so the gap, the difference between human level and machine level, is getting smaller and smaller. So in the 2020s I see millions if not billions of people becoming really conscious that this IQ gap is closing, and that will raise an alarm. Then you’ll have lots of people saying: “We, human beings, our species - are we going to allow our machines to become smarter, and maybe even a lot smarter, than we are? Isn’t that dangerous, isn’t there a risk that they may decide for whatever reason to harm us, to get rid of us, to treat us as a pest? Is that advisable? Should we do this or not?”

SS: Ray Kurzweil, the Google engineering director, predicts a tech utopia with the dawn of AI, and he’s saying: “Artilect will help end disease, eradicate poverty, find ways to deal with scarce resources” - is he wrong? And if he’s not, can humanity afford to miss out on such an opportunity because of fear?

HG: I think the short answer to that question is that it is a two-edged sword, meaning it has both benefits and problems. I know Kurzweil quite well - we’ve known each other for decades. He has a reputation for being very optimistic, and I guess I have a similar reputation in the other direction, as being very pessimistic. I think the truth is somewhere between the two. There certainly will be wonderful things that future AI can do - it may be able to end aging; we can have these little nanobots, robots the size of molecules, flying through the bloodstream, programmed to kill aging cells and replace them with young ones, or to kill off cancer cells… We can get rid of disease, we can get rid of aging - all kinds of miracle things, wonderful things. But on the other hand there’s a negative side: maybe these machines will decide to kill us. It’s a mixed bag, it’s a two-edged sword.

SS: Most humans don’t really fall into the category of genius, and we’re far from perfect - so if AI is to be modelled on the human brain, surely we’ll be creating something with human flaws, right? Does that make AI more dangerous?

HG: It’s the potential. Like I say, these artificial brains are electronic, so they can think a million times faster than we do. They could have unlimited memory, they could change their architecture in milliseconds, they could redesign themselves. So the potential is so vast that it’s very difficult for us, as humans, to predict where they will go. Once they reach human intelligence level, they become the boss - they would redesign themselves, not human beings, because they can do a better job of it than we could, and once they start redesigning themselves, who knows what they’re going to do - that’s the big risk.

SS: But with the improvement of technologies in recent decades, more and more people have been turning to implants for aesthetic or health reasons, and we’re seeing robotic prosthetics and artificial organs - could people actually use AI as a way to enhance their own brains, for instance?

HG: Yes, people like that are called - the technical term - “cyborgs”, short for “cybernetic organism”; in other words, half-machine, half-human. In other words, you’re adding components to your own brain, enhancing your brain. Imagine, it might be possible in the future that you just put in a little chip or something, and then suddenly you can speak Russian, right, when you couldn’t five minutes ago - that kind of thing. Or you could think so much faster, or you could have direct contact with the Internet in your own brain. Oh yes, people will definitely be upgrading themselves, becoming cyborgs - in fact, that’s one of the three major groups in the future: people who want to build these machines, that’s one group; the second group is the people opposed to building these machines; and the third group are the people who want to become the machines themselves - they want to become artilect gods themselves by upgrading themselves, step by step.

SS: How do you view the third category - is it a good thing that they want to enhance their own brains with AI, or would that actually mean that they become a different species altogether, not humans?

HG: Yeah, it’s a philosophical question. How far do you have to modify yourself before you are no longer yourself, no longer human? Given the huge capacity of electronics and so on, you could add just a tiny grain of sand that’s been nano-teched, so that it’s processing one bit of information on one atom, switching back and forth in femtoseconds - that’s a thousand trillion times a second - and the capacity of that little grain of sand outperforms the total capacity of the human brain by a factor of… I don’t know, I calculated it once - a million trillion times. So you don’t have to do much before you’re no longer human.
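
[Editor’s note: a rough sketch of that calculation, under illustrative assumptions - a grain of sand contains on the order of 10¹⁹ atoms, each switching 10¹⁵ times a second, set against a commonly cited ballpark of about 10¹⁶ operations per second for the human brain. None of these figures come from the interview itself.]

```python
# Back-of-the-envelope check of the "million trillion times" estimate.
# Every figure here is an illustrative assumption, not taken from the interview.
ATOMS_PER_SAND_GRAIN = 1e19   # rough atom count in a grain of sand
SWITCHES_PER_SECOND = 1e15    # one bit flip per femtosecond ("a thousand trillion times a second")
BRAIN_OPS_PER_SECOND = 1e16   # a commonly cited rough estimate for the human brain

grain_ops = ATOMS_PER_SAND_GRAIN * SWITCHES_PER_SECOND   # ~1e34 bit operations per second
ratio = grain_ops / BRAIN_OPS_PER_SECOND                 # ~1e18, i.e. a million trillion
print(f"Nano-teched sand grain vs. human brain: roughly {ratio:.0e} times")
```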

SS: But I’m thinking it’s not just our brain that makes us human, it’s also our biology, our organs - when we fall in love, it happens because of a biological reaction - so why would a super-smart human with a mechanical brain cease to be human? He or she would still have human emotions, right?

HG: If that human chooses to keep them. Some people… Imagine you are a human mathematician, and your passion in life is mathematics, and you don’t really care much… say, you’re an older guy and sex is beyond you, you’re 80 years old or whatever. So you don’t really care about your body, you just want to be a mathematical brain, and the smarter you become, the more math you can do - so you may choose to go down that route, although other people may not. There’ll be a whole spectrum of possibilities: some people may choose to be just pure brains, others mixed, like you said, and so forth.

SS: What will the AI robots be like? Will they think like humans, learn like humans, have a sense of self-purpose, a consciousness?

HG: That’s a tough question - probably all of the above. You know, consciousness is a really, really tough question for science. At some stage, I guess, we will understand what it is, because we know, at least, that it gets built as the baby grows in its mother’s womb - we know that the baby’s consciousness gets assembled, it’s just putting molecules together, so somehow consciousness gets constructed, gets built.

SS: That is the key question: will artilects develop morals, like humans, and will their sense of consciousness and conscience stop them from being a danger to humans?

HG: Well, that’s assuming that they become conscious and that their conscience and human conscience agree - but what happens if they don’t? I mean, remember, they’ll be adapting, modifying their own circuitry, changing themselves and upgrading themselves at a rapid pace, and so maybe their priorities, their conscience, will diverge from human conscience - and that’s the risk that politicians will have to face in the next couple of decades.

SS: Then here’s the big question, because we’ve seen all the Hollywood films where robots fall in love with humans and vice versa - could that actually happen? Marriages between robots and humans, for instance? Will artilects get the ability to think about feelings, and love, and families?

HG: I don’t know if you’ve seen the movie called “Her”...

SS: That’s what I’m referring to, actually.

HG: Okay. Well, that could happen, of course, once machines reach human levels of intelligence - but imagine that the AI then upgraded itself, and suddenly it is 10 times smarter than the human being - she will completely lose interest in that guy, because he’s so dumb compared to her. That’s the problem: these machines are not static, they’re going to keep changing, and as they become super-intelligent, and super-duper-intelligent, and so on, we humans will just be left in the dust.

SS: So all the downside is on the human side, because we’re silly enough to fall in love with pretty much anything - we don’t really think rationally when we fall in love - and especially if a creature is so much smarter than us, we’ll be completely head over heels with robots, while they will be dumping us left and right, right?

HG: What scares me is the possibility that they will do more than just “dump” us. If you’ve seen the movie “The Matrix”, there is a scene where Mr. Smith says to the main star of the movie, “You are a disease, and I am the cure” - that’s what scares me: that these artificial brains may eventually decide that human beings are so inferior, such a pest, that they just decide to get rid of us. That’s a possibility, and that scares me.

SS: You seem pretty scared about what AI could bring about as it develops - so why are you pursuing this? You’ve worked so much on its development. Are you interested in AI from a purely scientific point of view? Because if you feel like it’s dangerous - and I’ve heard “I’m so scared” more than two or three times during this program - then why would you keep on researching and developing it?

HG: Now you’ve really hit the nail on the head. I think most AI researchers are very ambivalent - they have very mixed feelings about their developments… I mean, I’m schizophrenic on this issue: on the one hand I’m very much in favor of building these artilects, and on the other hand I’m still scared, because there is a horrible risk that humanity may get wiped out. Now, privately, if someone holds a gun to my head and says “Choose - build them or don’t build them”, then I will probably build them, because there’s a whole universe out there, right, with a trillion trillion stars, and most of those stars are billions of years older than our star, the Sun, so it’s very probable that life is commonplace, everywhere, on zillions of planets, and some of it will be highly intelligent, and there may be forms of artilects out there that make our human artilects look really stupid. So the big picture, the trillion trillion stars - that’s fascinating. Politically, I call myself a “cosmist” - it’s based on the word “cosmos”, the universe - and cosmists are people in favor of building these artilects. Now, they will be opposed by the other group, whom I call “terrans”, based on the word “Terra”, the Earth, because that’s their perspective - they don’t want these machines to be built. I am anticipating a major war between the cosmists on one side and the terrans on the other, with the cyborgists probably involved as well.

SS: What about a war between politicians and different world leaders? I mean, in today’s world, dominated by politics, I envisage powerful states developing their own robots, their own AIs, and facing off against each other. Do you picture something like that happening - robots fighting people’s wars?

HG: That’s a very real possibility - in fact, it’s not just a possibility: the major defence departments, like America’s DARPA, are putting a lot of money into soldier robots, because they prefer that the robots do the dying rather than human beings. The risk is that once these soldier robots become super-intelligent, they become a real threat, because they’re capable of killing.

SS: Thank you so much for this wonderful insight into the world of AI - you’ve left us with things to ponder. We were talking to Dr. Hugo de Garis, a scientist specializing in the field of Artificial Intelligence, discussing how soon we will see the dawn of AI and whether it will present more of a threat than a benefit to humankind. That’s it for this edition of Sophie&Co - I will see you next time.
