24 Jul, 2015 08:00

Robots will create art & may even fall in love - AI engineer

The quest to create AI (Artificial Intelligence) is on. Two decades ago it all sounded like a far-fetched plot from a science-fiction novel. Today, our best minds are working on creating a truly sentient digital brain. What has so far remained in the realm of fiction may soon become reality. But how can a machine develop intelligence, and could robots defeat and overcome the human race? Or will we be able to coexist? And finally, what will happen to society as we know it when robots become consciously aware of themselves? We pose these intriguing questions to a prominent robotics engineer – Professor Hod Lipson is on Sophie&Co today.


Sophie Shevardnadze: Professor Hod Lipson, robotics engineer, welcome to the show, it’s great to have you with us. So, here’s the first question: first Stephen Hawking, the world-famous physicist, then Elon Musk, and Bill Gates – all these brilliant, amazing people are warning about the dangers of artificial intelligence, going as far as saying it could even end the human race. So, why should we be sure robots wouldn’t want to destroy mankind?

Hod Lipson: I think AI – Artificial Intelligence – has been making a lot of progress in the last couple of years; even in the last few months it has been making leaps and strides. It’s a very powerful technology, and that risk is definitely there, but I think there are a couple of possible misconceptions. One is that it probably won’t be robotic AI drones, robots made of titanium shooting people in the street. It’s a more subtle thing: AI and robots will start taking jobs, doing things better than humans, and that is a different sort of threat to humanity – a much more subtle one, something we can prepare for in different ways. So that’s one aspect I want to put into perspective; sometimes people get the wrong idea through Hollywood depictions of AI taking over. It’s going to be very, very different. The real danger of AI, I think, in the shorter term, is not what AI will do to people, but what people will do to people, using AI. That, I think, is something we need to think about.

SS: We’re going to talk about that in detail in just a little bit. But here’s a question for you – if you look at it, isn’t the world too small for two intelligent species? I mean, we’re overpopulated as it is…

HL: I think it’s not going to play out that way. For a species to evolve you need a completely independent process; you have to have a situation where this robotic species evolves independently – but that’s not going to happen. The human race and AI coexist and will coexist for a long time. I’m not sure how that’s going to play out, but there’s a lot we can do in between – it’s not “going out of control” like biology or something like that. It’s much more intertwined; we have a lot more control over how it will unfold. Having said that, it’s a good time to start thinking about these topics, to try preparing in advance, being aware of the risks and doing something about them.

SS: Elon Musk has already started preparing for the risk, and he’s investing $10 million into AI safety. So, would you say it is money well spent? And also, can you give me a precise scenario where AI is dangerous?

HL: One aspect of AI safety is developing tools and AI techniques that will allow us to prepare software that’s more reliable, so that when we deploy AI-based systems – for example, a system that drives a car or flies a plane – we can verify and be sure that it does what it needs to do and doesn’t do things it’s not supposed to do. Those, I think, are the sort of AI safety measures people are talking about. It’s important to understand that we’re not talking about humans supervising AI – we’re talking about AI software supervising AI, software monitoring software, making sure everything works reliably. The bigger danger, I think, is much more subtle, and that’s the situation where AI begins to take jobs that humans used to do. It’s already happening in small ways, but it may happen in much larger ways. I’m not talking about the next year; I’m talking about, maybe, the next 100 years – and 100 years might sound like a long time, but some of our children and grandchildren will be alive in 100 years. It’s a very short time on the scale of human evolution. In 100 years, AI and robots may do most things better than most people – not just working in factories and driving cars, but also writing poetry and raising children. When robots and AI can do those sorts of things, some social structures will begin to unravel, along with what it means to be human, and that’s something we need to prepare for.

SS: We’re going to go into detail about how human robots can get. You work on enabling robots to self-replicate, build themselves from scratch… So, how does a machine with no knowledge and understanding actually do that?

HL: Yes, we’ve been working on many aspects of robotics that emulate biology: self-replication, self-awareness, learning and so forth. So, how do machines know how to do those kinds of things without being programmed? The answer is simple: by learning. Just like a child learns – its brain is a product of evolution, and that brain learns from its experiences – robots can do the same.

SS: So, you’re saying robots are like animals and even humans, and they will undergo an evolution?

HL: At least, that’s what we’re trying to do. In our lab we are simulating evolution and, in a sense, breeding robots so that they get better at what they do. There’s also the machine-learning community, which trains robots to learn over their lifetime. They are like a child: they begin by not knowing very much, and they have experiences, they see things, they sense, they draw conclusions, and they get better and better at what they do over time.
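For readers curious what “simulating evolution” looks like in practice, here is a minimal sketch of an evolutionary algorithm of the general kind Lipson describes – not his lab’s actual system. The genome layout, fitness function and parameters are invented for illustration; a real setup would score each genome in a physics simulation of a walking robot.

```python
import random

GENOME_LEN = 8      # hypothetical: eight controller parameters per robot
POP_SIZE = 20
GENERATIONS = 50

def random_genome():
    """A robot 'design' is just a vector of numbers."""
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Stand-in for a physics simulation that would measure how far a
    robot with these parameters walks; here we simply reward genomes
    close to an arbitrary target vector."""
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    """Offspring are noisy copies of their parents."""
    return [g + random.gauss(0, rate) for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Keep the fitter half, then refill the population with
    # mutated offspring of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print("best fitness after breeding:", round(fitness(best), 4))
```

Nothing in the loop “knows” what a good robot looks like; selection and mutation alone push the population toward better designs, which is the sense in which such robots are bred rather than programmed.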

SS: So, you’ve brought up that robots could actually take on things that humans do; if robots are one day able to do mathematical or logical work for us, and be better than us at all of that – would that enable them to create artwork?

HL: Absolutely. As these machines learn, as robots become better at doing various things, some of the things they are beginning to tackle are areas that we once thought were immune to robots and AI: curiosity, scientific inquiry, coming up with ideas and hypotheses, creativity – and even art. We have a robot that creates paintings – it looks at an image and paints in oil on canvas, and at the very least it can paint a lot better than I can. So, again, I am not saying that robots will do everything better than everybody, but they will do many things better than many people, and that’s already pretty disruptive.

SS: But what can your robots do? The ones that you work on?

HL: Our robots, right now, are very simple; they are learning to do very simple things – from self-replication to recognizing people, to classifying images based on perception. We have robots that can create models of themselves and models of other robots – very simple things, by no means even close to human intelligence or even animal intelligence – but we are developing these techniques, and it’s not just our group; there are many roboticists working on these sorts of AI challenges. What a lot of people are focusing on is not a particular task but developing the underlying technologies that allow robots to learn – and when robots learn on their own, they can learn things beyond what their programmer or developer knew to begin with – and that’s the exciting part.
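As a flavour of that “learning on their own” idea, here is a minimal perceptron sketch: a classifier that learns a rule from labelled examples rather than having the rule written in by hand. The toy feature values and labels below are made up for illustration; real image classifiers apply the same principle at enormously larger scale.

```python
# Each example: (feature vector, label). In a real system the features
# would come from camera images; label 1 = "person", 0 = "not a person".
data = [
    ([0.9, 0.2], 1), ([0.8, 0.3], 1),
    ([0.1, 0.7], 0), ([0.2, 0.9], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for features, label in data:
        # Predict, then nudge the weights toward whatever would have
        # made the prediction correct (the classic perceptron rule).
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        predicted = 1 if activation > 0 else 0
        error = label - predicted
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error

print("learned weights:", weights, "bias:", bias)
```

The decision rule that emerges is never written by the programmer – it comes from the data, which is the miniature version of “learning things beyond what the developer knew to begin with”.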

SS: You’re basically describing robots that are going to have their own personalities at some point, right? So, here’s a question – can we fall in love with robots? Or can a robot fall in love with a human being?

HL: That’s a wonderful question. Again, we’re not talking about next year or next decade – but if I have to project where robots will be in 100 years or so from now, I think the answer is “yes”. Humans can fall in love with all kinds of things – certainly with animals, with teddy bears – so it doesn’t take a lot to engage people’s emotions… inanimate objects can elicit emotions from people relatively easily. It’s enough to put two googly eyes onto something, and already people have emotions towards it. But the question of whether robots can have deep emotions is very interesting. Certainly, simple emotions like fear or self-preservation are possible in more technical ways. Things like love are much more complex – we don’t even know what love is in humans, so it is very difficult to define for machines, for sure.

SS: You’ve done work on a “robot scientist”. Will humanity’s progress in the future be enabled by robots coming up with new technologies?

HL: Yes, that’s one of the exciting things about AI in general – it accelerates its own discovery. It’s not the sort of technology that automates something once and then stays static – it accelerates its own development. We developed software, which we call “Eureqa” – spelled with a Q – that looks for scientific truths in large amounts of data, a sort of scientific data-mining tool, if you like. With that tool you can find interesting new truths, new scientific laws hidden in data – so we call it a “robotic scientist”. Of course, it doesn’t replace a scientist, but it greatly accelerates scientific discovery, because it is like a microscope: it allows you to look into a big dataset and find minute effects that you wouldn’t see with the naked eye. That’s an example of how AI works together with scientists to accelerate scientific discovery, and I think in the long term it’s going to be self-accelerating, if you like.
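To make the “robotic scientist” idea concrete, here is a toy sketch of symbolic regression, the technique behind Eureqa: search a space of candidate formulas and keep whichever best explains the data. Eureqa itself evolves full expression trees; this brute-force version over a fixed menu of formulas, with synthetic data, only illustrates the principle.

```python
import math
import random

# "Experimental" data secretly generated from y = 3x^2 plus noise.
xs = [x / 10 for x in range(1, 50)]
ys = [3 * x ** 2 + random.gauss(0, 0.05) for x in xs]

# A fixed menu of candidate laws, each with one free coefficient a.
candidates = {
    "y = a*x":       lambda x, a: a * x,
    "y = a*x^2":     lambda x, a: a * x ** 2,
    "y = a*sin(x)":  lambda x, a: a * math.sin(x),
    "y = a*sqrt(x)": lambda x, a: a * math.sqrt(x),
}

def sum_sq_error(f, a):
    """How badly formula f with coefficient a misses the data."""
    return sum((f(x, a) - y) ** 2 for x, y in zip(xs, ys))

best_name, best_err = None, float("inf")
for name, f in candidates.items():
    # Crude grid search over the coefficient a for this formula.
    a = min((i / 10 for i in range(1, 100)),
            key=lambda a: sum_sq_error(f, a))
    err = sum_sq_error(f, a)
    if err < best_err:
        best_name, best_err = f"{name}  (a = {a:.1f})", err

print("law recovered from the data:", best_name)
```

The program is told nothing about the true law; it recovers “y = a*x^2 with a = 3.0” purely because that candidate fits the measurements best – a microscope for structure in data, as Lipson puts it.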

SS: So, what is your wildest dream about the potential that intelligent robots could unlock for us?

HL: When we work on robotic technology, one of my aspirations is, basically, to create a robot that has human-level intelligence – consciousness, if you like, self-awareness – these things that we think are very, very human. To be able to create that in a machine is a way to understand what it means to be human, and that’s my goal. There’s a long road to get there, but that’s my dream.

SS: But what about colonizing space? I mean, robots don’t need fresh water and air to survive, have you ever thought of that?

HL: Why, there are numerous… when people study robotics, I think there are two motivations. One is to understand biology and understand what it means to be human in a deep way; the other is more practical – to get machines to do things for us, be it working in factories or colonizing other planets. I think there are numerous applications for that. That’s definitely one exciting application, for sure.

SS: So you see a prospect of robots colonizing space?

HL: I think, for space exploration… it was very difficult to send people to the Moon, it’s even more difficult to send people to Mars, and I doubt we would be able to send people any further than that – at the very least, it would be an incredibly difficult undertaking. So if we want to understand what’s going on on other planets, I think there’s no escape from sending machines. These kinds of robots are not necessarily humanoid robots – we’re not sending human-shaped machines to other planets; these are more technical probes that are going to do all kinds of measurements. So it’s a different kind of robot than most people imagine.

SS: That sounds less romantic right away when you put it that way. But tell me something: how do you control a thinking machine? How do you make it do what you want if it is smart enough to figure things out on its own?

HL: That is the big question, and I think the answer is – you can’t. You can control it to some extent, but you cannot completely control it. So, whenever I see a Hollywood movie that shows the Three Laws of Robotics or some variation of them, I think it’s a very naïve idea, rooted in the ’50s, when people actually programmed, or thought about programming, robots. But the way robots will evolve is through machine learning, and when learning is involved, you don’t exactly know what the robot knows and what it doesn’t know. It’s a little bit like raising a child – you can expose a child to various experiences, you can shape their experiences in different ways, but you never know exactly what they know and what they don’t know, what they have learned and what they haven’t. That’s the way robotics is going to play out. On one hand it’s exciting, because the robot can learn more than you know, it can be smarter than you; but on the other hand, you’ll never know exactly what it has learned, so there’s a little bit of a loss of control – and that’s a trade-off we have to make and have to be comfortable with, if we’re going to have smart robots around.

SS: But also, there’s a moral question there: like you say, if you create a robot, it’s like you’re raising a child, so you feel like you’re entitled to that robot, and that robot should be doing whatever you tell him, or her, or it, to do. So, when we talk about machines and AI – is it even moral to keep one as a slave?

HL: The ethics of robotics is a very, very new area, and you know – I don’t have the answer. But the analogy to a child, I think, is a good one, because you have to understand that during the learning period, yes, you do control it to some extent, but later on – you don’t. And the robot, like a child, will keep on learning on its own, and if it was raised in a good way, then I think things will go well; but we will definitely have to master that aspect, and that’s exactly what people are doing now: trying to figure out how to teach robots how to learn. We need to figure that out. It’s not simple, but if we do it well, I think it will work.

SS: Right, but if things are decided by machines at some point, and they go wrong – who takes responsibility? Can you sue Siri, for instance? 

HL: That challenge already exists today. If you apply for credit and the bank denies it, it’s probably some kind of AI program that determined you don’t get the credit – and that has an effect on your life. So, AIs already have an effect on your life. Now, who takes responsibility for that kind of decision? What if that decision is wrong? That’s already happening, and it’s already difficult to decipher… If you call up your bank, the bank says the software determined you’re not eligible; it’s not clear what to do. So, that’s a good question, but there’s no good answer right now, and I think that’s one of the things people mean when they talk about AI safety. We want to be sure that if and when we use AI to make increasingly important decisions that affect people’s lives, we have AI that is robust, that is accurate, that makes the right decisions in new situations. That’s difficult to do, but that’s what we need to have.

SS: And here’s when it gets really dangerous – not in 100 years, but right now: the U.S. Navy is developing semi-automated drone boats that can swarm enemy targets. What happens if they lose control? Can they evolve as well?

HL: I don’t know the details of that particular project, but my understanding of the current state of AI is that it can’t really go out of control. These things always have an off-switch; they have lots of safeguards and can be switched off in an emergency, so I don’t think anything like that will go out of control in the near future.

SS: But just the thought of robots being designed to kill humans automatically – because it’s part of the same story, just the other side of it. South Korea, for example, already has turrets like that. Doesn’t that scare you?

HL: That’s definitely the bad side of automation, for sure. My only hope in that area is that eventually we’ll get to a point where these sorts of killing machines kill each other, so we’ll have drones fighting drones and machines shooting machines – and in a way that’s utopian: we’ll have no people on the ground, it will be just robots fighting robots, all one big videogame.

SS: A recent study from the University of Oxford says a third of UK jobs could be replaced by machines over the next two decades – not a hundred years, as you’ve brought up, but two decades. So, if most work is done automatically, we will end up jobless – millions of people will end up jobless, right? With nothing to do?

HL: That’s right. As I said earlier, that is the big threat of AI; it’s not machines killing people, it’s jobs being taken. I think we will see that happening – we already see it happening in many ways; a lot of jobs in manufacturing have been lost, and many of them because of automation. So that’s already happening, and it could very well be 30% of jobs disappearing in the next two decades and many more over the next 100 years. What we need to do to prepare for that is probably to rethink how we structure society. That’s a big question, but it’s a good thing to start thinking about now, because a lot of how we distribute wealth, how we distribute power, what people do during the day, how they derive meaning – it all comes from their work. When work is gone, productivity will still exist – there will still be food and money – but we will have to think of different ways to distribute them once jobs go away. Maybe it’s going to be education, maybe it’s going to be sport – some other things that people can do that are not related to jobs.

SS: But let me talk to you about education, and not only that: skills and talents. From what you’re describing, this process of outsourcing everything to robots is inevitable. But won’t we lose an enormous amount of skills? People won’t be able to play the piano; in Finland, for instance, kids aren’t taught handwriting anymore, they use keyboards instead. If everyone rides around in automated cars, most people won’t even learn how to drive. I mean, there are people who love to make shoes – they won’t be driven to that anymore. So, what happens if everything is outsourced to robots? We will lose talents, skills, motivation? Doesn’t that scream degradation of the human race to you?

HL: I think that is a very real danger, and I don’t want to understate it; but there are some things we can do, maybe, to counter it to some extent. When people have more free time, we might engage in new kinds of art, new kinds of sport, we might create new entertainment. These are things that we need to do. If you think about the last 100 years, a lot of jobs have gone; humans certainly have more free time than they used to, and we managed to fill that free time with new kinds of activities, from art to sport to entertainment to videogames – you name it. So I think the same trend will continue. We will have to find new things to do, new things to be good at, and new things to keep us motivated.

SS: So you’re saying we’ve got to find new activities to do, so that robots don’t take over from us?

HL: We will have to find new activities to do so that we will keep ourselves busy, motivated and entertained.

SS: Professor, thank you very much for this interesting interview. We were talking to Professor Hod Lipson, robotics engineer, who has succeeded in creating self-aware robots; we were talking about the dangers and blessings of artificial intelligence. That’s it for this edition of Sophie&Co, I will see you next time.
