13 Dec, 2019 07:01

We’re living in a computer simulation – philosopher

With technology evolving at cosmic speeds and artificial super-intelligence no longer just a Hollywood dream, is humanity’s path ahead a dangerous one? And will our lives still be real? We talked about this with one of the most acclaimed thinkers of our time, philosopher and director of the Future of Humanity Institute at Oxford University, Nick Bostrom.


Sophie Shevardnadze: Nick Bostrom, it's really great to have you with us. So you're a philosopher, an author who writes about what could happen to us. What are the ideas you put forward? Is it this idea of a vulnerable world? 

Nick Bostrom: Right. Yes. 

SS: So, correct me if I'm wrong, but if I get this correctly, it's basically that humanity may come up with a technology that could drive us to extinction, and therefore we would need computer surveillance. 

NB: Well, that might be an oversimplification. But the vulnerable world hypothesis is the hypothesis that at some level of technological development it becomes too easy to destroy things, so that by default, when civilization reaches that level of development, it gets devastated. There are a couple of different ways in which this could be true. One, maybe the easiest to see, is if at some level of development it just becomes very easy even for a small group or individual to cause mass destruction. So imagine if nuclear weapons, for example, instead of requiring these rare, difficult-to-obtain raw materials like plutonium or highly enriched uranium - imagine if there had been an easy way to make them, like baking sand in the microwave, and you could have unleashed that kind of energy. If that had turned out to be the way things are, then maybe at that point civilization would have come to an end. 

SS: But then with surveillance, from what I understand, you can't really predict the future - nothing can. I mean, you can surveil people and watch what they're doing, but then they will be inventing things under surveillance, and you won't know that something is detrimental until it has gone wrong. The fact of surveillance wouldn't really prevent it. 

NB: So if one thinks that the world at some level of technology is vulnerable in this sense, one then obviously wants to ask, “Well, what could we possibly do in that situation to prevent the world from actually getting destroyed?” And it does look like in certain scenarios, ubiquitous surveillance would be the only thing that could possibly prevent that. Now, would even that work? Well, that depends on the specifics of the scenario. You'd have to think about just how easy it would be to cause destruction. Would you just snap your fingers or say a magic word and the world blows up? Well, then maybe surveillance wouldn't suffice. But suppose it's something that takes several weeks, and you have to build something in your apartment, and maybe it requires some skill. At that point, you could imagine a very fine-grained surveillance infrastructure giving the capability to intercept. Also, how much destruction is created if somebody does this? Does one city blow up, or the whole of the Earth? Maybe you could afford a few slipping through the net. So you'd have to then look at the specifics. Now, of course, surveillance in itself is also a source of risk to human civilization. You could imagine various kinds of totalitarian regimes becoming more effective, more permanent. 

SS: Maybe computer surveillance in itself is a totalitarian regime. 

NB: What do you mean? 

SS: I mean, if all of us are surveilled 24/7, that in essence is a giant computer police state. 

NB: Well, it depends, I think, on what this information would be used for. If it is so that some, say, central authority micromanages what everybody is allowed to do with their lives, then certainly that would be totalitarian to an unprecedented degree. But suppose it was a kind of passive surveillance, and people just went on with their lives, and only if somebody actually tried to create this mass-destruction thing would there be a response. In that scenario, maybe it would not look so totalitarian. 

SS: But is it really realistic? Because as soon as someone is in charge of this total surveillance - even if it's passive, like you're saying, reserved for very specific things like the total destruction of a city or the world - they would for sure take advantage of it. 

NB: It's possible, yes. I mean… 

SS: It is just the way humans are made. 

NB: Yes. Well, I think to varying degrees there are institutional checks and balances in different countries. Right now we have a lot of very powerful tools, and in some places of the world they are used by despots, in other parts of the world by more democratically accountable liberal governments, and everything in between. Certainly it would be the case that if you created this kind of extremely fine-grained surveillance infrastructure, it would create a very substantial danger that either immediately or after some period of time it would be captured by some nefarious group or individual and then used for oppressive purposes. I think that is one major reason why people are rightly, in my view, very suspicious of surveillance technologies and where they might lead. But it could still be the case - because it's not something we get to choose - that the world is so configured that at some level of technology destruction is much easier than creation or defence. And it could just be that in that situation, the only thing that would prevent actual destruction would be very fine-grained surveillance. 

SS: Forgive me for prodding at this a little, just because I've seen with my own eyes, a little bit, what a police state is. It never really works unless it's in sort of a vacuum, and the world is so diverse and we're all so different. And I've seen it with my own eyes: human imperfections and disorganisation somehow always grow through any restrictions or norms, just like grass through pavement, you know? 

NB: Yes. Well, so what is it precisely that you're not convinced about - that there could be some level of technology at which destruction becomes easy, or that some possible surveillance could prevent the world from getting destroyed? 

SS: Yes. I believe that any possible surveillance will still have to interact somehow with humans. That's what is not convincing to me. 

NB: Right. So I think there it becomes a matter of degree: in which set of scenarios would you be able to prevent the world from getting destroyed with surveillance? So take today's world, where massive destruction is possible but also very hard - with nuclear weapons, let us say - so that we have a reasonable ability, even with present-day surveillance technology, to detect if some nation is building a secret programme. If you then roll that back - requiring less of the rare raw materials, smaller installations, fewer people working on it - obviously it gets harder and harder to detect with current technology, right? But this is a very rapidly advancing field. With, say, facial recognition software, you could have cameras that could in principle monitor everybody. And you could imagine even - if you want an extreme case, just to demonstrate the theoretical possibility - that everybody wore a collar all the time with cameras and microphones, so that literally all the time, whenever you were doing something, some AI system could classify what actions you were taking. And then if somebody were detected doing this kind of forbidden action, some alarm could be sounded and some human alerted, or something like that. 

SS: I do have my problem with AI - it is that it is created, in essence, by beings that are flawed, by human beings. So how can it be something better or more perfect than human beings, able to not miss a thing? Because I'm thinking, if flawed beings are creating artificial intelligence, and artificial intelligence is simulating human beings, then it's simulating flawed beings and it's going to miss something. 

NB: Well, I mean, I'm not sure it would have to simulate human beings, but depending on which particular scenario we are looking at, it may or may not be necessary to not miss a single thing. I mean, if you're looking at something like a much worse global warming scenario, it's fine if a few people drive cars even in that world, right? As long as the majority stopped doing it, you wouldn't even need new surveillance technology there - you would just need a carbon tax or something. If you move to the other extreme, where a single individual alone can destroy the whole world, then obviously it would be essential that not a single one slip through. But then it depends on how hard it would be for a single individual. Would they need to do some very distinctive activity, accumulate some special raw materials? Then maybe it would become possible to have the kind of surveillance that could prevent that. Today, obviously, our law enforcement capabilities are very limited, but I do think there are quite rapid advances in using AI to recognize imagery - to recognize faces and then classify actions. And you could imagine that being built up over a period of 10 or 20 years into something quite formidable. 

SS: So wouldn't that be voluntarily submitting the human race to robot rule? That's what I'm asking about, basically. 

NB: I'm not advocating it. I'm just noting that there are certain scenarios, if the world unfortunately turns out to be vulnerable in that way, where it looks like either it will actually get destroyed or people will put in place these surveillance measures. Now, depending on what kind of surveillance technology you have, there might be different ways of configuring that. Maybe it would be almost completely automated, or - certainly in the near term - it would require a lot of human involvement, say, to check things that have been flagged by algorithmic means and then maybe respond. 

SS: So you know a lot about AI, much more than me. Do you think we can program artificial intelligence to be this benevolent Platonic king, this, I don't know, enlightened monarch? Or is anything that has to do with control, or total control, inevitably repressive and bad? 

NB: Well, I mean, I don't think we would know how to do that today. Of course, we can't even build AI that can do all the things that humans can do. But if, say, next year somebody figured out a way to make AI do all the jobs that humans can do - some big breakthrough - I don't think we would at that point know yet how to align it with human values. That is still a technical problem that people have only begun working on in the last few years, with some significant way still to go: finding methods for scalable AI control, so that no matter how smart the AI becomes, it does what we intend, even if it maybe becomes far smarter than we are one day. 

SS: Do you think there’s a possibility? 

NB: That AI becomes smarter than us? I think eventually, yes. And by that time you would want to have the ability to make sure that it still acts in the way you intended, even when it becomes intellectually far superior. So that's a technical problem that needs to be solved with technical means. But even if you solve that, you still have what we could call the political problem, or the governance problem. Solving the technical problem would enable humans to get the AI to do what they want; we would still need to figure out how to ensure that this new powerful technology is used primarily for beneficial purposes, as opposed to waging war or oppressing one another. And that part is not a technical problem - it's a political problem. 

SS: I feel like, judging from the history of humanity... if, as you're saying, there is a possibility that AI can become more intelligent than us - really, in a real way, more intelligent - it's not going to be humans controlling AI and making it do all the things they want. It's the AI controlling the humans and doing with humans what it wants them to do. 

NB: Well, I mean, it could be both. But in the ideal case, with the AI aligned with human values, inasmuch as we would specify what it is that we want to achieve, the AI would help us achieve it. 

SS: But do you think AI could ever simulate real feelings, memories? Do you think it can ever really predict a human brain - something as chaotic as a human brain? Because we don't really know what it is. 

NB: I don't think that would be necessary for alignment - to have a very detailed... I mean, we humans can't do that with one another, and we can still be friends and help other people and so forth. So that doesn't require the ability to create a 100 percent accurate emulation or prediction. 

SS: So you had this other theory, before the vulnerable world, that we might all be living inside some sort of a matrix. 

NB: Oh, yes. 

SS: And then our lives may be a simulation. Is that right? 

NB: Yes, actually, it's something I published back in 2003. It's an argument that tries to show that one of three propositions is true, but it doesn't tell us which one. So Proposition 1, the first alternative, is that all civilizations at our current stage of technological development go extinct before they reach technological maturity. So it could be that they're out there, far away, other civilizations, but they all fail to reach technological maturity. 

SS: Because human nature doesn't change. I mean, technology goes further. But humans use it to destroy the world. 

NB: Yes, that could be the case, and it would have to hold very robustly if so: even if you have thousands of human-like civilizations out there, they would all succumb before they reach technological maturity. So that's one way things could be. The second alternative is that amongst all civilizations that do reach technological maturity, they all lose interest in creating these kinds of what I call ancestor simulations. These would be detailed computer simulations at a fine enough level of granularity that the people in the simulations would be conscious and have experiences like ours. So maybe some civilizations do get there, but they're just completely uninterested in using their resources to create these kinds of simulations. And the third alternative, the only one remaining, I argue, is that we are almost certainly living in a computer simulation right now, built by some advanced civilization. 

SS: Why do you think that's the most probable one? 

NB: The simulation argument doesn't say anything about which of these is true or most likely. It just demonstrates this constraint: if you reject all three of them, you have a kind of probabilistic incoherence. The full argument involves some probability theory and so on, but I think the basic idea can be conveyed relatively intuitively. So, suppose the first alternative is false, so that some non-trivial fraction of civilizations gets through to maturity. Suppose the second alternative is also false, so that some of those who have got through to maturity do use some of their resources to create simulations. 

SS: OK. 

NB: Then you can show that, because each one of those could run a lot of simulations, if some of them get through, there would be many, many more simulated people like us than people like us living in original history. 

SS: You think even like six billion of us? 

NB: Yes. But not just that. You could show that at technological maturity, using just a tiny fraction of, say, one planet's worth of compute resources, even just for one minute, you could run, you know, tens of thousands of simulations of all of human history. So that if the first two... and we could talk more about the evidence about… 

SS: Yes, but I still don’t get how that simulation is possible if we don't understand our brain. 

NB: Well, I mean, obviously, we can't do it yet, right? 

SS: You say that it's really far off..? 

NB: The simulation argument makes no assumption about the timescale - even if it's 20,000 years or 20 million years, it still holds. And because each simulating civilization would be able to run, using a tiny fraction of its resources, hundreds of thousands or millions of runs through all of human history, almost all beings with our kinds of experiences would be simulated ones rather than non-simulated ones, and conditional on that, I argue, we should think we are probably one of the simulated ones. So in other words, if you reject the first two alternatives, it seems you are forced to accept the third one - which shows you can't reject all three. In other words, at least one of them is true. So that's the structure of the simulation argument. 
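[The probabilistic core of this step can be sketched roughly as follows, in a simplified form of the fraction derived in Bostrom's 2003 paper "Are You Living in a Computer Simulation?"; the notation here is illustrative shorthand, not quoted from the interview. Let $f_P$ be the fraction of human-level civilizations that reach technological maturity and go on to run ancestor simulations, and let $\bar{N}$ be the average number of simulated human-history populations each such civilization runs. The fraction of all observers with human-type experiences who are simulated is then approximately

\[
f_{\text{sim}} \;\approx\; \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}.
\]

Rejecting the first two alternatives amounts to asserting $f_P \, \bar{N} \gg 1$; for instance, $f_P = 0.01$ and $\bar{N} = 10^6$ give $f_{\text{sim}} \approx 10^4 / (10^4 + 1) \approx 0.9999$, which is the third alternative.]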

SS: OK, so you've answered my first question about how anything could simulate a human brain, because you're saying there's no time span. So I get that. Two questions: if we're living in a simulation, why would the future us even make one? Just to find out? 

NB: I mean, there are many possible reasons you could imagine. You could imagine scientific exploration - wanting to know counterfactuals in history, what would have happened if things had gone differently. That could be both theoretically interesting and maybe useful for trying to understand other extraterrestrial civilizations you might encounter. You could imagine entertainment reasons: we humans do our best, with novels that bring you into a world, with theatre plays, movies, computer games - in many cases making them as realistic as we can. Of course, we can't make them perfectly realistic now, but if you had that ability, maybe you would make them perfectly realistic. So that would be another example. Or maybe some kind of historical tourism: if you can't actually time travel, maybe you could build an exact simulation of the past and then interact with that, and it would be as if you had travelled to the past and could experience what it was like. And other reasons as well… We don't necessarily know very much about what would motivate or drive some kind of technologically mature posthuman civilization, why it would want to do different things with its resources. 

SS: And then I guess the core question is: even if we're living in a simulation, does it really matter to us - me and you and everyone around us? I mean, Buddhists say the whole world is an illusion. So what? Does this cancel out the things that we live through, good or bad, like love and feelings and problems? It doesn't really change a thing, right? 

NB: I think, to a first approximation, if you became convinced you are living in a simulation, you should probably go on as if you were not living in a simulation for most everyday things - if you want to get into your car, you still have to take out the car key and open the door, etc. So I think that's true. But I think there might be some respects in which new possibilities would exist if you are in a simulation that wouldn't really exist if you're not. For example, we think the universe can't just suddenly pop out of existence, right? With the conservation of energy and momentum and so forth... Whereas, of course, if you're in a simulation and somebody pulls the plug on the simulation, the whole thing ceases to exist. So the possibility of… 

SS: Big Bang backwards.  

NB: Yes, the world ending without... I'm not saying it's likely or not over some timescale, but at least it seems like a possibility. Other things as well. You could imagine something like an afterlife - that is clearly possible in a simulation: you could just rerun the same person in another simulation, and so forth. Or various interventions by the simulators. In some ways, actually, it's a set of possibilities structurally similar to what theologians have been thinking about in terms of a supernatural relationship to a creator, and so forth. Analogues of that arise within this simulation theory. Although I don't think there is any necessary logical connection one way or the other, it's still intriguing that you get this parallel set of possibilities in some respects - not exactly the same, but in some ways similar. 

SS: OK. Now I understand the whole theory. I wasn't really putting two and two together, because I was thinking in terms of now, so it wasn't making sense. It does make sense now. And it's still all related somehow to artificial intelligence, because what would be simulating us would be some sort of AI, right? 

NB: Yes.  

SS: Yes. So a lot of scenarios today are linked to this doomsday when artificial intelligence takes over. Or the contrary: a lot of people are saying that artificial intelligence is actually the solution to a lot of our problems, like hunger and inequality and global warming. Where are you at? 

NB: I'm sorry, but I think both are possible outcomes. People sometimes ask me whether I'm an optimist or a pessimist, and I have started to refer to myself as a frightened optimist. I do think that the AI transition is not something we should avoid. I see it more as a kind of gate through which we need to pass: all the plausible paths to a really grand future for humanity go through this gate and at some point involve the development of greater-than-human machine intelligence. 

SS: Sort of like a purgatory before you go… 

NB: No, not like purgatory, but a necessary transition, which, however, will be associated with significant risk, including existential risk - threats that we permanently destroy ourselves, or what we care about, in making this transition. Unless we destroy ourselves in some other way before, I think this transition will happen, and our focus should be on getting our act together as much as possible in whatever time frame we have remaining - some decades, or whatever it is. Try to do the research to figure out scalable methods for AI control, to the extent we can; try to get the global order into whatever reasonable shape we can; foster more collaboration, in particular within the AI community, to develop a common set of norms and the idea that it should be for the common good of all. And then making sure we don't destroy ourselves before we even get a chance with AI would be good as well. And yes, trying to grow up and become wiser in whatever intervening number of years we have. 

SS: Thank you so much for this wonderful insight. It's been a pleasure talking to you. 

NB: Pleasure talking to you.
