🤖🌈 My chat (+transcript) with Nick Bostrom on life in an AI utopia

Released Thursday, 6th June 2024

The media is full of dystopian depictions of artificial intelligence, such as The Terminator and The Matrix, yet few have dared to dream up the image of an AI utopia. Nick Bostrom's most recent book, Deep Utopia: Life and Meaning in a Solved World, attempts to do exactly that. Bostrom explores what it would mean to live in a post-work world, where human labor is vastly outperformed by AI, or even made obsolete. When all of our problems have been solved in an AI utopia . . . well, what's next for us humans?

Bostrom is a philosopher and was founding director of the Future of Humanity Institute at Oxford University. He is currently the founder and director of research at the Macrostrategy Research Initiative. He also wrote the much-discussed 2014 book, Superintelligence: Paths, Dangers, Strategies.

In This Episode

* Our dystopian predisposition (1:29)

* A utopian thought experiment (5:16)

* The plausibility of a solved world (12:53)

* Weighing the risks (20:17)

Below is a lightly edited transcript of our conversation.

Our dystopian predisposition (1:29)

Pethokoukis: The Dutch futurist Frederik Polak famously put it that any culture without a positive vision of the future has no future. It's a light paraphrase. And I kind of think that's where we are right now: Despite the title of your book, I feel like people can only imagine dystopia. Is that what you think? Do I have that wrong?

Bostrom: It's easier to imagine dystopia. I think we are all familiar with a bunch of dystopian works of fiction. The average person could rattle off Brave New World, 1984, The Handmaid's Tale. Most people probably couldn't name a single utopian work, and even the attempts that have been made, if you look closely at them, you probably wouldn't actually want to live there. It is an interesting fact that it seems easier for us to imagine ways in which things could be worse than ways in which things could be better. Maybe some culture that doesn't have a positive vision has no future but, then again, cultures that have had positive visions have also often ended in tears. A lot of the time, utopian blueprints have been used as excuses for coercively imposing some highly destructive vision on society. So you could argue either way whether it is actually beneficial for societies to have a super clear, long-term vision that they are steering towards.

I think if we were to ask people to give a dystopian vision, we would get probably some very picturesque, highly detailed visions from having sort of marinated in science fiction for decades. But then if you asked people about utopia, I wonder if all their visions would be almost alike: Kind of this clean, green world, with maybe some tall skyscrapers or something, and people generally getting along. I think it'd be a fairly bland, unimaginative vision.

That would be the idea of "all happy families are alike, but each unhappy family is unhappy in its own unique way." I think it's easy enough to imagine ways in which the world could be slightly better than it is. So imagine a world exactly like the one we have, except minus childhood leukemia. Everybody would agree that definitely seems better. The problem is, if you start to add these improvements and you stack on enough of them, then eventually you face a much more philosophically challenging proposition, which is: if you remove all the difficulties and all the shadows of human life, all forms of suffering and inconvenience, and all injustice and everything, then you risk ending up in this rather bland future where there is no challenge, no purpose, no meaning for us humans, and it then almost becomes dystopian again, but in a different way. Maybe all our basic needs are catered to, but there seems to be some other part missing that is important for humans to have flourishing lives.

A utopian thought experiment (5:16)

Is your book a forecast or is it a thought experiment?

It's much more a thought experiment. As it happens, I think there is a non-trivial chance we will actually end up in this condition, which I call a "solved world," particularly with the impending transition to the machine intelligence era, which I think will be accompanied by significant risks, including existential risk. My previous book, Superintelligence, which came out in 2014, focused on what could go wrong when we are developing machine superintelligence, but if things go right—and this could unfold within the lifetime of a lot of us who are alive on this planet today—if things go right, they could go very right, and, in particular, all kinds of problems that could be solved with better technology could be solved in this future where you have superintelligent AIs doing the technological development. And we might then actually confront the situation where these questions we can now explore as a thought experiment become pressing practical questions where we would actually have to make decisions about what kinds of lives we want to live, what kind of future we want to create for ourselves, if all these instrumental limitations that currently constrain the choice set we face were removed.

I imagine the book would have seemed almost purely a thought experiment before November 2022, when ChatGPT was rolled out by OpenAI; now, to some people, it seems like these are questions certainly worth pondering. You talked about the impending machine superintelligence—how impending do you think it is, and what is your confidence level? Certainly we have technologists all over the map speaking about the likelihood of reaching that, maybe through large language models; other people think those can't quite get us there. So how much work is "impending" doing in that sentence?

I don't think we are in a position any longer to rule out even extremely short timelines. We can't be super confident that we might not have an intelligence explosion next year. It could take longer, it could take several years, it could take a decade or longer. We have to think in terms of smeared out probability distributions here, but we don't really know what capabilities will be unlocked as you scale up even the current architectures one more order of magnitude like GPT-5-level or GPT-6-level. It might be that, just as the previous steps from GPT-2 to GPT-3 and 3 to 4 sort of unlocked almost qualitatively new capabilities, the same might hold as we keep going up this ladder of just scaling up the current architectures, and so we are now in a condition where it could happen at any time, basically. It doesn't mean it will happen very soon, but we can't be confident that it won't.

I do think it is slightly easier for people now, even just looking at the current AI systems, to take these questions seriously, and I think it will become a lot easier as the penny starts to drop that we're about to see this big transition to the machine intelligence era. The previous book, Superintelligence, back in 2014 when that was published—and it was in the works for six years prior—at that time, what was completely outside the Overton window was even the idea that one day we would have machine superintelligence, and, in particular, the idea that there would then be an alignment problem, a technical difficulty of steering these superintelligent intellects so that they would actually do what we want. It was completely neglected by academia. People thought that's just science fiction or idle futurism. There were maybe a handful of people on the internet who were starting to think about that. In the intervening 10 years, that has changed, and so now all the frontier AI labs have research teams specifically trying to work on scalable methods for AI alignment, and it's much more widely recognized over the last couple of years that this will be a transformative thing. You have statements coming out from leading policymakers, from the White House; the UK had this global summit on AI. And so this alignment problem and the risks related to AI have sort of entered the Overton window, and I think some of these other issues, as to what the world will look like if we succeed, similarly will have to come inside the Overton window, and probably will do so over the next few years.

So we have an Overton window, we have this technological advance with machine intelligence. Are you as confident about one of the other pillars of your thought experiment, which is an equally science-futuristic-seeming advance in our ability to edit ourselves, to modify ourselves and our brains and our emotions? That seems to go hand-in-hand with the thought experiment.

I think once we develop machine superintelligence, then we will soon thereafter have tremendous advances in other technological areas as well because we would then not be restricted to humans trying to develop new technologies with our biological brains. But this research and development would be done by superintelligences on digital timescales rather than biological timescales. So the transition to superintelligence would, I think, mean a kind of telescoping of the future.

So there are all these technologies we can see are, in principle, possible. They don't violate the laws of physics. In the fullness of time, human civilization would probably reach them if we had 10,000 years to work on it: all these science-fiction-like technologies, space colonies, or cures for aging, or perfect virtual reality, or uploading into computers, we could see how we might eventually . . . They're unrealistic given the current state of technology, but there are no in-principle barriers, so we could imagine developing those if we had thousands of years to work on them. But all those technologies might become available quite soon after you have superintelligence doing the research and development. So I think we will then start to approximate the condition of technological maturity, a condition where we have already developed most of those general-purpose technologies that are physically possible, and for which there exists some in-principle feasible pathway from where we are now to developing them.

The plausibility of a solved world (12:53)

I know one criticism of the book is, with this notion of a “solved world” or technological maturity, that the combinatorial nature of ideas would allow for almost an unlimited number of new possibilities, so in no way could we reach maturity or a technologically solved state of things. Is that a valid criticism?

Well, it is a hypothesis you could entertain that there is an infinite number of ever-higher levels of technological capability, such that you'd never be able to reach or even approximate any maximum. I think it's more likely that there will eventually be diminishing returns. You will eventually have figured out the best way to do most of the general things that need doing: communicating information, processing information, processing raw materials, creating various physical structures, et cetera, et cetera. That happens to be my best guess, but in any case, you could bracket that: we could at least establish lower bounds on the kinds of technological capabilities that an advanced civilization with superintelligence would be able to develop, and we can list out a number of those technologies. Maybe it would be able to do more than that, but at least it would be able to do various things that we can already sort of see and outline how to do; it's just that we can't quite put all the pieces together and carry it out yet.

And the book lists a bunch of these affordances that a technologically mature civilization would at least have, even if maybe there would be further things we haven't even dreamt of yet. And already that set of technological capabilities would be enough to radically transform the human condition, and indeed to present us with some of these basic philosophical challenges of how to live well in this world where we wouldn't only have a huge amount of control over external reality, we wouldn't only be able to automate human labor across almost all domains, but we would also, as you alluded to earlier, have unprecedented levels of control over ourselves, our biological organism and our minds, using various forms of biotechnology or newer technologies.

In this kind of scenario, is the purpose of our machines to solve our problems, or, not give us problems, but give us challenges, give us things to do?

It then comes down to questions about value. If we had all of these capabilities to achieve various types of worlds, which one would we actually want? And I think there are layers to this onion, different levels of depth at which one can approach and think about this problem. At the outermost layer you have the idea that, well, we will have increased automation as a result of advances in AI and robotics, and so some humans will become unemployed. At the most superficial layer of analysis, you would then think, "Well, some jobs become unnecessary, so you need to maybe retrain workers to move to other areas where there is continued demand for human labor. Maybe they need some support whilst they're retraining, and stuff like that."

So then you take it a step further, you peel off another layer of the onion, and you realize that, well, if AI truly succeeds, if you have artificial general intelligence, then it's really not just some areas of human economic contribution that get affected, but all areas, with a few exceptions that we can return to. AIs could do everything that we can do, and do it better, and cheaper, and more efficiently. And you could say that the goal of AI is full unemployment. The goal is not just to automate a few particular tasks, but to develop a technology that allows us to automate all tasks. That's kind of what AI has always been about; it hasn't succeeded yet, but that's the goal, and we are seemingly moving closer to that. And so, with the asterisk here that there are a few exceptions that we can zoom in on, you would then get a kind of post-work condition where there would be no need for human labor at all.

My baseline—I think this is a reasonable baseline—is that the history of technology is a history of both automating things and creating new things for us to do. So I think if you ask just about any economist, they will say that that should be our guide for the future: that this exact same technology will think of new things for people to do, that we, at least up to this point, have shown infinite creativity in creating new things to do, and whether or not you want to call those "work," there are certainly things for us to do, so boredom should not be an issue.

So there's a further question of whether there is anything for us to do, but if we just look at the work part first: Are there ways for humans to engage in economically productive labor? So far, what has been the case is that various specific tasks have been automated, and so instead of having people digging ditches using their muscles, we can have bulldozers digging ditches, and you could have one guy driving the bulldozer and doing the work of 50 people with shovels. And so human labor has kind of just moved out of the areas where you can automate it and into other areas where we haven't yet been able to automate it. But if AIs are able to do all the things that we can do, then there would be no further place, it would seem, at least at first sight, for human workers to move into. The exceptions to this, I think, are cases where the consumer cares not just about the product, but about how the product . . .

They want that human element.

You could have consumers with just a raw preference that a particular task was performed by humans, or a particular product made by humans—just as now consumers sometimes pay a little premium if a little gadget was produced by a politically favored group, or maybe handcrafted by indigenous people; we may pay more for it than if the same object was made in a sweatshop in Indonesia or something. Even if the actual physical object itself is equally good in both cases, we might care about the causal process that brought it into existence. So to the extent that consumers have those kinds of preferences, there could remain ineliminable demand for human labor, even at technological maturity. You could think of possible examples: Maybe we just prefer to watch human athletes compete, even if robots could run faster or box harder. Maybe you want a human priest to officiate at your wedding, even if a robot could say the same words with the same intonations and the same gestures, et cetera. So there could be niches of that sort, where there would remain demand for human labor no matter how advanced our technology.

Weighing the risks (20:17)

Let me read one friendly critique of the book from Robin Hanson:

Bostrom asks how creatures very much like him might want to live for eons if they had total peace, vast wealth, and full eternal control of extremely competent AI that could do everything better than they. He . . . tries to list as many sensible possibilities as possible . . .

But I found it . . . hard to be motivated by his key question. In the future of creatures vastly more capable than us I'm far more interested in what those better creatures would do than what a creature like me now might do there. And I find the idea of creatures like me being rich, at peace, and in full control of such a world quite unlikely.

Is the question he would prefer you answer simply unanswerable, so that the only question you can answer is what people like us would be like?

No, I think there are several different questions, each of which, I think, is interesting. In some of my other work, I do, in fact, investigate what other creatures, non-human creatures, digital minds we might be building, for example, AIs of different types, what they might want and how one might think of what would be required for the future to go well for these new types of being that we might be introducing. I think that's an extremely important question as well, particularly from a moral point of view. It might be, in the future, most inhabitants of the future will be digital minds or AIs of different kinds. Some might be at scales far larger than us human beings.

In this book, though, I think the question I'm primarily interested in is: If we are looking at it from our own perspective, what is the best possible future we could hope for, for ourselves, given the values that we actually have? And I think that could be practically relevant in various ways. There could, for example, arise situations where we have to make trade-offs about delaying the transition to AI, with the risk maybe going up or down depending on how long we take over it, and, in the meantime, people like us dying, just as a result of aging and disease and all the other things that currently result in people dying.

So what are the different risk tradeoffs we are willing to take? And that might depend, in part, on how much better we think our lives could be if this goes well. If the best we could hope for was just continuing our current lives for a bit longer, that might be a different choice situation than if there was actually on the table something that would be super desirable from our current point of view, then we might be willing to take bigger risks to our current lives if there was at least some chance of achieving this much better life. And I think those questions, from a prudential point of view, we can only try to answer if we have some conception of how good the potential outcome would be for us. But I agree with him that both of these questions are important.

It also seems to me that, initially, there was a lot of conversation after the rollout of ChatGPT about existential risk, we were talking about an AI pause, and I feel like the pendulum has swung completely to the other side. Whether it's due to people not wanting to miss out on all the good stuff that AI could create, or worrying about Chinese AI beating American AI, the default mode that we're in right now is full speed ahead, and if there are problems we'll just have to fix them on the fly, but we're just not going to have any substantial way to regulate this technology, other than, perhaps, the most superficial of guardrails. I feel like that's where we're at now; at least, that's how it feels in Washington right now.

Yeah, I think that has been the default mode of AI development since its inception, and still is today, predominantly. The difficulties are actually in getting the machines to do more, rather than in limiting what they're allowed to do. That is still the main thrust. I do think, though, that the first derivative of this is towards increased support for various kinds of regulations and restrictions, and even a growing number of people calling for an "AI pause" or wanting to stop AI development altogether. This used to be basically completely fringe . . . there were no real serious efforts to push in this direction for almost all the decades of AI development, up until maybe two years ago or so. Since then there has been an increasingly vocal, still minority, set of people who are trying hard to push for increased regulation, for slowing down, and for raising the alarm about AI developments. And I think it remains an open question how this will unfold over the coming years.

I have a complex view on what would actually be desirable here. On the one hand, I do think there are these significant risks, including existential risks, that will accompany the transition. When we develop superintelligent machines, it's not just one more cool gadget, right? It's the most important thing ever happening in human history, and they will be to us as we are to chimpanzees or something—potentially a very powerful force, and things could go wrong there. So I do agree with the . . .

So I've been told over the past two years!

And to the point where some people think of me as a kind of doomsayer or anti-AI person, but that's not the full picture. I think, ultimately, it would be a catastrophe if superintelligence was never developed, and that we should develop this, ideally carefully. And it might be desirable if, at a critical point, just when we figure out how to make machines superintelligent, whoever is doing this, whether it's some private lab, or some government Manhattan Project, whoever it is, has the ability to go a little bit slow at that point, maybe to pause for six months, or, rather than immediately cranking all the knobs up to 11, maybe do it incrementally, see what happens, make sure the safety mechanisms work. I think that might be more ideal than a situation where you have, say, 15 different labs all racing to get there first, where whoever takes any extra precautions just immediately falls behind and becomes irrelevant. I think that would seem . . .

I feel like where we're at right now—I may have answered this differently 18 months ago—is that second scenario. At least here in the United States, and maybe I'm too Washington-centric, but I feel we're realistically at the "crank it up to 11" phase.

Well, we have seen the first-ever real AI regulations coming on board. It's something rather than nothing, and so you could easily imagine, if pressure continues to build, there will be more demand for this, and then, if you have some actual adverse event, some bad thing happening, then who knows? There are other technologies that have been stymied because of . . . like human cloning, for example, or nuclear energy in many countries. So it's not unprecedented that society could convince itself that a technology is bad. So far, historically, all these technology bans and relinquishments have probably been temporary, because there have been other societies making other choices, and eventually, just as each generation is, to some extent, a new roll of the die, eventually you get . . .

But it might be that we already have, in particular with AI, technologies that, if fully deployed, could allow a society, within a few years, to lock itself into some sort of permanent orthodoxy. Imagine deploying even current AI systems fully to censor dissenting information: if you had some huge stigmatization of AI, where it becomes just taboo to say anything positive about AI, and then very efficient ways of enforcing that orthodoxy by shadow-banning people who dissent from it, or canceling them, or surveilling everybody to make sure nobody does any research on AI, then the technology to sort of freeze in a temporary social consensus might be emerging. And so if, 10 years from now, there were a strong global consensus on some of these issues, then we can't rule out that it would become literally permanent. My optimal level of government oversight and regulation would probably be more than we currently have, but I do worry a little bit that it won't increase to the optimal point and then stop there; once the avalanche starts rolling, it could overshoot the target and become a problem. To be clear, I still think that's unlikely, but I think it's more likely than it was two years ago.

In 2050, do you feel like we'll be on the road to deep utopia or deep dystopia?

I hope the former; I think both are still in the cards, for all we know. There are big forces at play here. We've never had a machine intelligence transition before. We don't have the kind of social or economic predictive science that really allows us to say what will happen to political dynamics as we change these fundamental parameters of the human condition. We don't yet have a fully reliable solution to the problem of scalable alignment. I think we are entering uncharted territory here, and both extremely good and extremely bad outcomes are possible, and we are a bit in the dark as to how all of this will unfold.

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.


