The Diary of a CEO: Mustafa Suleyman, DeepMind Co-Founder [Summary + Transcript]
The Diary of a CEO podcast host Steven Bartlett engages in a thought-provoking conversation with Google DeepMind co-founder Mustafa Suleyman. They delve into the complexities of artificial intelligence, ethical considerations, incentives, and the future implications of AI technology.
Here's a quick summary of their conversation:
AI is becoming more dangerous: Mustafa Suleyman, Google DeepMind Co-Founder | Summary by Fireflies.ai
Outline
- Chapter 1: Initial Questions and Introduction (00:02 - 00:57)
- Chapter 2: The Importance of Subscribing (01:33 - 02:00)
- Chapter 3: Early Prototype of Image Recognition (07:13 - 09:56)
- Chapter 4: Predicting the Future of AI (09:07 - 10:59)
- Chapter 5: The Coming Wave (13:27 - 16:20)
- Chapter 6: The Containment Problem (16:20 - 24:44)
- Chapter 7: The Power of AI and its Potential Implications (32:52 - 41:49)
- Chapter 8: The Challenge of AI Containment (41:50 - 49:56)
- Chapter 9: Trusting in a World of AI (57:24 - 1:02:28)
- Chapter 10: Containment Must Be Possible (1:04:41 - 1:15:19)
- Chapter 11: Unstoppable Incentives (1:20:38 - 1:24:55)
- Chapter 12: The Dangers of Technology Production (1:27:20 - 1:30:09)
- Chapter 13: Concluding Remarks and Final Insights (1:33:54 - 1:44:48)
Notes
- The potential of AI to enhance knowledge and understanding.
- The challenge of balancing AI advancements with ethical considerations.
- Steven Bartlett asking viewers to support the channel by subscribing.
- Mustafa Suleyman's emphasis on the potential of AI models to understand and generate new examples of complex data.
- Discussion on the potential of AI to expand to images, videos, and audio.
- The introduction of PI AI (personal intelligence) as a tool for individual knowledge expansion.
- Mustafa Suleyman's emphasis on the potential of AI to feel like a knowledgeable human conversation partner.
- The public conversation around AI and the questions it raises.
- Steven Bartlett's discussion on the potential implications of AI.
- Suggestions for viewers to be more skeptical and engage with AI models, understand their limitations and potentials.
- The idea of containment as a solution to AI's potential risks.
- Mustafa Suleyman's emphasis on understanding the current state of AI regulation and engaging with groups involved in AI ethics.
- Mustafa Suleyman's encouragement for individuals to take control of their own destiny using AI tools.
- Steven Bartlett's appreciation of Mustafa Suleyman's balanced presentation of AI's benefits and potential risks.
Want to know the full conversation? Read the time-stamped transcript:
AI is becoming more dangerous: Mustafa Suleyman, Google DeepMind Co-Founder | Transcript by Fireflies.ai
00:00
Steven Bartlett
Are you uncomfortable talking about this?
00:02
Mustafa Suleyman
Yeah. I mean, it's pretty wild, right? Mustafa Suleyman, the billionaire founder of Google's AI technology.
00:09
Steven Bartlett
He's played a key role in the development of AI from its first critical step in 2020.
00:14
Mustafa Suleyman
I moved to work on Google's chatbot. It was the ultimate technology. We can use them to turbocharge our knowledge unlike anything else.
00:21
Steven Bartlett
Why didn't they release it?
00:23
Mustafa Suleyman
We were nervous. We were nervous. Every organization is going to race to get their hands on intelligence, and that's going to be incredibly disruptive. This technology can be used to identify cancerous tumors as it can to identify a target on the battlefield. A tiny group of people who wish to cause harm are going to have access to tools that can instantly destabilize our world. That's the challenge: how to stop something that can cause harm or potentially kill. That's where we need containment.
00:54
Steven Bartlett
Do you think that it is containable?
00:56
Mustafa Suleyman
It has to be possible.
00:57
Steven Bartlett
Why?
00:57
Mustafa Suleyman
It must be possible.
00:58
Steven Bartlett
Why must it be?
00:59
Mustafa Suleyman
Because otherwise it contains us.
01:01
Steven Bartlett
Yet you chose to build a company in this space. Why did you do that?
01:05
Mustafa Suleyman
Because I want to design an AI that's on your side. I honestly think that if we succeed, everything is a lot cheaper. It's going to power new forms of transportation, reduce the cost of health care.
01:17
Steven Bartlett
But what if we fail?
01:19
Mustafa Suleyman
The really painful answer to that question is that...
01:21
Steven Bartlett
Do you ever get sad about it?
01:26
Mustafa Suleyman
Yeah, it's intense.
01:30
Steven Bartlett
I think this is fascinating. I looked at the back end of our YouTube channel, and it says that since this channel started, 69.9% of you that watch it frequently haven't yet hit the subscribe button. So I have a favor to ask you. If you've ever watched this channel and enjoyed the content, if you're enjoying this episode right now, please could I ask a small favor? Please hit the subscribe button. It helps this channel more than I can explain. And I promise, if you do that, to return the favor, we will make this show better and better and better. That's the promise I'm willing to make you if you hit the subscribe button. Do we have a deal? Everything that's going on with artificial intelligence now, and this new wave, and all these terms like AGI, and I saw another term in your book called ACI.
02:19
Steven Bartlett
It's the first time I'd heard that term. How do you feel about it emotionally? If you had to encapsulate how you feel emotionally about what's going on in this moment, what words would you use?
02:29
Mustafa Suleyman
I would say in the past, it would have been petrified. And I think that over time, as you really think through the consequences and the pros and cons and the trajectory that we're on, you adapt, and you understand that actually, there is something incredibly inevitable about this trajectory, and that we have to wrap our arms around it and guide it and control it as a collective species, as humanity. And I think the more you realize how much influence we collectively can have over this outcome, the more empowering it is. Because on the face of it, this is really going to be the tool that helps us tackle all the challenges that we're facing as a species.
03:25
Steven Bartlett
Right?
03:25
Mustafa Suleyman
We need to fix water desalination. We need to grow food 100x cheaper than we currently do. We need renewable energy to be ubiquitous, everywhere in our lives. We need to adapt to climate change. Everywhere you look in the next 50 years, we have to do more with less. And there are very few proposals, let alone practical solutions, for how we get there. Training machines to help us as aides, scientific research partners, inventors, creators, is absolutely essential. And so the upside is phenomenal. It's enormous. But AI isn't just a thing. It's not an inevitable whole. Its form isn't inevitable, right?
04:18
Mustafa Suleyman
Its form, the exact way that it manifests and appears in our everyday lives, and the way that it's governed and who it's owned by and how it's trained, that is a question that is up to us collectively as a species, to figure out over the next decade. Because if we don't embrace that challenge, then it happens to us. And that's really what I have been wrestling with for 15 years of my career, is how to intervene in a way that this really does benefit everybody. And those benefits far outweigh the potential risks.
04:54
Steven Bartlett
At what stage were you petrified?
04:58
Mustafa Suleyman
So I founded DeepMind in 2010. And over the course of the first few years, our progress was fairly modest. But quite quickly, in sort of 2013, as the deep learning revolution began to take off, I could see glimmers of very early versions of AIs learning to do really clever things. So, for example, one of our big initial achievements was to teach an AI to play the Atari games. So, remember Space Invaders and Pong, where you bat a ball from left to right? And we trained this initial AI to purely look at the raw pixels, screen by screen, flickering or moving in front of the AI, and then control the actions: up, down, left, right, shoot, or not.
05:53
Mustafa Suleyman
And it got so good at learning to play this simple game, simply through attaching a value between the reward it was getting, like a score, and taking an action, that it learned some really clever strategies to play the game really well, strategies that we games players and humans hadn't really even noticed, or at least people in the office hadn't noticed. Some professionals did. And that was amazing to me, because I was like, wow. This simple system that learns through a set of stimuli plus a reward to take some actions can actually discover many strategies, clever tricks, to play the game well that hadn't occurred to us humans, right? And that, to me, is both thrilling, because it presents the opportunity to invent new knowledge and advance our civilization. But of course, in the same measure, it's also petrifying.
07:00
Steven Bartlett
Was there a particular moment when you were at DeepMind where you had that kind of eureka moment, like a day when something happened and it caused that epiphany, I guess?
07:13
Mustafa Suleyman
Yeah. It was actually a moment even before 2013, where I remember standing in the office and watching a very early prototype of one of these image recognition and image generation models that was trained to generate new handwritten black-and-white digits. So imagine the digits zero through nine, all in different styles of handwriting, on a tiny grid of like 300 pixels by 300 pixels in black and white. And we were trying to train the AI to generate a new version of one of those digits, a number seven in a new handwriting. It sounds so simplistic today, given the incredible photorealistic images that are being generated, right? And I just remember so clearly, it took sort of ten or 15 seconds, and it just resolved. The number appeared.
08:11
Mustafa Suleyman
It went from complete black to, like, slowly gray, and then suddenly these white pixels appeared out of the black darkness, and it revealed a number seven. And that sounds so simplistic in hindsight, but it was amazing. I was like, wow, the model kind of understands the representation of a seven well enough to generate a new example of a number seven, an image of a number seven. And you roll forward ten years, and our predictions were correct. In fact, it was quite predictable in hindsight, the trajectory that we were on: more compute plus vast amounts of data has enabled us, within a decade, to go from predicting black-and-white digits, generating new versions of those images, to now generating unbelievable photorealistic, not just images, but videos, novel videos, with a simple natural language instruction or a prompt.
09:16
Steven Bartlett
What has surprised you? You referred to that as predictable, but what has surprised you about what's happened over the last decade?
09:23
Mustafa Suleyman
So I think what was predictable to me back then was the generation of images and of audio, because the structure of an image is locally contained. So pixels that are near one another create straight lines and edges and corners, and then eventually they create eyebrows and noses and eyes and faces and entire scenes. And I could just intuitively, in a very simplistic way, get my head around the fact that, okay, well, we're predicting these number sevens. You can imagine how you then can expand that out to entire images, maybe even to videos, maybe to audio, too. What I said a couple of seconds ago is connected in phoneme space, in the spectrogram. But what was much more surprising to me was that those same methods for generation applied in the space of language.
10:20
Mustafa Suleyman
Language seems like such a different, abstract space of ideas. When I say, like, "the cat sat on the...", most people would probably predict "mat", right? But it could be table, car, chair, tree, it could be mountain, cloud. I mean, there's a gazillion possible next-word predictions. And so the space is so much larger, the ideas are so much more abstract. I just couldn't wrap my intuition around the idea that we would be able to create the incredible large language models that you see today.
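Suleyman's "the cat sat on the..." example is, at its core, what a language model does: score every candidate next word and favor the likeliest one. A minimal toy sketch (the candidate words and probabilities below are invented for illustration; a real model assigns a probability to every word in its vocabulary):

```python
# Toy next-word prediction over a handful of candidates.
# These probabilities are made up for illustration; a real language
# model scores its entire vocabulary at every step.
context = "the cat sat on the"
next_word_probs = {
    "mat": 0.62,    # the completion most people would guess
    "chair": 0.11,
    "table": 0.09,
    "car": 0.04,
    "tree": 0.03,
}

# Greedy decoding: take the single most probable continuation.
prediction = max(next_word_probs, key=next_word_probs.get)
print(context, prediction)  # the cat sat on the mat
```

The point of the example survives the toy: the space of possible continuations is enormous, and text generation is just this next-word prediction repeated.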
10:59
Steven Bartlett
Your ChatGPTs.
11:02
Mustafa Suleyman
ChatGPT, Google's Bard, and Inflection, my new company, which has an AI called Pi, which stands for personal intelligence. And it's as good as ChatGPT, but much more emotional and empathetic and kind. So it's just super surprising to me that just growing the size of these large language models, as we have done by ten x every single year for the last ten years, we've been able to produce this. And that's just an amazingly large number, if you just kind of pause for a moment to grapple with the numbers here. In 2013, when we trained the Atari AI that I mentioned to you at DeepMind, that used two petaflops of computation. Peta, P-E-T-A, stands for a million billion, and a flop is a calculation. So two million billion calculations.
12:02
Steven Bartlett
Right.
12:02
Mustafa Suleyman
Which is already an insane number of calculations.
12:05
Steven Bartlett
Lost me at two.
12:06
Mustafa Suleyman
It's totally crazy. Yeah. Just two of these units that are already really large. And every year since then, we've ten x'd the number of calculations that can be done, such that today, the biggest language model that we train at Inflection uses 10 billion petaflops. So 10 billion million billion calculations. I mean, it's just an unfathomably large number. And what we've really observed is that scaling these models by ten x every single year produces this magical experience of talking to an AI that feels like you're talking to a human that is super knowledgeable and super smart.
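The compute figures quoted above can be sanity-checked with back-of-the-envelope arithmetic, taking Suleyman's numbers at face value (about 2 petaflops of total compute for the 2013 Atari work, and roughly 10x more per year for a decade):

```python
# Back-of-the-envelope check of the scaling described above.
# "Petaflop" is used here as in the conversation: a million billion
# (1e15) calculations of total training compute.
atari_2013 = 2        # ~2 petaflops for the 2013 Atari agent
growth_per_year = 10  # roughly 10x more compute each year
years = 10

largest_model = atari_2013 * growth_per_year ** years
print(largest_model)  # 20000000000 petaflops, i.e. tens of billions
```

Ten years of 10x growth multiplies the starting figure by 10^10, landing in the same order of magnitude as the "10 billion petaflops" Suleyman cites.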
12:50
Steven Bartlett
There's so much that's happened in public conversation around AI, and there are so many questions that I have. I've been speaking to a few people about artificial intelligence to try and understand it. And I think where I am right now is I feel quite scared. But when I get scared, it's not the type of scared that makes me anxious. It's not like an emotional scared. It's a very logical scared. It's that my very logical brain hasn't been able to figure out how the inevitable outcome that I've arrived at, which is that humans become the less dominant species on this planet, is to be avoided in any way. The first chapter of your book, The Coming Wave, is titled, appropriately to how I feel, "Containment Is Not Possible."
13:37
Steven Bartlett
You say in that chapter: "The widespread emotional reaction I was observing is something I've come to call the pessimism aversion trap."
13:45
Mustafa Suleyman
Correct.
13:46
Steven Bartlett
What is the pessimism aversion trap?
13:49
Mustafa Suleyman
Well, so all of us, me included, feel what you just described when you first get to grips with the idea of this new coming wave. It's scary, it's petrifying, it's threatening. Is it going to take my job? Is my daughter or son going to fall in love with it? What does this mean? What does it mean to be human in a world where there are these other human-like things that aren't human? How do I make sense of that? It's super scary. And for a lot of people over the last few years (things have changed in the last six months, I have to say), the default reaction has been to avoid the pessimism and the fear. Right.
14:36
Mustafa Suleyman
To just kind of recoil from it and pretend that it's, like, either not happening or that it's all going to work out to be rosy, it's going to be fine, we don't have to worry about it. People often say, well, we've always created new jobs. We've never permanently displaced jobs. We've only ever seen new jobs be created. Unemployment is at an all-time low.
14:55
Steven Bartlett
Right.
14:55
Mustafa Suleyman
So there's this default optimism bias that we have, and I think it's less about a need for optimism and more about a fear of pessimism. And so that trap, particularly in elite circles, means that often we aren't having the tough conversations that we need to have in order to respond to the coming wave.
15:18
Steven Bartlett
Are you scared, in part, about having those tough conversations because of how it might be received?
15:27
Mustafa Suleyman
Not so much anymore. So I've spent most of my career trying to put those tough questions on the policy table, right? I've been raising these questions, the ethics of AI, safety, and questions of containment, for as long as I can remember, with governments and civil society and all the rest of it. And so I've become used to talking about that. And I think it's essential that we have the honest conversation, because we can't let it happen to us. We have to openly talk about it.
16:05
Steven Bartlett
This is a big question, but as you sit here now, do you think that it is containable? Because I can't see how. I can't see how it can be contained. Chapter three is "The Containment Problem," where you give the example of how technologies are often invented for good reasons and for certain use cases, like the hammer, which is used maybe to build something but can also be used to kill people. And you say in history we haven't really been able to ban a technology, ever. It has always found a way into society, because other societies have an incentive to have it even if we don't, and then we need it, like the nuclear bomb, because if they have it and we don't, then we're at a disadvantage. So are you optimistic?
17:00
Mustafa Suleyman
Honestly, I don't think an optimism or a pessimism frame is the right one, because both are equally biased in ways that I think distract us. As I say in the book, on the face of it, it does look like containment isn't possible. We haven't contained or permanently banned a technology of this type in the past. There are some that we have banned, right? So we banned CFCs, for example, because they were producing a hole in the ozone layer. We've banned certain weapons, chemical and biological weapons, for example, or blinding lasers. Believe it or not, there are such things as lasers that will instantly blind you. So we have stepped back from the frontier in some cases, but that's largely where there are either cheaper or equally effective alternatives that are quickly adopted. In this case, these technologies are omni-use.
18:01
Mustafa Suleyman
So the same core technology can be used to identify cancerous tumors in chest X-rays as it can to identify a target on the battlefield for an aerial strike. So that mixed use, or omni-use, is going to drive the proliferation, because there are huge commercial incentives, because it's going to deliver a huge benefit and do a lot of good. And that's the challenge that we have to figure out: how to stop something which on the face of it is so good, but at the same time can be used in really bad ways, too.
18:37
Steven Bartlett
Do you think we will?
18:38
Mustafa Suleyman
I do think we will. So I think that nation states remain the backbone of our civilization. We have chosen to concentrate power in a single authority, the nation state, and we pay our taxes, and we've given the nation state a monopoly over the use of violence. And now the nation state is going to have to update itself quickly to be able to contain this technology, because without that kind of oversight, essentially, both of those of us who are making it, but also, crucially, of the open source, it will proliferate and it will spread. But regulation is still a real tool, and we can use it, and we must.
19:30
Steven Bartlett
What does the world look like in, let's say, 30 years if that doesn't happen, in your view? Because the average person can't really get their head around artificial intelligence. When they think of it, they think of, like, these large language models that you can chat to and ask about your homework. That's the average person's understanding of artificial intelligence, because that's all they've ever been exposed to of it. You have a different view because of the work you've spent the last decade doing. So, to try and give Dave, who's, I don't know, an Uber driver in Birmingham, who's listening to this right now, an idea of what artificial intelligence is and its potential capabilities: if there's no containment, what does the world look like in 30 years?
20:19
Mustafa Suleyman
So I think it's going to feel largely like another human. So think about the things that you can do. Not again in the physical world, but in the digital world.
20:32
Steven Bartlett
2050, I'm thinking of. I'm in 2050.
20:36
Mustafa Suleyman
2050, we will have robots. 2050 we will definitely have robots. I mean, more than that. 2050, we will have new biological beings as well, because the same trajectory that we've been on with hardware and software is also going to apply to the platform of biology.
20:55
Steven Bartlett
Are you uncomfortable talking about this?
20:57
Mustafa Suleyman
Yeah, I mean, it's pretty wild, right?
20:59
Steven Bartlett
I noticed you crossed your arms. I always use that as a cue for when a subject matter is uncomfortable for someone. And it's interesting, because I know you know so much more than me about this, and I know you've spent way more hours thinking off into the future about the consequences of this. I mean, you've written a book about it, so, bloody hell. Like, you spent ten years at DeepMind, one of the pinnacle companies, the pioneers in this whole space. So you know some stuff. And it's funny, because I watched an interview with Elon Musk, and he was asked a question similar to this.
21:33
Steven Bartlett
I know he speaks in a certain tone of voice, but he said that he's gotten to the point where he thinks he's living in suspended disbelief, where he thinks that if he spent too long thinking about it, he wouldn't understand the purpose of what he's doing right now. And he says that it's more dangerous than nuclear weapons and that it's too late to stop it. This one interview, that's chilling. And I was filming Dragons' Den the other day, and I showed the Dragons the clip. I was like, look what Elon Musk said when he was asked about what advice he should give to his children in an inevitable world of artificial intelligence. It's the first time I've seen Elon Musk stop for, like, 20 seconds and not know what to say.
22:10
Steven Bartlett
Stumble, stumble, stumble, and then conclude that he's living in suspended disbelief.
22:18
Mustafa Suleyman
Yeah, I mean, I think it's a great phrase. That is the moment we're in. As I said to you about the pessimism aversion trap, we have to confront the probability of seriously dark outcomes, and we have to spend time really thinking about those consequences, because the competitive nature of companies and of nation states is going to mean that every organization is going to race to get their hands on intelligence. Intelligence is going to be a new form of capital, right? Just as there was a grab for land or there's a grab for oil, there's a grab for anything that enables you to do more with less. Faster, better, smarter.
23:03
Steven Bartlett
Right?
23:04
Mustafa Suleyman
And we can clearly see the predictable trajectory of the exponential improvements in these technologies. And so we should expect that wherever there is power, there's now a new tool to amplify that power, accelerate that power, turbocharge it. Right? And in 2050, if you asked me to look out there, of course, it makes me grimace. That's why I was like, oh, my God, it really does feel like a new species, and that has to be brought under control. We cannot allow ourselves to be dislodged from our position as the dominant species on this planet. We cannot allow that.
23:52
Steven Bartlett
You mentioned robots, so these are sort of adjacent technologies that are rising with artificial intelligence. Robots. And you mentioned new biological beings. Shed some light on what you mean by that.
24:08
Mustafa Suleyman
Well, so far, the dream of robotics hasn't really come to fruition.
24:14
Steven Bartlett
Right.
24:15
Mustafa Suleyman
I mean, the most we have now are sort of drones and a little bit of self-driving cars. But that is broadly on the same trajectory as these other technologies. And I think that over the next 30 years, we are going to have humanoid robotics; we're going to have physical tools within our everyday system that we can rely on, that will be pretty good, pretty good at doing many of the physical tasks. That's a little bit further out, because I think there are a lot of tough problems there, but it's still coming, in the same way. And likewise with biology. We can now sequence a genome for a millionth of the cost of the first genome, which took place in 2000, so 20-ish years ago; the cost has come down by a million times.
25:10
Mustafa Suleyman
And we can now increasingly synthesize, that is, create or manufacture, new bits of DNA, which obviously give rise to life in every possible form. And we're starting to engineer that DNA to either remove traits or capabilities that we don't like, or indeed to add new things that we want it to do. We want fruit to last longer, or we want synthetic meat to have higher protein levels, et cetera.
25:45
Steven Bartlett
And what's the implications of that? Potential implications?
25:52
Mustafa Suleyman
I think that the darkest scenario there is that people will experiment with pathogens, engineered synthetic pathogens, that might end up accidentally or intentionally being more transmissible, i.e., they can spread faster, or more lethal, i.e., they cause more harm or potentially kill.
26:17
Steven Bartlett
Like a pandemic.
26:18
Mustafa Suleyman
Like a pandemic. And that's where we need containment, right? We have to limit access to the tools and the know how to carry out that kind of experimentation. So one framework of thinking about this with respect to making containment possible is that we really are experimenting with dangerous materials. And anthrax is not something that can be bought over the Internet, that can be freely experimented with. And likewise, the very best of these tools, in a few years time, are going to be capable of creating new synthetic pandemic pathogens. And so we have to restrict access to those things. That means restricting access to the compute, it means restricting access to the software that runs the models, to the cloud environments that provide APIs, provide you access to experiment with those things.
27:20
Mustafa Suleyman
And of course, on the biology side, it means restricting access to some of the substances. And people aren't going to like this. People are not going to like that claim, because it means that those who want to do good with those tools, those who want to create a startup, the small guy, the little developer that struggles to comply with all the regulations, they're going to be pissed off, understandably, right? But that is the age we're in. Deal with it. We have to confront that reality. That means that we have to approach this with the precautionary principle, right? Never before in the invention of a technology, or in the creation of a regulation, have we proactively said we need to go slowly, we need to make sure that this first does no harm. The precautionary principle. And that is just an unprecedented moment.
28:13
Mustafa Suleyman
No other technology has done that, right? Because I think we collectively in the industry, those of us who are closest to the work, can see a place in five years or ten years where it could get out of control, and we have to get on top of it now. And it's better to forego, that is, to give up, some of those potential upsides or benefits until we can be more sure that it can be contained, that it can be controlled, that it always serves our collective interests.
28:43
Steven Bartlett
And I think about that. So I think about what you've just said there about being able to create these pathogens, these diseases, viruses, et cetera, that could become weapons or whatever else. But with artificial intelligence and the power of that intelligence combined with these pathogens, you could theoretically ask one of these systems to create a very deadly virus, one that has certain properties, maybe even that mutates over time in a certain way, so it only kills a certain number of people, kind of like a nuclear bomb of viruses that you could just hit an enemy with. Now, if I hear that and I go, okay, that's powerful, I would like one of those.
29:30
Steven Bartlett
There might be an adversary out there that goes, I would like one of those, just in case America gets out of hand, and America's thinking, I want one of those in case Russia gets out of hand. And so, okay, you might take a precautionary approach in the United States, but that's only going to put you on the back foot when China or Russia or one of your adversaries accelerates forward on that path. And it's the same with the nuclear bomb.
29:56
Mustafa Suleyman
You nailed it. I mean, that is the race condition. We refer to that as the race condition: the idea that if I don't do it, the other party is going to do it, and therefore I must do it. But the problem with that is that it creates a self-fulfilling prophecy. So the default there is that we all end up doing it, and that can't be right, because there is an opportunity for massive cooperation here. There's a shared interest between us and China and every other, quote unquote, "them" or "they" or "enemy." We've all got a shared interest in advancing the collective health and well-being of humans and humanity.
30:44
Steven Bartlett
How well have we done at promoting shared interests in the development of technologies over the years, even at, like, a corporate level?
30:53
Mustafa Suleyman
The nuclear non-proliferation treaty has been reasonably successful. There are only nine nuclear states in the world today. We've stopped many. Like, three countries actually gave up nuclear weapons because we incentivized them with sanctions and threats and economic rewards. Small groups have tried to get access to nuclear weapons and so far have largely failed.
31:15
Steven Bartlett
It's expensive, though, right, and hard to handle, like uranium as a chemical, to keep it stable and to buy it and to house it. I mean, I couldn't just put it in the shed.
31:23
Mustafa Suleyman
You certainly couldn't put it in a shed. You can't download uranium-235 off the Internet. It's not available open source. That is totally true. So it's got different characteristics, for sure.
31:33
Steven Bartlett
But a kid in Russia could, in his bedroom, download something onto his computer that's incredibly harmful in the artificial intelligence department. Right.
31:45
Mustafa Suleyman
I think that will be possible at some point in the next five years. It's true, because there's a weird trend that's going on here. On the one hand, you've got the cutting-edge AI models that are built by Google and OpenAI and my company, Inflection, and they cost hundreds of millions of dollars, and there's only a few of them. But on the other hand, what was cutting edge a few years ago is open source today. So GPT-3, which came out in the summer of 2020, is now reproduced as an open source model. So the code and the weights of the model, the design of the model and the actual implementation code, are completely freely available on the web. And it's tiny.
32:33
Mustafa Suleyman
It's like 60 or 70 times smaller than the original model, which means that it's cheaper to use and cheaper to run. And that's, as we've said earlier, the natural trajectory of technologies that become useful: they get more efficient, they get cheaper, and they spread further. And so that's the containment challenge. That's really the essence of what I'm trying to raise in my book: to frame the challenge of the next 30 to 50 years as around containment and around confronting proliferation.
33:09
Steven Bartlett
Do you believe, because we're both going to be alive unless some robot kills us, but we're both going to be alive in 30 years time, I hope so. Maybe the podcast will still be going. Unless AI is now taking my job. It's very possible.
33:25
Mustafa Suleyman
So I'm gonna.
33:25
Steven Bartlett
I'm gonna sit you here, and, you know, you'll be, what, 68 years old? I'll be 60. And I'll say, at that point, when we have that conversation, do you think we will have been successful in containment on a global level?
33:45
Mustafa Suleyman
I think we have to be. I can't even think that we're not.
33:49
Steven Bartlett
Why?
33:59
Mustafa Suleyman
Because I'm fundamentally a humanist, and I think that we have to make a choice to put our species first. And I think that's what we have to be defending for the next 50 years. That's what we have to defend. Because, look, it's certainly possible that we invent these AGIs in such a way that they are always going to be provably subservient to humans and take instructions from their human controller every single time. But enough of us think that we can't be sure about that, so I don't think we should take the gamble, basically.
34:49
Mustafa Suleyman
So that's why I think that we should focus on containment and non-proliferation. Because some people, if they do have access to the technology, will want to take those risks; they will just want to see what's on the other side, and they might end up opening Pandora's box. And that's a decision that affects all of us. And that's the challenge of the networked age. We live in this globalized world, and we use these words like globalization, and you sort of forget what globalization means. This is what globalization is. This is what a networked world is. It means that someone taking one small action can suddenly spread everywhere, instantly, regardless.
35:30
Steven Bartlett
Of their intentions, when they took the action.
35:32
Mustafa Suleyman
It may be unintentional, like you say. It may be that they weren't ever meaning to do harm.
35:42
Steven Bartlett
Well, I think I asked you, when I said 30 years' time, you said that there will be, like, human-level intelligence. You'll be interacting with this new species. But the species... for me to think the species will want to interact with me feels like wishful thinking, because what will I be to them? I've got a French bulldog, Pablo. I can't imagine our IQ is that far apart. In relative terms, the IQ between me and my dog, Pablo, I can't imagine that's that far apart. Even when I think about, is it, like, the orangutan, where we only have, like, a 1% difference in DNA or something crazy? And yet they throw their poop around, and I'm sat here broadcasting around the world. There's quite a difference in that 1%.
36:27
Steven Bartlett
And then I think about this new species, where, as you write in your book in chapter four, there seems to be no upper limit to AI's potential intelligence. Why would such an intelligence want to interact with me?
36:42
Mustafa Suleyman
Well, it depends how you design it. So I think that our goal, one of the challenges of containment, is to design AIs that we want to interact with, that want to interact with us. Right? If you set an objective function for an AI, a goal for an AI by its design, which inherently disregards or disrespects you as a human and your goals, then it's going to wander off and do a lot of strange things.
37:12
Steven Bartlett
What if it has kids? And the kids, you know what I mean? What if it replicates in a way where, because I've heard this conversation around, it depends how we design it. But I think about, it's kind of like if I have a kid and the kid grows up to be a thousand times more intelligent than me, to think that I could have any influence on it, when it's a thinking, sentient, developing species, again, feels like I'm overestimating my version of intelligence and importance and significance in the face of something that is incomprehensibly, like, even 100 times more intelligent than me. And the speed of its computation is 1000 times what the meat in my skull can do.
38:00
Mustafa Suleyman
Yeah.
38:02
Steven Bartlett
How do I know it's going to respect me or care about me, or understand that I may?
38:08
Mustafa Suleyman
I think that comes back down to the containment challenge. I think that if we can't be confident that it's going to respect you and understand you and work for you and us as a species overall, then that's where we have to adopt the precautionary principle. I don't think we should be taking those kinds of risks in experimentation and design. And now, I'm not saying it's possible to design an AI that doesn't have those self-improvement capabilities in the limit, in, like, 30 or 50 years. I think that's kind of what I was saying: it seems likely that if you have one like that, it's going to take advantage of infinite amounts of data and infinite amounts of computation, and it's going to kind of outstrip our ability to act. And so I think we have to step back from that precipice.
39:01
Mustafa Suleyman
That's what the containment problem is: it's actually saying no. Sometimes it's saying no. And that's a different sort of muscle that we've never really exercised as a civilization. And that's obviously why containment appears not to be possible, because we've never done it before. We've never done it before. And every inch of our commerce and politics and our war, all of our instincts, are just like: clash, compete. Clash, compete.
39:34
Steven Bartlett
Profit, profit, grow, beat.
39:36
Mustafa Suleyman
Exactly. Dominate, fear them, be paranoid. Like, now, all this nonsense about China being this new evil. How does that slip into our culture? How have we suddenly all shifted from thinking it's the Muslim terrorists about to blow us all up to now it's the Chinese who are about to blow up Kansas? It's just like, what are we talking about? We really have to pare back the paranoia and the fear and the othering, because those are the incentive dynamics that are going to drive us to cause self-harm to humanity.
40:13
Steven Bartlett
Thinking the worst of each other. There's a couple of key moments in my understanding of artificial intelligence that have been kind of paradigm shifts for me. Because I think, like many people, I thought of artificial intelligence as, like, a child I was raising, and I would program, I would code it to do certain things. So I'd code it to play chess, and I would tell it the moves that are conducive to being successful in chess. And then I remember watching that AlphaGo documentary, right, which I think was DeepMind, was it?
40:44
Mustafa Suleyman
That was us.
40:45
Steven Bartlett
Yeah. You guys. So you programmed this artificial intelligence to play the game Go, which, just think of it kind of like chess or backgammon or whatever. And it eventually just beats the best player in the world of all time. And the way it learned how to beat the best player in the world of all time, the world champion, who was, by the way, depressed when he got beat, was just by playing itself. Right? And then there's this moment, I think, in, is it game four or something?
41:09
Steven Bartlett
Where it does this move that no one could have predicted, a move that seemingly makes absolutely no sense, right? In those moments where no one trained it to do that, and it did something unexpected that humans are trying to figure out in hindsight, this is where I go: how do you train it if it's doing things we didn't anticipate? Right? Like, how do you control it when it's doing things that humans couldn't anticipate it doing? Where we're looking at that move. It's called, like, move 37 or something, correct?
41:38
Mustafa Suleyman
Yeah.
41:39
Steven Bartlett
Is it move 37?
41:40
Mustafa Suleyman
It is.
41:40
Steven Bartlett
Look at that. Nice intelligence, nice work. Yeah, I'm going to survive a bit longer than I thought.
41:45
Mustafa Suleyman
You've got at least another decade in you.
41:49
Steven Bartlett
Move 37 does this crazy thing, and you see everybody, like, lean in and go, why has it done that? And it turns out to be brilliance that humans couldn't fathom.
41:58
Mustafa Suleyman
The commentator actually thought it was a mistake. Yeah, he was a pro, and he was like, this is definitely a mistake; AlphaGo's lost the game.
42:04
Steven Bartlett
But it was so far ahead of us that it knew something we didn't. Right. That's when I lost hope in this whole idea of, like, oh, train it to do what we want, like a dog, like, sit, paw, roll over.
42:16
Mustafa Suleyman
Right. Well, the real challenge is that we actually want it to do those things. When it discovers a new strategy or it invents a new idea or it helps us find a cure for some disease, that's why we're building it, right. Because we're reaching the limits of what we as humans can invent and solve. Right. Especially with what we're facing in terms of population growth over the next 30 years and how climate change is going to affect that and so on. We really want these tools to turbocharge us.
42:54
Steven Bartlett
Right?
42:55
Mustafa Suleyman
And yet it's that creativity and that invention which obviously makes us also feel, well, maybe it is really going to do things that we don't like, for sure.
43:07
Steven Bartlett
Right, so, interesting. How do you contend with all of this? How do you contend with the clear upside? And then you must, like Elon, be completely aware of the horrifying existential risk at the same time. And you're building a big company in this space, which I think is valued at $4 billion now, Inflection AI, which has got its own model called Pi. So you're building in this space. You understand the incentives, at both a nation-state level and a corporate level, that we're going to keep playing forward. Even if the US stops, there's going to be some other country that sees that as a huge advantage; their economy will swell because they did. If this company stops, then this one's going to get a huge advantage for their shareholders. Everyone's investing in AI, full steam ahead.
44:01
Steven Bartlett
But you can see this huge existential risk. Is that the path forward? Suspended disbelief? I feel like I know that's going to happen, no one's been able to tell me otherwise, but just don't think too much about it, or you'll be afraid.
44:21
Mustafa Suleyman
I think you can't give up. Right. I think that, in some ways, your realization, exactly what you've just described, weighing up two conflicting and horrible truths about what is likely to happen, those contradictions, that is a kind of honesty and a wisdom, I think, that we all collectively need to reach. Because the only path through this is to be straight up and embrace the risks and embrace the default trajectory of all these competing incentives driving forward to make this feel inevitable. If you put the blinkers on and you kind of just ignore it, or if you're just super rosy and say it's all going to be all right, that we've always figured it out anyway, then we're not going to get the energy and the dynamism and engagement from everybody to try to figure this out.
45:16
Mustafa Suleyman
And that's what gives me reason to be hopeful, because I think that we make progress by getting everybody paying attention to this. It isn't going to be just about those who are currently the AI scientists, or those who are the technologists like me, or the venture capitalists, or just the politicians. All of those people: no one's got answers. So that's what we have to confront. There are no obvious answers to this profound question. And I've basically written the book to say, prove that I'm wrong. Containment must be possible.
45:57
Steven Bartlett
It must be.
45:58
Mustafa Suleyman
It must be possible.
45:59
Steven Bartlett
Why?
46:00
Mustafa Suleyman
It has to be possible. It has to be.
46:02
Steven Bartlett
You want it to be?
46:03
Mustafa Suleyman
I desperately want it to be. Yeah.
46:05
Steven Bartlett
Why must it be?
46:06
Mustafa Suleyman
Because otherwise, I think you're in the camp of believing that this is the inevitable evolution of humans, the transhuman kind of view, some people would argue. Like, okay, let's stretch the timelines out. So let's not talk about 30 years; let's talk about 200 years. Like, what is this going to look like in 2200?
46:38
Steven Bartlett
You tell me. You're smarter than me.
46:41
Mustafa Suleyman
I mean, it's mind blowing. It's mind blowing.
46:44
Steven Bartlett
What is the.
46:44
Mustafa Suleyman
We'll have quantum computers by then.
46:46
Steven Bartlett
What's a quantum computer?
46:49
Mustafa Suleyman
A quantum computer is a completely different type of computing architecture, which, in simple terms, basically allows you to do those calculations that I described at the beginning, billions and billions of FLOPs, in a single computation. So everything that you see in the digital world today relies on computers processing information. And the speed of that processing is a friction; it kind of slows things down. Right. You remember, back in the day, old-school modems: the 56K modem, the dial-up sound, and the image loading pixel by pixel. That was because the computers were slow. And we're getting to a point now where the computers are getting faster and faster, and quantum computing is a whole new leap, way beyond where we currently are.
47:48
Steven Bartlett
And so, by analogy, how would I understand that? So, like, I've got my dial-up modem over here and then quantum computing over here, right? What's the difference?
48:01
Mustafa Suleyman
Well, I don't know. It's really difficult to.
48:03
Steven Bartlett
A billion times faster?
48:04
Mustafa Suleyman
Oh, it's like billions of billions of times faster. It's much more than that. I mean, one way of thinking about it is like a floppy disk, which I guess most people remember: 1.4 megabytes, a physical thing. Back in the day, in 1960 or so, that amount of storage was basically an entire pallet's worth of computer that was moved around by a forklift truck, right? Which is insane. Today you have billions and billions of times that floppy disk in the smartphone in your pocket. Tomorrow you're going to have billions and billions of smartphones' worth in minuscule wearable devices. There'll be cheap fridge magnets that are constantly on, everywhere, sensing all the time, monitoring, processing, analyzing, improving, optimizing, and they'll be super cheap. So it's super unclear: what do you do with all of that knowledge and information? I mean, ultimately, knowledge creates value.
49:17
Mustafa Suleyman
When you know the relationship between things, you can improve them, make them more efficient. And so more data is what has enabled us to build all the value online in the last 25 years. And so what does that look like in 150 years? I can't really even imagine, to be honest with you. It's very hard to say. I don't think everybody is going to be working; we wouldn't be working in that kind of environment. I mean, look, the other trajectory to add to this is the cost of energy production. AI, if it really helps us solve battery storage, which is the missing piece, I think, to really tackle climate change, will let us basically source and store infinite energy from the sun.
50:16
Mustafa Suleyman
And I think in 20 or so years' time, 20, 30 years' time, that is going to be a cheap and widely available, if not completely free, resource. And if you think about it, everything in life has the cost of energy built into its production value. And so if you strip that out, everything is likely to get a lot cheaper. We'll be able to desalinate water, we'll be able to grow crops much more cheaply, we'll be able to grow much higher-quality food, right? It's going to power new forms of transportation. It's going to reduce the cost of drug production and health care. Right. So all of those gains: obviously, there'll be a huge commercial incentive to drive the production of those gains, but the cost of producing them is going to go through the floor.
51:03
Mustafa Suleyman
I think that's one key thing that a lot of people don't realize. That is a reason to be hugely hopeful and optimistic about the future. Everything is going to get radically cheaper in 30 to 50 years.
51:18
Steven Bartlett
So in 200 years' time, we have no idea what the world looks like. This goes back to the point about being... did you say transhumanist?
51:26
Mustafa Suleyman
Right.
51:26
Steven Bartlett
What does that mean?
51:29
Mustafa Suleyman
Transhumanism? I mean, it's a group of people who basically believe that humans and our soul and our being will one day transcend or move beyond our biological substrate.
51:48
Steven Bartlett
Okay?
51:49
Mustafa Suleyman
So our physical body, our brain, our biology, is just an enabler for your intelligence and who you are as a person. And there's a group of kind of crackpots, basically, I think, who believe that we're going to be able to upload ourselves to a silicon substrate, right, a computer that can hold the essence of what it means to be Steven. So you in 2200 could well still be you, by their reasoning, but you'll live on a server somewhere.
52:30
Steven Bartlett
Why are they wrong? I think about all these adjacent technologies, like biological. Biological advancements. Did you call it like biosynthesis or something?
52:39
Mustafa Suleyman
Yeah, synthetic biology.
52:40
Steven Bartlett
Synthetic biology. I think about the nanotechnology development, right? Think about quantum computing, the progress in artificial intelligence, everything becoming cheaper. And I think, why are they wrong?
52:53
Mustafa Suleyman
It's hard to say precisely, but broadly speaking, I haven't seen any evidence yet that we're able to extract the essence of a being from a brain. Right? That kind of dualism, that there is a mind and a body and a spirit; I don't see much evidence for that, even in neuroscience. Actually, it's much more one and the same. So I don't think you're going to be able to emulate the entire brain. So their thesis is that... well, some of them cryogenically store their brain after death.
53:31
Steven Bartlett
Jesus.
53:32
Mustafa Suleyman
So they wear these... you know how you have, like, an organ donor tag or whatever? They have a "cryogenically freeze me when I die" tag. And there are, like, special ambulance services that will come pick you up, because obviously you need to do it really quickly. The moment you die, you need to be put into a cryogenic freezer to preserve your brain forever. I personally think this is nuts, but their belief is that you'll then be able to reboot that biological brain and transfer you over. It doesn't seem plausible to me.
54:08
Steven Bartlett
You said at the start of this little topic here that it must be possible to contain it. It must be possible. The reason why I struggle with that is because in chapter seven of your book you say that AI is more autonomous than any other technology in history: for centuries, the idea that technology is somehow running out of control, a self-directed and self-propelling force beyond the realms of human agency, remained a fiction. Not anymore. And this idea of autonomous technology that is acting uninstructed and is intelligent... and then you say we must be able to contain it. It's kind of like a massive dog, like a big Rottweiler that is 1,000 times bigger than me, and me looking up at it and going, I'm going to take you for a walk.
55:04
Steven Bartlett
And then it just looking down at me and just stepping over me, or stepping on me.
55:10
Mustafa Suleyman
Well, that's actually a good example, because we have actually contained Rottweilers before. We've contained gorillas and tigers and crocodiles and pandemic pathogens and nuclear weapons. It's easy to be a hater on what we've achieved, but this is the most peaceful moment in the history of our species. This is a moment when our biggest problem is that people eat too much. Think about that. We've spent our entire evolutionary period running around looking for food and trying to stop our enemies throwing rocks at us. And we've had this incredible period of 500 years where each year things have broadly... well, maybe each century, let's say; there's been a few ups and downs, but things have broadly got better. And we're on a trajectory for lifespans to increase and quality of life to increase and health and well-being to improve.
56:16
Mustafa Suleyman
And I think that's because in many ways we have succeeded in containing forces that appear to be more powerful than ourselves. It just requires unbelievable creativity and adaptation. It requires compromise, and it requires a new tone, right? A much more humble tone to governance and politics and how we run our world. Not that kind of hyper-aggressive, adversarial, paranoid tone that we talked about previously, but one that is much wiser than that, much more accepting that we are unleashing this force that does have the potential to be the Rottweiler that you described, but that we must contain it as our number one priority. That has to be the thing that we focus on, because otherwise it contains us.
57:09
Steven Bartlett
I've been thinking a lot recently about cybersecurity as well, just broadly, on an individual level, in a world where there are these kinds of tools, which seems to be quite close: large language models. It brings up this whole new question about cybersecurity and cyber-safety. In a world where there's this ability to generate audio and language and videos that seem to be real, what can we trust? I was watching a video of a young girl whose grandmother was called up by a voice that was made to sound like her son, saying he'd been in a car accident and asking for money, and her nearly sending the money. This really brings into focus that our lives are built on trusting the things we see, hear and watch.
57:59
Steven Bartlett
And now it feels like a moment where we're no longer going to be able to trust what we see on the Internet, on the phone. What advice do you have for people who are worried about this?
58:15
Mustafa Suleyman
So skepticism, I think, is healthy and necessary, and I think that we're going to need it even more than we ever did.
58:27
Steven Bartlett
Right.
58:28
Mustafa Suleyman
And so if you think about how we've adapted to the first wave of this, which was spammy email scams: everybody got them, and over time, people learned to identify them and be skeptical of them and reject them. Likewise, you know, I'm sure many of us get text messages; I certainly get loads of text messages trying to phish me, asking me to meet up or do this, that, and the other. And we've adapted, right? Now, I think we should all know and expect that criminals will use these tools to manipulate us, just as you described. I mean, the voice is going to be humanlike. The deepfake is going to be super convincing. And there are actually ways around those things.
59:18
Mustafa Suleyman
So, for example, the reason why the banks invented one-time passwords, where they send you a text message with a special code, is precisely for this reason: so that you have two-factor authentication. Increasingly, we will have three- or four-factor authentication, where you have to triangulate between multiple separate, independent sources. And it won't just be, like, call your bank manager and release the funds. Right. So this is where we need the creativity and energy and attention of everybody, because the defensive measures have to evolve as quickly as the potential offensive measures, the attacks that are coming.
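As a technical aside: the bank codes mentioned here are typically generated with the time-based one-time password scheme (TOTP, RFC 6238), which derives a short-lived number from a shared secret and the current time. A minimal sketch, using only the Python standard library; the secret and parameters are illustrative, not any bank's real setup:

```python
# Minimal TOTP (time-based one-time password, RFC 6238) sketch: the kind of
# second factor described above. Not production code; for illustration only.
import hmac
import hashlib
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Derive a short-lived numeric code from a shared secret."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                        # 30-second time window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both parties hold the secret; an attacker spoofing a voice does not.
print(totp(b"12345678901234567890", timestamp=59))  # -> "287082" (RFC 6238 test vector)
```

Because the code depends on a secret that never travels over the phone line, a convincing deepfaked voice alone cannot pass this factor, which is why layering independent factors helps against the scams described above.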
01:00:06
Steven Bartlett
I heard you say this: that for many of these problems, we're going to need to develop AIs to defend us from the AIs.
01:00:16
Mustafa Suleyman
Right. We kind of already have that, right? So we have automated ways of detecting spam online these days. Most of the time, there are machine learning systems trying to identify when your credit card is used in a fraudulent way. That's not a human sitting there looking at patterns of spending traffic in real time; that's an AI that flags that something looks off. Likewise with data centers or security cameras. A lot of those security cameras these days have tracking algorithms that look for surprising sounds. Or, like, if a glass window is smashed, that will be detected by an AI, often one that is listening on the security camera. So that's kind of what I mean by that: increasingly, those AIs will get more capable, and we'll want to use them for defensive purposes.
01:01:13
Mustafa Suleyman
And that's exactly what it looks like to have good, healthy, well functioning, controlled AIs that serve us.
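To make the fraud-flagging idea above concrete: at its very simplest, such a system scores each new transaction against an account's recent spending pattern and flags outliers. A toy sketch with invented numbers; real systems use far richer features and models than a z-score over amounts:

```python
# Toy anomaly flagging in the spirit of the credit-card example above:
# flag transactions that deviate strongly from recent spending history.
from statistics import mean, stdev

def flag_anomalies(history, candidates, z_threshold=3.0):
    """Return candidate amounts whose z-score against history exceeds the threshold."""
    mu = mean(history)
    sigma = stdev(history)
    return [amt for amt in candidates if abs(amt - mu) / sigma > z_threshold]

recent = [12.5, 40.0, 18.9, 25.0, 31.2, 22.4, 15.8, 28.0]  # past card charges
print(flag_anomalies(recent, [24.0, 950.0]))  # -> [950.0]: only the big charge stands out
```

The point of the example is the shape of the defense, not the statistics: a machine watches a stream of events continuously and escalates only the surprising ones, which is exactly the "AI defending against misuse" pattern being described.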
01:01:19
Steven Bartlett
I went to one of these large language models and I said, give me an example where artificial intelligence takes over the world or whatever and it just results in the destruction of humanity, and then tell me what we'd need to do to prevent it. And it gave me this wonderful example of this AI called Cynthia that threatens to destroy the world. And it said the way to defend against that would be a different AI, which had a different name, and that this one would be acting in human interests, and we'd basically be fighting one AI with another. Of course.
01:01:55
Mustafa Suleyman
Of course.
01:01:55
Steven Bartlett
At that level, if Cynthia started to wreak havoc on the world and take control of the nuclear weapons and infrastructure and all that, we would need an equally intelligent weapon to fight it.
01:02:08
Mustafa Suleyman
Although one of the interesting things that we've found over the last few decades is that it has so far tended to be the AI plus the human that is still dominating. That's the case in chess, in Go, in other games. Yeah. So there was a paper that came out a few months ago, two months ago, that showed that a human was actually able to beat a cutting-edge Go program, even one that was better than AlphaGo, with a new strategy that they had discovered. So, obviously, it's not just a game-over environment where the AI just arrives and it gets better. Humans also adapt. They get super smart. They, like I say, get more cynical, get more skeptical, ask good questions, invent their own things, use their own AIs to adapt.
01:03:01
Mustafa Suleyman
And that's the evolutionary nature of what it means to have a technology, right? I mean, everything is a technology. Like, your pair of glasses made you smarter, in a way. Before there were glasses, people who got bad eyesight weren't able to read. Suddenly, those who did adopt those technologies were able to read for longer in their lives, or under low-light conditions, and they were able to consume more information and got smarter. And so that is the trajectory of technology. It's this iterative interplay between human and machine that makes us better over time.
01:03:37
Steven Bartlett
You know the potential consequences if we don't reach a point of containment. Yet you chose to build a company in this space.
01:03:46
Mustafa Suleyman
Yeah.
01:03:48
Steven Bartlett
Why that? Why did you do that?
01:03:51
Mustafa Suleyman
Because I believe that the best way to demonstrate how to build safe and contained AI is to actually experiment with it in practice. And I think that if we are just skeptics or critics and we stand back from the cutting edge, then we give up that opportunity to shape outcomes to all of those other actors that we referred to, whether it's, like, China and the US going at each other's throats, or other big companies that are purely pursuing profit at all costs. And so it doesn't solve all the problems, of course. It's super hard, and again, it's full of contradictions, but I honestly think it's the right way for everybody to proceed.
01:04:41
Steven Bartlett
Experiment at the front.
01:04:43
Mustafa Suleyman
Yeah. If you're afraid of Russia, of Putin, understand them.
01:04:47
Steven Bartlett
Right.
01:04:47
Mustafa Suleyman
What reduces fear is deep understanding. Spend time playing with these models. Look at their weaknesses. They're not superhumans yet. They make tons of mistakes. They're crappy in lots of ways. They're actually not that hard to make.
01:05:01
Steven Bartlett
The more you've experimented, has that correlated with a reduction in fear?
01:05:09
Mustafa Suleyman
Cheeky question. Yes and no. You're totally right. Yes, it has, in the sense that... the problem is, the more you learn, the more you realize...
01:05:22
Steven Bartlett
I was fine before I started talking about AI. Now the more I talk about it.
01:05:28
Mustafa Suleyman
It's true. It's true. It's sort of like pulling on a thread. It's this crazy spiral. Yeah. I think in the short term, it's made me way less afraid, because I don't see that kind of existential harm that we've been talking about in the next decade or two. But longer term, that's where I struggle to wrap my head around how things play out in 30 years.
01:05:54
Steven Bartlett
Some people say government regulation will sort it out. You discuss this in chapter 13 of your book, which is titled "Containment Must Be Possible". I love how you didn't say "is". Yeah: containment must be possible. What do you say to people who say government regulation will sort it out? I heard Rishi Sunak did some announcement, and he's got a COBRA committee coming together. They'll handle it.
01:06:22
Mustafa Suleyman
That's right. And the EU have a huge piece of regulation called the AI Act. You know, President Joe Biden has got his own set of proposals. And we've been working with both Rishi Sunak and Biden, trying to contribute and shape it in the best way that we can. Look, it isn't going to happen without regulation. So regulation is essential. It's critical, again, going back to the precautionary principle. But at the same time, regulation isn't enough. I often hear people say, well, we'll just regulate it. We'll just stop. We'll slow down. And the problem with that is that it kind of ignores the fact that the people who are putting together the regulation don't really understand enough about the detail today. In their defense, they're rapidly trying to wrap their heads around it, especially in the last six months.
01:07:26
Mustafa Suleyman
And that's a great relief to me, because I feel the burden is now increasingly shared. And just from a personal perspective, I feel like I've been saying this for about a decade, and just in the last six months everyone's coming at me and saying, what's going on? I'm like, great. This is the conversation we need to be having, because everybody can start to see the glimmers of the future. Like, what will happen if a ChatGPT-like product or a Pi-like product really does improve over the next ten years? And so when I say regulation is not enough, what I mean is it needs movements, it needs culture, it needs people who are actually building and making in modern, creative, critical ways, not just giving it up to companies or small groups of people, right?
01:08:16
Mustafa Suleyman
We need lots of different people experimenting with strategies for containment.
01:08:20
Steven Bartlett
Isn't it predicted that this industry is going to be a $15 trillion industry or something like that?
01:08:25
Mustafa Suleyman
Yeah, I've heard that. It is a lot.
01:08:28
Steven Bartlett
So if I'm Rishi, and I know that I'm going to be chucked out of office... Rishi is the prime minister of the UK. If I'm going to be chucked out of office in two years unless this economy gets good, I don't want to do anything to slow down that $15 trillion bag that I could be on the receiving end of. I would definitely not want to slow that $15 trillion bag down and give it to America or Canada or some other country. I'd want that $15 trillion windfall to land on my country.
01:08:57
Mustafa Suleyman
Right?
01:08:57
Steven Bartlett
So, the long-term health and success of humanity aside, in my four-year election window I've got to do everything I can to boost these numbers and get us looking good. So I could give you lip service, but listen, I'm not going to be here unless these numbers look good, right?
01:09:20
Mustafa Suleyman
Exactly. That's another one of the problems. Short termism is everywhere. Who is responsible for thinking about the 20 year future?
01:09:32
Steven Bartlett
Who is it?
01:09:33
Mustafa Suleyman
I mean, that's a deep question, right? The world is happening to us on a decade-by-decade timescale. It's also happening hour by hour. So change is just ripping through us. And this arbitrary window of governance, a four-year election cycle, where actually it's not even four years, because by the time you've got in, you do some stuff for six months, and then by month twelve or eighteen you're starting to think about the next cycle. This short-termism is killing us, right? And we don't have an institutional body whose responsibility is stability. You could think of it as a global technology stability function. What is the global strategy for containment that has the ability to introduce friction when necessary, to implement the precautionary principle, and to basically keep the peace?
01:10:34
Mustafa Suleyman
That I think is the missing governance piece which we have to invent in the next 20 years. And it's insane because I'm basically describing the UN Security Council plus the World Trade Organization. All these huge global institutions which formed after the horrors of the second world war have actually been incredible. They've created interdependence and alignment and stability.
01:11:03
Steven Bartlett
Right.
01:11:03
Mustafa Suleyman
Obviously, there's been a lot of bumps along the way in the last 70 years, but broadly speaking, it's an unprecedented period of peace. And when there's peace, we can create prosperity. And that's actually what we're lacking at the moment: we don't have an international mechanism for coordinating among competing nations and competing corporations to drive the peace. In fact, we're actually going in the opposite direction. We're resorting to the old-school language of a clash of civilizations, with China as the new enemy. They're going to come to dominate us, we have to dominate them. It's a battle between two poles. China's taking over Africa, China's taking over the Middle East, we have to counter them. I mean, that can only lead to conflict. It just assumes that conflict is inevitable.
01:11:53
Mustafa Suleyman
And so when I say regulation is not enough, I mean that no amount of good regulation in the UK or in Europe or in the US is going to deal with that clash-of-civilizations language which we seem to have become addicted to.
01:12:07
Steven Bartlett
If we need that global collaboration to be successful here, are you optimistic now that we'll get it? Because the same incentives are at play with climate change and AI. Why would I want to reduce my carbon emissions when it's making me loads of money? Or why would I want to slow my AI development when it's going to make us $15 trillion?
01:12:26
Mustafa Suleyman
Yeah. So the really painful answer to that question is that we've only ever really driven extreme compromise and consensus in two scenarios. One, off the back of unimaginable catastrophe and suffering: Hiroshima, Nagasaki, the Holocaust, and World War II, which drove ten years of consensus and new political structures. Right. And then the second is...
01:13:01
Steven Bartlett
We did fire the bullet, though, didn't we? We fired a couple of those nuclear bombs.
01:13:05
Mustafa Suleyman
Exactly. And that's why I'm saying the brutal truth of it is that it takes a catastrophe to trigger the need for alignment. Right? So that's one. The second is where there is an obvious mutually assured destruction dynamic, where both parties are afraid that striking first would trigger nuclear annihilation.
01:13:32
Steven Bartlett
Right.
01:13:33
Mustafa Suleyman
And that means suicide.
01:13:34
Steven Bartlett
And when there were few parties.
01:13:36
Mustafa Suleyman
Exactly.
01:13:38
Steven Bartlett
When there were just nine people.
01:13:39
Mustafa Suleyman
Exactly.
01:13:40
Steven Bartlett
You could get all nine. But when we're talking about artificial intelligence, there's going to be more than nine people, right, that have access to the full power of that technology for nefarious reasons.
01:13:51
Mustafa Suleyman
I don't think it has to be like that. I think that's the challenge of containment: to reduce the number of actors that have access to the existential-threat technologies to an absolute minimum, and then use the existing military and economic incentives, which have driven world order and peace so far, to prevent the proliferation of access to these superintelligences or these AGIs.
01:14:17
Steven Bartlett
A quick word on Huel. As you know, they're a sponsor of this podcast, and I'm an investor in the company. And I have to say, it's moments like this in my life, where I'm extremely busy and I'm flying all over the place, and I'm recording TV shows and I'm recording shows in America and here in the UK, that Huel is a necessity in my life. I'm someone that, regardless of external circumstances or professional demands, wants to stay healthy and nutritionally complete. And that's exactly where Huel fits in my life. It's enabled me to get all of the vitamins and minerals and nutrients that I need in my diet to be aligned with my health goals, while also not dropping the ball on my professional goals, because it's convenient and because I can get it online and in Tesco, in supermarkets all over the country.
01:14:59
Steven Bartlett
If you're one of those people that hasn't yet tried Huel, or you have before but for whatever reason you're not a Huel consumer right now, I would highly recommend giving Huel a go. And Tesco have now increased their listings with Huel, so you can now get the RTD, the ready-to-drink, in Tesco Express stores all across the UK. Ten areas of focus for containment. You're the first person I've met that's really hazarded a blueprint, laid out the things that need to be done cohesively to try and reach this point of containment. So I'm super excited to talk to you about these. The first one is about safety, and you mentioned that's kind of what we talked about a little bit, there being AIs that are currently being developed to help contain other AIs.
01:15:41
Steven Bartlett
Two, audits, which, from what I understand, is being able to audit what's being built in these open-source models. Three, choke points. What's that?
01:15:53
Mustafa Suleyman
Yeah. So choke points refers to points in the supply chain where you can throttle who has access to what. Okay, so everyone thinks of the Internet as an idea, this kind of abstract cloud thing that hovers around above our heads. But really, the Internet is a bunch of cables. Those cables are physical things that transmit information under the sea, and those endpoints can be stopped, and you can monitor traffic. You can control, basically, what traffic moves back and forth. And then the second choke point is access to chips. So the GPUs, graphics processing units, which are used to train these super-large clusters. I mean, we now have the second-largest supercomputer in the world today, at least for the next six months. Other people will catch up soon, but we're ahead of the curve.
01:16:54
Mustafa Suleyman
We're very lucky. It cost a billion dollars. And those chips are really the raw commodity that we use to build these large language models. And access to those chips is something that governments can, should, and are restricting. That's a choke point.
01:17:13
Steven Bartlett
You spent a billion dollars on a computer?
01:17:15
Mustafa Suleyman
We did, yeah. It's a bit more than that, actually. About 1.3.
01:17:21
Steven Bartlett
In a couple of years' time, that'll be the price of an iPhone.
01:17:25
Mustafa Suleyman
That's the problem. Everyone's going to have it.
01:17:29
Steven Bartlett
Number six is quite curious. You say that there's a need for governments to put increased taxation on AI companies to be able to fund the massive changes in society, such as paying for reskilling and education. But if you put massive tax on it over here, I'm going to go over there. If I'm an AI company and you're taxing me heavily over here, I'm going to Dubai or Portugal. If it's that much of a competitive disadvantage, I will not build my company where the taxation is high.
01:18:02
Mustafa Suleyman
Right. So the way to think about this is: what are the strategies for containment? If we are agreed that long term we want to contain, that is, close down, slow down, control both the proliferation of these technologies and the way the really big AIs are used, then the way to do that is to tax things. Taxing things slows them down, and that's what you're looking for, provided you can coordinate internationally. So you're totally right that some people will move to Singapore or to Abu Dhabi or Dubai or wherever. The reality is that, at least for the next period, I would say ten years or so, the concentrations of intellectual horsepower will remain in the big megacities. Right.
01:18:54
Mustafa Suleyman
I moved from London in 2020 to go to Silicon Valley, and I started my new company in Silicon Valley because the concentration of talent there is overwhelming. All the very best people are there in AI and software engineering. So I think it's quite likely that's going to remain the case for the foreseeable future. But in the long term, you're totally right. It's another coordination problem. How do we get nation states to collectively agree that we want to try and contain, that we want to slow down? Because, as we've discussed with the proliferation of dangerous materials or on the military side, there's no use one person doing it or one country doing it if others race ahead. And that's the conundrum that we face.
01:19:37
Steven Bartlett
I don't consider myself to be a pessimist in my life. I consider myself to be an optimist generally. And as you've said, I think we have no choice but to be optimistic. And I have faith in humanity. We've done so many incredible things and overcome so many things. And I also think I'm really logical, as in, I'm the type of person that needs evidence to change my beliefs. Either way, when I look at all of the whole picture, having spoken to you and several others on this subject matter, I see more reasons why we won't be able to contain than reasons why we will, especially when I dig into those incentives.
01:20:11
Steven Bartlett
You talk about incentives at length in your book at different points, and it's clear that all the incentives are pushing towards a lack of containment, especially in the short and medium term, which tends to happen with new technologies. In the short and medium term it's like a land grab: the gold is in the stream, we all rush to get the shovels and the sieves and stuff, and then we realize the unintended consequences of that, hopefully not before it's too late. In chapter eight, you talk about the unstoppable incentives at play here: "The coming wave represents the greatest economic prize in history. And scientists and technologists are all too human. They crave status, success and legacy, and they want to be recognized as the first and the best. They're competitive and clever, with a carefully nurtured sense of their place in the world and in history." Right.
01:21:06
Steven Bartlett
I look at you, I look at people like Sam from OpenAI, Elon; you're all humans with the same understanding of your place in history and status and success. You all want that, right? And there's a lot of people that maybe don't have as good a track record as you at doing the right thing, which you certainly have, that will just want the status and the success and the money. Incredibly strong incentives. I always think about incentives as being exactly the thing you look at when you want to understand how people will behave. All of the incentives on a global level suggest that containment won't happen. Am I right in that assumption? That all the incentives suggest containment won't happen in the short or medium term, until there is a tragic event that forces us towards that idea of containment.
01:22:04
Mustafa Suleyman
Or if there is a threat of mutually assured destruction.
01:22:09
Steven Bartlett
Right.
01:22:10
Mustafa Suleyman
And that's the case that I'm trying to make: let's not wait for something catastrophic to happen. It's self-evident that we all have to work towards containment. Right? I mean, you would have thought that the potential idea that COVID-19 was a side effect, let's call it, of a laboratory in Wuhan that was exploring gain-of-function research, where it was deliberately trying to make the pathogen more transmissible. You would have thought that warning to all of us, let's not even debate whether it was or wasn't, just the fact that it's conceivable that it could be, should really, in my opinion, have forced all of us to instantly agree that this kind of research should just be shut down. We should just not be doing gain-of-function research.
01:23:08
Mustafa Suleyman
On what planet could we possibly persuade ourselves that we can overcome the containment problem in biology? Because we've proven that we can't, because it could have potentially got out. And there are a number of other examples where it did get out, with other diseases like foot-and-mouth disease back in the 90s in the UK.
01:23:29
Steven Bartlett
But that didn't change our behavior.
01:23:31
Mustafa Suleyman
Right. Well, foot-and-mouth disease clearly didn't cause enough harm, because it only killed a bunch of cattle, right? And the COVID-19 pandemic, we can't seem to agree that it really was from a lab and not from a bunch of bats. Right. And so that's where I struggle. Now, you catch me in a moment where I feel angry and sad and pessimistic, because to me that's a straightforwardly obvious conclusion, that this is a type of research that we should be closing down. And I think we should be using these moments to give us insight and wisdom about how we handle other technology trajectories in the next few decades. Should we? "Should." That's what I'm advocating for. "Must." That's the best I can do.
01:24:22
Steven Bartlett
I want to know "will," I think.
01:24:24
Mustafa Suleyman
The odds are low. I can only do my best. I'm doing my best to advocate for it. I mean, I'll give you an example. I think autonomy is a type of AI capability that we should not be pursuing.
01:24:38
Steven Bartlett
Really? Like autonomous cars and stuff.
01:24:40
Mustafa Suleyman
Well, autonomous cars, I think, are slightly different, because autonomous cars operate within a much more constrained physical domain. Right. The containment strategies for autonomous cars are actually quite reassuring.
01:24:55
Steven Bartlett
Okay.
01:24:56
Mustafa Suleyman
They have GPS control. We know all the telemetry and exactly how all of those components on board a car operate, and we can observe repeatedly that it behaves exactly as intended. Right. Whereas with other forms of autonomy that people might be pursuing, like online, where you have an AI that is designed to self-improve without any human oversight, or a battlefield weapon, which, unlike a car, hasn't been over that particular moment in the battlefield millions of times, but is actually facing a new enemy every single time. And we're just going to allow these autonomous weapons, these autonomous military robots, to have lethal force? I think that's something that we should really resist. I don't think we want to have autonomous robots that have lethal force.
01:25:54
Steven Bartlett
You're a super smart guy, and you demonstrate such a clear understanding of the incentives in your book that I struggle to believe you don't think the incentives will win out, especially in the short and near term. And the problem in the short and near term, as is the case with most of these waves, is that we wake up in ten years' time and go, how the hell did we get here?
01:26:19
Mustafa Suleyman
Right?
01:26:21
Steven Bartlett
And as you say, with this precautionary approach, we should have rung the bell earlier, we should have sounded the alarm earlier. But we waltzed in with optimism, right? With that kind of aversion to confronting the realities of it. And then we woke up in 30 years and we're on a leash, right? There's a big Rottweiler and we've lost control. I would love to know how someone as smart as you can believe that containment is possible. And that's me just being completely honest. I'm not saying you're lying to me, but I just can't see how someone as smart and as in the know as you can believe that containment is going to happen.
01:27:04
Mustafa Suleyman
Well, I didn't say it is possible. I said it must be.
01:27:06
Steven Bartlett
Right.
01:27:06
Mustafa Suleyman
Which is what we keep discussing. That's an important distinction. Look, I care about science. I care about facts. I care about describing the world as I see it. And what I've set out to do in the book is describe a set of interlocking incentives which drive a technology production process which produces potentially really dangerous outcomes. And what I'm trying to do is frame those outcomes in the context of the containment problem and say: this is the big challenge of the 21st century. Containment is the challenge. And if it isn't possible, then we have serious issues. Like I've said in the book, the first chapter is called "Containment Is Not Possible." Right. The last chapter is called "Containment Must Be Possible." For all our sakes, it must be possible.
01:27:57
Mustafa Suleyman
But I agree with you. I'm not saying it is. I'm saying this is what we have to be working on.
01:28:03
Steven Bartlett
We have no choice.
01:28:04
Mustafa Suleyman
We have no choice but to work on this problem. This is a critical problem.
01:28:09
Steven Bartlett
How much of your time are you focusing on this problem?
01:28:12
Mustafa Suleyman
Basically all my time. I mean, building and creating is about understanding how these models work, what their limitations are, how to build them safely and ethically. We have designed the structure of the company to focus on the safety and ethics aspects. So, for example, we are a public benefit corporation, which is a new type of corporation that gives us a legal obligation to balance profit-making with the consequences of our actions as a company on the rest of the world: the way that we affect the environment, the way that we affect people, the way that we affect users and the people who aren't users of our products. And that's a really interesting, and I think important, new direction. It's a new evolution in corporate structure, because it says we have a responsibility to proactively do our best to do the right thing. Right.
01:29:12
Mustafa Suleyman
And I think that if you were a tobacco company back in the day, or an oil company back in the day, and your legal charter said that your directors are liable if they don't meet the criteria of stewarding your work in a way that doesn't just optimize profit, which is what all companies are incentivized to do at the moment, talking about incentives, but actually, in equal measure, attends to the importance of doing good in the world. To me, that's an incremental but important innovation in how we organize society and how we incentivize our work. It doesn't solve everything. It's not a panacea. But that's my effort to try and take a small step in the right direction.
01:29:57
Steven Bartlett
Do you ever get sad about it? About what's happening?
01:30:00
Mustafa Suleyman
Yeah, for sure. For sure. It's intense. It's a lot to take in. It's a very heavy reality.
01:30:18
Steven Bartlett
Does that weigh on you?
01:30:21
Mustafa Suleyman
Yeah, it does. Every day. I mean, I've been working on this for many years now, and it's emotionally a lot to take in. It's hard to think about the far-out future and how your actions today, our actions collectively, our weaknesses, our failures, that irritation that I have that we can't learn the lessons from the pandemic, right? All of those moments where you feel the frustration of governments not working properly, or corporations not listening, or some of the obsessions that we have in culture, where we're debating small things, and you're just like, whoa, we need to focus on the big picture here.
01:31:10
Steven Bartlett
You must feel a certain sense of responsibility as well, one that most people won't carry, because you've spent so much of your life at the very cutting edge of this technology. You understand it better than most, you can speak to it better than most, so you have a greater chance than many at steering it. That's a responsibility.
01:31:30
Mustafa Suleyman
Yeah, I embrace that. I try to treat that as a privilege. I feel lucky to have the opportunity to try and do that.
01:31:42
Steven Bartlett
There's this wonderful line in my favorite theatrical play, Hamilton, where he says, history has its eyes on you. Do you feel that?
01:31:53
Mustafa Suleyman
Yeah, I feel that. It's a good way of putting it. I do feel that.
01:32:02
Steven Bartlett
You're happy, right?
01:32:05
Mustafa Suleyman
Well, what is happiness, you know?
01:32:12
Steven Bartlett
What's the range of emotions that you contend with on a frequent basis, if you're being honest?
01:32:20
Mustafa Suleyman
I think it's kind of exhausting and exhilarating in equal measure, because for me, it is beautiful to see people interact with AIs and get huge benefit out of it every day. Now, millions of people have a super smart tool in their pocket that is making them wiser and healthier and happier, providing emotional support, answering questions of every type, making you more intelligent. And so, on the face of it, in the short term, that feels incredible. It's amazing what we are all building. But in the longer term, it is exhausting to keep making this argument, and I have been doing it for a long time. And in a weird way, I feel a bit of a sense of relief in the last six months, because after ChatGPT, this wave feels like it has started to arrive, and everybody gets it.
01:33:24
Mustafa Suleyman
So I feel like it's a shared problem now, and that feels nice.
01:33:31
Steven Bartlett
And it's not just bouncing around in your head.
01:33:32
Mustafa Suleyman
A little bit. It's not just in my head and a few other people's at DeepMind and OpenAI and other places who have been talking about it for a long time.
01:33:41
Steven Bartlett
"Ultimately, human beings may no longer be the primary planetary drivers, as we have become accustomed to being. We are going to live in an epoch where the majority of our daily interactions are not with other people, but with AIs." Page 284 of your book.
01:33:59
Mustafa Suleyman
The last page. Yeah. Think about how much of your day you spend looking at a screen.
01:34:12
Steven Bartlett
12 hours, pretty much, right?
01:34:14
Mustafa Suleyman
Whether it's a phone or an iPad or a desktop, versus how much time you spend looking into the eyes of your friends and your loved ones. And so, to me, it's like we're already there, in a way. What I meant by that was, this is a world that we're kind of already in. In the last three years, people have been talking about the metaverse, the metaverse. And the mischaracterization of the metaverse was that it's over there, this virtual world that we would all bop around in and talk to each other in as these little characters. But that was totally wrong. That was a complete misframing. The metaverse is already here. It's the digital space that exists in parallel time to our everyday life.
01:35:07
Mustafa Suleyman
It's the conversation that you'll have on Twitter, or the video that you'll post on YouTube, or this podcast that will go out and connect with other people. It's that meta space of interaction. And I use "meta" to mean beyond this space, not just that weird other over-there space that people seem to point to. And that's really what is emerging here. It's this parallel digital space that is going to live alongside, with, and in relation to our physical world.
01:35:42
Steven Bartlett
Your kids come to you. You got kids?
01:35:45
Mustafa Suleyman
No, I don't have kids.
01:35:46
Steven Bartlett
Your future kids, then. If you ever have kids, and a young child walks up to you and asks the question that Elon was asked: what should I do with my future? What should I pursue, in light of everything you know about how artificial intelligence is going to change the world, and computational power, and all of these things? What should I dedicate my life to? What do you say?
01:36:07
Mustafa Suleyman
I would say knowledge is power. Embrace it, understand it, grapple with the consequences. Don't look the other way when it feels scary, and do everything you can to understand and participate and shape it, because it is coming.
01:36:31
Steven Bartlett
And if someone's listening to this and they want to do something to help this battle for what you present as the solution, containment, what can the individual do?
01:36:43
Mustafa Suleyman
Read, listen, use the tools, try to make the tools, understand the current state of regulation. See which organizations are organizing: you know, campaign groups, activism. Find solidarity. Connect with other people. Spend time online. Ask these questions. Mention it, you know; ask your parents, ask your mum how she's reacting to talking to Alexa, or whatever it is that she might do. Pay attention. I think that's already enough, and there's no need to be more prescriptive than that, because I think people are creative and independent, and it will be obvious to you what you, as an individual, feel you need to contribute in this moment, provided you're paying attention.
01:37:38
Steven Bartlett
Last question. What if we fail? And what if we succeed? What if we fail in containment? And what if we succeed in containment of artificial intelligence?
01:37:49
Mustafa Suleyman
I honestly think that if we succeed, this is going to be the most productive and the most meritocratic moment in the history of our species. We are about to make intelligence widely available to hundreds of millions, if not billions of people. And that is all going to make us smarter and much more creative and much more productive. And I think over the next few decades, we will solve many of our biggest social challenges. I really believe that. I really believe we are going to reduce the cost of energy production, storage and distribution to zero marginal cost. We're going to reduce the cost of producing healthy food and make that widely available to everybody. And I think the same trajectory with health care, with transportation, with education, I think that ends up producing radical abundance over a 30 year period.
01:38:45
Steven Bartlett
And in a world of radical abundance, what do I do with my day?
01:38:48
Mustafa Suleyman
I think that's another profound question. And believe me, that is a good problem to have. Absolutely.
01:38:55
Steven Bartlett
Do we not need meaning and purpose?
01:38:56
Mustafa Suleyman
Oh, man. That is a better problem to have than what we've just been talking about for the last, like, 90 minutes. And I think that's wonderful. Isn't that amazing?
01:39:06
Steven Bartlett
I don't know. The reason I'm unsure is because everything that seems wonderful has an unintended consequence.
01:39:14
Mustafa Suleyman
I'm sure it does. We live in a world of food abundance in the West, and our biggest problem is obesity. Right? So I'll take that problem, in the grand scheme of everything.
01:39:23
Steven Bartlett
Do humans not need struggle? Do we not need that kind of meaningful, voluntary struggle?
01:39:28
Mustafa Suleyman
I think we'll create other new opportunities to quest.
01:39:35
Steven Bartlett
Okay.
01:39:36
Mustafa Suleyman
I think that's an easier problem to solve, and I think it's an amazing problem. Many people really don't want to work, right? They want to pursue their passion and their hobby and all the things that you talk about, and so on. Absolutely. We're now, I think, heading towards a world where we can liberate people from the shackles of work, unless you really want to work.
01:39:55
Steven Bartlett
Universal basic income.
01:39:57
Mustafa Suleyman
I've been an advocate of UBI for a very long time.
01:40:01
Steven Bartlett
Everyone gets a check every month.
01:40:03
Mustafa Suleyman
I don't think it's going to take quite that form. I actually think it's going to be that we basically reduce the cost of producing basic goods so that you're not as dependent on income. Like, imagine if you did have basically free energy and food. You could use that free energy to grow your own food. You could grow it in a desert, because you would have adapted seeds and so on; you would have desalination and so on. That really changes the structure of cities, it changes the structure of nations. It means that you really can live in quite different ways, for very extended periods, without contact with the kind of center. I mean, I'm actually not a huge advocate of that kind of libertarian wet dream, but if you think about it in theory, it's kind of a really interesting dynamic.
01:40:50
Mustafa Suleyman
That's what proliferation of power means. Power isn't just about access to intelligence. It's about access to these tools which allow you to take control of your own destiny and your life, and create meaning and purpose in the way that you might envision. And that's an incredibly creative time. That's what success looks like to me, in some ways. As for the downside, I think failure is not achieving a world of radical abundance, in my opinion. And more importantly, failure is a failure to contain.
01:41:26
Steven Bartlett
Right. What does that lead to?
01:41:29
Mustafa Suleyman
I think it leads to a mass proliferation of power and people who have really bad intentions.
01:41:37
Steven Bartlett
What does that lead to?
01:41:38
Mustafa Suleyman
They will potentially use that power to cause harm to others. This is part of the challenge, right? In this networked, globalized world, a tiny group of people who wish to deliberately cause harm are going to have access to tools that can instantly and quickly have large-scale impact on many other people. And that's the challenge of proliferation: preventing those bad actors from getting access to the means to completely destabilize our world. That's what containment is about.
01:42:16
Steven Bartlett
We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're leaving the question for. The question left for you is, what is a space or place that you consider the most sacred?
01:42:33
Mustafa Suleyman
Well, I think one of the most beautiful places I remember going to as a child was Lake Windermere in the Lake District. I was pretty young, on a dinghy with some family members, and I just remember it being incredibly serene and beautiful and calm. I actually haven't been back there since, but that was a pretty beautiful place.
01:43:05
Steven Bartlett
Seems like the antithesis of the world we live in, right?
01:43:08
Mustafa Suleyman
Maybe I should go back there and chill out.
01:43:12
Steven Bartlett
Maybe. Thank you so much for writing such a great book. It's wonderful to read a book on this subject matter that does present solutions, because not many of them do. And it presents them in a balanced way that appreciates both sides of the argument. It isn't tempted to just play to either. What do they call it? Playing to the crowd? No, what do they call it, like playing to the gallery? I can't remember. Right. But it doesn't attempt to play to either side or pander to either side in order to score points. It's entirely nuanced, incredibly smart, and incredibly necessary because of the stakes the book confronts, which are at play in the world at the moment. And that's really important. It's very important, and I think it's important that everybody reads this book. It's incredibly accessible as well.
01:43:56
Steven Bartlett
And I said to Jack, who's the director of this podcast, before we started recording, that there are so many terms, like nanotechnology and the stuff about biotechnologies and quantum computing, and reading through the book, suddenly I understood what they meant. These had been kind of exclusive terms and technologies. I also had never understood the relationship that all of these technologies now have with each other, and how robotics merging with artificial intelligence is going to open up this whole new range of possibilities that, again, have a good side and a potential downside. It's a wonderful book, wonderfully written, and it's perfectly timed. I'm so thankful that I got to read it, and I highly recommend that anybody who's curious about this subject matter goes and gets the book. So thank you, Mustafa.
01:44:45
Steven Bartlett
Really, really appreciate your time and hopefully it wasn't too uncomfortable for you.
01:44:48
Mustafa Suleyman
Thank you. This was awesome. I loved it. It was really fun, and thanks for such an amazing, wide-ranging conversation.
01:44:54
Steven Bartlett
Thank you. If you've been listening to this podcast over the last few months, you'll know that we're sponsored and supported by Airbnb. But it amazes me how many people don't realise they could actually be sitting on their very own Airbnb. For me, as someone who works away a lot, it just makes sense to Airbnb my place at home whilst I'm away. If your job requires you to be away from home for extended periods of time, why leave your home empty? You can so easily turn your home into an Airbnb and let it generate income for you whilst you're on the road.
01:45:26
Steven Bartlett
Whether you could use a little extra money to cover some bills or for something a little bit more fun, your home might just be worth more than you think, and you can find out how much it's worth at airbnb.co.uk/host. That's airbnb.co.uk/host.
Source | Google's DeepMind Co-founder: AI Is Becoming More Dangerous And Threatening! - Mustafa Suleyman