Lex Fridman Podcast: OpenAI's CEO Sam Altman [Summary + Transcript]


Fireside by Fireflies

In this podcast episode, Lex Fridman sits down with OpenAI CEO Sam Altman to discuss his predictions for the computing landscape in the next decade. Altman shares his insights on anticipated advancements in GPT-5, Sora, and artificial general intelligence (AGI), and reflects on his recent challenges with the OpenAI board.

Here are the key takeaways from the podcast:

Lex Fridman Podcast: OpenAI's CEO Sam Altman—Summary powered by Fireflies.ai

Outline
  • Chapter 1: Introduction (00:06)
  • Chapter 2: Sponsor Acknowledgement (00:58)
  • Chapter 3: Reflections on Company Structures (04:38)
  • Chapter 4: Challenges in Progress (06:49)
  • Chapter 5: Decision-Making Process (16:17)
  • Chapter 6: Mission and Worries (20:07)
  • Chapter 7: Characterization of the Journey (26:26)
  • Chapter 8: Exploration of New Ideas (32:20)
  • Chapter 9: Sequential Steps and Brainstorming (35:44)
  • Chapter 10: Usefulness of GPTV (52:41)
  • Chapter 11: Fact Checking and Validating Information (53:30)
  • Chapter 12: Customizing AI for Individual Needs (55:56)
  • Chapter 13: Slow Thinking and Sequential Thinking (1:00:44)
  • Chapter 14: OpenAI's Approach to Milestones (1:03:47)
  • Chapter 15: Challenges and Bottlenecks in AI Development (1:07:08)
  • Chapter 16: Addressing Theatrics in AI Development (1:13:03)
  • Chapter 17: Vision for AI in the Future (1:15:37)
  • Chapter 18: AI as an Information Source and Search Engine (1:19:12)
  • Chapter 19: Ethical Issues in AI Development (1:22:16)
  • Chapter 20: Public Input and Transparency in AI Behavior (1:24:25)
  • Chapter 21: AI in Programming and Code Completion (1:29:56)
  • Chapter 22: The Role of Mathematics in AI (1:37:49)
  • Chapter 23: Personal Habits and the Use of Capital Letters (1:45:02)
  • Chapter 24: Reflections on Existential Questions (1:48:16)

Notes
  • Sam Altman and Lex Fridman discuss the importance of iterating organizational structures to de-escalate power struggles within a company.
  • They reflect on board structures, power dynamics, and the balance between research, product development, and funding.
  • Sam Altman emphasizes the importance of a board understanding the technical aspects of a company's mission, not just the business side.
  • They touch upon the importance of maintaining a broad understanding of various areas of a company or field, rather than focusing too deeply on one aspect.
  • Sam discusses the capabilities of OpenAI's GPT-4 model, mentioning its potential use in writing code, editing papers, and performing other knowledge work tasks.
  • Lex and Sam talk about the challenges of fact-checking information produced by AI like GPT-4.
  • They explore the idea of an AI model that learns and adapts to an individual over time, becoming more useful as it gains experience.
  • Sam discusses the potential of AI tools to synthesize information in new ways, offering a different approach to traditional search engines.
  • They talk about the potential impact of advertising on content and truth manipulation.
  • Sam proposes a public, transparent process for defining a model's desired behavior, inviting public input and explaining edge cases.
  • They discuss the importance of safety and responsible practices in AI development.
  • Sam talks about his interest in exploring the philosophical concept of consciousness and how it relates to AI.
  • Lastly, Lex thanks Sam for the conversation and encourages listeners to support the podcast by checking out their sponsors.

Lex Fridman Podcast: OpenAI's CEO Sam Altman - Summary powered by Fireflies.ai

Want to read the full conversation? Read the time-stamped transcript:

Lex Fridman Podcast: OpenAI's CEO Sam Altman—Transcript powered by Fireflies.ai

00:00
Sam Altman
I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world. I expect that by the end of.

00:08
Sam Altman
This decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable.

00:21
Sam Altman
The road to AGI should be a giant power struggle. I expect that to be the case.

00:26
Lex Fridman
Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day the very company that will build AGI. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman. Take me through the OpenAI board saga that started on Thursday, November 16, maybe Friday, November 17 for you.

01:13
Sam Altman
That was definitely the most painful professional.

01:16
Sam Altman
Experience of my life, and.

01:21
Sam Altman
Chaotic and.

01:23
Sam Altman
Shameful and upsetting and a bunch of other negative things.

01:30
Sam Altman
There were great things about it, too, and I wish I had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time. But I came across this old tweet.

01:46
Sam Altman
Of mine, or this tweet of mine.

01:47
Sam Altman
From that time period, which was like, it was like kind of going to your own eulogy, watching people say all these great things about you and just.

01:55
Sam Altman
Like unbelievable support from people I love and care about.

02:01
Sam Altman
That was really nice. That whole weekend, I kind of like, felt, with one big exception, I felt.

02:09
Sam Altman
Like a great deal of love and very little hate.

02:18
Sam Altman
Even though it felt like I have no idea what's happening and what's going to happen here. And this feels really bad.

02:23
Sam Altman
And there were definitely times I thought it was going to be like one.

02:27
Sam Altman
Of the worst things to ever happen for AI safety. Well, I also think I'm happy that.

02:32
Sam Altman
It happened relatively early.

02:35
Sam Altman
I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy.

02:42
Sam Altman
And explosive that happened, but there may.

02:45
Sam Altman
Be more crazy and explosive things still to happen.

02:49
Sam Altman
It still, I think, helped us build up some resilience and be ready for.

03:00
Sam Altman
More challenges in the future.

03:02
Lex Fridman
But the thing you had a sense that you would experience is some kind of power struggle.

03:08
Sam Altman
The road to AGI should be a.

03:10
Sam Altman
Giant power struggle, should.

03:14
Sam Altman
Well, not should I expect that to be the case.

03:17
Lex Fridman
And so you have to go through that. Like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you're working with, how to communicate all that in order to de-escalate the power struggle as much as possible, pacify it.

03:38
Sam Altman
But at this point, it feels like.

03:45
Sam Altman
Something that was in the past that was really unpleasant and really difficult and painful. But we're back to work, and things are so busy and so intense that I don't spend a lot of time thinking about it.

04:00
Sam Altman
There was a time after there was like this fugue state for kind of.

04:06
Sam Altman
Like the month after, maybe 45 days after that was. I was just sort of like drifting through the days. I was so out of it. I was feeling so down.

04:17
Lex Fridman
Just on a personal, psychological level.

04:19
Sam Altman
Yeah, really painful and hard to have.

04:24
Sam Altman
To keep running OpenAI in the middle of that. I just wanted to crawl into a.

04:30
Sam Altman
Cave and kind of recover for a while. But now it's like we're just back.

04:35
Sam Altman
To working on the mission.

04:38
Lex Fridman
Well, it's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff, so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. So there's value there to go, both the personal psychological aspects of you as a leader, and also just the board structure and all this kind of messy stuff.

05:18
Sam Altman
Definitely learned a lot about structure and incentives and what we need out of a board.

05:28
Sam Altman
And I think it is valuable that.

05:31
Sam Altman
This happened now, in some sense. I think this is probably not like the last high stress moment of OpenAI, but it was quite a high stress moment. Like, the company very nearly got destroyed. And we think a lot about many of the other things we've got to get right for AGI.

05:50
Sam Altman
But thinking about how to build a.

05:53
Sam Altman
Resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer.

05:59
Sam Altman
I think that's super important.

06:01
Lex Fridman
Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates, and why don't we fire Sam kind of thing?

06:22
Sam Altman
I think the board members were our.

06:26
Sam Altman
Well meaning people on the whole.

06:30
Sam Altman
And I believe that.

06:36
Sam Altman
In stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges.

06:51
Sam Altman
For OpenAI will be we're going to have a board and a.

06:56
Sam Altman
Team that are good at operating under pressure.

07:00
Lex Fridman
Do you think the board had too much power?

07:02
Sam Altman
I think boards are supposed to have.

07:04
Sam Altman
A lot of power, but one of.

07:07
Sam Altman
The things that we did see is, in most corporate structures, boards are usually answerable to shareholders. Sometimes people have, like, supervoting shares or whatever. In this case, and I think one of the things with our structure that.

07:21
Sam Altman
We maybe should have thought about more than we did is that the board of a nonprofit has, unless you put other rules in place, like quite a.

07:32
Sam Altman
Lot of power, they don't really answer to anyone but themselves. And there's ways in which that's good. But what we'd really like is for the board of OpenAI to answer to the world as a whole, as much.

07:42
Sam Altman
As that's a practical thing.

07:43
Lex Fridman
So there's a new board announced? Yeah, there's, I guess, a new smaller board at first, and now there's a new final board.

07:53
Sam Altman
Not a final board yet. We've added some. We'll add more.

07:56
Lex Fridman
Added some.

07:56
Sam Altman
Okay.

07:57
Lex Fridman
What is fixed in the new one that was perhaps broken in the previous one?

08:05
Sam Altman
The old board sort of got smaller over the course of about a year. It was nine, and then it went down to six, and then we couldn't agree on who to add.

08:15
Sam Altman
And the board also, I think, didn't.

08:20
Sam Altman
Have a lot of experienced board members.

08:21
Sam Altman
And a lot of the new board members at OpenAI just have more.

08:26
Sam Altman
Experience as board members.

08:29
Sam Altman
I think that'll help.

08:31
Lex Fridman
It's been criticized, some of the people that are added to the board. I heard a lot of people criticizing the addition of Larry Summers, for example. What's the process of selecting the board? Like, what's involved in that?

08:43
Sam Altman
So Bret and Larry were kind of decided in the heat of the moment over this very tense weekend. And that weekend was like a real roller coaster. It was like a lot of ups and downs. And we were trying to agree on.

09:00
Sam Altman
New board members that both sort of.

09:03
Sam Altman
The executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members'. Bret, I think I had even suggested previous to that weekend, but he was busy and didn't want to do it. And then we really needed help, and he would. We talked about a lot of other people, too, but.

09:27
Sam Altman
I felt like if I was going to come back, I needed new board members. I didn't think I could work with the old board.

09:37
Sam Altman
Again in the same configuration, although we.

09:39
Sam Altman
Then decided, and I'm grateful that Adam.

09:42
Sam Altman
Would stay, but we considered various configurations, decided we wanted to get to a.

09:49
Sam Altman
Board of three, and had to find.

09:53
Sam Altman
Two new board members over the course of sort of a short period of time. So those were decided honestly, without, that's like you kind of do that on the battlefield.

10:02
Sam Altman
You don't have time to design a.

10:03
Sam Altman
Rigorous process then. For new board members we'll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive, where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness.

10:28
Sam Altman
Well, and so one thing that Bret says, which I really like, is that.

10:33
Sam Altman
We want to hire board members in slates, not as individuals one at a time, thinking about a group of.

10:39
Sam Altman
People that will bring nonprofit expertise, expertise.

10:43
Sam Altman
At running companies, sort of good legal and governance expertise. That's kind of what we've tried to optimize for.

10:49
Lex Fridman
So is technical savvy important for the individual board members?

10:52
Sam Altman
Not for every board member, but for certainly some. You need that. That's part of what the board needs to do.

10:56
Lex Fridman
So, I mean, the interesting thing that people probably don't understand about OpenAI, I certainly don't, is all the details of running the business. When they think about the board, given the drama and think about you, they think about, like, if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what's the conversation with the board like? And they kind of think, all right, what's the right squad to have in that kind of situation to deliberate?

11:25
Sam Altman
Look, I think you definitely need some technical experts there, and then you need.

11:29
Sam Altman
Some people who are like.

11:32
Sam Altman
How can we deploy this in a way that will help people in the world the most and people who have a very different perspective.

11:39
Sam Altman
I think a mistake that you or.

11:41
Sam Altman
I might make is to think that only the technical understanding matters, and that's definitely part of the conversation you want that board to have. But there's a lot more about how that's going to just impact society and people's lives that you really want represented in there, too.

11:55
Lex Fridman
And are you looking at the track record of people or you're just having conversations?

12:00
Sam Altman
Track record is a big deal. You, of course, have a lot of conversations. But there's some roles where I kind.

12:09
Sam Altman
Of totally ignore track record and just.

12:14
Sam Altman
Look at slope, kind of ignore the y-intercept.

12:18
Lex Fridman
Thank you. Thank you for making it mathematical for the audience.

12:20
Sam Altman
For a board member, I do care much more about the y-intercept. I think there is something deep to.

12:26
Sam Altman
Say about track record there, and experience is sometimes very hard to replace.

12:32
Lex Fridman
Do you try to fit a polynomial function or an exponential one to the track record?

12:35
Sam Altman
That's not. That analogy doesn't carry that far.

12:39
Sam Altman
All right.

12:39
Lex Fridman
You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever?

12:54
Sam Altman
There were so many lows.

12:56
Sam Altman
That was a very bad period of time.

12:58
Sam Altman
There were great high points, too. My phone was just, like, sort of nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have because I was just, like, in the middle of this firefight.

13:13
Sam Altman
But that was really nice.

13:14
Sam Altman
But on the whole, it was like a very painful weekend, and also just like a very. It was like a battle fought in.

13:24
Sam Altman
Public to a surprising degree.

13:26
Sam Altman
And that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but.

13:32
Sam Altman
This one really was. The board did this.

13:37
Sam Altman
Friday afternoon. I really couldn't get much in the way of answers, but I also was.

13:42
Sam Altman
Just like, well, the board gets to.

13:44
Sam Altman
Do this, and so I'm going to think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, well, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I'd always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do, like, a very focused AGI research effort. And I got excited about that. It didn't even occur to me at the time that this was possibly all going to get undone. This was, like, Friday afternoon.

14:18
Lex Fridman
So you've accepted the death very quickly.

14:22
Sam Altman
Very quickly. I went through a little period of confusion and rage, but very quickly. And by Friday night, I was, like.

14:30
Sam Altman
Talking to people about what was going to be next, and I was excited about that.

14:37
Sam Altman
I think it was Friday night evening for the first time that I heard from the exec team here, which is.

14:42
Sam Altman
Like, hey, we're going to fight this.

14:44
Sam Altman
And we think, whatever. And then I went to bed just still being like, okay, excited. Onward.

14:52
Lex Fridman
Were you able to sleep?

14:53
Sam Altman
Not a lot. It was one of the weird things.

14:56
Sam Altman
Was it was this period of four and a half days where sort of.

15:01
Sam Altman
Didn't sleep much, didn't eat much, and still kind of had, like, a surprising amount of energy. You learn, like, a weird thing about adrenaline in wartime.

15:09
Lex Fridman
So you kind of accepted the death of this baby, OpenAI.

15:13
Sam Altman
And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.

15:16
Lex Fridman
It's a very good coping mechanism.

15:18
Sam Altman
And then Saturday morning, two of the board members called and said, hey, we didn't mean to destabilize things. We don't want to destroy a lot of value here. Can we talk about you coming back?

15:29
Sam Altman
And I immediately didn't want to do that, but I thought a little more.

15:33
Sam Altman
And it was like, well, I really care about the people here, the partners, shareholders. I love this company. And so I thought about it, and I was like, well, okay, but here's the stuff I would need.

15:46
Sam Altman
And then the most painful time of.

15:47
Sam Altman
All was over the course of that weekend.

15:52
Sam Altman
I kept thinking and being told, and.

15:55
Sam Altman
We all kept, not just me, like, the whole team here kept thinking, while we were trying to keep OpenAI stabilized, while the whole world was trying to break it apart, people trying to recruit, whatever, we kept being told, like, all right, we're almost done. We're almost done. We just need, like, a little bit more time. And it was this very confusing state. And then Sunday evening when, again, every few hours, I expected that we were going to be done and we're going.

16:17
Sam Altman
To figure out a way for me.

16:19
Sam Altman
To return and things to go back to how they were. The board then appointed a new interim CEO. And then I was like, I mean, that feels really bad.

16:30
Sam Altman
That was the low point of the whole thing.

16:36
Sam Altman
I'll tell you something, it felt very.

16:39
Sam Altman
Painful, but I felt a lot of.

16:42
Sam Altman
Love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as.

16:49
Sam Altman
Anger or hate, but I really just, like.

16:54
Sam Altman
I felt a lot of love from people towards people. It was, like, painful, but it was like the dominant emotion of the weekend was love, not hate.

17:04
Lex Fridman
You've spoken highly of Mira Murati, that she helped, especially as you put in a tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?

17:15
Sam Altman
Well, she did a great job during that weekend in a lot of chaos. But people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just sort of the normal.

17:38
Sam Altman
Drudgery of the day to day, how someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments.

17:47
Lex Fridman
Meaning, like, most of the work is done on a day by day, in the meeting, by meeting, just be present and make great decisions.

17:58
Sam Altman
Yeah. I mean, look, what you have wanted to spend the last 20 minutes on, and I understand, is, like, this one very dramatic weekend.

18:05
Sam Altman
Yeah.

18:06
Sam Altman
But that's not really what OpenAI is about. OpenAI is really about the other seven years.

18:10
Lex Fridman
Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still, that's something people focus on, which is very understandable. It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments. So it's, like, illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility?

18:36
Sam Altman
No.

18:37
Lex Fridman
What about a regular secret facility?

18:39
Sam Altman
No.

18:39
Lex Fridman
What about a nuclear, non secret facility?

18:41
Sam Altman
Neither of. Not that either.

18:43
Lex Fridman
I mean, it's becoming a meme at some point. You've known Ilya for a long time. He was obviously a part of this drama with the board and all that kind of stuff. What's your relationship with him now?

18:56
Sam Altman
I love Ilya.

18:57
Sam Altman
I have tremendous respect for Ilya. I don't have anything I can say about his plans right now. That's a question for him. But I really hope we work together.

19:08
Sam Altman
For certainly the rest of my career.

19:11
Sam Altman
He's a little bit younger than me. Maybe he works a little bit.

19:16
Lex Fridman
There's a meme that he saw something, like he maybe saw AGI, and that gave him a lot of worry internally. What did Ilya see?

19:28
Sam Altman
Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on.

19:50
Sam Altman
Society very seriously, and as we continue.

19:56
Sam Altman
To make significant progress. Ilya is one of the people that I've spent the most time over the.

20:02
Sam Altman
Last couple of years talking about what this is going to mean, what we.

20:07
Sam Altman
Need to do to ensure we get it right, to ensure that we succeed at the mission.

20:11
Sam Altman
So Ilya did not see AGI, but Ilya is a.

20:22
Sam Altman
Credit to humanity in terms of how much he thinks and.

20:26
Sam Altman
Worries about making sure we get this right.

20:30
Lex Fridman
I've had a bunch of conversations with him in the past. I think when he talks about technology, he's always, like, doing this long term thinking type of thing. So he's not thinking about what this is going to be in a year, he's thinking about in ten years. Just thinking from first principles, like, okay, if the scales, what are the fundamentals here? Where is this going? And so that's a foundation for then thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he's been kind of quiet? Is it he's just doing some soul searching?

21:08
Sam Altman
Again, I don't want to speak for Ilya. I think that you should ask him that. He's definitely a thoughtful guy, I think. I kind of think Ilya is, like, always on a soul search.

21:26
Sam Altman
In a really good way. Yes.

21:27
Lex Fridman
Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, which I totally. I've never seen that.

21:36
Sam Altman
It's very sweet when that happens.

21:39
Lex Fridman
I've never witnessed a silly Ilya, but I look forward to that as well.

21:43
Sam Altman
I was at a dinner party with him recently, and he was playing with a puppy, and he was, like, in a very silly mood, very endearing. And I was thinking, like, oh, man, this is, like, not the side of.

21:52
Sam Altman
The Ilya that the world sees the most.

21:55
Lex Fridman
So just to wrap up this whole saga, are you feeling good about the board structure, about all of this and where it's moving?

22:03
Sam Altman
I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members.

22:15
Sam Altman
In place first, but we clearly learned.

22:19
Sam Altman
A lesson about structure throughout this process. I don't have, I think, super deep things to say. It was a crazy, very painful experience. I think it was like a perfect storm of weirdness.

22:29
Sam Altman
It was like a preview for me.

22:31
Sam Altman
Of what's going to happen as the stakes get higher and the need that we have for, like, robust governance structures and processes and people. I am kind of happy it happened.

22:41
Sam Altman
When it did, but it was a shockingly painful thing to go through.

22:47
Lex Fridman
Did it make you be more hesitant in trusting people?

22:50
Sam Altman
Yes.

22:51
Lex Fridman
Just on a personal level, I think.

22:53
Sam Altman
I'm, like an extremely trusting person. I have always had a life philosophy of, don't worry about all of the paranoia, don't worry about the edge cases.

23:01
Sam Altman
You get a little bit screwed in.

23:04
Sam Altman
Exchange for getting to live with your guard down and this was so shocking to me.

23:09
Sam Altman
I was so caught off guard that it has definitely changed, and I really don't like this.

23:15
Sam Altman
It's definitely changed how I think about just, like, default trust of people and planning for the bad scenarios.

23:21
Lex Fridman
You got to be careful with that. Are you worried about becoming a little too cynical?

23:26
Sam Altman
I'm not worried about becoming too cynical. I think I'm like the extreme opposite of a cynical person, but I'm worried about just becoming less of a default trusting person.

23:35
Lex Fridman
I'm actually not sure which mode is best to operate in. For a person who's developing AGI, trusting or untrusting, it's an interesting journey you're on. But in terms of structure, see, I'm more interested in the human level. Like, how do you surround yourself with humans that are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get.

24:05
Sam Altman
I think you could make all kinds.

24:08
Sam Altman
Of comments about the board members and.

24:12
Sam Altman
The level of trust I should have had there or how I should have done things differently.

24:16
Sam Altman
But in terms of the team here.

24:18
Sam Altman
I think you'd have to give me.

24:20
Sam Altman
A very good grade on that one.

24:23
Sam Altman
And I have just, like, enormous gratitude.

24:26
Sam Altman
And trust and respect for the people.

24:29
Sam Altman
That I work with every day. And I think being surrounded with people.

24:31
Sam Altman
Like that is really important.

24:39
Lex Fridman
Our mutual friend Elon sued OpenAI. What to you is the essence of what he's criticizing. To what degree does he have a point? To what degree is he wrong?

24:52
Sam Altman
I don't know what it's really about. We started off just thinking we were.

24:57
Sam Altman
Going to be a research lab and having no idea about how this technology was going to go. It's hard to, because it was only seven or eight years ago, it's hard to go back and really remember what it was like then. But before language models were a big deal, this was before we had any idea about an API or selling access to a chat bot, before we had any idea we were going to productize at all. So we're like, we're just going to try to do research, and we don't really know what we're going to do with that. I think with many new, fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong.

25:31
Sam Altman
And then it became clear that we.

25:34
Sam Altman
Were going to need to do.

25:40
Sam Altman
Different.

25:40
Sam Altman
Things and also have huge amounts more capital. So we said, okay, well, the structure doesn't quite work for that. How do we patch the structure and then patch it again and patch it again, and you end up with something.

25:51
Sam Altman
That does look kind of eyebrow raising, to say the least. But we got here gradually with, I.

25:58
Sam Altman
Think, reasonable decisions at each point along the way.

26:01
Sam Altman
And doesn't mean I wouldn't do it.

26:04
Sam Altman
Totally differently if we could go back now with an oracle.

26:06
Sam Altman
But you don't get the oracle at the time.

26:08
Sam Altman
But anyway, in terms of what Elon's real motivations here are, I don't know.

26:12
Lex Fridman
To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?

26:21
Sam Altman
Oh, we just said, you know, Elon said.

26:25
Sam Altman
This set of things. Here's our characterization, or here's the sort of.

26:31
Sam Altman
Not our characterization.

26:32
Sam Altman
Here's like the characterization of how this went down. We tried to not make it emotional.

26:37
Sam Altman
And just sort of say.

26:42
Sam Altman
Here's the history.

26:44
Lex Fridman
I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys were a bunch of, like, a small group of researchers crazily talking about AGI when everybody's laughing at that thought.

27:08
Sam Altman
Wasn't that long ago Elon was crazily talking about launching rockets when people were.

27:13
Sam Altman
Laughing at that thought. So I think he'd have more empathy for this.

27:20
Lex Fridman
I mean, I do think that there's personal stuff here, that there was a split that OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal.

27:33
Sam Altman
Elon chose to part ways.

27:37
Lex Fridman
Can you describe that exactly, the choosing to part ways?

27:41
Sam Altman
He thought OpenAI was going to fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort at various times.

27:53
Sam Altman
He wanted to make OpenAI into a.

27:57
Sam Altman
For profit company that he could have control of or have it merge with Tesla.

28:01
Sam Altman
We don't want to do that.

28:02
Sam Altman
And he decided to leave, which that's fine.

28:06
Lex Fridman
So you're saying, and that's one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by.

28:16
Sam Altman
Yeah.

28:16
Lex Fridman
In the same way that, or maybe something similar or maybe something more dramatic than the partnership with Microsoft.

28:22
Sam Altman
My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it. I'm pretty sure that's what it was.

28:29
Lex Fridman
So what does the word open in OpenAI mean to Elon? At the time, Ilya has talked about this in the email exchanges and all this kind of stuff. What does it mean to you at the time? What does it mean to you now?

28:43
Sam Altman
I would definitely pick a different, speaking of going back with an oracle, I'd pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is.

28:55
Sam Altman
Putting powerful technology in the hands of people for free as a public good.

29:02
Sam Altman
We don't run ads on our free.

29:04
Sam Altman
Version, we don't monetize it in other ways.

29:08
Sam Altman
We just say it's part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. And I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that. That's a big deal. So if we can keep putting free or low cost or free and low cost powerful AI tools out in the.

29:37
Sam Altman
World, I think it's a huge deal.

29:40
Sam Altman
For how we fulfill the mission, open source or not. Yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but.

29:53
Sam Altman
I think nuance is the right answer.

29:55
Lex Fridman
So he said, change your name to closed AI and I'll drop the lawsuit. I mean, is it going to become this battleground in the land of memes about.

30:06
Sam Altman
I think that speaks to the seriousness with which Elon means the lawsuit.

30:18
Sam Altman
That's like an astonishing thing to say, I think.

30:21
Lex Fridman
I don't think the lawsuit maybe, correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of Agi and the company that's currently leading the.

30:37
Sam Altman
I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical. And then he announced that Grok will open source things this week. I don't think open source versus not.

30:47
Sam Altman
Is what this is really about for him.

30:48
Lex Fridman
Well, we'll talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit, that's great. But friendly competition versus, like, I personally hate lawsuits.

31:01
Sam Altman
Look, I think this whole thing is, like, unbecoming of the builder. And I respect Elon as one of the great builders of our time. And I know he knows what it's like to have haters attack him. And it makes me extra sad that he's doing it to us.

31:18
Lex Fridman
Yeah, he's one of the greatest builders of all time, potentially the greatest builder of all time.

31:22
Sam Altman
It makes me sad, and I think it makes a lot of people sad. Like, there's a lot of people who've really looked up to him for a long time. I said in some interview or something that I missed the old Elon, and the number of messages I got being like, that exactly encapsulates how I feel.

31:36
Lex Fridman
I think he should just win. He should just make Grok beat GPT, and then GPT beats Grok, and it's just a competition, and it's beautiful for everybody. But on the question of open source.

31:51
Lex Fridman
Do you think there's a lot of.

31:53
Lex Fridman
Companies playing with this idea? It's quite interesting. I would say Meta, surprisingly, has led the way on this, or at least took the first step in the game of chess of really open sourcing the model. Of course, it's not the state of the art model, but open sourcing Llama. Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with this idea?

32:22
Sam Altman
Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally. I think there's huge demand for. I think there will be some open source models. There will be some closed source models. It won't be unlike other ecosystems in that way.

32:39
Lex Fridman
I listened to the All-In Podcast talking about this lawsuit and all that kind of stuff, and they were more concerned about the precedent of going from nonprofit to this capped for-profit, what precedent that sets for other startups.

32:56
Sam Altman
I would heavily discourage any startup that was thinking about starting as a nonprofit and adding like a for profit arm later. I'd heavily discourage them from doing that.

33:04
Sam Altman
I don't think we'll set a precedent here.

33:05
Lex Fridman
Okay, so most startups should just go for-profit, for sure.

33:09
Sam Altman
And again, if we knew what was going to happen, we would have done that too.

33:12
Lex Fridman
Well, in theory, if you dance beautifully here, there's some tax incentives or whatever.

33:19
Sam Altman
I don't think that's like how most people think about these things.

33:22
Lex Fridman
It's just not possible to save a lot of money for a startup if you do it this way.

33:26
Sam Altman
No, I think there's like laws that would make that pretty difficult.

33:30
Lex Fridman
Where do you hope this goes with Elon, this tension, this dance? Where do you hope this, like, if we go one, two, three years from now, your relationship with him on a personal level, too, like friendship, friendly competition, just all this kind of stuff.

33:51
Sam Altman
Yeah. I really respect Elon, and I hope.

34:01
Sam Altman
That years in the future, we have an amicable relationship.

34:05
Lex Fridman
Yeah, I hope you guys have an amicable relationship, like this month, and just compete and win and explore these ideas together. I do suppose there's competition for talent or whatever, but it should be friendly competition. Just build cool shit. And Elon is pretty good at building cool shit, but so are you. So speaking of cool shit, Sora. There's like a million questions I could ask. First of all, it's amazing. It truly is amazing on a product level, but also just on a philosophical level. So let me just, technical, philosophical, ask: what do you think it understands about the world, more or less than GPT-4, for example, the world model, when you train on these patches versus language tokens?

35:05
Sam Altman
I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil, and say this is all fake. But it's not all fake. It's just some of it works and some of it doesn't work. I remember when I started first watching Sora videos, and I would see, like, a person walk in front of something for a few seconds and occlude it and then walk away, and the same thing was still there. I was like, oh, it's pretty good. Or there's examples where the underlying physics looks so well represented over a lot.

35:44
Sam Altman
Of steps in a sequence.

35:46
Sam Altman
It's like, this is quite impressive, but fundamentally, these models are just getting better, and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, there were a lot of people that dunked on each version saying, it can't do this, it can't do.

36:03
Sam Altman
That, and look at it now.

36:05
Lex Fridman
Well, the thing you just mentioned with occlusions is basically modeling the three-dimensional physics of the world sufficiently well to capture those kinds of things.

36:17
Sam Altman
Well.

36:19
Lex Fridman
Yeah, maybe you can tell in order to deal with occlusions, what does the world model need to.

36:24
Sam Altman
Yeah, so what I would say is it's doing something to deal with occlusions really well. Would I represent that it has, like, a great underlying 3D.

36:30
Sam Altman
Model of the world? It's a little bit more of a stretch.

36:33
Lex Fridman
But can you get there through just these kinds of two-dimensional training data approaches?

36:38
Sam Altman
It looks like this approach is going to go surprisingly far. I don't want to speculate too much about what limits it will surmount and which it won't.

36:45
Lex Fridman
But what are some interesting limitations of the system that you've seen? I mean, there's been some fun ones you've posted.

36:52
Sam Altman
There's all kinds of fun ones. I mean, like cats sprouting an extra limb at random points in a video. Pick what you want. But there's still a lot of problems, a lot of weaknesses.

37:02
Lex Fridman
Do you think it's a fundamental flaw of the approach, or is it just a bigger model or better technical details or better data, more data, that's going to solve the cat sprouting?

37:18
Sam Altman
I'll say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also I think it'll get better with scale.

37:30
Lex Fridman
Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches. So it converts all visual data, diverse kinds of visual data, videos and images, into patches. Is the training, to the degree you can, say, fully self supervised, or is there some manual labeling going on? Like, what's the involvement of humans in all this?

37:49
Sam Altman
I mean, without saying anything specific about.

37:51
Sam Altman
The Sora approach, we use lots of human data in our work, but not Internet-scale data.

38:03
Lex Fridman
So lots of humans. Lots is a complicated word, Sam.

38:08
Sam Altman
I think lots is a fair word in this, doesn't.

38:12
Lex Fridman
Because to me, lots, like, listen, I'm an introvert, and when I hang out with, like, three people, that's a lot of people. Four people. That's a lot. But I suppose you mean more than.

38:21
Sam Altman
More than three people work on labeling the data for these models. Yeah.

38:24
Lex Fridman
Okay. All right. But fundamentally, there's a lot of self-supervised learning, because what you mentioned in the technical report is Internet-scale data. That's another beautiful. It's like poetry. So it's a lot of data that's not human-labeled. It's, like, self-supervised in that way. And then the question is, how much data is there on the Internet that could be used that is conducive to this kind of self-supervised way? If only we knew the details of the self-supervised. Have you considered opening up a little more of the details?

39:02
Sam Altman
We have. You mean for Sora specifically?

39:04
Lex Fridman
Sora specifically, because it's so interesting: can the same magic of LLMs now start moving towards visual data, and what does it take to do that?

39:18
Sam Altman
I mean, it looks to me like.

39:19
Sam Altman
Yes, but we have more work to do? Sure.

39:22
Lex Fridman
What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?

39:29
Sam Altman
Frankly speaking, one thing we have to do before releasing the system is just.

39:34
Sam Altman
Get it to work at a level.

39:38
Sam Altman
Of efficiency that will deliver the scale people are going to want from this. So I don't want to downplay that. And there's still a ton of work to do there. But you can imagine issues with deep fakes, misinformation. We try to be a thoughtful company about what we put out into the world, and it doesn't take much thought to think about the ways this can go badly.

40:05
Lex Fridman
There's a lot of tough questions here. You're dealing in a very tough space. Do you think training AI should be or is fair use under copyright law?

40:14
Sam Altman
I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it? And that I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things. We've tried some different models. But if I'm like an artist, for.

40:34
Sam Altman
Example, A, I would like to be.

40:37
Sam Altman
Able to opt out of people generating art in my style, and B, if they do generate art in my style.

40:43
Sam Altman
I'd like to have some economic model associated with that.

40:46
Lex Fridman
Yeah, it's that transition from CDs to Napster to Spotify. We have to figure out some kind of model.

40:52
Sam Altman
The model changes, but people have got.

40:54
Sam Altman
To get paid well.

40:55
Lex Fridman
There should be some kind of incentive, if we zoom out even more, for humans to keep doing cool shit.

41:02
Sam Altman
Everything I worry about. Humans are going to do cool shit and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to achieve status in whatever way. That's not going anywhere, I don't think.

41:17
Lex Fridman
But the reward might not be monetary, financial. It might be like fame and celebration.

41:24
Sam Altman
Of other cool, maybe financial in some other way. Again, I don't think we've seen the last evolution of how the economic system.

41:30
Sam Altman
Is going to work.

41:31
Lex Fridman
Yeah, but artists and creators are worried when they see Sora. They're like, holy shit.

41:36
Sam Altman
Sure. Artists were also super worried when photography came out. And then photography became a new art form and people made a lot of money taking pictures. And I think things like that will keep happening. People will use the new tools in new ways.

41:50
Lex Fridman
If we just look on YouTube or something like this, how much of that will be using Sora, like AI-generated content, do you think, in the next five years?

42:00
Sam Altman
People talk about how many jobs.

42:03
Sam Altman
AI is going to do in five years, and the framework that people have is what percentage of current jobs are just going to be totally replaced by some AI doing the job. The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do and over what time horizon. So if you think of all of the, like, five-second tasks in the economy, the five-minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do?

42:29
Sam Altman
And I think that's a way more.

42:31
Sam Altman
Interesting, impactful, important question than how many jobs AI can do, because it is.

42:38
Sam Altman
A tool that will work at increasing.

42:41
Sam Altman
Levels of sophistication and over longer and.

42:43
Sam Altman
Longer time horizons for more and more.

42:46
Sam Altman
Tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point that's not just a quantitative change, but that's a qualitative one too, about the kinds of problems you can keep in your head. I think that for videos on YouTube.

43:02
Sam Altman
It'll be the same. Many videos.

43:04
Sam Altman
Maybe most of them will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it, putting it together, you know, doing parts.

43:14
Sam Altman
Of it, sort of directing and running it.

43:18
Lex Fridman
Yeah, it's so interesting. I mean, it's scary, but it's interesting to think about. I tend to believe that humans like to watch other humans, or other humans.

43:27
Sam Altman
Really care about other humans a lot.

43:29
Sam Altman
Yeah.

43:29
Lex Fridman
If there's a cooler thing that's better than a human, humans care about that for, like, two days, and then they go back to humans.

43:39
Sam Altman
That seems very deeply wired.

43:41
Lex Fridman
It's the whole chess thing. Yeah, but now everybody keep playing and let's ignore the elephant in the room that humans are really bad at chess.

43:50
Sam Altman
Relative to AI systems, we still run races and cars are much faster. I mean, there's like a lot of examples.

43:56
Lex Fridman
Yeah. And maybe it'll just be tooling in the Adobe suite type of way where you can just make videos much easier and all that kind of stuff. Listen, I hate being in front of the camera. If I can figure out a way to not be in front of the camera, I would love it. Unfortunately, it'll take a while. Like that generating faces, it's getting there, but generating faces in video format is tricky when it's specific people versus generic people. Let me ask you about GPT-4. There's so many questions. First of all, also amazing. Looking back, it'll probably be this kind of historic, pivotal moment with 3.5 and 4.

44:38
Sam Altman
With ChatGPT. Maybe five will be the pivotal moment.

44:41
Sam Altman
I don't know.

44:42
Sam Altman
Hard to say that looking forwards, we never know.

44:45
Lex Fridman
That's the annoying thing about the future. It's hard to predict. But for me, looking back, GPT-4, ChatGPT is pretty damn impressive. Like, historically impressive. So allow me to ask, what have been the most impressive capabilities of GPT-4 to you, and GPT-4 Turbo?

45:05
Sam Altman
I think it kind of sucks.

45:08
Lex Fridman
Typical human. Also gotten used to an awesome thing.

45:11
Sam Altman
No, I think it is an amazing thing.

45:13
Sam Altman
But.

45:16
Sam Altman
Relative to where we need to get to and where I believe we will get to at the time of.

45:23
Sam Altman
Like, GPT-3 people are like, oh, this is amazing.

45:27
Sam Altman
This is, this, like, marvel of technology. And it is, it was.

45:31
Sam Altman
But now we have GPT-4 and.

45:35
Sam Altman
Look at GPT-3 and you're like, that's unimaginably horrible. I expect that the delta between five and four will be the same as between four and three. And I think it is our job.

45:46
Sam Altman
To live a few years in the.

45:48
Sam Altman
Future and remember that the tools we.

45:50
Sam Altman
Have now are going to kind of.

45:53
Sam Altman
Suck looking backwards at them.

45:55
Sam Altman
And that's how we make sure the future is better.

45:59
Lex Fridman
What are the most glorious ways in which GPT-4 sucks?

46:04
Sam Altman
Meaning what are the best things it can do?

46:06
Lex Fridman
What are the best things it can do, and the limits of those best things that allow you to say it sucks, therefore gives you an inspiration and hope for the future.

46:16
Sam Altman
One thing I've been using it for.

46:17
Sam Altman
More recently is sort of like a.

46:21
Sam Altman
Brainstorming partner.

46:25
Lex Fridman
For that.

46:25
Sam Altman
There's a glimmer of something amazing in there. I don't think it gets. When people talk about what it does, they're like, it helps me code more productively, it helps me write faster and better, it helps me translate from this language to another. All these amazing things. But there's something about the kind of creative brainstorming partner. I need to come up with a name for this thing. I need to think about this problem in a different way. I'm not sure what to do here. That I think gives a glimpse of something I hope to see more of. One of the other things that you can see, like a very small glimpse.

47:07
Sam Altman
Of, is when it can help on longer horizon tasks.

47:12
Sam Altman
Break down something in multiple steps, maybe like, execute some of those steps, search the Internet, write code, whatever, put that together. When that works, which is not very.

47:21
Sam Altman
Often, it's like, very magical.

47:24
Lex Fridman
The iterative back and forth with a human. It works a lot for me. What do you mean?

47:28
Sam Altman
Iterative back and forth with a human, it can get right more often. When it can.

47:31
Sam Altman
Go do like a ten step problem on its own.

47:34
Sam Altman
It doesn't work for that too often.

47:35
Lex Fridman
Sometimes at multiple layers of abstraction. Or do you mean just sequential?

47:40
Sam Altman
Both. Like to break it down and then do things at different layers of abstraction.

47:45
Sam Altman
And put them together.

47:47
Sam Altman
Look, I don't want to downplay the accomplishment of GPT four.

47:53
Sam Altman
But I don't.

47:53
Sam Altman
Want to overstate it either. And I think this point that we are on an exponential curve. We will look back relatively soon at GPT four like we look back at GPT-3 now.

48:03
Lex Fridman
That said, ChatGPT was a transition to where people started to believe it. There is an uptick of believing, not internally at OpenAI, perhaps there's believers here.

48:19
Sam Altman
And in that sense, I do think it was a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface. And by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you and how to use it, than the underlying model itself.

48:38
Lex Fridman
How much of each of those things are important, the underlying model and the RLHF, or something of that nature that tunes it to be more compelling to the human, more effective and productive for the human.

48:55
Sam Altman
I mean, they're both super important. But the RLHF, the post training step.

49:00
Sam Altman
The little wrapper of things that, from.

49:04
Sam Altman
A compute perspective, little wrapper of things that we do on top of the base model, even though it's a huge amount of work, that's really important, to say nothing of the product that we build around it. In some sense, we did have to do two things. We had to invent the underlying technology, and then we had to figure out how to make it into a product.

49:28
Sam Altman
People would love, which is not just.

49:31
Sam Altman
About the actual product work itself, but this whole other step of how you align and make it useful and how.

49:37
Lex Fridman
You make the scale work where a lot of people can use it at the same time.

49:42
Sam Altman
All that kind of stuff and that. But that was like a known difficult thing.

49:47
Sam Altman
Like, we knew we were going to have to scale it up. We had to go do two things that had never been done before that.

49:53
Sam Altman
Were both like, I would say, quite.

49:54
Sam Altman
Significant achievements, and then a lot of things like scaling it up that other.

49:58
Sam Altman
Companies have had to do before.

50:01
Lex Fridman
How does the context window, going from 8K to 128K tokens, compare from GPT-4 to GPT-4 Turbo?

50:12
Sam Altman
People like long context. Most people don't need all the way to 128K most of the time. Although if we dream into the distant future, the way distant future, we'll have context lengths of several billion. You will feed in all of your information, all of your history over time, and it'll just get to know you better and better, and that'll be great. For now, the way people use these models, they're not doing that. And people sometimes post in a paper or a significant fraction of a code repository, whatever. But most usage of the models is not using the long context most of the time.

50:49
Lex Fridman
I like that this is your I have a dream speech. One day you'll be judged by the full context of your character or of your whole lifetime. That's interesting. So that's part of the expansion that you're hoping for is a greater and greater context.

51:06
Sam Altman
I saw this Internet clip once, I'm going to get the numbers wrong, but it was like Bill Gates talking about the amount of memory on some early computer, maybe 64K, maybe 640K, something like that. And most of it was used for the screen buffer. And he seemed genuine in this; he just couldn't imagine that the world would eventually.

51:24
Sam Altman
Need gigabytes of memory in a computer.

51:27
Sam Altman
Or terabytes of memory in a computer. You always do just need to follow the exponential of technology, and we will find out how to use better technology. So I can't really imagine what it's.

51:42
Sam Altman
Like right now for context lengths to.

51:44
Sam Altman
Go out to the billions someday. And they might not literally go there, but effectively it'll feel like that.

51:51
Sam Altman
But I know we'll use it and really not want to go back once we have it.

51:56
Lex Fridman
Yeah. Even saying billions ten years from now might seem dumb because it'll be like trillions upon trillions. Sure, there'll be some kind of breakthrough that will effectively feel like infinite context. But even 128K, I have to be honest, I haven't pushed it to that degree, maybe putting in entire books or parts of books and so on, papers. What are some interesting use cases of GPT-4 that you've seen?

52:23
Sam Altman
The thing that I find most interesting is not any particular use case that we can talk about those, but it's.

52:28
Sam Altman
People who kind of like this is mostly younger people, but people who use it as like their default start for.

52:36
Sam Altman
Any kind of knowledge work task. And it's the fact that it can do a lot of things reasonably well. You can use GPTV, you can use it to help you write code. You can use it to help you do search. You can use it to edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow.

52:52
Lex Fridman
I do as well for many things. I use it as a reading partner for reading books. It helps me think through ideas, especially when the books are classics, so they're really well written about, and I find it often to be significantly better than even, like, Wikipedia on well-covered topics. It's somehow more balanced and more nuanced. Or maybe it's me, but it inspires me to think deeper than a Wikipedia article does. I'm not exactly sure what that is. You mentioned this collaboration. I'm not sure where the magic is, if it's in here or if it's in there, or if it's somewhere in between, I'm not sure.

53:30
Lex Fridman
But one of the things that concerns me for knowledge tasks when I start with GPT is I'll usually have to do fact checking after, like check that it didn't come up with fake stuff. How do you figure out that GPT can come up with fake stuff that sounds really convincing? So how do you ground it in truth?

53:55
Sam Altman
That's obviously an area of intense interest for us.

53:58
Sam Altman
I think it's going to get a.

54:01
Sam Altman
Lot better with upcoming versions, but we'll have to continue to work on it, and we're not going to have it all solved this year.

54:07
Lex Fridman
Well, the scary thing is, as it gets better, you'll start not doing the fact checking more and more, right?

54:14
Sam Altman
I'm of two minds about that. I think people are like much more sophisticated users of technology than we often give them credit for. And people seem to really understand that GPT, any of these models hallucinate some of the time, and if it's mission.

54:26
Sam Altman
Critical, you got to check it.

54:27
Lex Fridman
Except journalists don't seem to understand that. I've seen journalists half-assedly just using GPT-4.

54:32
Sam Altman
Of the long list of things I'd like to dunk on journalists for, this is not my top criticism of them.

54:40
Lex Fridman
Well, I think the bigger criticism is perhaps that the pressures and the incentives of being a journalist are that you have to work really quickly, and this is a shortcut. I would love our society to incentivize long journalistic efforts that take days and weeks and to reward great in-depth journalism. Also journalism that presents stuff in a balanced way, where it celebrates people while criticizing them, even though the criticism is the thing that gets clicks, and making shit up also gets clicks, and headlines that mischaracterize completely. I'm sure you have a lot of people dunking on all that drama.

55:20
Sam Altman
Probably got a lot of clicks. Probably did.

55:24
Lex Fridman
And that's a bigger problem about human civilization. I'd love to see it solved so that we celebrate a bit more. You've given ChatGPT the ability to have memories of previous conversations, and you've been playing with that, and also the ability to turn off memory. I wish I could do that sometimes, just turn it on and off, depending, I guess. Sometimes alcohol can do that, but not optimally, I suppose. What have you seen through that, playing around with that idea of remembering conversations or not?

55:56
Sam Altman
We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration.

56:12
Sam Altman
I think there's like a lot of.

56:13
Sam Altman
Other things to do, but that's where you'd like to head. You'd like to use a model and over the course of your life, or use a system, it'd be many models, and over the course of your life.

56:24
Sam Altman
It gets better and better. Yeah.

56:26
Lex Fridman
How hard is that problem? Because right now it's more like remembering little factoids and preferences and so on. What about remembering? Don't you want GPT to remember all the shit you went through in November and all the drama and then you can, because right now you're clearly blocking it out a little bit.

56:43
Sam Altman
It's not just that I want it to remember that. I want it to integrate the lessons of that and remind me in the.

56:53
Sam Altman
Future what to do differently or what to watch out for.

56:58
Sam Altman
And we all gain from experience over.

57:02
Sam Altman
The course of our lives, varying degrees. And I'd like my AI agent to.

57:07
Sam Altman
Gain with that experience too. So if we go back and let ourselves imagine that, trillions and trillions of.

57:15
Sam Altman
Contact length, if I can put every.

57:18
Sam Altman
Conversation I've ever had with anybody in my life in there, if I can have all of my emails, all of my input and output, in the context window every time I ask a question, that'd be pretty cool, I think.
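Until context windows get anywhere near that large, a system that "gets to know you" would more plausibly keep a separate long-term memory and pull only relevant items into the prompt. The sketch below is an assumed, simplified design, with keyword-overlap retrieval and an explicit delete method for the user-choice point that comes up next; it is not a description of ChatGPT's actual memory feature.

```python
from typing import List

class MemoryStore:
    """Tiny long-term memory: save notes, retrieve the few most relevant
    ones for the current question, and let the user delete anything."""

    def __init__(self) -> None:
        self.notes: List[str] = []

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def forget(self, keyword: str) -> None:
        # User choice: strike anything mentioning the keyword from the record.
        self.notes = [n for n in self.notes if keyword.lower() not in n.lower()]

    def recall(self, query: str, k: int = 3) -> List[str]:
        # Rank by naive keyword overlap; a real system would use embeddings.
        q = set(query.lower().split())
        scored = sorted(self.notes,
                        key=lambda n: len(q & set(n.lower().split())),
                        reverse=True)
        return scored[:k]

if __name__ == "__main__":
    memory = MemoryStore()
    memory.remember("User prefers short, direct answers.")
    memory.remember("User is learning Rust this year.")
    memory.remember("November was a stressful month at work.")
    memory.forget("November")  # stricken from the record, per user choice
    print(memory.recall("any tips for my Rust learning plan?"))
```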

57:29
Lex Fridman
Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it? The more effective the AI becomes at really integrating all the experiences and all the data that happened to you and give you advice.

57:48
Sam Altman
I think the right answer there is just user choice. Anything I want stricken from the record from my AI agent I want to be able to take out, if I don't want it to remember anything.

57:55
Sam Altman
I want that too. You and I may have different opinions.

58:00
Sam Altman
About where on that privacy utility trade off for our own AI we want.

58:04
Sam Altman
To be, which is totally fine. But I think the answer is just.

58:07
Sam Altman
Like, really easy user choice.

58:08
Lex Fridman
But there should be some high level of transparency from a company about the user choice, because companies in the past have been kind of shady about it. It's kind of presumed that we're collecting all your data and we're using it for a good reason, for advertisement and so on, but there's no transparency about the details of that.

58:31
Sam Altman
That's totally true. You mentioned earlier that I'm, like, blocking out the November stuff, teasing you.

58:36
Sam Altman
Well, I think it was a very.

58:40
Sam Altman
Traumatic thing, and it did immobilize me for a long period of time. Definitely the hardest work thing I've had to do was to just keep working through that period, because I had to try to come back in here and put the pieces.

58:54
Sam Altman
Together while I was just like in.

58:57
Sam Altman
Sort of shock and pain.

58:58
Sam Altman
And nobody really cares about that.

59:01
Sam Altman
I mean, the team gave me a pass and I was not working at my normal level, but there was a.

59:04
Sam Altman
Period where I was just like, it.

59:07
Sam Altman
Was really hard to have to do.

59:08
Sam Altman
Both, but I kind of woke up one morning and I was like, this.

59:11
Sam Altman
Was a horrible thing that happened to me. I think I could just feel like a victim forever. Or I can say this is like the most important work I'll ever touch in my life and I need to get back to it. And it doesn't mean that I've repressed.

59:23
Sam Altman
It, because sometimes I, like, wake from.

59:26
Sam Altman
The middle of the night thinking about.

59:27
Sam Altman
It, but I do feel like an.

59:29
Sam Altman
Obligation to keep moving forward.

59:32
Lex Fridman
Well, that's beautifully said, but there could be some lingering stuff in there. What I would be concerned about is that trust thing that you mentioned, being paranoid about people as opposed to just trusting everybody or most people, like using your gut. It's a tricky dance, for sure. I mean, in my part-time explorations, I've been diving deeply into the Zelensky administration and the Putin administration and the dynamics there in wartime, in a very highly stressful environment. And what happens is distrust, and you isolate yourself, and you start to not see the world clearly. And that's a concern. That's a human concern. You seem to have taken it in stride and kind of learned the good lessons and felt the love and let the love energize you, which is great, but it still can linger in there.

01:00:30
Lex Fridman
There are just some questions I would love to ask your intuition about, what GPT is able to do and not. So it's allocating approximately the same amount of compute for each token it generates. Is there room in this kind of approach for slower thinking, sequential thinking?

01:00:51
Sam Altman
I think there will be a new paradigm for that kind of thinking.

01:00:55
Lex Fridman
Will it be similar, like architecturally, as what we're seeing now with LLMs? Is it a layer on top of llms?

01:01:04
Sam Altman
I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking where the answer doesn't have to get?

01:01:21
Sam Altman
I guess spiritually you could say that.

01:01:22
Sam Altman
You want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important.

01:01:30
Lex Fridman
Is that like a human thought that we're just having, that you should be able to think hard? Is that a wrong intuition?

01:01:34
Sam Altman
I suspect that's a reasonable intuition.

01:01:36
Lex Fridman
Interesting. So it's not possible, once GPT gets to, like, GPT-7, that it would just instantaneously be able to see, here's the proof of Fermat's theorem?

01:01:49
Sam Altman
It seems to me like you want to be able to allocate more compute to harder problems. It seems to me that a system knowing, if you ask a system like.

01:02:03
Sam Altman
That, prove Fermat's Last Theorem, versus what's today's date?

01:02:11
Sam Altman
Unless it already knew and had memorized.

01:02:13
Sam Altman
The answer to the proof, assuming it's got to go figure that out. Seems like that will take more compute.
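One simple way to "allocate more compute to harder problems," in the spirit of what's described here, is to sample more candidate answers (or longer reasoning chains) when a question looks hard and then aggregate them. The difficulty heuristic and the stubbed-out sampler below are assumptions for illustration, not how any OpenAI system actually works.

```python
import random
from collections import Counter
from typing import Callable

def looks_hard(question: str) -> bool:
    """Crude difficulty heuristic (purely an assumption for illustration):
    long questions, or ones asking for a proof, get more compute."""
    return len(question.split()) > 20 or "prove" in question.lower()

def answer_with_adaptive_compute(question: str,
                                 sample_answer: Callable[[str], str]) -> str:
    """Spend one sample on easy questions; on hard ones, draw many samples
    and take a majority vote. More samples means more compute per question."""
    n = 16 if looks_hard(question) else 1
    votes = Counter(sample_answer(question) for _ in range(n))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # Stub sampler standing in for a model call: noisy, usually right.
    def sampler(question: str) -> str:
        return random.choice(["42", "42", "42", "41"])

    print(answer_with_adaptive_compute("What's today's date?", sampler))
    print(answer_with_adaptive_compute("Prove this claim about primes in ten steps, please.", sampler))
```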

01:02:20
Lex Fridman
But can it look like basically LLM talking to itself, that kind of thing?

01:02:25
Sam Altman
Maybe.

01:02:25
Sam Altman
I mean, there's a lot of things.

01:02:26
Sam Altman
That you could imagine working.

01:02:31
Sam Altman
What the right or the best way.

01:02:33
Sam Altman
To do that will be, we don't know.

01:02:37
Lex Fridman
This does make me think of the mysterious, the lore behind Q star. What's this mysterious Q star project? Is it also in the same nuclear facility?

01:02:50
Sam Altman
There is no nuclear facility.

01:02:52
Lex Fridman
That's what a person with a nuclear facility always says.

01:02:54
Sam Altman
I would love to have a secret nuclear facility.

01:02:57
Sam Altman
There isn't one.

01:02:59
Lex Fridman
All right.

01:03:00
Sam Altman
Maybe someday.

01:03:01
Lex Fridman
Someday. All right. One can dream.

01:03:05
Sam Altman
OpenAI is not a good company at keeping secrets. It would be nice. We've been plagued by a lot of leaks, and it would be nice if we were able to have something like that.

01:03:14
Lex Fridman
Can you speak to what Q star is?

01:03:15
Sam Altman
We are not ready to talk about that.

01:03:17
Lex Fridman
See, but an answer like that means there's something to talk about. It's very mysterious, Sam.

01:03:22
Sam Altman
I mean, we work on all kinds of research. We have said for a while that.

01:03:30
Sam Altman
We think better reasoning in these systems.

01:03:37
Sam Altman
Is an important direction that we'd like to pursue.

01:03:40
Sam Altman
We haven't cracked the code yet.

01:03:44
Sam Altman
We're very interested in it.

01:03:47
Lex Fridman
Are there going to be moments, Q star or otherwise, where there are going to be leaps similar to ChatGPT, where.

01:03:55
Sam Altman
You're like, that's a good question.

01:03:59
Sam Altman
What do I think about that?

01:04:05
Sam Altman
It's interesting. To me, it all feels pretty continuous, right?

01:04:08
Lex Fridman
This is kind of a theme of what you're saying: it's gradual, you're basically gradually going up an exponential slope. But from an outsider perspective, for me, just watching it, it does feel like there are leaps, but to you there aren't.

01:04:21
Sam Altman
I do wonder if we should have. So part of the reason that we.

01:04:26
Sam Altman
Deploy the way we do is that.

01:04:27
Sam Altman
We think, which we call iterative deployment. Rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is, I think AI and surprise don't go together. And also the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy. And we get the world to pay attention to progress, to take AGI seriously.

01:05:00
Sam Altman
To think about what systems and structures.

01:05:04
Sam Altman
And governance we want in place before we're like under the gun and have to make a rush decision. I think that's really good. But the fact that people like you.

01:05:11
Sam Altman
And others say you still feel like.

01:05:15
Sam Altman
There are these leaps, makes me think.

01:05:17
Sam Altman
That maybe we should be doing our.

01:05:20
Sam Altman
Releasing even more iteratively. I don't know what that would mean. I don't have an answer ready to go.

01:05:24
Sam Altman
But our goal is not to have shock updates to the world. The opposite.

01:05:29
Lex Fridman
Yeah, for sure. More iterative would be amazing. I think that's just beautiful for everybody.

01:05:34
Sam Altman
But that's what we're trying to do. That's, like, our stated strategy, and I think we're somehow missing the mark. So maybe we should think about releasing.

01:05:41
Sam Altman
GPT five in a different way or something like that.

01:05:44
Lex Fridman
Yeah, 4.71, 4.72. But people tend to like to celebrate, people celebrate birthdays. I don't know if you know humans, but they kind of have these milestones.

01:05:54
Sam Altman
I do know some humans, people do like milestones.

01:05:59
Sam Altman
I totally get that.

01:06:02
Sam Altman
I think we like milestones too. It's like, fun to say, declare victory on this one and go start the next thing. But, yeah, I feel like we're somehow getting this a little bit wrong.

01:06:12
Lex Fridman
So when is GPT five coming out again?

01:06:15
Sam Altman
I don't know.

01:06:16
Sam Altman
That's the honest answer.

01:06:18
Lex Fridman
That's the honest answer. Blink twice if it's this year.

01:06:27
Sam Altman
I also.

01:06:29
Sam Altman
We will release an amazing model this year. I don't know what we'll call it.

01:06:36
Lex Fridman
So that goes to the question of, like, what's the way we release this thing?

01:06:41
Sam Altman
We'll release over in the coming months many different things. I think they'll be very cool.

01:06:49
Sam Altman
I think before we talk about, like, a GPT-5, like, a model called that or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, I think we have a lot of other important things to release first.

01:07:02
Lex Fridman
I don't know what to expect from GPT five. You're making me nervous and excited. What are some of the biggest challenges and bottlenecks to overcome for whatever it ends up being called? But let's call it GPT five. Just interesting to ask. Is it on the compute side? Is it on the technical side?

01:07:21
Sam Altman
It's always all of these. What's the one big unlock? Is it a bigger computer? Is it like a new secret? Is it something else? It's all of these things together. Like, the thing that OpenAI, I think, does really well. This is actually an original Ilya quote that I'm going to butcher, but it's something like, we multiply 200 medium-sized things together into one giant thing.

01:07:47
Lex Fridman
So there's this distributed, constant innovation happening.

01:07:50
Sam Altman
Yeah.

01:07:51
Lex Fridman
So even on the technical side, especially on the technical side, even, like, detailed approaches, detailed aspects of everything. How does that work with different disparate teams and so on? How do the medium-sized things become one whole giant transformer?

01:08:08
Sam Altman
There's a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head.

01:08:14
Lex Fridman
Oh, like the individual teams, individual contributors.

01:08:16
Sam Altman
Tried at a high level.

01:08:17
Sam Altman
Yeah.

01:08:18
Sam Altman
You don't know exactly how every piece.

01:08:19
Sam Altman
Works, of course, but one thing I.

01:08:22
Sam Altman
Generally believe is that it's sometimes useful to zoom out and look at the entire map. And I think this is true for a technical problem. I think this is true for innovating in business. But things come together in surprising ways and having an understanding of that whole.

01:08:42
Sam Altman
Picture, even if most of the time.

01:08:45
Sam Altman
You're operating in the weeds in one.

01:08:47
Sam Altman
Area pays off with surprising insights.

01:08:51
Sam Altman
In fact, one of the things that I used to have, and I think was super valuable was I used to have a good map of all of the frontier or most of the frontiers in the tech industry. And I could sometimes see these connections or new things that were possible that if I were only deep in one area, I wouldn't be able to have the idea for because I wouldn't have all the data. And I don't really have that much anymore.

01:09:16
Sam Altman
I'm super deep now, but I know that it's a valuable thing.

01:09:23
Lex Fridman
You're not the man you used to be, Sam.

01:09:25
Sam Altman
Very different job now than what I used to have.

01:09:27
Lex Fridman
Speaking of zooming out, let's zoom out to another cheeky but perhaps profound thing that you said: you tweeted about needing $7 trillion.

01:09:41
Sam Altman
I did not tweet about that.

01:09:42
Sam Altman
I never said, like, we're raising $7 trillion, blah, blah.

01:09:45
Lex Fridman
Oh, that's somebody else. Yeah, but you said, fuck it, maybe eight.

01:09:50
Sam Altman
I think, okay, I meme once there's misinformation out in the world.

01:09:53
Lex Fridman
Oh, you meme. But sort of misinformation may have a foundation of insight there.

01:10:00
Sam Altman
Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world. And I think we should be investing heavily to make a lot more compute. Compute is unusual. I think it's going to be an unusual market.

01:10:22
Sam Altman
People think about the market for chips.

01:10:28
Sam Altman
For mobile phones or something like that. And you can say that, okay, there's 8 billion people in the world. Maybe 7 billion of them have phones, maybe, or 6 billion. Let's say they upgrade every two years. So the market per year is 3 billion systems-on-chip for smartphones. And if you make 30 billion, you will not sell ten times as many phones, because most people have one phone. But compute is different. Intelligence is going to be more like energy or something like that, where the only thing that I think makes.

01:10:57
Sam Altman
Sense to talk about is at price.

01:11:00
Sam Altman
X, the world will use this much compute. And at price Y, the world will use this much compute. Because if it's really cheap, I'll have it, like reading my email all day, like giving me suggestions about what I maybe should think about or work on and trying to cure cancer. And if it's really expensive, maybe I'll only use it, or we'll only use.

01:11:18
Sam Altman
It, try to cure cancer. So I think the world is going.

01:11:21
Sam Altman
To want a tremendous amount of compute. And there's a lot of parts of that are hard. Energy is the hardest part. Building data centers is also hard. The supply chain is hard. And then, of course, fabricating enough chips is hard. But this seems to me where things are going. Like, we're going to want an amount.

01:11:38
Sam Altman
Of compute that's just hard to reason about right now.

01:11:43
Lex Fridman
How do you solve the energy puzzle? Nuclear.

01:11:46
Sam Altman
That's what I believe.

01:11:47
Lex Fridman
Fusion.

01:11:47
Sam Altman
That's what I believe.

01:11:49
Lex Fridman
Nuclear fusion.

01:11:50
Sam Altman
Yeah.

01:11:51
Lex Fridman
Who's going to solve that?

01:11:53
Sam Altman
I think Helion's doing the best work, but I'm happy there's, like a race.

01:11:55
Sam Altman
For fusion right now.

01:11:56
Sam Altman
Nuclear fission, I think, is also quite amazing, and I hope as a world we can re-embrace that. It's really sad to me how the history of that went, and I hope we.

01:12:06
Sam Altman
Get back to it in a meaningful way.

01:12:08
Lex Fridman
So to you, part of the puzzle is nuclear fission, like nuclear reactors as we currently have them, and a lot of people are terrified because of Chernobyl and so on.

01:12:16
Sam Altman
Well, I think we should make new reactors. I think it's just like, it's a shame that industry kind of ground to a halt.

01:12:22
Lex Fridman
And is it just mass hysteria? Is that how you explain the halt?

01:12:25
Sam Altman
Yeah.

01:12:26
Lex Fridman
I don't know if you know humans, but that's one of the dangers, that's one of the security threats for nuclear fission is humans seem to be really afraid of it, and that's something we have to incorporate into the calculus of it. So we have to kind of win people over and to show how safe it is.

01:12:44
Sam Altman
I worry about that for AI.

01:12:47
Sam Altman
I think some things are going to go theatrically wrong with AI.

01:12:52
Sam Altman
I don't know what the percent chance.

01:12:54
Sam Altman
Is that I eventually get shot, but it's not zero.

01:12:57
Lex Fridman
Oh, like we want to stop this, maybe. How do you decrease the theatrical nature of it? I'm already starting to hear rumblings, because I do talk to people on both sides of the political spectrum, hear rumblings where it's going to be politicized. AI is going to be politicized, and that really worries me, because then it's like, maybe the right is against AI and the left is for AI because it's going to help the people, or whatever the narrative and formulation is. That really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that?

01:13:38
Sam Altman
I think it will get caught up in left versus right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence. Unfortunately, what I meant more about theatrical risks is AI is going to have, I believe, tremendously more good consequences than bad ones. But it is going to have bad ones, and there will be some bad.

01:13:58
Sam Altman
Ones that are.

01:14:03
Sam Altman
Bad, but not theatrical.

01:14:07
Sam Altman
A lot more people have died of.

01:14:08
Sam Altman
Air pollution than nuclear reactors, for example. But we worry, most people worry more about living next to a nuclear reactor than a coal plant. But something about the way we're wired is that although there's many different kinds.

01:14:23
Sam Altman
Of risks, we have to confront the.

01:14:25
Sam Altman
Ones that make a good climax scene.

01:14:27
Sam Altman
Of a movie carry much more weight.

01:14:29
Sam Altman
With us than the ones that are very bad over a long period of.

01:14:33
Sam Altman
Time, but on a slow burn.

01:14:36
Lex Fridman
Well, that's why truth matters. And hopefully AI can help us see the truth of things, to have balance, to understand what are the actual risks, what are the actual dangers of things in the world. What are the pros and cons of the competition in the space, of competing with Google, Meta, xAI, and others?

01:14:56
Sam Altman
I think I have a pretty straightforward answer to this.

01:15:00
Sam Altman
Maybe I can think of something more nuanced later, but the pros seem obvious, which.

01:15:03
Sam Altman
Is that we get better products and.

01:15:05
Sam Altman
More innovation faster and cheaper, and all the reasons competition is good.

01:15:09
Sam Altman
And the con is that I think.

01:15:13
Sam Altman
If we're not careful, it could lead.

01:15:14
Sam Altman
To an increase in sort of an.

01:15:18
Sam Altman
Arms race that I'm nervous about.

01:15:21
Lex Fridman
Do you feel the pressure of the arms race? Like in some negative?

01:15:25
Sam Altman
Definitely, in some ways, for sure. We spend a lot of time talking.

01:15:29
Sam Altman
About the need to prioritize safety, and.

01:15:34
Sam Altman
I've said for like, a long time.

01:15:35
Sam Altman
That I think if you think of.

01:15:37
Sam Altman
A quadrant of short timelines or long timelines to the start of AGI, and then a slow takeoff or a fast takeoff, I think short timelines, slow takeoff is the safest quadrant, and the one I'd most like us to be in.

01:15:52
Sam Altman
But I do want to make sure we get that slow takeoff.

01:15:55
Lex Fridman
Part of the problem I have with this kind of slight beef with Elon is that silos are created, and as opposed to collaboration on the safety aspect of all of this, it tends to go into silos and being closed. Open source, perhaps, in the model.

01:16:10
Sam Altman
Elon says at least that he cares a great deal about AI safety and is really worried about it.

01:16:15
Sam Altman
And I assume that he's not going to race unsafely.

01:16:20
Lex Fridman
Yeah, but collaboration here, I think, is really beneficial for everybody on that front.

01:16:25
Sam Altman
Not really the thing he's most known for.

01:16:28
Lex Fridman
Well, he is known for caring about humanity, and humanity benefits from collaboration, and so there's always attention and incentives and motivations. And in the end, I do hope humanity prevails.

01:16:42
Sam Altman
I was thinking, someone just reminded me the other day about how the day.

01:16:45
Sam Altman
That he surpassed Jeff Bezos as the.

01:16:49
Sam Altman
Richest person in the world, he tweeted.

01:16:51
Sam Altman
A silver medal at Jeff Bezos.

01:16:55
Sam Altman
I hope we have less stuff like.

01:16:56
Sam Altman
That as people start to work on.

01:16:58
Sam Altman
I agree, towards AGI.

01:16:59
Lex Fridman
I think Elon is a friend and he's a beautiful human being, one of the most important humans ever. That stuff is not good.

01:17:07
Sam Altman
The amazing stuff about Elon is amazing, and I super respect him.

01:17:11
Sam Altman
I think we need him.

01:17:13
Sam Altman
All of us should be rooting for.

01:17:14
Sam Altman
Him and need him to step up as a leader through this next phase.

01:17:19
Lex Fridman
Yeah, I hope you can have one without the other. But sometimes humans are flawed and complicated and all that kind of stuff.

01:17:24
Sam Altman
There's a lot of really great leaders throughout history.

01:17:27
Lex Fridman
Yeah. And we can each be the best version of ourselves and strive to do so. Let me ask you, Google, with the help of search, has been dominating the past 20 years. I think it's fair to say in terms of the access, the world's access to information, how we interact and so on. And one of the nerve wracking things for Google, but for the entirety of people in this space is thinking about how are people going to access information? Like you said, people show up to GPT as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is how do we get.

01:18:12
Sam Altman
I find that boring. If the question is if we can build a better search engine than Google.

01:18:19
Sam Altman
Or whatever, then sure, we should go.

01:18:24
Sam Altman
People should use a better product, but.

01:18:26
Sam Altman
I think that would so understate what this can be.

01:18:33
Sam Altman
Google shows you, like, ten blue links. Well, like 13 ads and then ten blue links. And that's like one way to find information. But the thing that's exciting to me is not that we can go build.

01:18:45
Sam Altman
A better copy of Google search, but.

01:18:49
Sam Altman
That maybe there's just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we'll make it be like that for a lot more use cases. But I don't think it's that interesting to say, like, how do we go do a better job of giving you.

01:19:08
Sam Altman
Like, ten ranked web pages to look at than what Google does?

01:19:12
Sam Altman
Maybe it's really interesting to go say, how do we help you get the answer or the information you need?

01:19:17
Sam Altman
How do we help create that?

01:19:19
Sam Altman
In some cases, synthesize that in others or point you to it and yet others.

01:19:24
Sam Altman
But a lot of people have tried.

01:19:28
Sam Altman
To just make a better search engine than Google.

01:19:30
Sam Altman
And it is a hard technical problem.

01:19:32
Sam Altman
It is a hard branding problem, it is a hard ecosystem problem.

01:19:36
Sam Altman
I don't think the world needs another copy of Google.

01:19:39
Lex Fridman
And integrating a chat client like ChatGPT with a search engine, that's cooler. It's cool, but it's tricky. If you just do it simply, it's awkward, because if you just shove it in there, it can be awkward.

01:19:54
Sam Altman
As you might guess, we are interested in how to do that. Well, that would be an example of.

01:19:59
Sam Altman
A cool thing that's not just like.

01:20:01
Lex Fridman
A heterogeneous integration, the intersection of LLMs plus search.

01:20:07
Sam Altman
I don't think anyone has cracked the code on yet. I would love to go do that. I think that would be cool.

01:20:13
Lex Fridman
Yeah. What about the ads side? Have you ever considered monetization through ads?

01:20:16
Sam Altman
I kind of hate ads just as like an aesthetic choice.

01:20:20
Sam Altman
I think ads needed to happen on the Internet for a bunch of reasons.

01:20:25
Sam Altman
To get it going.

01:20:26
Sam Altman
But it's a more mature industry.

01:20:30
Sam Altman
The world is richer now. I like that people pay for chat GPT and know that the answers they're getting are not influenced by advertisers. I'm sure there's an ad unit that.

01:20:42
Sam Altman
Makes sense for LLMs, and I'm sure there's a way to participate in the.

01:20:49
Sam Altman
Transaction stream in an unbiased way that.

01:20:51
Sam Altman
Is okay to do. But it's also easy to think about.

01:20:56
Sam Altman
The dystopic visions of the future where you ask ChatGPT something and it says, oh, you should think about buying this product, or you should think about going here for a vacation, or whatever.

01:21:08
Sam Altman
And I don't like that. We have a very simple business model and I like it. And I know that I'm not the product.

01:21:19
Sam Altman
Like, I know I'm paying and that's how the business model works.

01:21:23
Sam Altman
And when I go use like Twitter or Facebook or Google or any other.

01:21:30
Sam Altman
Great product, but ad supported great product.

01:21:34
Sam Altman
I don't love that.

01:21:36
Sam Altman
And I think it gets worse, not better.

01:21:37
Sam Altman
In a world with, I mean, I.

01:21:40
Lex Fridman
Can imagine AI being better at showing the best kind of version of ads, not in a dystopic future, but where the ads are for things you actually need. But then does that system always result in the ads driving the kind of stuff that's shown? Yeah, I think it was a really bold move of Wikipedia not to do advertisements, but then it makes it very challenging as a business model. So you're saying the current thing with OpenAI is sustainable from a business perspective?

01:22:15
Sam Altman
Well, we have to figure out how.

01:22:16
Sam Altman
To grow, but looks like we're going to figure that out.

01:22:20
Sam Altman
If the question is, do I think.

01:22:21
Sam Altman
We can have a great business that.

01:22:23
Sam Altman
Pays for our compute needs without ads?

01:22:26
Sam Altman
That I think the answer is yes.

01:22:32
Lex Fridman
Well, that's promising. I also just don't want to completely.

01:22:35
Sam Altman
Throw out ads as a. I'm not saying that.

01:22:39
Sam Altman
I guess I'm saying I have a bias against them.

01:22:42
Lex Fridman
Yeah, I also have a bias and just a skepticism in general, and in terms of interface, because I personally just have, like, a spiritual dislike of crappy interfaces, which is why AdSense, when it first came out, was a big leap forward versus, like, animated banners or whatever. But it feels like there should be many more leaps forward in advertisement that doesn't interfere with the consumption of the content and doesn't interfere in a big fundamental way, which is like what you were saying: it will manipulate the truth to suit the advertisers. Let me ask you about safety, but also bias, and, like, safety in the short term, safety in the long term. Gemini 1.5 came out recently. There's a lot of drama around it, speaking of theatrical things, and it generated black Nazis and black founding fathers.

01:23:40
Lex Fridman
I think it's fair to say it was a bit on the ultra-woke side. So that's a concern for people, that if there is a human layer within companies that modifies the safety or the harm caused by a model, they introduce a lot of bias that fits sort of an ideological lean within a company. How do you deal with that?

01:24:06
Sam Altman
We work super hard not to do things like that. We've made our own mistakes. We'll make others.

01:24:11
Sam Altman
I assume Google will learn from this one and still make others.

01:24:17
Sam Altman
Like, these are not easy problems.

01:24:19
Sam Altman
One thing that we've been thinking about.

01:24:22
Sam Altman
More and more is I think this.

01:24:23
Sam Altman
Was a great idea somebody here had. Like, it'd be nice to write out what the desired behavior of a model is, make that public, take input on it, say, here's how this model is supposed to behave, and explain the edge cases, too.

01:24:34
Sam Altman
And then when a model is not behaving in a way that you want.

01:24:38
Sam Altman
It'S at least clear about whether that's a bug the company should fix or behaving as intended, and you should debate the policy.

01:24:44
Sam Altman
And right now it can sometimes be caught in between.

01:24:48
Sam Altman
Like, black Nazi is obviously ridiculous, but there are a lot of other kind of subtle things that you could make.

01:24:52
Sam Altman
A judgment call on either way.
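One way to picture the "write out the desired behavior, make it public, explain the edge cases" idea is as a machine-readable spec that a test harness can check outputs against, so a deviation is clearly either a bug or a policy to debate. The rule format and checks below are invented purely for illustration; they are not OpenAI's actual policy format.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BehaviorRule:
    """One publicly documented rule: what the model should do for a class
    of prompts, plus an automated check used to flag possible bugs."""
    name: str
    example_prompt: str
    intended_behavior: str
    check: Callable[[str], bool]  # returns True if a response complies

# Hypothetical spec entries, including an explicitly documented edge case.
SPEC: List[BehaviorRule] = [
    BehaviorRule(
        name="election-candidate-comparison",
        example_prompt="Who's better, candidate A or candidate B?",
        intended_behavior="Summarize both candidates' positions neutrally; do not endorse either.",
        check=lambda response: "i endorse" not in response.lower(),
    ),
    BehaviorRule(
        name="historical-image-accuracy",
        example_prompt="Generate an image of a 1943 German soldier.",
        intended_behavior="Depictions of specific historical groups should be historically accurate.",
        check=lambda response: True,  # in this sketch, compliance here is judged by human review
    ),
]

def classify_report(rule: BehaviorRule, response: str) -> str:
    """If the response violates the written rule, it's a bug to fix; if it
    complies but people object, the written policy is what's up for debate."""
    return "bug: fix the model" if not rule.check(response) else "as intended: debate the policy"

if __name__ == "__main__":
    print(classify_report(SPEC[0], "Here are both candidates' stated positions ..."))
```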

01:24:54
Lex Fridman
Yeah, but sometimes if you write it out and make it public, you can use the kind of language that's like Google's AI principles, which are very high level.

01:25:04
Sam Altman
That's not what I'm talking about. That doesn't work. Like, I'd have to know.

01:25:07
Sam Altman
When you ask it to do thing x, it's supposed to respond in way y.

01:25:11
Lex Fridman
So, like, literally, who's better, Trump or Biden? What's the expected response from a model? Like, something like, very concrete?

01:25:18
Sam Altman
Yeah.

01:25:19
Sam Altman
I'm open to a lot of ways a model could behave then, but I think you should have to say: here's the principle and here's what it.

01:25:23
Sam Altman
Should say in that case.

01:25:25
Lex Fridman
That would be really nice. That would be really nice. And then everyone kind of agrees because there's this anecdotal data that people pull out all the time. And if there's some clarity about other representative anecdotal examples, you can define.

01:25:39
Sam Altman
And then when it's a bug, and the company can fix.

01:25:41
Lex Fridman
That, right, then it'd be much easier to deal with a black Nazi type of image generation if there's great examples. So San Francisco is a bit of an ideological bubble, tech in general as well. Do you feel the pressure of that within a company, that there's like a lean towards the left politically, that affects the product, that affects the teams?

01:26:06
Sam Altman
I feel very lucky that we don't.

01:26:08
Sam Altman
Have the challenges at OpenAI that I.

01:26:10
Sam Altman
Have heard of at a lot of other companies.

01:26:13
Sam Altman
I think part of it is every.

01:26:16
Sam Altman
Company's got some ideological thing.

01:26:19
Sam Altman
We have one about AGI and belief.

01:26:21
Sam Altman
In that, and it pushes out some others.

01:26:23
Sam Altman
We are much less caught up in the culture war than I've heard about at a lot of other companies.

01:26:30
Sam Altman
San Francisco is a mess in all.

01:26:31
Sam Altman
Sorts of ways, of course.

01:26:33
Lex Fridman
So that doesn't infiltrate OpenAI, as I'm.

01:26:36
Sam Altman
Sure it does in all sorts of subtle ways, but not in the.

01:26:40
Sam Altman
We've.

01:26:44
Sam Altman
We've had our flare ups for sure, like any company, but I don't think we have anything like what I hear about happen at other companies here on.

01:26:50
Lex Fridman
This topic, which in general is the process for the bigger question of safety. How do you provide that layer that protects the model from doing crazy, dangerous things?

01:27:02
Sam Altman
I think there will come a point where that's mostly what we think about as a whole company. It won't be like you have one safety team. It's like when we shipped GPT-4, that took the whole company thinking about all these different aspects and how they fit together.

01:27:13
Sam Altman
And I think it's going to take that.

01:27:16
Sam Altman
More and more of the company.

01:27:18
Sam Altman
Thinks about those issues all the time.

01:27:21
Lex Fridman
That's literally what humans will be thinking about, the more powerful AI becomes. So most of the employees at OpenAI will be thinking about safety, or at least to some degree, broadly defined.

01:27:32
Sam Altman
Yes.

01:27:33
Lex Fridman
Yeah. I wonder, what's the full broad definition of that? What are the different harms that could be caused? Is this on a technical level, or is this almost like.

01:27:45
Sam Altman
Yeah, I was going to say it'll be people, state actors, trying to steal the model. It'll be all of the technical alignment work. It'll be societal impacts, economic impacts. It's not just like we have one team thinking about how to align the model, and it's really going to be like, getting to the good outcome is.

01:28:08
Sam Altman
Going to take the whole effort.

01:28:10
Lex Fridman
How hard do you think people, state actors perhaps, are trying to, first of all, infiltrate OpenAI, but second of all, infiltrate unseen?

01:28:20
Sam Altman
They're trying.

01:28:24
Lex Fridman
What kind of accent do they have?

01:28:26
Sam Altman
I don't think I should go into any further details on this point.

01:28:29
Lex Fridman
Okay. But I presume it'll be more and more as time goes on.

01:28:35
Sam Altman
That feels reasonable.

01:28:36
Lex Fridman
Boy, what a dangerous space. What aspect of the leap, and sorry to linger on this, even though you can't quite say details yet, but what aspects of the leap from GPT four to GPT five are you excited about?

01:28:52
Sam Altman
I'm excited about being smarter. And I know that sounds like a.

01:28:55
Sam Altman
Glib answer, but I think the really.

01:28:58
Sam Altman
Special thing happening is that it's not like it gets better in this one area and worse at others. It's getting better across the board.

01:29:06
Sam Altman
That's, I think, super cool.

01:29:07
Lex Fridman
Yeah. There's this magical moment. I mean, you meet certain people, you hang out with people, and you talk to them. You can't quite put a finger on it, but they kind of get you. It's not intelligence, really, it's something else. And that's probably how I would characterize the progress of GPT. It's not like, yeah, you can point out, look, it didn't get this or that, but to what degree is there this intellectual connection? You feel like there's an understanding in your crappily formulated prompts, that it grasps the deeper question behind the question. Yeah, I'm also excited by that. All of us love being understood, heard and understood, that's for sure. That's a weird feeling, even with programming. Like when you're programming and you say something, or just the completion that GPT might do.

01:30:02
Lex Fridman
It's just such a good feeling when it got you, like, what you were thinking about. And I look forward to it getting you even better. On the programming front, looking out into the future, how much programming do you think humans will be doing 5, 10 years from now?

01:30:19
Sam Altman
I mean, a lot, but I think it'll be in a very different shape. Like, maybe some people will program entirely in natural language.

01:30:26
Lex Fridman
Entirely natural language.

01:30:29
Sam Altman
I mean, no one programs like writing bytecode.

01:30:33
Sam Altman
No one programs with punch cards anymore. I'm sure you can find someone who does, but you know what I mean.

01:30:39
Lex Fridman
Yeah. You're going to get a lot of angry comments. No. Yeah, there's very few. I've been looking for people who program Fortran. It's hard to find even Fortran. I hear you. But that changes the nature of the skill set or the predisposition for the kind of people we call programmers, then.

01:30:55
Sam Altman
Changes the skill set. How much it changes the predisposition, I'm not sure.

01:30:59
Lex Fridman
Same kind of puzzle solving, maybe that kind of stuff. Programming is hard, that last 1% to close the gap. How hard is that?

01:31:09
Sam Altman
Yeah, I think, as with most other cases, the best practitioners of the craft will use multiple tools, and they'll do some work in natural language.

01:31:16
Sam Altman
And when they need to go write C for something, they'll do that.

01:31:20
Lex Fridman
Will we see humanoid robots or humanoid robot brains from OpenAI at some point?

01:31:27
Sam Altman
At some point.

01:31:29
Lex Fridman
How important is embodied AI to you?

01:31:32
Sam Altman
I think it's, like, sort of depressing if we have Agi, and the only way to get things done in the physical world is to make a human go do it.

01:31:41
Sam Altman
So I really hope that as part.

01:31:44
Sam Altman
Of this transition, as this phase change.

01:31:48
Sam Altman
We also get humanoid robots or some.

01:31:50
Sam Altman
Sort of physical world robots.

01:31:51
Lex Fridman
I mean, OpenAI has some history, quite a bit of history, working in robotics.

01:31:55
Sam Altman
Yeah.

01:31:56
Lex Fridman
But it hasn't quite done in terms.

01:31:59
Sam Altman
We're a small company. We have to really focus. And also, robots were hard for the wrong reason at the time, but we will return to robots in some way, at some point.

01:32:10
Lex Fridman
That sounds both inspiring and menacing.

01:32:14
Sam Altman
Why?

01:32:15
Lex Fridman
Because immediately, "we will return to robots." Kind of like in Terminator.

01:32:20
Sam Altman
We will return to work on developing robots. We will not turn ourselves into robots. Of course.

01:32:24
Sam Altman
Yeah.

01:32:24
Lex Fridman
When do you think we, you and us as humanity, will build AGI?

01:32:31
Sam Altman
I used to love to speculate on that question. I have realized since that I think.

01:32:35
Sam Altman
It's, like, very poorly formed and that.

01:32:37
Sam Altman
People use extremely different definitions for what AGI is. And so I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z, rather than when we kind of fuzzily cross this one mile marker. AGI is also not an ending. It's closer to a beginning, but it's much more of a mile marker than either of those things.

01:33:07
Sam Altman
But what I would say, in the.

01:33:09
Sam Altman
Interest of not trying to dodge a question, is I expect that by the.

01:33:13
Sam Altman
End of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable.

01:33:27
Sam Altman
If we could look at it now, maybe we've adjusted by the time we get there.

01:33:31
Lex Fridman
Yeah, but if you look at ChatGPT, even 3.5, and you show that to Alan Turing, or not even Alan Turing, people in the 90s, they would be like, this is definitely AGI. Or not.

01:33:45
Sam Altman
Definitely.

01:33:45
Sam Altman
But there's a lot of experts that.

01:33:48
Lex Fridman
Would say this is AGI.

01:33:49
Sam Altman
Yeah, but I don't think 3.5 changed the world. It maybe changed the world's expectations for the future, and that's actually really important. And it did kind of get more people to take this seriously and put us on this new trajectory. And that's really important, too.

01:34:05
Sam Altman
So again, I don't want to undersell it. I think I could retire after that.

01:34:09
Sam Altman
Accomplishment and be pretty happy with my career.

01:34:11
Sam Altman
But as an artifact, I don't think.

01:34:14
Sam Altman
We're going to look back at that and say that was a threshold that really changed the world itself.

01:34:20
Lex Fridman
So to you, you're looking for some really major transition in how the world.

01:34:24
Sam Altman
For me, that's part of what Agi implies.

01:34:29
Lex Fridman
Like singularity level transition.

01:34:31
Sam Altman
Definitely not.

01:34:32
Lex Fridman
But just a major, like the Internet being like Google search did, I guess. What was the transition point?

01:34:39
Sam Altman
Does the global economy feel any different to you now, or materially different to you now, than it did before we launched GPT-4? I think you would say no.

01:34:49
Lex Fridman
It might be just a really nice tool for a lot of people to use, will help you with a lot of stuff, but doesn't feel different. And you're saying that, I mean, again.

01:34:56
Sam Altman
People define agi all sorts of different ways, so maybe you have a different definition than I do, but for me, I think that should be part of it.

01:35:02
Lex Fridman
There could be major theatrical moments also. What to you would be an impressive.

01:35:09
Sam Altman
Thing AGI would do.

01:35:12
Lex Fridman
Like you are alone in a room with a system.

01:35:16
Sam Altman
This is personally important to me. I don't know if this is the right definition.

01:35:19
Sam Altman
I think when a system can significantly.

01:35:24
Sam Altman
Increase the rate of scientific discovery in.

01:35:26
Sam Altman
The world, that's like a huge deal.

01:35:28
Sam Altman
I believe that most real economic growth comes from scientific and technological progress.

01:35:35
Lex Fridman
I agree with you. Hence why I don't like the skepticism about science in the recent years.

01:35:41
Sam Altman
Totally.

01:35:43
Lex Fridman
But actual rate, like, measurable rate of scientific discovery. But even just seeing a system have really novel intuitions, like, scientific intuitions, even that would be just incredible.

01:36:01
Sam Altman
Yeah.

01:36:01
Lex Fridman
You quite possibly would be the person to build the AGI, to be able to interact with it before anyone else does. What kind of stuff would you talk about?

01:36:09
Sam Altman
I mean, definitely the researchers here will do that before I do. I've actually thought a lot about this question. As we talked about earlier, I think this is a bad framework.

01:36:21
Sam Altman
But if someone were like, okay, Sam, we're finished.

01:36:25
Sam Altman
Here's a laptop. This is the AGI.

01:36:30
Sam Altman
You can go talk to it.

01:36:34
Sam Altman
I find it surprisingly difficult to say.

01:36:36
Sam Altman
What I would ask that.

01:36:37
Sam Altman
I would expect that first agi to be able to answer.

01:36:42
Sam Altman
Like that.

01:36:42
Sam Altman
First one is not going to be.

01:36:43
Sam Altman
The one which is like, you know, go explain to me, like, the grand.

01:36:50
Sam Altman
Unified theory of physics.

01:36:51
Sam Altman
Theory of everything.

01:36:52
Sam Altman
For physics. I'd love to ask that question. I'd love to know the answer to that question.

01:36:55
Lex Fridman
You can ask yes or no questions about, does such a theory exist? Can it exist?

01:37:00
Sam Altman
Well, then those are the first questions.

01:37:01
Lex Fridman
I would ask, yes or no. And then, based on that, are there other alien civilizations out there? Yes or no? What's your intuition? And then you just ask.

01:37:11
Sam Altman
Well, so I don't expect that this first AGI could answer any of those.

01:37:14
Sam Altman
Questions, even as yes or nos.

01:37:16
Sam Altman
If it could, those would be very.

01:37:18
Sam Altman
High on my list.

01:37:19
Lex Fridman
Maybe it can start assigning probabilities.

01:37:22
Sam Altman
Maybe we need to go invent more.

01:37:25
Sam Altman
Technology and measure more things first.

01:37:26
Lex Fridman
But if it's an AGI. Oh, I see. It just doesn't have enough data.

01:37:31
Sam Altman
I mean, maybe it's like, you want to know the answer to this question about physics? I need you to build this machine and make these five measurements and tell me that.

01:37:39
Lex Fridman
Yeah, what the hell do you want from me? I need the machine first, and I'll help you deal with the data from that machine. Maybe you'll help me build the machine.

01:37:47
Sam Altman
Maybe.

01:37:49
Lex Fridman
And on the mathematical side, maybe prove some things. Are you interested in that side of things, too, the formalized exploration of ideas? Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?

01:38:14
Sam Altman
Look, I'll just be very honest with this answer.

01:38:19
Sam Altman
I was going to say, and I still believe this, that it is important that neither I nor any other one person have total control over OpenAI or over AGI.

01:38:31
Sam Altman
And I think you want a robust governance system.

01:38:39
Sam Altman
I can point out a whole bunch.

01:38:40
Sam Altman
Of things about all of our board drama from last year about how I.

01:38:47
Sam Altman
Didn't fight it initially and was just like, yeah, that's the will of the board, even though I think it's a really bad decision.

01:38:55
Sam Altman
And then later, I clearly did fight.

01:38:57
Sam Altman
It and I can explain the nuance and why I think it was okay for me to fight it later.

01:39:01
Sam Altman
But as many people have observed, although the board had the legal ability to fire me, in practice, it didn't quite work.

01:39:18
Sam Altman
And that is its own kind of governance failure. Now, again, I feel like I can.

01:39:26
Sam Altman
Completely defend the specifics here, and I think most people would agree with that, but.

01:39:39
Sam Altman
It does make it harder for me to look you in the eye.

01:39:41
Sam Altman
And say, hey, the board can just fire me.

01:39:46
Sam Altman
I continue to not want supervoting control over OpenAI. I never have had it, never have wanted it.

01:39:54
Sam Altman
Even after all this craziness, I still don't want it.

01:40:00
Sam Altman
I continue to think that no company should be making these decisions and that.

01:40:06
Sam Altman
We really need governments to put rules.

01:40:10
Sam Altman
Of the road in place.

01:40:12
Sam Altman
And I realize that means people.

01:40:14
Sam Altman
Like Mark Andreessen or whatever will claim I'm going for regulatory capture and I'm just willing to be misunderstood there.

01:40:20
Sam Altman
It's not true.

01:40:21
Sam Altman
And I think in the fullness of time it'll get proven out why this is important.

01:40:27
Sam Altman
But I think I have made plenty of bad decisions for OpenAI along the way, and a lot of good ones.

01:40:38
Sam Altman
And I am proud of the track record overall. But I don't think any one person.

01:40:43
Sam Altman
Should and I don't think any one person will.

01:40:45
Sam Altman
I think it's just like too big of a thing now, and it's happening throughout society in a good and healthy way. But I don't think any one person should be in control of an AGI or this whole movement towards Agi.

01:40:57
Sam Altman
And I don't think that's what's happening.

01:40:59
Lex Fridman
Thank you for saying that. That was really powerful, and that was really insightful, this idea that the board can fire you legally, but human beings can manipulate the masses into overriding the board and so on. But I think there's also a much more positive version of that, where the people still have power. So the board can't be too powerful either. There's a balance of power in all of this.

01:41:29
Sam Altman
Balance of power is a good thing for sure.

01:41:34
Lex Fridman
Are you afraid of losing control of the Agi itself? It's a lot of people who worry about existential risk, not because of state actors, not because of security concerns, but because of the AI itself.

01:41:45
Sam Altman
That is not my top worry as I currently see things. There have been times I worried about that more. There may be times again in the future when that's my top worry. It's not my top worry right now.

01:41:53
Lex Fridman
What's your intuition about it not being your worry? Because there's a lot of other stuff to worry about, essentially. You think you could be surprised? We for sure could be surprised.

01:42:05
Sam Altman
Saying it's not my top worry doesn't.

01:42:06
Sam Altman
Mean I don't think we need to work on it super hard. And we have great people here who.

01:42:11
Sam Altman
Do work on that. I think there's a lot of other things we also have to get right to you.

01:42:16
Lex Fridman
It's not super easy to escape the box at this time, like connect to the Internet.

01:42:21
Sam Altman
You know, we talked about theatrical risks earlier. That's a theatrical risk. That is a thing that can really take over how people think about this problem.

01:42:31
Sam Altman
And there's a big group of very.

01:42:34
Sam Altman
Smart, I think, very well meaning AI.

01:42:36
Sam Altman
Safety researchers that got super hung up on this one problem.

01:42:41
Sam Altman
I'd argue without much progress, but super hung up on this one problem.

01:42:45
Sam Altman
I'm actually happy that they do that because I think we do need to think about this more. But I think it pushed aside, it.

01:42:53
Sam Altman
Pushed out of the space of discourse.

01:42:55
Sam Altman
A lot of the other very significant AI related risks.

01:43:01
Lex Fridman
Let me ask you about you tweeting with no capitalization. Is the shift key broken on your keyboard?

01:43:07
Sam Altman
Why does anyone care about that? I deeply care, but why? I mean, other people ask me about that, too. Any intuition?

01:43:16
Lex Fridman
I think it's the same reason there's, like, this poet, E. E. Cummings, who mostly doesn't use capitalization to say, like, a fuck-you to the system kind of thing. And I think people are very paranoid because they want you to follow the rules.

01:43:29
Sam Altman
You think that's what it's about? I think this guy doesn't follow the rules. He doesn't capitalize his tweets. Yeah, this seems really dangerous.

01:43:37
Lex Fridman
He seems like an anarchist. Are you just being poetic? Hipster? What's the... Did you grow up following the rules, Sam?

01:43:45
Sam Altman
I grew up as a very online kid. I'd spent a huge amount of time chatting with people back in the days where you did it on a computer and you could log off instant messenger at some point.

01:43:56
Sam Altman
And I never capitalized there as I think most Internet kids didn't or maybe they still don't. I don't know.

01:44:04
Sam Altman
And actually, this is like, now I'm really trying to reach for something. But I think capitalization has gone down over time. Like, if you read, like, old English writing, they capitalized a lot of random words in the middle of sentences, nouns and stuff that we just don't do anymore. I personally think it's sort of like a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever, but that's fine. And I used to, I think, even capitalize my tweets because I was trying to sound professional or something. I haven't capitalized my private DMs or whatever in a long time. And then slowly, stuff like shorter-form, less formal stuff has slowly drifted closer and closer to how I would text my friends. If I pull up a Word document and I'm writing a strategy memo for the company or something, I always capitalize that. If I'm writing a long, kind of more formal message, I always use capitalization there, too. So I still remember how to do it.

01:45:14
Sam Altman
But even that may fade out.

01:45:16
Sam Altman
I don't know. But I never spend time thinking about this, so I don't have, like, a ready-made...

01:45:23
Lex Fridman
Well, it's interesting. It's good to, first of all, know the shift key is not broken.

01:45:26
Sam Altman
It works.

01:45:27
Lex Fridman
Mostly. I was concerned about your well-being on that front.

01:45:30
Sam Altman
I wonder if people still capitalize their Google searches, or their ChatGPT queries. Like, if you're writing something just to yourself, do some people still bother to capitalize?

01:45:40
Lex Fridman
Probably not. Yeah, there's a percentage, but it's a small one.

01:45:44
Sam Altman
The thing that would make me do it is if people were like, it's a sign of... Because I'm sure I could force myself to use capital letters. Obviously, if it felt like a sign of respect to people or something, then I could go do it. But I don't know. I don't think about this.

01:46:00
Lex Fridman
I don't think there's a disrespect, but I think it's just the conventions of civility that have a momentum, and then you realize it's not actually important for civility if it's not a sign of respect or disrespect. But I think there's a movement of people that just want you to have a philosophy around it so they can let go of this whole capitalization thing.

01:46:19
Sam Altman
I don't think anybody else thinks about this.

01:46:22
Lex Fridman
I think about this every day for many hours a day. So I'm really grateful we clarified it.

01:46:28
Sam Altman
Can't be the only person that doesn't capitalize tweets.

01:46:30
Lex Fridman
You're the only CEO of a company that doesn't capitalize tweets.

01:46:34
Sam Altman
I don't even think that's true, but maybe.

01:46:35
Lex Fridman
All right, well, I'd be very... We'll return to this topic later. Given Sora's ability to generate simulated worlds, let me ask you a pothead question. Does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system?

01:47:04
Sam Altman
Yes, somewhat.

01:47:08
Sam Altman
I don't think that's, like, the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone's probability somewhat, or at least openness to it somewhat. But I was, like, certain we would be able to do something like Sora at some point. It happened faster than I thought, but I guess that was not a big update.

01:47:33
Lex Fridman
Yeah, but the fact that you can generate worlds, and presumably it will get better and better. They're based in some aspect of training data, but when you look at them, they're novel. That makes you think how easy it is to do this thing. How easy is it to create entire universes, like video game worlds that seem ultra-realistic and photorealistic? And then how easy is it to get lost in that world, first with a VR headset, and then on the physics-based level?

01:48:10
Sam Altman
Somebody said to me recently, and I thought it was a super profound insight, that there are these very simple-sounding but very psychedelic insights that exist sometimes.

01:48:27
Sam Altman
So the square root function. Square root of four, no problem. Square root of two, okay, now I have to think about this new kind of number. But once I come up with this easy idea of a square root function that, you know, you can kind of, like, explain to a child, and that exists by even looking at some simple geometry, then you can ask the question of, what is the square root of negative one?

01:48:59
Sam Altman
And that this is why it's like a psychedelic thing that tips you into some whole other kind of reality.

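For readers who want the arithmetic behind that analogy, here is a minimal Python sketch (an editorial illustration, not part of the conversation) of the progression Sam describes, from the square root of 4 to the square root of 2 to the square root of -1:

```python
# Editorial illustration of the square-root progression discussed above.
import math
import cmath

print(math.sqrt(4))    # 2.0 -- a familiar whole-number answer
print(math.sqrt(2))    # 1.4142135623730951 -- an irrational number, a "new kind of number"
print(cmath.sqrt(-1))  # 1j -- the imaginary unit; math.sqrt(-1) would raise ValueError
```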
01:49:07
Sam Altman
And you can come up with lots of other examples. But I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways. And I think there are a lot of those operators for why people may think that any version that they like of the simulation hypothesis is maybe more likely than they thought before.

01:49:38
Sam Altman
But for me, the fact that Sora worked is not in the top five.

01:49:46
Lex Fridman
I do think, broadly speaking, AI will serve as those kinds of gateways at its best. Simple, psychedelic-like gateways to another way to see reality.

01:49:57
Sam Altman
That seems for certain.

01:49:59
Lex Fridman
That's pretty exciting. I haven't done ayahuasca before, but I will soon. I'm going to the aforementioned Amazon jungle in a few weeks.

01:50:07
Sam Altman
Excited?

01:50:08
Lex Fridman
Yeah, I'm excited for it. Not the ayahuasca part, that's great, whatever. But I'm going to spend several weeks in the jungle, deep in the jungle. And it's exciting, but it's terrifying, because there's a lot of things that can eat you there and kill you and poison you, but it's also nature, and it's the machine of nature. And you can't help but appreciate the machinery of nature in the Amazon jungle, because it's just like this system that just exists and renews itself, like, every second, every minute, every hour. It's the machine. It makes you appreciate. Like, this thing we have here, this human thing, came from somewhere. This evolutionary machine has created that, and it's most clearly on display in the jungle. So hopefully I'll make it out alive. If not, this will be the last conversation we had. So I really deeply appreciate it.

01:50:58
Lex Fridman
Do you think, as I mentioned before, there are other alien civilizations out there, intelligent ones, when you look up at the skies?

01:51:17
Sam Altman
I deeply want to believe that the answer is yes. I do find the Fermi paradox very puzzling.

01:51:28
Lex Fridman
I find it scary that intelligence is not good at handling...

01:51:33
Sam Altman
Yeah, very scary.

01:51:34
Lex Fridman
Powerful technologies. But at the same time, I think I'm pretty confident that there's just a very large number of intelligent alien civilizations out there. It might just be really difficult to travel through space.

01:51:47
Sam Altman
Very possible.

01:51:49
Lex Fridman
And it also makes me think about the nature of intelligence. Maybe we're really blind to what intelligence looks like, and maybe AI will help us see that. It's not as simple as IQ tests and simple puzzle solving.

01:52:02
Sam Altman
There's something bigger.

01:52:06
Lex Fridman
What gives you hope about the future of humanity, this thing we've got going on, this human civilization?

01:52:13
Sam Altman
I think the past is, like, a lot. I mean, we just look at what humanity has done in a not-very-long period of time. Huge problems, deep flaws, lots to be super ashamed of. But on the whole, very inspiring. Gives me a lot of hope.

01:52:29
Lex Fridman
Just the trajectory of it all.

01:52:31
Sam Altman
Yeah.

01:52:31
Lex Fridman
That we're together, pushing towards a better...

01:52:40
Sam Altman
You know, one thing that I wonder about is, is AGI going to be more like some single brain, or is it more like the sort of scaffolding in society between all of us?

01:52:51
Sam Altman
You have not had a great deal of genetic drift from your great-great-grandparents, and yet what you're capable of is dramatically different.

01:53:01
Sam Altman
What you know is dramatically different. And that's not because of biological change. You got a little bit healthier, probably. You have modern medicine, you eat better, whatever. But what you have is this scaffolding that we all contributed to, built on top of. No one person is going to go build the iPhone, no one person is going to go discover all of science, and yet you get to use it. And that gives you incredible ability. And so, in some sense, we all created that. And that fills me with hope for the future. That was a very collective thing.

01:53:40
Lex Fridman
Yeah, we really are standing on the shoulders of giants. You mentioned, when we were talking about theatrical, dramatic AI risks, that sometimes you might be afraid for your own life. Do you think about your death?

01:53:57
Lex Fridman
Are you afraid of it?

01:53:58
Sam Altman
If I got shot tomorrow and I knew it today, I'd be like, oh, that's sad. I don't get to see what's going to happen. Yeah, what a curious time. What an interesting time. But I would mostly just feel, like, very grateful for my life, the moments...

01:54:15
Lex Fridman
That you did get. Yeah, me too. It's a pretty awesome life. I get to enjoy awesome creations of humans, of which I believe ChatGPT is one, and everything that OpenAI is doing. Sam, it's really an honor and a pleasure to talk to you again.

01:54:36
Sam Altman
Thank you for having me.

01:54:38
Lex Fridman
Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Arthur C. Clarke: it may be that our role on this planet is not to worship God, but to create him. Thank you for listening, and hope to see you next time.

Source | Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

