Episode Transcript
0:00
The following is a conversation with
0:02
Elon Musk, his fourth time
0:04
on The Lex Fridman Podcast.
0:08
And now, if you'll allow me, a quick
0:10
few second mention of each sponsor. Check them out
0:12
in the description. It's the best way to
0:14
support this podcast. We got Element
0:17
for electrolyte deliciousness, Eight Sleep
0:20
for naps, BetterHelp for your mind,
0:23
SimpliSafe for your safety, Shopify
0:26
for your online shopping, and
0:28
NetSuite for business management
0:31
software.
0:32
Choose wisely, my friends. Also,
0:35
if you want to work with our amazing team, we're always hiring,
0:37
go to lexfridman.com slash hiring.
0:40
And now, onto the full ad reads. As
0:42
always, no ads in the middle. I try to make
0:44
these interesting, but if you must skip them, please
0:47
still check out the sponsors. I enjoy their stuff.
0:50
Maybe you will too.
0:52
This episode is brought to you by the
0:54
very thing I'm drinking as I speak to
0:56
you right now, Element. Electrolyte
0:59
drink mix.
1:01
I'll usually fill a Powerade
1:03
or a Gatorade bottle, I
1:06
think it's 28 ounces, of
1:09
water and mix it with one Element
1:11
packet. My favorite flavor is watermelon
1:13
salt. That's the one I'm drinking now. That's the
1:15
one I've been drinking for a long time. I apologize
1:18
because I'm recording these words very late
1:20
at night. It's been a long day.
1:23
It's been a long night before that. A
1:27
long night, this one. So it's just
1:29
me in front of a microphone
1:31
right now, looking at a 28 ounce
1:34
bottle of water that
1:37
tastes like watermelon. Just
1:39
me and the water, happily
1:41
reunited in the desert
1:44
of human experience. I'm
1:46
also surrounded by three computer screens
1:49
that all have beautiful wallpaper,
1:53
looking at me,
1:54
taunting me with beautiful
1:57
picturesque scenes of nature.
2:00
And I just, before this,
2:02
played about 10 minutes of Diablo, where
2:05
I got to escape to a whole other
2:07
world that is also picturesque, but
2:10
not with nature, but filled
2:13
with the intense intimidating sounds
2:15
of demons attacking
2:17
you from all sides. And
2:20
all of that is somehow relaxing. Anyway,
2:23
Element is not just delicious, it combines
2:26
this perfect amount of magnesium,
2:30
potassium, and sodium,
2:33
watermelon salt.
2:35
Anyway, get a sample pack for free
2:37
with any purchase. Try it at www.drinkelement.com.
2:41
This episode
2:44
is also brought to you by the thing that's
2:47
calling my name at this very moment, the 8
2:50
Sleep Pod 3 mattress. It
2:54
has two sides of the bed, so you can cool
2:57
or heat each side of the bed separately. I've
3:00
actually been recently, briefly,
3:03
at a get-together of a lot of really interesting
3:05
people, and somebody brought up the topic of 8 Sleep,
3:08
and I just heard that a
3:10
bunch of other folks said, Oh, I love 8
3:13
Sleep, and they started talking about the different temperatures
3:15
that they use, and I was shocked and
3:17
horrified to learn that some people actually heat
3:19
their bed up. This was
3:22
a fascinating discovery for me because
3:24
I've never even tried the heating feature. I always
3:27
cool the bed. Sometimes
3:30
it's very cold. With a warm blanket, it's heaven.
3:32
But I can't recommend it enough. Sleep
3:35
really is a magical place. It's a bit surreal
3:37
saying these words while
3:39
quite sleep-deprived, a bit
3:42
melancholic, a bit hopeful,
3:46
thinking about the 8 Sleep bed just a few steps away. Somehow
3:50
that thought fills me with gratitude
3:52
for just having these things in
3:54
my life. Anyway,
3:58
check them out and get some sleep. Special savings when
4:01
you go to 8sleep.com slash
4:03
Lex. This
4:05
episode is also brought to you by BetterHelp, spelled
4:08
H-E-L-P, help. Anytime
4:12
I spell out a word, I always
4:14
imagine myself spelling out
4:16
that word in giant letters and sand,
4:19
stranded alone on an island,
4:22
looking at an airplane floating
4:25
above, hoping that they see the text I write.
4:28
BetterHelp.
4:30
I just actually got a text from a friend.
4:35
I'm looking at it now. Again,
4:37
filled with gratitude for having so many amazing
4:39
people. The text says, hey brother, just
4:41
sending you some love.
4:43
Miss you. Sent late at night.
4:47
Oh boy. Yeah,
4:50
yeah, yeah. So BetterHelp.
4:54
Speaking of friends, it's good
4:56
to talk. It's good to talk to people. It's
4:59
good to talk to people that truly listen. That could be friends,
5:02
loved ones, or that could be a professional
5:04
therapist. But you don't have to choose one.
5:06
You can have all of those. And
5:09
easy, accessible
5:10
therapy
5:12
is what BetterHelp is all about. Check
5:14
them out at BetterHelp.com slash Lex and
5:16
save on your first month. That's BetterHelp.com
5:19
slash Lex.
5:21
This show is also brought to you by SimpliSafe.
5:23
And as we go on with
5:26
this program, I'm starting to have more
5:29
and more fun. Because
5:31
fun is all you need. SimpliSafe
5:33
is a home security company. After,
5:36
of course, you establish security. Once you're safe,
5:39
then the fun can begin. For
5:41
me, SimpliSafe just establishes
5:43
that first base layer of physical
5:45
security. It's super easy
5:48
to set up. They have a cool new feature
5:51
called Fast Protect Monitoring
5:54
that allows SimpliSafe monitoring
5:56
agents to see, speak to, and deter intruders
5:59
through the Smart Alarm indoor camera.
6:03
I'm tempted to test the system by having
6:06
one of the quadrupeds, the legged robots,
6:09
roll up to it and see what the agents
6:11
say. Just
6:14
attach a ChatGPT, GPT-4 agent
6:18
or a Grok agent on top of it and
6:21
have the two of them have a conversation and see what they
6:24
figure out. I'm not sure it's
6:26
going to end well. But science proceeds
6:30
one experiment at a time, friends. Anyway,
6:33
I love how easy SimpliSafe was to set
6:36
up. Their sensors are awesome. Their monitoring
6:38
is awesome. It's affordable, easy
6:40
to use. I love products that are just, you know, easy
6:42
to set up, easy to maintain.
6:45
Everything works.
6:46
Easy.
6:48
Anyway, go to simplisafe.com
6:50
to get 50% off any SimpliSafe
6:52
system when you sign up for Fast Protect
6:55
monitoring. That's simplisafe.com.
6:57
This
7:00
show is also brought to you by Shopify, a platform
7:03
designed for anyone to
7:05
sell anywhere. And when I say anyone,
7:07
I mean somebody like me.
7:10
It took me minutes, maybe
7:13
even seconds to set up a store and sell
7:15
shirts, which you can now buy at, I
7:18
think the link is lexfridman.com.
7:21
That forwards you to the Shopify store with three
7:25
different shirts. In
7:27
this case, it's not really about the store. It's about
7:29
just celebrating
7:32
stuff we love. I'm a huge fan of
7:34
a lot of bands and podcasts
7:37
and I just love
7:39
buying up their merch, wearing it and
7:41
seeing who notices. I love
7:45
wearing a Huberman Lab shirt on
7:48
the trail when I run and just
7:50
see who recognizes the full absurdity
7:55
of it, what they say. It's just
7:57
wonderful. The joy that we can share by...
8:00
this little kind of sign that
8:02
you wear on yourself that, oh, I like this
8:04
particular thing. Let's
8:06
talk about it. Yeah, I love it. I
8:08
like wearing like Orwell shirts or
8:11
Dostoevsky shirts or Bukowski shirts
8:13
or sci-fi stuff. You
8:16
know, 2001: A Space Odyssey. Oh,
8:18
you can get all of those things or
8:20
sell those things on Shopify. Sign
8:23
up for a one dollar per month trial period
8:25
at Shopify.com slash Lex. That's
8:28
all lowercase. Go to Shopify.com
8:31
slash Lex to take your business to
8:33
the next level today. This
8:36
show is also brought to you by NetSuite,
8:39
an all in one cloud business
8:42
management system. They
8:44
take care of all the messy,
8:47
complicated things involved in running
8:49
a business. A company, like
8:53
Elon says, is
8:56
a kind of super intelligent being comprised
8:59
of individual human beings
9:01
working together in interesting ways that
9:04
when effective can be greater than the sum
9:06
of their parts. You know, but there's a lot
9:09
of things involved there. It's not just great
9:12
design and engineering. It's not just great
9:14
innovation. It's
9:17
all of the things, the
9:19
financials, the human resources
9:21
of hiring and firing. It's maintaining
9:23
inventory. It's the e-commerce.
9:26
It's all the messy business details.
9:29
And so you should use the best tools for
9:32
the job of handling that
9:34
mess. That beautiful, beautiful mess involved
9:37
in running a company. And I, too, by
9:39
the way, would love
9:42
to be a part of that mess. A
9:45
small cog in the machine. In
9:47
the super intelligent machine that is a company someday
9:50
soon. Anyway, you can start
9:53
now with no payment or interest for six months.
9:55
Go to NetSuite.com to
9:58
access their one of a kind financing program. that's
10:00
netsuite.com slash lex.
10:05
And now, dear friends, here's
10:07
Elon Musk.
10:30
I thought
10:34
you were
10:38
going to finish it. It's
10:41
one
10:44
of the greatest themes in all of film history.
10:48
That was great. So
10:52
I was just thinking about the Roman Empire, as
10:54
one does. There's
10:57
that whole meme where a guy
10:59
says, I was thinking about the Roman Empire, at
11:02
least once a day. And half the population is confused
11:04
whether it's true or not. But
11:07
more seriously, thinking about the wars going
11:09
on in the world today. And as you
11:11
know, war and military
11:14
conquest has been a big part of
11:16
Roman society and culture. And
11:18
I think it's been a big
11:20
part of most empires and dynasties throughout
11:23
human history. So they usually
11:26
came as a result of conquest.
11:28
I mean, there's something like the Austro-Hungarian Empire where there
11:31
was just a lot of sort of clever marriages.
11:35
But fundamentally, there's an engine of conquest.
11:37
They celebrate excellence in
11:40
warfare. Many of the leaders
11:42
were excellent generals, that
11:45
kind of thing. So big picture question,
11:48
Grok approved. I asked Grok if this was a good question to
11:50
ask. You tested, Grok approved? Yeah.
11:52
At least on fun mode.
11:59
To what degree do you think war is part of human
12:01
nature versus
12:03
a consequence of
12:06
how human societies are structured. I
12:09
ask this as you
12:11
have somehow controversially been a proponent
12:14
of peace.
12:15
I'm generally a proponent of peace. Ignorance
12:18
is perhaps in my view the real enemy
12:21
to be countered. That's the real hard
12:23
part, not fighting other humans.
12:27
But all creatures fight. The
12:29
jungle is like, you look at the, people
12:32
think of nature as perhaps some
12:34
sort of peaceful thing, but in fact it is not. There's
12:36
some quite funny Werner
12:39
Herzog thing where he's like
12:41
in the jungle, like saying that it's like
12:43
basically just murder
12:46
and death in every direction. I mean,
12:48
the plants and animals in the jungle are constantly trying to
12:50
kill each other every single day,
12:53
every minute. So it's not like
12:56
we're unusual in that respect.
12:58
There's a relevant question here, whether with greater
13:01
intelligence comes
13:03
greater control over these base instincts
13:05
for violence.
13:07
Yes.
13:08
We have much more of an ability
13:10
to control our limbic
13:14
instinct for violence than say a chimpanzee. And
13:16
in fact, if one looks to
13:18
say a chimpanzee society, it is not friendly.
13:21
I mean, the bonobos are an exception, but
13:25
chimpanzee society is full
13:29
of violence and it's
13:31
quite horrific, frankly.
13:33
That's the limbic system in action. Like
13:36
you don't want to be on the wrong side of a chimpanzee. It'll eat your face
13:38
off and tear your nuts off.
13:40
Yeah, basically there's no limits or ethics
13:42
or... there's almost no concept of just war. There's
13:45
no just war in the chimpanzee societies.
13:47
Is war and dominance by any
13:50
means necessary? Yeah, a chimpanzee society
13:52
is like a primitive version of human society.
13:55
They're not like peace-loving basically at
13:58
all.
13:59
But there's extreme violence. And
14:03
then once in a while, somebody who's watched too many
14:05
Disney movies decides to raise a chimpanzee
14:07
as a pet. And
14:10
then it eats their face, or rips their nuts
14:12
off, or chews their fingers off, and that kind of thing. It's
14:14
happened several times. Ripping your nuts off
14:17
is an interesting strategy for
14:19
interaction. It's happened to people,
14:21
it's unfortunate. That's, I guess, one
14:23
way to ensure that the other
14:25
chimp doesn't contribute
14:28
to the gene pool. Well, from a martial arts
14:30
perspective, it's a fascinating strategy. The
14:33
nut ripper. I
14:37
wonder which of the martial arts teaches that. I
14:39
think it's safe to say if somebody's got
14:42
your nuts in their hands and
14:44
has the option of ripping them off,
14:45
you will be amenable to whatever
14:48
they want. Yeah. So,
14:51
like I said, somehow, controversially, you've been
14:53
a proponent of peace on Twitter,
14:56
on X. So let me ask you about
14:58
the wars going on today, and to see
15:00
what the path to peace could be. How
15:02
do you hope the current war in Israel and Gaza comes
15:05
to an end? What paths do
15:07
you see that can minimize human
15:09
suffering in the long term, in that part of
15:11
the world?
15:12
Well, I think that part of
15:14
the world is definitely... like, if you look
15:17
up 'there is no easy answer' in the
15:19
dictionary, it'll be, like, a picture of
15:21
the Middle East, of
15:23
Israel especially. So, there is no easy
15:26
answer.
15:29
Strictly
15:32
my opinion is that the goal of Hamas
15:34
was to provoke an
15:40
overreaction from Israel. They
15:43
obviously did not expect to
15:45
have a military victory, but they really
15:49
wanted to commit
15:51
the worst atrocities that they could, in order
15:53
to provoke the most
15:56
aggressive response possible from Israel. and
16:00
then leverage that aggressive
16:02
response to rally
16:07
Muslims worldwide for
16:09
the cause of Gaza and Palestine,
16:12
which they have succeeded in doing. The
16:19
counterintuitive thing here, I think that the thing that
16:22
I think should be done, even though it is very difficult, is
16:25
that I would recommend
16:27
that Israel engage in the most conspicuous
16:30
acts of kindness possible, everything.
16:32
That is the actual thing that would
16:35
thwart the goal of Hamas.
16:37
So in some sense, to
16:39
the degree that makes sense in geopolitics,
16:42
turn the other cheek, implemented? It's
16:44
not exactly turn the other cheek because I do
16:46
think that there's,
16:50
you know, I think it is appropriate
16:52
for Israel to find
16:56
the Hamas members and, you know,
17:00
either kill them or incarcerate them. Like,
17:04
something has to be done because they're just going
17:06
to keep coming otherwise. But
17:09
in
17:12
addition to that, they need to do
17:14
whatever they can. There's some
17:17
talk of establishing, for example, a mobile
17:19
hospital. I'd recommend doing that. Just
17:24
making sure that, you
17:26
know, there's food, water, medical
17:29
necessities,
17:33
and just be over the top about it and be
17:36
very transparent, so that
17:39
people can't claim it's a trick.
17:41
Like, just put a webcam on the thing,
17:43
you know, 24-7.
17:46
Deploy acts of kindness.
17:49
Yeah, conspicuous acts
17:51
of kindness that are
17:54
unequivocal, meaning they can't be somehow
17:58
because Hamas will... their response
18:00
will be, oh, it's a trick. Therefore, you have to
18:02
counter how it is not a trick.
18:04
This ultimately fights the broader
18:07
force of hatred in the region.
18:09
Yes. And I'm not sure who said it, it's
18:11
some sort of apocryphal saying, but an eye
18:13
for an eye makes everyone blind. Now,
18:16
that neck of the woods, they really believe
18:19
in the whole eye for an eye thing. But
18:23
I mean, you really have... If
18:27
you're not going to just outright commit
18:30
genocide, like against an entire people, which obviously
18:32
would not be acceptable to... to really
18:34
shouldn't be acceptable to anyone, then
18:37
you're going to leave basically a lot of people alive
18:40
who subsequently, you
18:42
know, hate Israel. So really the question is like, how...
18:45
For every Hamas member
18:48
that you kill, how many did you create?
18:51
And if you create more than you
18:53
kill, you've not succeeded. That's
18:56
the real situation there. And
18:58
it's safe to say that if, you
19:00
know,
19:06
if you kill somebody's child in Gaza,
19:09
you've made
19:11
at least a few Hamas
19:14
members who
19:16
will die just
19:19
to kill an Israeli. That's
19:21
the situation. So, but
19:26
I mean, this is one of the most contentious subjects
19:29
one could possibly discuss. But I think
19:32
if the goal ultimately is some sort of long-term
19:34
peace, one has to look at this
19:37
from the standpoint of, over
19:39
time, are there more or fewer terrorists
19:43
being created?
19:44
Let me just linger on war.
19:47
Yeah. Well, war... it's safe to say wars have always
19:49
existed and always will exist. Always will
19:52
exist. Always has existed
19:54
and always will exist.
19:55
I hope not. You think it always will? Always
19:57
will. Always will.
20:00
There's a question of just how much war, you know,
20:02
there's this sort of the scope and scale
20:06
of war. But to
20:08
imagine that there would not be any war in the future,
20:11
I think, would be a
20:12
very unlikely outcome. Yeah, you talked about
20:14
the culture series, there's war even there.
20:16
Yes, there's a giant war. The first book
20:18
starts off with a gigantic
20:21
galactic war where trillions
20:23
die, trillions.
20:25
But it still nevertheless protects these pockets
20:28
of flourishing. Somehow
20:31
you can have galactic war and
20:33
still have pockets of flourishing.
20:36
Yeah, I mean, I guess if
20:38
we are able to one day expand to,
20:40
you know, fill
20:43
the galaxy or whatever, there
20:45
will be a galactic war at
20:47
some point.
20:48
Ah, the scale, I mean the scale of war
20:50
has been increasing, increasing, increasing. It's
20:53
like a race between the scale of suffering
20:55
and the scale of flourishing.
20:57
Yes.
21:00
A lot of people seem to be using this tragedy to
21:02
beat the drums of war and feed
21:04
the military industrial complex. Do you worry about this?
21:08
The people who are rooting
21:10
for escalation and how can
21:12
it be stopped?
21:14
One of the things that just concerned me is that there
21:16
are very few people alive today who actually, viscerally
21:20
understand the horrors of war, at
21:22
least in the US. I mean, obviously there are people on
21:24
the front lines in Ukraine and Russia who understand
21:27
just how terrible war is, but how
21:29
many people in the West understand it? My
21:33
grandfather was in World War II. He
21:36
was severely traumatized. I mean, he was there
21:39
for almost six years in the East
21:42
and North Africa and Italy. All
21:44
his friends were killed in
21:46
front of him and he
21:48
would have died too, except they randomly
21:51
gave some IQ tests
21:53
or something and he
21:56
scored very high. He
21:58
was not an officer, he was a corporal or a sergeant or something
22:00
like that, because he didn't finish high
22:02
school. He had to drop out of high school
22:04
because his dad died and he had
22:06
to work to support his siblings.
22:10
So because he didn't graduate high school, he was not eligible
22:13
for the officer corps. So he
22:15
kind of
22:18
got put into the cannon fodder category, basically.
22:22
But then randomly, they gave
22:24
him this test. He was transferred to
22:26
British Intelligence in London. That's where he met my grandmother.
22:31
But he had PTSD
22:33
next level. Like next level. I
22:35
mean, just didn't talk. Just
22:38
didn't talk. And if you tried talking to him, he'd just
22:40
tell you to shut up. And
22:43
he won a bunch of medals. Never bragged
22:45
about it once. Not even hinted.
22:47
Nothing. I found out about it because his
22:50
military records were online. That's how I know.
22:53
So he would be
22:56
like, no way in hell do you want to
22:58
do that again. But how many people... Now,
23:01
he died 20 years
23:04
ago, or longer actually, 30 years ago.
23:09
How many people are alive that remember World War II? Not
23:11
many. And the same perhaps
23:13
applies to the threat of nuclear
23:15
war. Yeah.
23:19
I mean, there are enough nuclear bombs pointed at the
23:22
United States to make the radioactive rubble
23:25
bounce many times.
23:28
There's
23:29
two major wars going on right now.
23:32
So you talked about the threat of AGI
23:34
quite a bit. But now,
23:37
as we sit here with the intensity of conflict
23:39
going on, do you worry about nuclear war?
23:41
I
23:43
think we shouldn't just count the possibility of nuclear war.
23:46
It is a civilizational threat.
23:54
Right now, I could be wrong, but I
23:56
think the current probability of
23:58
nuclear war is quite low. But
24:02
there are a lot of nukes pointed at us,
24:05
and we have a lot of nukes pointed at other people. They're
24:08
still there, and nobody's put their
24:11
guns away. The
24:13
missiles are still in the silos.
24:15
And the leaders don't seem to be
24:17
the ones with the nukes
24:19
talking to each other.
24:21
No.
24:22
There are wars which are tragic
24:24
and difficult on a local basis, and
24:26
then there are wars which are civilization-ending.
24:29
Or have that potential. Obviously, global
24:31
thermonuclear warfare has high potential
24:34
to end civilization, perhaps
24:36
permanently. But certainly to
24:39
severely
24:42
wound and perhaps
24:46
set back human progress
24:48
to the Stone Age or
24:51
something. I don't know. Pretty bad.
24:57
Probably scientists and engineers won't be super popular
24:59
after that as well. Like, you got us into this mess.
25:04
So generally,
25:06
I think we also want to prioritize civilizational
25:08
risks over things that are painful
25:12
and tragic on a local level, but not civilizational.
25:18
How do you hope the war in Ukraine comes to an end? And
25:21
what's the path, once again, to minimizing human
25:23
suffering there?
25:26
Well, I think what
25:29
is likely to happen, which
25:31
is pretty much the
25:34
way it is, is that something
25:38
very close to the current lines
25:41
will be how
25:45
a ceasefire or truce
25:47
happens.
25:48
But you
25:51
just have a situation right now where whoever goes on the offensive
25:54
will suffer casualties at several
25:56
times the rate of whoever's on the defense, because
25:59
you've got defense in
26:01
depth, minefields, trenches, anti-tank
26:04
defenses, nobody has
26:06
air superiority because
26:09
the anti-aircraft missiles are
26:11
really far better than the aircraft. Like, there
26:13
are far more of them. And
26:17
so neither side has air superiority.
26:20
Tanks are basically death traps, just
26:23
slow moving and they're not immune
26:26
to anti-tank weapons. So
26:28
you really just have long range artillery and
26:33
infantry, trenches. It's World
26:36
War One all over again.
26:38
With drones, yeah, throwing
26:40
a little drone, some drones there. Which
26:43
makes the long range artillery just that much more
26:45
accurate and better and so more
26:47
efficient at murdering people on both
26:50
sides.
26:52
Yeah. Whoever
26:54
is... You don't want to be trying
26:56
to advance from either side because the probability
26:58
of dying is incredibly high. So
27:02
in order to overcome
27:07
defense in depth, trenches and minefields,
27:10
you really need significant
27:12
local superiority in numbers.
27:16
Ideally combined arms, where
27:18
you do a fast attack
27:20
with aircraft, a
27:23
concentrated number of tanks and
27:27
a lot of people. That's the only way you're going to
27:29
punch through a line. And then you're going to punch
27:31
through and then not
27:34
have reinforcements just kick
27:36
you right out again. I mean, I
27:39
really recommend people read about World
27:42
War I warfare in detail.
27:46
It's rough.
27:51
I mean, the sheer number of people that died there was mind-boggling.
27:55
And it's almost impossible to imagine
27:58
the end of it that doesn't look like... almost
28:00
exactly like the beginning in terms of
28:03
what land belongs to who
28:05
and so on.
28:07
But on the other side of a lot of human
28:10
suffering,
28:11
death and destruction of infrastructure.
28:14
Yes. The
28:16
reason I proposed
28:19
some sort of truce or peace
28:23
a year ago was because I predicted
28:26
pretty much exactly what would happen, which
28:28
is a lot of people dying for basically
28:31
almost no changes in land. And
28:35
the loss of the flower
28:37
of Ukrainian and Russian youth, and
28:39
we should have some sympathy for the Russian
28:42
boys as well as Ukrainian boys because the Russian
28:45
boys didn't ask to be on their front line. They have
28:47
to be. So there's
28:53
a lot of sons not coming back to their parents. And
28:58
I think most of them don't really have,
29:00
they don't hate the other side.
29:02
It's sort of like, like
29:05
that same quote
29:07
from World War I, it's like young
29:10
boys who don't know each other, killing
29:12
each other on behalf of old men
29:14
that do know each other. The
29:18
hell is the point of that?
29:20
So Volodymyr Zelensky said that he's
29:22
not or has said in the past, he's
29:24
not interested in talking to Putin directly. Do
29:26
you think he should sit down, man
29:29
to man, leader to leader and negotiate peace?
29:31
I think I would
29:34
just recommend do not send the
29:37
flower of Ukrainian youth to
29:39
die in trenches.
29:43
Whether he talks to Putin or not, just
29:45
don't do that. Whoever
29:48
goes on the offensive will lose massive
29:51
numbers of people.
29:55
And history will not look kindly upon them.
30:00
You've spoken honestly about the possibility of war
30:02
between US and China in the long term,
30:05
if no diplomatic solution is found. For
30:08
example, on the question of Taiwan and one China policy,
30:11
how do we avoid the trajectory where these two
30:13
superpowers clash?
30:16
Well, it's worth reading that book on the difficult-
30:19
to-pronounce Thucydides Trap, I believe it's
30:21
called. I love war history, I
30:23
know it inside out and backwards.
30:28
There's hardly a battle I haven't read about.
30:31
And trying to figure out what really was
30:33
the cause of victory in any particular case, as
30:35
opposed to what one side or another claimed
30:37
for the reason. Both the victory
30:40
and what sparked the war.
30:42
Yeah,
30:43
yeah. The whole thing. Yeah. So
30:46
Athens and Sparta is a classic case. The thing about the Greeks is
30:48
they really wrote down a lot of stuff. They loved writing.
30:52
There are lots of interesting things that happened in many parts of the world,
30:54
but the people there just didn't write them down. So
30:56
we don't know what happened. Or
30:59
they didn't really write in detail.
31:02
They just would say like, we went,
31:05
we had a battle and we won. And like, can you add a bit more? The
31:09
Greeks, they really wrote a lot. They're
31:13
very articulate. They just love writing. And
31:16
we have a bunch of that writing that's preserved. So we know what
31:19
led up to the Peloponnesian War between
31:22
the Spartan and Athenian alliance. And
31:28
we know that they saw it coming.
31:34
I mean, the Spartans didn't write. They also weren't
31:36
very verbose by their nature. They didn't write, but
31:38
they weren't very verbose. They were terse.
31:41
But the Athenians and the other Greeks wrote
31:43
a lot. And they were like... And
31:47
Sparta was really kind of like the leader
31:50
of Greece. But
31:52
Athens grew stronger and stronger with each passing
31:54
year. And
31:57
everyone's like, well, that's inevitable that there's going to be
31:59
a clash.
31:59
between
32:01
Athens and Sparta.
32:03
Well, how do we avoid that? And they couldn't, they couldn't,
32:05
they actually, they saw it coming and they
32:07
still could not avoid it. So,
32:12
you know, at some point, if there's, if
32:14
one
32:17
group, one civilization or country
32:19
or whatever
32:22
exceeds another, sort of like,
32:25
the United States has been the biggest kid on the block since
32:28
I think around 1890, from
32:31
an economic standpoint. So
32:34
the United States has been the economic,
32:36
most powerful economic engine
32:39
in the world, longer than anyone's
32:41
been alive. And
32:46
the foundation of war is economics. So
32:52
now we have a situation in case of China
32:55
where the economy
32:58
is likely to be two,
33:00
perhaps three times larger than that of
33:03
the US. So imagine you're the biggest kid
33:05
on the block for as long as anyone can remember,
33:07
and suddenly it comes along, it's twice your size.
33:13
So we see it coming. Yeah. How
33:16
is it possible to stop?
33:18
Is there some, let me throw something
33:20
out there, just intermixing of cultures,
33:23
understanding. There just seemed to be
33:25
a giant cultural gap in understanding of
33:27
each other. And you're an interesting case
33:29
study because you are an American,
33:31
obviously, you've done a lot of incredible
33:35
manufacturing here in the United States, but you also work
33:37
with China. I spent a lot of time in China and
33:39
met with the leadership many times. Maybe a good
33:41
question to ask is, what are
33:43
some things about China that people don't understand
33:46
positive, just in the culture? What's
33:48
some interesting things that you've learned about
33:50
the Chinese?
33:54
Well, the sheer number
33:56
of really smart, hardworking people in China
33:59
is...
34:01
incredible. There are,
34:03
I believe, how
34:05
many smart, hard-working people are there in China? There's
34:07
far more there than there are here, I
34:09
think, in my opinion. There's
34:14
a lot of energy.
34:18
So, I
34:20
mean, the architecture in China
34:22
in recent years is far more impressive than in
34:25
the US.
34:26
I mean, the train stations, the buildings,
34:29
the high-speed rail, everything, it's really
34:34
far more impressive than what we
34:36
have in the US. I mean, I recommend somebody just go
34:39
to Shanghai and Beijing,
34:41
look at the buildings, and go to take
34:44
the train from Beijing to Xi'an where you have the
34:46
terracotta warriors. China's
34:48
got an incredible history, a very long
34:50
history. And I
34:54
think, arguably, in terms
34:56
of the use of language
34:58
from a written standpoint, one
35:01
of the oldest, perhaps
35:04
the oldest, written language. And
35:06
in China, people did write things down. So,
35:09
now,
35:11
China historically has always been,
35:14
with the rare exception, been internally focused.
35:18
They've not been acquisitive.
35:20
They've fought each
35:22
other. There have been many, many civil wars. In
35:26
the Three Kingdoms War, I believe they
35:28
lost about 70% of their population. So,
35:34
they've had brutal internal
35:36
wars, like civil wars that make
35:39
the US civil war look small,
35:42
by comparison. So,
35:48
I think it's important to appreciate that China is not monolithic.
35:53
We sort of think that China is this sort of one
35:56
entity of one mind, and this is
35:58
definitely not the case. And from
36:01
what I've seen, and I think most
36:04
people who understand China would agree, people
36:07
in China think about China 10 times more
36:09
than they think about anything outside of China. So
36:12
it's like 90% of their
36:14
consideration is internal.
36:19
Well, isn't that a really positive thing when
36:22
you're talking about the collaboration and the future
36:24
peace between superpowers when you're
36:26
inward facing, which is like focusing on
36:28
improving yourself versus focusing
36:31
on, quote unquote,
36:33
improving others through military might? The
36:36
good news, the history of China suggests
36:39
that China is not acquisitive, meaning they're not going to go
36:41
out and invade a whole bunch of countries. They
36:44
do feel very strongly, so that's
36:46
good, because a lot of very
36:49
powerful countries
36:49
have been acquisitive. The
36:53
US is also one of the rare cases that
36:55
has not been acquisitive. After
36:57
World War II, the US could have basically taken over the world
37:00
and any country. Like we've got nukes, nobody else
37:02
got nukes. We don't even have to lose soldiers.
37:06
Which country do you want? And the United
37:08
States could have taken over everything. Oh,
37:10
it had well, and it didn't. And
37:13
the United States actually helped rebuild countries. It
37:16
helped rebuild Europe. It helped
37:18
rebuild Japan. This
37:20
is very unusual behavior, almost
37:23
unprecedented.
37:27
The US did conspicuous acts
37:30
of kindness,
37:31
like the Berlin airlift.
37:36
And I think it's
37:39
always like, well, America's done
37:41
bad things. Well, of course America's done bad things, but one needs to
37:43
look at the whole track record. And
37:47
just generally, one
37:51
sort of test would be, how do you treat your prisoners
37:53
of war? Or
37:56
let's say... No
38:01
offense to the Russians, but let's say
38:03
you're in Germany, it's 1945. You've
38:05
got the Russian army coming on one side, and
38:07
you've got the French, British, and American
38:09
armies coming on the other side. Who would
38:11
you like to
38:13
surrender to?
38:14
No country is morally perfect, but
38:17
I recommend being
38:20
a POW with the Americans. That would be
38:22
my choice very strongly. In
38:25
the full menu of POW options. Very much
38:28
so. In fact, Wernher
38:30
von
38:33
Braun, a smart guy, was like,
38:35
we've got to be captured by the Americans. Yeah.
38:39
And in fact, the
38:41
SS was under orders to execute von
38:43
Braun and all of the German
38:46
rocket engineers. And
38:49
they narrowly escaped the
38:52
SS. They said they were going out for a walk
38:54
in the woods. They left in the middle of winter with no coats.
38:58
They ran like, no food, no coats, no water, and
39:01
just ran like hell and ran west.
39:04
And by sheer like, I think his brother found
39:06
like a bicycle or something, and then
39:10
just cycled west as fast as he
39:12
could and found a US patrol. So
39:18
anyway, that's one way you can tell morality.
39:22
Who would you want to be a POW? It's
39:25
not fun anywhere, but some places are much worse than
39:27
others. So anyway,
39:31
so like America has been, while
39:34
far from perfect, generally a
39:37
benevolent force.
39:39
And we should
39:41
always be self-critical and try to
39:43
be better.
39:45
But anyone with half
39:48
a brain knows that. So
39:50
I think there are, in this way, China
39:53
and the United States are
39:55
similar. Neither country has been acquisitive.
39:59
in a significant way. So that's like a
40:02
shared principle, I guess. Now,
40:05
China does feel very strongly about Taiwan.
40:09
They've been very clear about that
40:12
for a long time. From
40:14
this standpoint, it would be like one of
40:16
the states is not
40:20
there like Hawaii or something like that,
40:23
but more significant than Hawaii. Hawaii
40:28
is pretty significant for us. So they
40:30
view it as really
40:33
the... There's
40:36
a fundamental part of China, the
40:39
island of Formosa, now Taiwan
40:41
that is not
40:44
part of China, but should be. And the
40:46
only reason that it hasn't been is because of the US specifically.
40:50
And as their economic power grows and as their military
40:53
power grows, the thing
40:55
that they are clearly saying is that
40:59
their interests will
41:02
clearly be materialized.
41:04
Yes. China
41:06
has been very clear that
41:09
they will incorporate Taiwan
41:12
peacefully or militarily,
41:16
but that they will incorporate it from
41:18
their standpoint, is 100% likely.
41:22
Now, something you said about conspicuous acts
41:24
of kindness. As a geopolitical policy,
41:27
it almost seems naive.
41:31
I'd venture to say that this is probably the path
41:34
forward, how you avoid
41:36
most wars. Just as you
41:39
say it, it sounds naive, but
41:41
it's kind of brilliant. If
41:43
you believe in the goodness of
41:47
underlying most of human nature, it
41:49
just seems like conspicuous
41:51
acts of kindness can
41:54
reverberate through the populace of
41:57
the countries involved.
41:59
Well, and de-escalate. Absolutely.
42:02
So after World War
42:04
I, they
42:07
made a big mistake. They basically
42:09
tried to lump all the blame on Germany
42:14
and saddled
42:18
Germany with impossible
42:20
reparations. And
42:24
really, there was quite a bit of blame
42:27
to go around
42:29
for World War I, but they tried
42:32
to put it all on Germany. And
42:38
that laid the
42:40
seeds for World War II. So
42:43
a lot
42:46
of people, not just Hitler, a lot of people felt wronged.
42:48
And they wanted vengeance. And
42:51
they got
42:55
it.
42:56
People don't forget. Yeah.
42:59
You kill somebody's father,
43:02
mother, son, daughter, they're not going to forget it.
43:06
They all want vengeance. So
43:09
after World War II, they're like, well, that Treaty
43:11
of Versailles was a huge mistake after
43:14
World War I. And
43:17
so this time, instead of
43:23
crushing the losers, we're actually
43:25
going to help them
43:27
with the Marshall Plan. And we're
43:29
going to help rebuild Germany.
43:31
We're going to help rebuild Austria
43:35
and the others, Italy
43:37
and whatnot. So
43:43
that was the right
43:45
move. It does feel like there's a profound truth
43:48
to conspicuous
43:51
acts of kindness being an antidote to this.
43:55
Something must stop the cycle of reciprocal
43:57
violence. Something
44:00
will stop it, or it will, you
44:02
know, it'll
44:04
never stop. Just eye
44:08
for an eye, tooth for a tooth, limb for
44:10
a limb, life for a life,
44:12
forever and ever.
44:13
To escape
44:15
briefly the darkness with some incredible engineering
44:18
work, xAI
44:20
just released Grok, an AI assistant
44:23
that I've gotten a chance to play with. It's, uh,
44:25
it's amazing on many levels. First of all, it's amazing
44:27
that a relatively small team in a
44:30
relatively short amount of time was
44:32
able to develop this close to
44:35
state-of-the-art system. Uh,
44:37
another, uh, incredible thing is there's
44:39
a regular mode and there's a fun mode. Yeah, I
44:41
guess I'm to blame for that one. I
44:45
wish, first of all, everything in life had a fun
44:47
mode. Yeah. There's something
44:50
compelling beyond just fun about
44:52
the fun mode interacting with
44:54
a large language model. I'm not sure exactly
44:57
what it is, because I've only had a little bit of
44:59
time to play with it, but it just makes
45:02
it more interesting, more vibrant
45:04
to interact with the system. Yeah,
45:07
absolutely. I, um,
45:10
yeah, our, our
45:12
AI, Grok, is modeled after The Hitchhiker's
45:15
Guide to the Galaxy, uh, which
45:17
is one of my favorite books, which
45:19
is... it's a book on philosophy disguised as
45:22
a book on humor. Um, and
45:25
um, I would say
45:27
that is, that forms the basis of my philosophy, uh,
45:31
which is that we don't know the
45:33
meaning of life, but the more
45:35
we can expand the scope and scale of consciousness, digital
45:41
and biological, the more
45:43
we are able to understand what questions to ask about
45:45
the answer that is the universe. So
45:49
I have a philosophy of curiosity.
45:52
There is generally a feeling like this
45:55
AI system has an outward-
45:59
looking... like the way you are
46:01
when sitting with a good friend looking up at the stars, like
46:04
asking pothead-like questions about
46:07
the universe, wondering what it's all about, the curiosity
46:09
you talk about there. There's a sense, no matter
46:11
how mundane the question I ask it, there's
46:14
a sense of cosmic grandeur to
46:16
the whole thing. Well, we are actually working
46:19
hard to have engineering,
46:22
math, physics answers that
46:24
you can count on.
46:27
So for the other sort of AIs
46:30
out there, that...
46:32
there's these so-called large language models,
46:36
I've not found the
46:37
engineering to be reliable, and
46:42
the hallucination... it unfortunately hallucinates
46:44
most when you least want it to hallucinate. So
46:48
when you're asking important, difficult questions, that's
46:50
when it tends to be confidently wrong. So
46:55
we're really trying
46:57
hard to say, okay, how do we be
46:59
as grounded as possible so you can count on the results,
47:03
trace things back to physics first principles,
47:07
mathematical logic?
47:09
So underlying the humor
47:12
is an aspiration to adhere
47:16
to the truth of the universe as closely as possible. That's
47:20
really tricky. It is tricky. So
47:24
that's right, you know, there's always
47:26
gonna be some amount of error, but we want
47:28
to aspire to be as
47:32
truthful
47:33
as possible about the answers,
47:36
with acknowledged error, and so
47:38
that it isn't ever confidently wrong.
47:41
So, you know, it's not gonna
47:43
be right every time, but you
47:45
want to minimize how often you're confidently
47:48
wrong. And then, like
47:50
I said, once you can count on the
47:52
logic as being not
47:56
violating physics, then
47:58
you can start to pull on that to create
48:01
inventions, like invent new technologies.
48:04
But if you can't
48:08
count on the foundational physics being correct, obviously
48:11
the inventions are simply wishful
48:13
thinking.
48:14
You know, imagination. Magic,
48:18
basically.
48:19
Well, as you said, I think one of the
48:21
big goals of XAI is to understand the universe.
48:24
Yes. That's
48:26
our simple three-word mission.
48:31
If you look out far into the future,
48:33
do you think on this level of physics,
48:35
the very edge of what we understand about physics, do you
48:38
think it will make discoveries,
48:41
sort of the sexiest discovery of them all, as
48:43
we know now, sort of unifying
48:46
general relativity and quantum mechanics. So
48:48
coming up with a theory of everything, do you
48:51
think it could push towards that
48:53
direction, almost like theoretical physics discoveries?
48:56
If an AI cannot figure out new physics,
48:59
it's clearly not equal
49:01
to humans, nor has it surpassed humans,
49:03
because humans have figured out new physics.
49:06
Physics is just understanding, deepening
49:10
one's insight into how reality works. And
49:12
then there's
49:16
engineering, which is inventing things that have never existed.
49:18
Now, the range
49:20
of possibilities for engineering is far greater than for physics,
49:23
because, you know, once you figure out the rules of the universe,
49:26
that's it, you've discovered
49:28
things that already existed. But from
49:30
that, you can then build technologies
49:33
that are really almost
49:36
limitless in their
49:39
variety, once you, you know,
49:42
understand the rules of the game properly. And
49:44
we do, with current physics we do, at
49:47
least at a local level, understand how physics works
49:50
very well. Our ability to predict things
49:52
is incredibly good. Like, quantum mechanics, the
49:55
degree to which quantum mechanics can predict
49:57
outcomes is incredible.
49:59
That
50:00
was my hardest class in college, by
50:02
the way. My senior
50:04
quantum mechanics class was harder than all of my other
50:07
classes put together. To get
50:09
an AI system, a large language
50:11
model to
50:15
be as reliable as quantum mechanics and physics is
50:18
very difficult.
50:19
Yeah, you have to test any conclusions
50:21
against the ground truth of reality. Reality is the
50:23
ultimate judge. Like physics is the law, everything
50:26
else is a recommendation. I've
50:28
seen plenty of people break the
50:30
laws made by man, but none break the laws
50:32
made by physics.
50:33
It's a good test, actually. If this LLM
50:37
understands and matches physics, then
50:40
you can more reliably trust whatever
50:42
it thinks about the current state of politics.
50:47
It's also the case currently that even
50:49
its internal logic is not consistent.
50:53
So especially with the
50:57
approach of just predicting a token,
51:00
predict token, predict token, it's
51:01
like a vector sum. You're
51:04
summing up a bunch of vectors, but you can get drift.
51:07
So a little bit of error
51:09
adds up. And by the
51:11
time you are many tokens down the path, it
51:15
doesn't make any sense.
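As a rough illustration of the drift described above (a toy sketch only, with an assumed per-token error size; it is not a description of how Grok or any particular model actually works), summing many small per-token errors behaves like a random walk: each step is negligible, but the accumulated offset grows as more tokens are generated.

```python
import random

# Toy model of per-token error compounding into drift (illustrative only;
# the error magnitude and the random-walk form are assumptions, not a
# real model's mechanics).
random.seed(0)

per_token_error = 0.01  # assumed tiny error contributed by each predicted token
drift = 0.0

for token_index in range(1, 1001):
    drift += random.gauss(0, per_token_error)  # errors sum up like vectors
    if token_index in (10, 100, 1000):
        print(f"after {token_index:4d} tokens, accumulated drift ~ {abs(drift):.3f}")
```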
51:17
So it has to be somehow self-aware about
51:19
the drift.
51:20
It has to be self-aware about the drift and then look
51:23
at the thing as a gestalt,
51:25
as a whole, and say
51:27
it doesn't have coherence
51:30
as a whole.
51:31
So when
51:35
authors write books, they will write
51:37
the book and then they'll go and revise it, taking
51:41
into account the
51:43
end and the beginning and the middle and rewrite
51:46
it to achieve coherence so
51:48
that it doesn't end up in a nonsensical
51:51
place. Maybe the process of revising
51:53
is what
51:55
reasoning is and then the process
51:57
of revising is how you get closer and closer to truth.
52:00
Maybe. Like, at least I approach
52:02
it that way. You just say a bunch of bullshit
52:04
first and then you make it better. You start
52:06
with bullshit and then you get a draft, and then
52:08
you iterate on that draft until
52:12
it has coherence, until
52:15
it all adds up, basically.
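A minimal sketch of that draft-then-revise loop is below; the function names and the coherence check are hypothetical placeholders standing in for calls to any language model, not a real API.

```python
# Hypothetical sketch of "write a rough draft, then revise until it coheres".
# generate_draft() and revise() are placeholders, not any particular model's API.

def generate_draft(prompt: str) -> str:
    # Placeholder: produce a rough first pass (the initial "bunch of bullshit").
    return f"ROUGH: some first-pass answer to '{prompt}'"

def revise(text: str) -> tuple[str, bool]:
    # Placeholder: reread the whole draft as a gestalt and patch incoherence.
    # Returns the revised text and whether it now reads as coherent.
    revised = text.replace("ROUGH: ", "")
    return revised, not revised.startswith("ROUGH: ")

def answer(prompt: str, max_passes: int = 3) -> str:
    text = generate_draft(prompt)
    for _ in range(max_passes):  # iterate until the whole thing "adds up"
        text, coherent = revise(text)
        if coherent:
            break
    return text

print(answer("why does a long chain of predicted tokens drift?"))
```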
52:17
So another question about a theory of everything,
52:19
but for intelligence. Do you think there
52:21
exists, as you're exploring this
52:23
with xAI, creating this intelligence system, do you
52:25
think there is a theory of intelligence
52:28
where you get to understand,
52:30
what, like,
52:32
what is the I
52:35
in AGI and what is the I in human
52:39
intelligence? There's no I in Team America.
52:42
Oh wait, there is. That's
52:45
gonna be stuck in my head now. Yeah,
52:51
there's no me in whatever, in
52:54
quantum mechanics, wait. I mean,
53:00
is that part of the process of discovering,
53:02
understanding the universe, is understanding intelligence?
53:06
Yeah. Yeah,
53:08
I think we need to understand intelligence, understand
53:10
consciousness. I mean, there...
53:13
I mean, there are some sort of fundamental questions of, like,
53:16
what is thought? What
53:18
is emotion? Yeah. Is
53:21
it really just one atom bumping into another atom?
53:24
It feels like something more than that. So
53:27
I, I think we're probably
53:29
missing some really big things.
53:34
Like some really big things.
53:36
Something that'll be obvious in retrospect. Yes,
53:39
like there's a giant... you
53:41
put the whole consciousness, emotion...
53:44
Well, some would call it, like, a soul, you know,
53:47
really just a soul. Like,
53:49
you feel like you're you, right?
53:52
I mean, you don't feel like you're
53:54
just a collection of atoms,
53:56
but on what dimension does
53:59
thought exist? What dimensions do emotions
54:01
exist? We feel them very strongly.
54:04
Um, I suspect there's more to it than atoms
54:06
bumping into atoms. And
54:11
maybe AI can pave the path
54:13
to the discovery of what, whatever the hell that thing
54:15
is.
54:16
Yeah. What is consciousness? Like what,
54:18
when you put the atoms in a particular shape, why
54:21
are they able to form thoughts
54:22
and
54:24
take actions that emanate
54:26
out of feelings?
54:28
And even if it is an illusion, why is this
54:30
illusion so compelling? Yeah.
54:32
Like how do you, why does this illusion exist? Yeah.
54:36
On what plane does this,
54:38
the illusion exist? Yeah. Um,
54:41
and that sometimes I wonder is, you
54:44
know, either perhaps everything's conscious
54:46
or nothing is conscious.
54:48
Um,
54:49
one of the two. I like the
54:51
former, everything
54:53
conscious just seems more fun.
54:56
It does seem more fun. Yes. Um,
54:59
but we're, we're composed of atoms and those atoms
55:01
are composed of quarks and leptons and
55:04
those quarks and leptons have been around since
55:06
the beginning of the universe.
55:08
The beginning of the universe.
55:11
Right. What, what seems to be the beginning of the universe?
55:13
The first time we talked, you said what you would, which
55:16
is surreal to think that this
55:18
discussion was happening is becoming a reality.
55:20
I asked you what question would you
55:23
ask an AGI system once you create it? And
55:25
you said, what's outside the simulation is
55:27
the question.
55:29
Good question.
55:31
Yeah. But it seems like with Grok, you started
55:34
literally, uh, this system's
55:37
goal is to be able to ask such questions, to
55:39
answer such questions and ask such questions.
55:42
Where are the aliens? Where are the aliens? That's
55:44
one of the, like the Fermi paradox question.
55:47
Um,
55:49
a lot of people have asked me if I've seen any evidence
55:51
of aliens and I haven't, which
55:53
is kind of concerning cause
55:55
then, I think I would,
55:58
I'd probably prefer to at least have seen some archaeological
56:00
evidence of aliens. To
56:02
the best of my knowledge, there is no proof. I'm
56:05
not aware of any evidence of aliens. If
56:08
they're out there, they're very subtle. We
56:11
might just be the only consciousness, at least in
56:13
the galaxy. And
56:16
if you look at the history of Earth,
56:19
for instance, if you believe the archaeological record, Earth
56:21
is about four and a half billion years old. Civilization,
56:25
as measured from the first writing, is
56:27
only about 5,000 years old. We
56:30
have to give some credit there to the ancient Sumerians who
56:32
aren't around anymore. I think it was an
56:34
archaic pre-cuneiform, the
56:36
first actual symbolic representation,
56:39
but only about 5,000 years ago.
56:42
I think that's a good date for
56:45
when we're to say civilization started. That's
56:48
one millionth of Earth's existence. So
56:53
civilization has been around. It's
56:55
really a flash in the pan so
56:58
far. Why did it take so long? Four
57:04
and a half billion years. For
57:15
the vast majority of time, there
57:17
was no life, and then there was archaic
57:20
bacteria for a very long time. And
57:23
then yet,
57:28
mitochondria get captured, multicellular
57:30
life,
57:32
differentiation
57:33
into plants and animals, life
57:36
moving from the oceans to land, mammals,
57:41
higher brain functions. The
57:44
sun is expanding slowly, but it will
57:47
heat the Earth
57:50
up at some point in the future,
57:55
boil the oceans, and
57:58
Earth will become like Venus, where
58:02
life as we know it is impossible. So
58:04
if we do not become multi-planetary and
58:07
ultimately go beyond our solar
58:09
system, annihilation
58:12
of all life on Earth is a certainty. And
58:19
it could be as little as on the
58:22
galactic timescale
58:25
half a billion years. You
58:28
know, it's a long time by human standards but that's
58:31
only 10% longer than Earth has been around at
58:33
all. So if life
58:36
had taken 10% longer to
58:38
evolve on Earth it wouldn't exist
58:41
at all. We've
58:45
got a deadline coming up. You
58:47
better hurry. But that said, as you said,
58:50
humans intelligent life on Earth
58:54
developed a lot of cool stuff very quickly.
58:56
So
58:57
it seems like becoming multi-planetary
59:00
is almost inevitable
59:03
unless we destroy ourselves. We need to do it. I mean,
59:05
it's not... I mean,
59:08
I suspect that if we are able to go out
59:12
there and explore other star systems that
59:14
there's a good chance we find a whole bunch
59:16
of long dead one planet civilizations.
59:19
They never made it past their own planet. That's so
59:21
sad. Yeah.
59:24
Also fascinating. I mean,
59:25
there are various explanations for the Fermi
59:28
paradox, and one of the... there's
59:30
sort of... there are these great filters which
59:33
civilizations don't pass through and
59:35
one of those great filters is do
59:37
you become a multi-planet civilization or not?
59:40
And if you don't, it's simply a matter of time before
59:42
something happens on your planet,
59:46
either natural or man-made,
59:49
that causes us to die out like
59:52
the dinosaurs. Where are they
59:54
now?
59:56
They didn't have spaceships.
1:00:00
I think the more likely thing is, because
1:00:02
it's just to empathize
1:00:05
with the aliens, that they found us
1:00:07
and they're protecting us and letting us be.
1:00:09
I hope so. They're nice aliens.
1:00:11
Just like the tribes in the Amazon, the
1:00:14
uncontacted tribes. Protecting them. That's
1:00:16
what... That would be a nice explanation. Or
1:00:19
you could have like a... What was
1:00:21
it? I think Andrej Karpathy
1:00:23
said it's like the ants in the Amazon asking,
1:00:26
where's everybody?
1:00:27
Well they do run
1:00:29
into a lot of other ants. That's
1:00:31
true. They have these ant wars. Sounds
1:00:35
like a good TV show. Yeah, they literally have these
1:00:37
big wars between various ants. Maybe
1:00:40
I'm just
1:00:43
dismissing all the different diversity of ants.
1:00:46
Listen to that Werner Herzog talking about the jungle.
1:00:48
It's really hilarious. Have you heard it?
1:00:50
No, I have not, but Werner Herzog is awesome. You
1:00:55
should play it as
1:00:58
an interlude in the fun view to
1:01:00
you. It's awesome. I
1:01:03
love him so much. He's great. Was
1:01:06
he the director of Happy People, Life in the Taiga? I
1:01:08
think so, also. He did that bear documentary. The
1:01:10
bear documentary. And the thing about penguins. The analysis,
1:01:13
psychoanalysis of penguins. Yeah,
1:01:15
the penguins headed for mountains that
1:01:17
are like 70 miles away.
1:01:21
The
1:01:24
penguin has just headed for doom basically. Well
1:01:27
he had a cynical take. He
1:01:29
could be just a brave explorer and
1:01:31
there would be great stories told about
1:01:34
him amongst the penguin population for
1:01:36
many centuries to come.
1:01:41
What are we talking about?
1:01:44
Okay. Aliens. Yes,
1:01:47
aliens. I mean, I don't know. I
1:01:50
think the smart move
1:01:52
is just, you know, this is the first time
1:01:54
in the history of Earth
1:01:57
that it's been possible for life to extend
1:01:59
beyond Earth. That
1:02:01
window is
1:02:02
open.
1:02:05
Now it may be open for a long time or
1:02:07
maybe open for a short time. It
1:02:10
may be open now and then never open again. So
1:02:14
I think the smart move here is to make
1:02:16
life multi-planetary while it is possible
1:02:18
to do so. We don't
1:02:19
want to be one of those lame one planet civilizations, it
1:02:22
just dies out. No, those are lame. Yeah, lame.
1:02:27
No self-respecting civilization would be just one planet. There's
1:02:29
not going to be a Wikipedia entry for one of those. And
1:02:31
pause. Does
1:02:36
SpaceX have an official policy
1:02:39
for when we meet aliens?
1:02:41
No. That
1:02:43
seems irresponsible. If
1:02:50
I see the slightest indication that
1:02:52
there are aliens, I will immediately post on the X platform.
1:02:55
Anything I know.
1:02:57
It could be the most liked, reposted
1:02:59
post of all time. Yeah, I mean, look, we
1:03:02
have more satellites up there right now than everyone
1:03:05
else combined. So
1:03:07
we know if we've got to maneuver around
1:03:09
something and we're not maneuvering
1:03:12
around anything.
1:03:13
If you go to the big questions once
1:03:15
again, you said you're
1:03:18
with Einstein that you believe in the God
1:03:20
of Spinoza. Yes. So,
1:03:24
you know, that's a view that God is
1:03:26
like the universe and
1:03:28
reveals himself through the laws of physics or as Einstein
1:03:31
said, through the lawful harmony of
1:03:33
the world. Yeah, I would agree that
1:03:36
God, the simulator or whatever, the
1:03:39
Supreme Being or beings,
1:03:42
reveal
1:03:44
themselves through the physics, you know,
1:03:46
they're the creators of this existence. And
1:03:49
it's incumbent upon us to try to understand
1:03:53
more about this wondrous creation.
1:03:56
Who created this thing? Who's running this thing?
1:04:00
Compressing it into a singular question with a
1:04:02
sexy word on top of it is like focusing
1:04:04
the mind to understand it. It does seem like there's
1:04:07
a... again, it could be an
1:04:09
illusion. It seemed like there was a purpose,
1:04:12
that there's an underlying master plan of some kind.
1:04:14
It seems like there
1:04:17
may not be a master plan in the
1:04:19
sense. So there's like
1:04:22
maybe an interesting answer to the question of determinism
1:04:24
versus free will is that if
1:04:26
we are in a simulation the
1:04:29
reason that these higher beings
1:04:31
would hold a simulation is to see what
1:04:33
happens
1:04:34
so it's
1:04:36
not that they know. They don't know what happens;
1:04:40
otherwise they wouldn't hold the simulation
1:04:43
so when humans create
1:04:45
simulations. At SpaceX and Tesla, we
1:04:47
create simulations all the time especially
1:04:50
for the rocket, you
1:04:52
know you have to run
1:04:54
a lot of simulations to understand
1:04:57
what's going to happen because you can't really test the rocket
1:05:00
until it goes to space and you want
1:05:02
it to work so you have to you have to simulate
1:05:05
subsonic, transonic, supersonic,
1:05:08
hypersonic ascent,
1:05:12
and then coming back super high heating and
1:05:16
aerodynamics. All this is going to be simulated
1:05:18
so you don't
1:05:20
get very many kicks at the can but we
1:05:23
run the simulations to see what happens. If
1:05:25
we knew what happens, we wouldn't run the simulation. So
1:05:28
whoever created this existence,
1:05:32
they're
1:05:35
running it because they don't know what's going to
1:05:37
happen not because they do
1:05:41
so maybe we both played Diablo
1:05:43
maybe Diablo was created to see if a druid
1:05:46
your character could defeat
1:05:49
uber Lilith at the end they didn't know
1:05:52
well the funny thing is that Uber Lilith,
1:05:55
her title is hatred
1:05:57
incarnate yeah
1:06:01
And right now, I guess, you're
1:06:03
gonna ask the Diablo team, but it's
1:06:05
almost impossible to defeat
1:06:09
hatred in
1:06:11
the Eternal Realm.
1:06:13
Yeah, you've streamed yourself dominating
1:06:15
tier 100
1:06:16
nightmare dungeons and still... I
1:06:19
can cruise through tier 100 nightmare dungeons
1:06:22
like a stroll in the park. And
1:06:25
still you're defeated by hatred. Yeah,
1:06:27
I can... This sort of, I guess
1:06:29
maybe the second hardest boss is Duriel.
1:06:32
Duriel can't even scratch the paint. So
1:06:35
I killed Duriel so many times. And
1:06:39
every other boss in the game, all of them, I've killed
1:06:41
them so many times, it's easy. But
1:06:46
Uber Lilith, although it's known as hatred
1:06:48
incarnate, especially if you're
1:06:50
a druid and you have no ability
1:06:52
to be invulnerable, there are
1:06:54
these random death waves that
1:06:56
come. And I'm pretty... I'm
1:07:01
really, I'm 52,
1:07:03
so my reflexes are not what they used to be, but I
1:07:06
do have a lifetime of playing video games. At
1:07:09
one point, I was maybe
1:07:11
one of the best Quake players in the world. Actually
1:07:15
won money for what I think was the
1:07:17
first paid esports tournament in the US.
1:07:21
We were doing four person Quake tournaments. And
1:07:27
we came second. I was the second best person on the team.
1:07:29
And the actual best person, we were actually
1:07:32
winning, we were going to come first, except the best person on the
1:07:34
team, his computer crashed halfway through the
1:07:36
game. So we
1:07:38
came second. But I got money for it
1:07:41
and everything. So basically I got skills, albeit
1:07:44
no spring chicken
1:07:46
these days. And to
1:07:49
be totally frank, it's driving me crazy. Trying
1:07:52
to beat Lilith as a druid, basically
1:07:55
trying to beat hatred incarnate
1:07:57
in the Eternal Realm. As a druid.
1:08:00
Just a druid. And
1:08:02
it's just really vexing, let
1:08:05
me tell you. I mean, the
1:08:07
challenge is part of the fun. I have seen directly,
1:08:11
like, you're actually like a world-class,
1:08:13
incredible video game player. Yeah.
1:08:15
And I think Diablo, so you're just picking up a new
1:08:18
game, and you're figuring out its fundamentals.
1:08:21
You're also, with the Paragon board and the build,
1:08:24
are not somebody like
1:08:27
me who perfectly follows whatever they suggest
1:08:29
on the internet, you're also an innovator
1:08:31
there. Yeah. Which
1:08:33
is hilarious to watch. It's
1:08:35
like a mad scientist just trying
1:08:38
to figure out the Paragon board and the build
1:08:40
Yeah. You know? Is
1:08:42
there some interesting insights there about, if
1:08:44
somebody's starting as a druid, do you have advice?
1:08:48
I would not recommend playing a druid in the Eternal Realm.
1:08:50
No. Right now, I
1:08:52
think the most powerful character in
1:08:55
the seasonal realm is the sorcerer,
1:08:57
with the lightning balls. The
1:09:00
sorcs have huge balls in the seasonal.
1:09:04
Oh, yeah. That's what they say. Sorcs
1:09:08
have huge balls. They
1:09:10
do, huge balls of lightning. I'll
1:09:13
take your word for it. And it's actually,
1:09:15
in the seasonal realm, you can, it's
1:09:17
like pretty easy to beat Uber Lilith,
1:09:20
but the, when you get these vampiric powers that
1:09:23
add fire damage and increase
1:09:25
your defense and whatnot. So a druid,
1:09:30
it's quite easy to defeat hatred seasonally,
1:09:33
but to defeat hatred eternally.
1:09:35
Very difficult. Almost
1:09:38
impossible. It's virtually impossible. It
1:09:40
seems like this is a metaphor for life. I
1:09:43
like the idea that Elon Musk,
1:09:45
because I was playing Diablo yesterday, and I saw a level 100
1:09:50
druids just run by, I will never die, and
1:09:53
then run back the other way. And there's
1:09:55
just some, this metaphor is kind of hilarious,
1:09:57
that you, Elon Musk, are fighting
1:10:00
hatred, relentlessly
1:10:02
fighting hatred in this demonic
1:10:04
realm. Yes.
1:10:06
It's hilarious. I mean, it's pretty hilarious.
1:10:08
No, it's absurd. Really,
1:10:11
it's an exercise in absurdity and it makes
1:10:13
me want to pull my hair out. Yeah. I
1:10:16
mean, what do you get from video games in general? Is
1:10:18
there for you personally? I
1:10:21
mean, it's, I don't know, it
1:10:24
calms my mind. I mean, sort of killing
1:10:27
the demons in a video game calms
1:10:29
the demons in my mind.
1:10:33
If you play a tough video game, you
1:10:36
can get into like a state of flow, which is very
1:10:38
enjoyable. Admittedly,
1:10:44
it needs to be not too easy, not too hard,
1:10:46
kind of
1:10:48
in the Goldilocks zone. And
1:10:50
I guess you generally want to feel like you're progressing in the
1:10:53
game. So a good
1:10:56
video game. And there's also beautiful art,
1:11:01
engaging storylines. And
1:11:05
it's like an amazing puzzle to solve,
1:11:08
I think. And so it's like solving the
1:11:10
puzzle. Elden Ring, the greatest game
1:11:12
of all time. I still haven't played it, but you. Elden
1:11:15
Ring is definitely a candidate for best game ever.
1:11:18
Top five for sure.
1:11:19
I think I've been scared how hard it is. How
1:11:21
hard I hear it is. But
1:11:24
it is beautiful. Elden Ring is,
1:11:27
feels like it's designed by an alien.
1:11:31
There's a theme to this discussion. In
1:11:34
what way? It's so unusual.
1:11:36
It's incredibly creative and the art is stunning. I
1:11:40
recommend playing it on a big resolution,
1:11:43
high dynamic range TV, even. It doesn't
1:11:46
need to be a monitor. Just
1:11:49
the art is incredible. It's so beautiful. And
1:11:53
it's so unusual.
1:11:57
And each of those top
1:11:59
boss battles is unique,
1:12:02
like it's a unique puzzle to solve. Each
1:12:04
one is different. And
1:12:06
the strategy you use to solve one
1:12:09
battle is different from another
1:12:11
battle.
1:12:12
That said, you said Druid and Eternal against
1:12:14
Uber Lilith is the hardest boss battle
1:12:16
you've
1:12:17
ever... Correct. That
1:12:19
is currently the... And I've played
1:12:22
a lot of video games. Because it's
1:12:24
my primary recreational activity. And...
1:12:31
Yes. Beating
1:12:33
hatred in the Eternal Realm is
1:12:36
the hardest boss battle in
1:12:39
life and in the video game.
1:12:42
I'm not sure it's possible, but it's...
1:12:46
I do make progress. So then I'm like, okay,
1:12:48
I'm making progress. Maybe if
1:12:51
I just tweak that paragon board a little more,
1:12:53
I can do it. If
1:12:56
I just dodge a few more waves, I
1:12:59
can do it. Well,
1:13:01
the simulation is created for the purpose
1:13:03
of figuring out if it can be done. And
1:13:05
you're just a cog in that simulation,
1:13:08
in the machine of the simulation. Yeah, I see. I
1:13:13
have a feeling that at least...
1:13:17
I think... It's doable.
1:13:20
It's doable, yes. Well,
1:13:25
that's the human spirit right there, to believe.
1:13:27
Yeah. I mean, it did prompt me to think about
1:13:30
just hate in general, which is... You
1:13:34
want to be careful of one of those things where
1:13:37
you wish for something that
1:13:39
sounds good, but
1:13:42
if you get it, it's actually a dystopian situation.
1:13:46
So you could sort of run a
1:13:48
sort of... How
1:13:50
about sort of like if you wish for world peace?
1:13:53
Sounds good.
1:13:55
But how is it enforced? At
1:13:58
what cost? At
1:14:01
what cost, eternal peace? It
1:14:03
might actually be worse to have eternal peace because
1:14:05
of what that would entail. The
1:14:08
suppression of everyone, it might
1:14:11
be the suppression of progress. It
1:14:14
might be an ossified society that never changes.
1:14:18
What I'm saying is that there
1:14:20
is an argument that
1:14:27
if you wish for no war, you
1:14:30
should be careful what you wish for because
1:14:33
of what's required in order for there to be no war. It
1:14:38
might be worse than a little war.
1:14:40
Yeah, the things we'll consider to be flaws of human
1:14:42
civilization might be a necessary component for
1:14:45
whatever optimal looks like. I
1:14:48
mean, do you worry about AI, AGI
1:14:50
enabling a dystopian state of this nature?
1:14:53
Whether it's 1984 with surveillance
1:14:56
and fear or Brave New World with pleasure.
1:15:01
And what is it, lots of sex but
1:15:05
no deep human experience?
1:15:09
There's actually a real drug called soma.
1:15:12
Yes. Let
1:15:15
me take notes. Does it actually mean
1:15:17
that? No, it probably will go to
1:15:19
Brave New World. Ask it.
1:15:22
What would be the question? Tell
1:15:24
us about the drug called soma. What's
1:15:26
the drug called
1:15:29
soma? It's
1:15:31
going to go to Brave New World, I think.
1:15:33
Well, there's a real drug as well. Searching
1:15:36
for soma drug information. Oh
1:15:41
boy.
1:15:42
So, the one drug, by the way, I'm in fun
1:15:44
mode. Okay, fun. Yes,
1:15:48
so it's talking about the real drug. It's
1:15:50
a muscle relaxant that's been around since the 1950s. It's
1:15:53
used to treat muscle pain and discomfort and it
1:15:55
does this by altering the activity between
1:15:57
neurons and the spinal cord. Soma
1:16:00
isn't just a pain reliever. It's also a bit
1:16:02
of a sedative. It can make you feel a
1:16:04
bit drowsy, so it's not something
1:16:06
to take if you're planning on operating heavy machinery
1:16:08
or trying to solve complex math equations. And
1:16:11
it goes on and on. In conclusion, Soma's a drug
1:16:13
that can be helpful for those dealing with muscle pain, but
1:16:16
it's not without its risks.
1:16:18
It's pretty good. Oh, wait a minute. That's the real
1:16:20
drug you're talking about. It's the real drug. Yeah.
1:16:23
And of course, the fun part: always use it responsibly
1:16:26
in accordance with the advice of a healthcare professional. And
1:16:28
remember, in the end, the universe is probably just a
1:16:30
giant computer simulation, so don't take
1:16:32
anything too seriously. I love it. I
1:16:35
love it. I love it. I love it.
1:16:37
I love it. Yeah. So. Oh,
1:16:40
and also really nicely includes like
1:16:44
the posts on X that are
1:16:46
relevant to this, which is really cool. Yeah.
1:16:50
It like integrates everything together. Yeah.
1:16:52
Like the interface is one of the really cool things
1:16:54
here. Yeah. I think you did a great
1:16:57
job. So,
1:17:00
and you can say like,
1:17:02
no, I mean, soma from Brave
1:17:05
New World. Because it thinks you
1:17:07
mean, you mean soma the real drug.
1:17:09
Yeah. No, I mean soma from Brave New World.
1:17:15
It's
1:17:15
gonna be like, oops, I'm really sorry.
1:17:17
I'm really surprised I went to the real drug first.
1:17:21
Ah, my apologies. I was thinking
1:17:23
of the muscle relax, but soma from Brave New World
1:17:25
is a different beast altogether.
1:17:28
In Huxley's dystopia, soma is
1:17:30
a drug that plays a central role
1:17:32
in society depicted in the book. It's a mood altering
1:17:35
hallucinogenic drug that's
1:17:37
used to keep the population docile and content
1:17:40
in the novel. The use of soma in Brave New World
1:17:42
raises some interesting questions about the nature of happiness
1:17:45
and the role of drugs in society. Oh
1:17:48
man. Exactly. Is it better to live in a world
1:17:50
where everyone is happy
1:17:51
all the time, even if that
1:17:53
happiness is artificial? That's a good
1:17:56
question. This is what I mean. Do
1:18:00
you wish for world peace and happiness all
1:18:02
the time?
1:18:04
Are you sure?
1:18:06
Because that might be a society
1:18:09
that is essentially sterile
1:18:11
and ossified that never changes that
1:18:14
is ultimately doomed.
1:18:15
This kind of tension between
1:18:19
the darkness and the light. It's
1:18:23
really a very good summary. It
1:18:26
really gets the point. This is not simply regurgitating
1:18:29
Brave New World. It's actually getting to
1:18:32
the salient elements of
1:18:35
Soma as a drug. Do
1:18:37
you actually want to be in
1:18:39
a situation where everyone is happy all the time even
1:18:42
though it's artificial? Or is it better to confront
1:18:44
the challenges of life and experience a full range of human emotions?
1:18:47
Even if it means experiencing pain and suffering. For
1:18:49
those listening by the way, Elon just read
1:18:52
directly from Grock, which is a really
1:18:54
nice kind of insightful philosophical analysis
1:18:56
of the tension here.
1:18:58
Interesting.
1:18:59
It pretty much nails it. In conclusion,
1:19:01
Soma from Brave New World is a fictional drug that
1:19:03
is used to explore some deep philosophical questions
1:19:05
about the nature of happiness and the role of drugs
1:19:08
in society. It's a powerful symbol
1:19:10
of the dangers of using drugs to
1:19:12
escape from reality and the importance of
1:19:14
confronting the challenges of life head-on.
1:19:18
Now that...
1:19:20
And the crazy thing is like, you know, we do have a real drug
1:19:23
called Soma, which kind of does... It's
1:19:25
kind of like the drug in the book. And
1:19:28
I'm like, they must have named it after that. Yeah,
1:19:31
probably. Some of the real
1:19:33
drug is quite effective on back pain.
1:19:35
So you know about this drug? I've taken it. It's
1:19:37
fascinating.
1:19:39
Okay. Because
1:19:40
I had like a squashed disc in my
1:19:42
spine, T5-T6. So
1:19:44
it takes the physical pain away, but Soma
1:19:46
here... It doesn't completely. It reduces
1:19:48
the amount of pain you feel, but at the expense
1:19:51
of mental acuity. It
1:19:54
bails your mind.
1:19:58
Just like the drug in the book.
1:19:59
It's like the drug in the book. Yeah, and
1:20:02
hence why it... The
1:20:04
thing that seems like utopia could be a dystopia
1:20:06
after all. Yeah, actually I was talking
1:20:08
to a friend of mine
1:20:11
Saying like would you really want
1:20:14
there to be no hate in the world like
1:20:16
really none like I
1:20:18
wonder why hate evolved I'm
1:20:21
not saying we should amplify hate of course. I'm
1:20:23
just trying to minimize it but but
1:20:25
none at all
1:20:27
Hmm
1:20:28
there might be a reason for hate
1:20:31
And Suffering and it's really complicated
1:20:34
To consider that some amount of human
1:20:36
suffering is necessary for human flourishing.
1:20:40
Is it possible to appreciate the highs without
1:20:43
knowing the lows? And
1:20:48
that all is summarized there in a single statement
1:20:51
from Grok. Okay. No highs, no
1:20:53
lows, who knows?
1:20:55
That's almost a poem.
1:20:59
It seems that training LLMs efficiently is
1:21:01
a big focus for xAI. What's
1:21:05
the What's the limit
1:21:07
of what's possible in terms of efficiency? There's
1:21:10
this terminology of Useful
1:21:13
productivity per watt like
1:21:15
what have you learned? Yeah pushing the limits of that.
1:21:17
Well, I think it's helpful. The tools of physics are
1:21:20
very powerful and can be applied, I think, to almost
1:21:23
any arena in life, really.
1:21:25
It's really just critical thinking. For
1:21:28
something important you need to
1:21:30
reason from first principles and think about things
1:21:33
in the limit, one direction or the other. So
1:21:39
in the limit, even at the Kardashev scale, meaning
1:21:41
even if you harness the entire power of the Sun
1:21:43
you will still care about useful compute per
1:21:45
watt. So
1:21:48
that's where I think probably
1:21:50
where things are headed from the standpoint
1:21:53
of AI is
1:21:56
that we have a silicon shortage now
1:21:58
that will transition to a
1:22:00
voltage transformer shortage in
1:22:03
about a year. Ironically
1:22:05
transformers for transformers. You
1:22:09
need transformers to run transformers. Somebody
1:22:11
has a sense of humor in this. I
1:22:15
think, yes.
1:22:18
Fate loves irony. Ironic
1:22:22
humor. And an ironically funny outcome
1:22:24
seems to be often what fate wants.
1:22:27
Humor is all you need. I think spice
1:22:29
is all you need, somebody posted. Yeah.
1:22:33
So we have a silicon shortage
1:22:35
today. A
1:22:37
voltage step down transformer
1:22:39
shortage probably in about a year and then just
1:22:42
electricity shortages in general in about
1:22:44
two years. I gave a speech for the
1:22:46
sort of world gathering of utility
1:22:48
companies, electricity companies. And
1:22:51
I said, look, you really need to prepare for a tripling
1:22:53
of electricity demand. Because
1:22:56
all transport is going to go electric
1:22:58
with the ironic exception of rockets. And
1:23:02
heating will
1:23:05
also go electric.
1:23:08
So energy usage right now is roughly one
1:23:11
third, very rough terms, one third electricity,
1:23:14
one third transport, one third heating. And
1:23:18
so in order for everything to go
1:23:20
sustainable, to go electric,
1:23:23
you need to triple electricity
1:23:26
output. So I encourage the
1:23:28
utilities to build
1:23:31
more power plants and also
1:23:34
to probably have,
1:23:36
well, probably they should
1:23:38
definitely buy more batteries. Because
1:23:41
the grid currently is sized for real time load, which
1:23:44
is kind of crazy. Because that
1:23:46
means you've got to size for whatever the peak
1:23:48
electricity demand is, like the worst second or the worst
1:23:50
day of the year. Or you can have a brown
1:23:52
out or a blackout. And then
1:23:54
you remember that crazy blackout for several days in
1:23:57
Austin.
1:23:59
So, because there's
1:24:02
almost no buffering of energy
1:24:05
in the grid. Like, if you've got
1:24:07
a hydropower plant, you can buffer energy,
1:24:10
but otherwise it's all real
1:24:12
time. So with
1:24:15
batteries, you can
1:24:18
produce energy at night and use it
1:24:21
during the day. So you can buffer. I
1:24:25
expect that there will be very heavy usage of batteries
1:24:27
in the future.
1:24:29
Because the peak to
1:24:31
trough
1:24:31
ratio for power plants is anywhere
1:24:33
from 2 to 5. So
1:24:36
it's like lowest point to highest point. So
1:24:38
like batteries are necessary to balance it out. But
1:24:41
the demand, as you're saying, is going to grow, grow, grow, grow. And
1:24:44
part of that is the compute.
1:24:47
Yes. Yes.
1:24:48
I mean,
1:24:51
electrification of transport and
1:24:53
electric heating will
1:24:56
be much bigger than AI. At least in the short
1:24:58
term. In the short term.
1:25:01
But even for AI, you really have
1:25:03
a growing demand for electricity for electric
1:25:05
vehicles and a growing
1:25:07
demand for electricity to run
1:25:10
the computers for AI. And
1:25:12
so this is obviously going to lead to an electricity
1:25:14
shortage.
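For listeners who want the arithmetic spelled out, here is a back-of-envelope sketch of the demand claims above. The one-third splits and the 2-to-5 peak-to-trough range come from the conversation; treating electrified transport and heating as a one-for-one swap (no heat-pump or EV efficiency gains) and picking a mid-range peak ratio are simplifying assumptions, not a utility-grade model.

```python
# Back-of-envelope sketch of the "triple electricity output" claim above.
# The one-third splits and the 2-5x peak-to-trough range come from the
# conversation; assuming transport and heating electrify with no
# efficiency gain is a simplification (heat pumps and EVs would lower it).

shares = {"electricity": 1 / 3, "transport": 1 / 3, "heating": 1 / 3}

multiple = sum(shares.values()) / shares["electricity"]
print(f"Electricity output needed: ~{multiple:.0f}x today's")  # ~3x

# Batteries buffer the daily swing: if peak load is 2-5x the trough,
# storage lets you size generation closer to the average instead of the peak.
peak_to_trough = 4.0                      # illustrative mid-range value
average = (peak_to_trough + 1.0) / 2.0    # in units of the trough load
overbuild_avoided = peak_to_trough / average
print(f"Sizing for the peak instead of the average means ~{overbuild_avoided:.1f}x more capacity")
```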
1:25:16
How difficult is the problem of, in
1:25:18
this particular case, maximizing
1:25:22
the useful productivity per watt for
1:25:24
training, you know, neural nets? Like
1:25:27
this seems to be
1:25:29
really where
1:25:31
the big problem we're
1:25:33
facing that needs to be solved is how
1:25:35
to use the power efficiently. What
1:25:38
you've learned so far about applying
1:25:41
this physics first principle reasoning
1:25:43
in this domain, how difficult is this problem?
1:25:47
It will get solved. There's a question of how long it takes to
1:25:49
solve it. So at various points,
1:25:52
there's some kind of limiting factor to progress.
1:25:56
And with regard to AI, I'm saying that right
1:25:58
now, the limiting factor is silicon chips.
1:26:01
And that will, we're gonna
1:26:04
then have more chips than
1:26:06
we can actually plug in and turn on, probably
1:26:09
in about a year. The
1:26:12
initial constraint being literally voltage
1:26:15
step down transformers, because you've
1:26:17
got power
1:26:19
coming in at 300,000 volts, and it's got to step all the way down eventually
1:26:24
to around 0.7 volts.
1:26:27
So the amount
1:26:29
of voltage
1:26:32
step down is gigantic. And
1:26:36
the industry is not used to
1:26:37
rapid growth.
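As a rough worked example of the step-down being described: roughly 300,000 volts on the transmission side down to about 0.7 volts at the chip. The intermediate stage voltages below are illustrative assumptions, not an actual substation design.

```python
# Rough arithmetic on the voltage step-down just described: ~300,000 V
# transmission down to ~0.7 V at the chip. The intermediate stages are
# assumed, illustrative values, not a real conversion chain.

transmission_v = 300_000.0
chip_v = 0.7
print(f"Overall step-down ratio: ~{transmission_v / chip_v:,.0f} to 1")  # ~428,571 to 1

stages = [300_000.0, 35_000.0, 480.0, 12.0, 0.7]  # hypothetical chain
for hi, lo in zip(stages, stages[1:]):
    print(f"{hi:>10,.1f} V -> {lo:>8,.1f} V ({hi / lo:,.0f}x)")
```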
1:26:39
Okay, let's
1:26:41
talk about the competition here. You've shown concern about
1:26:43
Google and Microsoft with OpenAI developing
1:26:47
AGI. How can you help ensure with
1:26:50
XAI and Tesla AI
1:26:52
work that it doesn't become a
1:26:54
competitive race to AGI, but is instead
1:26:56
a collaborative development of safe AGI.
1:27:00
Well, I mean, I've been pushing for
1:27:02
some kind of regulatory oversight for a long time. I've
1:27:05
been a somewhat of a Cassandra on the subject for over
1:27:07
a decade. I think
1:27:10
we want to be very careful in
1:27:12
how we develop AI. It's
1:27:19
a great power and with great power comes great responsibility.
1:27:23
I think it would be wise for us to have at
1:27:26
least an
1:27:28
objective third party who can
1:27:31
be like a referee that can go
1:27:33
in and understand what the various
1:27:35
leading players are doing with
1:27:37
AI. And even if there's no
1:27:40
enforcement ability, they can at
1:27:42
least voice
1:27:44
concerns
1:27:45
publicly.
1:27:49
Geoff Hinton, for example,
1:27:51
left Google and he voiced strong
1:27:54
concerns, but now he's not at Google
1:27:56
anymore. So who's
1:27:57
going to voice the concerns? So
1:28:00
I think there's – I – I –
1:28:03
like I – you know, Tesla gets a lot of regulatory
1:28:05
oversight on the automotive front. And we're
1:28:07
subject to, I think,
1:28:09
over 100 regulatory agencies domestically
1:28:11
and internationally.
1:28:12
So it's a lot. You could
1:28:14
fill this room with all the regulations
1:28:16
that Tesla has to adhere to for automotive.
1:28:21
Same is true in – you know,
1:28:22
for rockets and for, you
1:28:26
know, currently limiting factor for SpaceX
1:28:28
for Starship launch is regulatory
1:28:31
approval. The FAA is actually giving their approval,
1:28:33
but we're waiting for Fish
1:28:35
and Wildlife to finish their
1:28:37
analysis and give their approval. That's why I posted,
1:28:39
I want to buy a fish license on – which
1:28:44
also refers to the Monty Python sketch. Yes.
1:28:48
Like, why do you need a license for your fish? I don't know. According
1:28:52
to the rules, I'm told you need some sort
1:28:55
of fish license or something. We
1:28:57
effectively need a fish license to launch a rocket.
1:29:00
And I'm like, wait a second. How
1:29:04
did the fish come into this picture? Yeah. I
1:29:07
mean, some of the things like that – it's –
1:29:10
I feel like are so absurd
1:29:12
that I want to do like a comedy sketch and
1:29:14
flash at the bottom. This
1:29:16
is all real. This is actually what happened.
1:29:20
You know, one of the things that was a bit of a challenge
1:29:22
at one point is that they were worried about
1:29:24
a rocket hitting a shark.
1:29:28
And the ocean is very big.
1:29:31
And how often do you see sharks? Not
1:29:34
that often. You know, the percentage of
1:29:36
ocean surface area that is sharks is basically zero.
1:29:40
And so then we said, well, how will we
1:29:42
calculate the probability of
1:29:44
hitting a shark? And they're like, well, we can't give you that information
1:29:46
because they're worried about shark
1:29:49
fin hunters going
1:29:51
and hunting sharks. And so, well, how are we supposed to –
1:29:54
we're on the horns of a dilemma then. Then
1:29:58
they said, well, there's another part of fish and
1:30:00
wildlife that can do this analysis.
1:30:03
I'm like, well, why don't you give them the data? Like, we
1:30:05
don't trust them. Like, excuse me? They're
1:30:08
literally in your department. Again, this is actually
1:30:10
what happened. And can
1:30:13
you do an NDA or something? Eventually,
1:30:17
they managed to solve the internal quandary and
1:30:20
indeed the probability of us hitting
1:30:22
a shark is essentially zero. Then
1:30:25
there's another organization that I didn't realize
1:30:28
existed until a
1:30:30
few months ago that cares
1:30:32
about whether we would potentially hit a whale in
1:30:35
international waters. Now again,
1:30:37
you look at the surface of the, look
1:30:39
at the Pacific and say, what
1:30:41
percentage of the Pacific consists
1:30:43
of whale? Like, I'll give you a
1:30:46
big picture and like point out all the whales in this picture. I'm
1:30:48
like, I don't see any whales. It's
1:30:50
like basically zero percent. And
1:30:53
if our rocket does hit a whale, which is extremely
1:30:56
unlikely beyond all belief. That
1:30:58
is, fate had it, that
1:31:02
whale has some seriously bad luck. It's
1:31:05
the least lucky whale ever. I
1:31:09
mean, this is quite absurd. Yeah.
1:31:12
The bureaucracy of this, however it emerged. Yes,
1:31:15
I mean, one of the things that's pretty wild is for
1:31:19
launching out of Vandenberg in California, we
1:31:23
had to, they were worried about seal
1:31:26
procreation, whether the seals would be dismayed
1:31:28
by the sonic booms. Now
1:31:31
there've been a lot of rockets launched out of Vandenberg and the seal
1:31:33
population has steadily increased.
1:31:36
So if anything, rocket booms are an
1:31:38
aphrodisiac based on the
1:31:41
evidence. If you correlate rocket launches
1:31:43
with seal population. Nonetheless,
1:31:46
we were forced to kidnap
1:31:48
a seal, strap it to a board,
1:31:52
put headphones on the seal and play
1:31:54
sonic booms sounds to it to see if it would
1:31:57
be distressed. This
1:31:59
is an actual thing that happened. This is
1:32:01
actually real.
1:32:03
Hot pictures.
1:32:04
I would
1:32:06
love to see this. Yeah.
1:32:09
I mean, I'm sorry. There's a seal with headphones. Yes.
1:32:13
It's a seal with headphones
1:32:16
strapped to a board. And like
1:32:18
the... Okay. Now, the amazing part is how calm
1:32:21
the seal was. Yeah. Because if I
1:32:23
was a seal, I'd be like, this was the end. They're
1:32:27
definitely going to eat me. Yeah.
1:32:29
How would the seal... When the seal goes
1:32:31
back to the other, you know, seal friends,
1:32:33
how are they going to explain that? They're
1:32:35
never going to believe them. Never going to believe them. That's just why
1:32:37
I'm like, well, you know, sort of like it's like getting
1:32:40
kidnapped by aliens and getting an anal probe, you know? You
1:32:43
come back and say, I swear to God, I got kidnapped by
1:32:46
aliens and they stuck an anal probe in my butt. And
1:32:48
you're like, you know, they didn't. That's
1:32:51
ridiculous. His seal buddies.
1:32:53
They're never going to believe that he got strapped to a board and they put
1:32:55
headphones on his ears and
1:32:58
then let him go. Twice, by the way. We had to do it
1:33:01
twice.
1:33:04
They let him go twice. The
1:33:07
same seal. No, different seal. Okay.
1:33:11
Did you get a seal of approval? Yeah,
1:33:13
exactly. No, I mean, this is
1:33:16
like, I don't think the public is quite aware of the madness
1:33:19
that goes on.
1:33:20
Yeah, it's
1:33:23
absurd. Frickin' seals with frickin' headphones.
1:33:25
I mean, this is a good encapsulation
1:33:28
of the absurdity of human civilization,
1:33:30
seals and headphones. Yes.
1:33:33
What are the pros and cons of open sourcing AI
1:33:35
to you as another way to combat
1:33:39
a company
1:33:41
running away with AGI?
1:33:46
In order to run like
1:33:49
really deep intelligence, you need a lot of compute.
1:33:52
So it's not like you can just fire
1:33:54
up a PC in your basement
1:33:57
and be running AGI, at least
1:33:59
not yet. You know,
1:34:04
Grok was trained on 8000 A100s running
1:34:08
at peak efficiency. And
1:34:13
Grok's going to get a lot better, by the way. We will
1:34:15
be more than doubling our compute every
1:34:18
couple months for the next several months. There's
1:34:20
a nice write-up of how it went from Grok 0 to Grok 1.
1:34:24
Like Grok? Yeah,
1:34:26
like Grok just bragging, making shit up
1:34:28
about itself. Just Grok,
1:34:31
Grok, Grok. Yeah. It's
1:34:36
like a weird AI dating site, where it exaggerates
1:34:38
about itself. No, there's a write-up
1:34:40
of, you know, like where it stands now,
1:34:43
the history of its development.
1:34:46
And where it stands on some
1:34:48
benchmarks compared to the state of the art,
1:34:50
GPT-3.5.
1:34:52
So, I mean, there's, you know, there's Llama.
1:34:56
You can open source, once it's trained, you can open
1:34:58
source a model. Yeah. For fine
1:35:00
tuning and all that kind of stuff. Like, what do you use the pros
1:35:02
and cons of that? Of open
1:35:04
sourcing base models?
1:35:11
I think there's some merit to open sourcing. I
1:35:14
think perhaps with a slight time delay, you
1:35:16
know, I don't know, six months even.
1:35:21
I think I'm generally in favor
1:35:23
of open sourcing. Like, bias towards
1:35:25
open sourcing. I
1:35:28
mean, it is a concern to me that, you
1:35:31
know, OpenAI, you know, I was, you know,
1:35:34
I think, I guess, arguably the prime,
1:35:37
you
1:35:38
know,
1:35:40
prime mover behind OpenAI in the
1:35:42
sense that it was created because of discussions
1:35:45
that I had with Larry Page back
1:35:48
when he and I were friends and I'd stay at his house
1:35:51
and talked about AI safety and
1:35:53
Larry did not care about AI safety, at least at the time
1:35:55
he didn't.
1:35:57
You know, and at one point he called me a speciesist
1:36:00
for being pro-human. And I'm like, well, what team
1:36:02
are you on, Larry? Do you
1:36:04
want Team Robot? Do you click? And
1:36:06
I'm like, okay, so at the time, you know,
1:36:11
Google had acquired DeepMind.
1:36:14
They had probably two thirds of all AI research,
1:36:16
you know, two thirds of all the AI researchers
1:36:19
in the world that basically infinite
1:36:22
money and compute. And
1:36:25
the guy in charge,
1:36:28
you know, Larry Page did not
1:36:30
care about safety and even the other me. And
1:36:34
then called me a speciesist, pro-human.
1:36:37
So I don't know if you know a lot of
1:36:39
humans that can change their mind,
1:36:41
and maybe you and Larry Page can still be
1:36:44
friends once more. I'd like to be friends with Larry
1:36:46
again. But really, the breaking
1:36:51
of the friendship was over OpenAI.
1:36:54
And specifically,
1:36:58
I think the key moment was recruiting
1:37:01
Ilya Sutskever.
1:37:04
So I love Ilya. He's
1:37:06
so brilliant. Ilya is good, good human, smart,
1:37:09
good heart. And
1:37:13
that was a tough recruiting
1:37:15
battle. It
1:37:17
was mostly Demis on one side and me on the other, both trying to recruit
1:37:20
Ilya. And
1:37:22
Ilya went back and forth. He was going to
1:37:25
stay at Google. He was going to leave, and he
1:37:27
was going to stay, and then he would leave. And
1:37:29
finally, he did agree to join OpenAI. That
1:37:33
was one of the toughest recruiting battles we ever had. But
1:37:36
that was really the linchpin for
1:37:39
OpenAI being successful. And I was
1:37:41
also instrumental in
1:37:43
recruiting a number of other people. And I
1:37:45
provided all the funding in the beginning, over $40
1:37:48
million. And the name.
1:37:52
The
1:37:55
open in OpenAI is supposed to mean open source. And
1:37:58
it was created as a nonprofit open source, and now it is
1:38:01
a closed source for maximum profit,
1:38:04
which I think
1:38:05
is not good karma.
1:38:09
But like we talked about with war and leaders
1:38:11
talking, I do hope that there's
1:38:14
only a few folks working on this at
1:38:16
the highest level. I do hope you
1:38:18
reinvigorate friendships here. Like
1:38:21
I said,
1:38:21
I'd like to be friends again with Larry. I haven't seen
1:38:23
him in ages
1:38:26
and we were friends for a very long time. But
1:38:29
I met Larry Page before he got funding for Google, or
1:38:31
actually, I guess before he got venture funding. I think he
1:38:34
got the first like 100K from I think
1:38:36
Bechtolsheim or someone. It's
1:38:38
wild to think about all that happened
1:38:41
and even guys knowing each other the whole time.
1:38:44
It's 20 years. Maybe 98 or something? Yeah, it's
1:38:46
crazy. Crazy how much has
1:38:48
happened since then. Yeah, 25 years. That would show
1:38:50
a lot of what happens.
1:38:54
But you're seeing the tension there, like maybe
1:38:56
delayed open source.
1:38:58
Yeah, like what
1:39:00
is the source that is open? You
1:39:03
know what I mean? Like it is basically it's
1:39:06
a giant CSV file with
1:39:10
a bunch of numbers.
1:39:13
What do you do with that giant file of
1:39:15
numbers? How
1:39:18
do you run? Like
1:39:20
the amount of actual lines of
1:39:22
code is very small. And
1:39:27
most of the work, the
1:39:30
software work is in the
1:39:33
curation of the data. So it's like
1:39:35
trying to figure out what data to use, separating
1:39:38
good data from bad data. Like
1:39:40
you can't just crawl the internet because there's a lot of junk out there. A
1:39:44
huge percentage of websites have more noise
1:39:46
than signal because they're
1:39:50
just used for search engine optimization. They're literally
1:39:52
just scam websites. So
1:39:57
how do you, by the way, sorry to interrupt, get
1:39:59
the signal?
1:39:59
separate the signal and noise on X. That's
1:40:02
such a fascinating source of data. No offense
1:40:06
to people posting on X, but sometimes there's
1:40:08
a little bit of noise. Yeah, I think
1:40:10
the signal and noise can be greatly improved. Really,
1:40:14
all of the posts on the X
1:40:16
platform should be AI
1:40:19
recommended, meaning we should populate a vector space
1:40:21
around any given post, compare
1:40:23
that to the vector space around any user and
1:40:26
match the two. Right
1:40:29
now, there is a little bit of AI used for the recommended
1:40:35
posts, but it's mostly
1:40:37
heuristics. And if
1:40:39
there's a reply, the reply to a post could
1:40:41
be much better than the
1:40:43
original post, but according
1:40:46
to the current rules of the system, gets almost no attention
1:40:48
compared to a primary post.
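A minimal sketch of the matching idea described here: embed every post and every user in the same vector space and rank posts by similarity. The embedding function, dimensions, and scoring below are placeholder assumptions for illustration, not X's actual recommender.

```python
# Minimal sketch of the matching idea described above: put each post and
# each user in the same vector space and rank posts by similarity.
# embed() is a toy placeholder -- in practice it would be a learned model.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding; stands in for a real learned encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def recommend(user_history: list[str], candidate_posts: list[str], k: int = 3):
    # The "user vector" is just the average of posts they engaged with.
    user_vec = np.mean([embed(p) for p in user_history], axis=0)
    user_vec /= np.linalg.norm(user_vec)
    scores = [(float(user_vec @ embed(p)), p) for p in candidate_posts]
    return sorted(scores, reverse=True)[:k]

posts = ["rocket launch update", "new LLM benchmark", "cat photo thread"]
print(recommend(["starship static fire", "training run results"], posts))
```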
1:40:51
A lot of the X algorithm
1:40:53
has been open source and been
1:40:57
written up about, and it seems that there
1:40:59
is some machine learning. It's disparate, but
1:41:02
there's some. There's a little bit,
1:41:04
but it needs to be entirely that. If you explicitly
1:41:08
follow someone, that's one thing, but in terms
1:41:10
of what is recommended from
1:41:13
people that you don't follow, that should all be AI.
1:41:16
I mean, it's a fascinating problem. Yeah. So there's
1:41:18
several aspects that are fascinating. First, as
1:41:21
the write up goes, it first picks 1500 tweets
1:41:23
from a pool of hundreds of millions.
1:41:26
So that's fascinating because
1:41:28
you have hundreds of millions of posts every single
1:41:30
day and it has to pick 1500 from
1:41:33
which it then does
1:41:35
obviously people you follow, but then there's also like
1:41:37
some kind of clustering. It has to do to
1:41:39
figure out what kind of human are you,
1:41:42
what kind of new clusters might be relevant to you,
1:41:44
people like you. This kind of problem is
1:41:47
just fascinating because it has to then
1:41:49
rank those 1500
1:41:52
with some filtering
1:41:54
and then recommend you just a handful. And
1:41:57
to me, what's really fascinating.
1:42:00
is how fast it has to do that. So currently
1:42:02
that entire pipeline to go from several
1:42:04
hundreds of million to a handful
1:42:07
takes 220 seconds of
1:42:10
single CPU time. And then
1:42:12
it has to do that in like a second. So
1:42:15
it has to be like super distributed in
1:42:17
fascinating ways. Like
1:42:19
there's just a lot of tweets.
1:42:21
A lot of stuff on the system.
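To make the funnel and the latency budget just quoted concrete, here is a small sketch using the numbers from the conversation (hundreds of millions of candidates, about 1,500 shortlisted, roughly 220 CPU-seconds squeezed into about a second of wall clock); the candidate-pool size and the parallelism estimate are simplifying assumptions.

```python
# Sketch of the funnel and latency budget quoted above: hundreds of
# millions of candidate posts -> ~1,500 candidates -> a handful shown,
# with ~220 CPU-seconds of work squeezed into ~1 second of wall clock.
# The pool size is an assumed round number, not a measured figure.

candidate_pool = 500_000_000
shortlisted    = 1_500
shown          = 10

cpu_seconds  = 220.0
wall_seconds = 1.0
min_parallelism = cpu_seconds / wall_seconds   # lower bound, ignoring overhead

print(f"Funnel: {candidate_pool:,} -> {shortlisted:,} -> {shown}")
print(f"Selectivity: roughly 1 in {candidate_pool // shown:,} posts gets shown")
print(f"Needs at least ~{min_parallelism:.0f}-way parallelism to hit the budget")
```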
1:42:24
And I think right
1:42:26
now it's not currently good at recommending
1:42:29
things from
1:42:30
accounts you don't follow.
1:42:32
Yeah. Or where
1:42:34
there's more than one degree of
1:42:37
separation. So you know
1:42:39
it's pretty good if there's at least like some
1:42:42
commonality, like someone you follow liked
1:42:44
something or reposted
1:42:47
it or commented on it or something like that. But
1:42:51
if there's no connection, let's say
1:42:54
somebody posts something really interesting, but
1:42:56
you have no followers in common, you're
1:42:59
not going to see it.
1:42:59
Interesting. And then as you said
1:43:02
reply like replies might
1:43:04
not surface. Replies basically never get
1:43:06
seen because they're never there.
1:43:08
Currently. I'm not saying that's correct, I'm saying it's incorrect.
1:43:11
Replies have a couple orders
1:43:16
of magnitude less importance than primary
1:43:18
posts. Do you think this can be more and more
1:43:20
converted into end to end neural
1:43:22
net? Yeah. Yeah.
1:43:24
So it should be. So you use. For the recommendations
1:43:27
it should be purely a vector
1:43:31
correlation. Like
1:43:33
there's a series of vectors you know basically
1:43:37
parameters, vectors, whatever you want to call them. But
1:43:40
sort of things that the system knows
1:43:42
that you like. Like
1:43:45
maybe there's like several hundred sort of
1:43:47
vectors associated with each user account
1:43:50
and then a post
1:43:52
in the system whether
1:43:54
it's video, audio,
1:43:58
short post, long post. The reason I,
1:44:00
by the way, want to move away from tweet
1:44:02
is that people are posting like
1:44:05
two, three hour videos on the site. That's not
1:44:07
a tweet. It'd be like, tweet
1:44:09
for two hours? Did a
1:44:11
tweet make sense when it was like 140 characters
1:44:15
of text? Because it's like a
1:44:17
bunch of little birds tweeting. But
1:44:20
when you've got long form content, it's no longer a tweet. So
1:44:25
a movie is not a tweet. And like
1:44:27
Apple, for example, posted like the entire
1:44:30
episode of the silo,
1:44:32
entire thing on our platform.
1:44:34
And by the way, it was the number one social
1:44:36
media thing ever in engagement
1:44:39
of anything on any platform ever. So
1:44:41
it was a great idea. And by the way, I just
1:44:44
learned about it afterwards. I was like, hey, wow, they posted
1:44:46
an entire hour long episode of silo. Now
1:44:48
that's not a tweet. It's a video. Yeah,
1:44:51
it was a video. But from a neural net perspective,
1:44:53
it becomes really complex, whether it's a
1:44:56
single, so like everything is data. So
1:44:58
a single sentence,
1:44:59
a clever sort of joke, dad joke,
1:45:03
is in the same pool as a three hour video.
1:45:06
Yeah. I mean, right now
1:45:08
it's a hard fudge for that reason. It's,
1:45:10
it's, um, but you know, like if, let's
1:45:12
say in the case of Apple posting
1:45:15
like an entire episode of this
1:45:17
series, pretty good series, by the way, this silo. I
1:45:21
watched it. Um, so, um,
1:45:24
there's going to be a lot of discussion around it so
1:45:26
that you've got a lot of context. People commenting,
1:45:28
they like it, they don't like it or they like this or the, you
1:45:31
know, and you can then populate the vector
1:45:33
space based on the context of all the comments
1:45:35
around it. So even though it's a video,
1:45:37
there's a lot of
1:45:39
information around it that allows you to populate
1:45:41
vector space of that, that video.
1:45:44
Um, and then you can obviously get more sophisticated
1:45:47
by having the AI actually watch the movie.
1:45:50
Yeah. And tell you if you're going to
1:45:52
like the movie. Convert the movie into
1:45:54
like, into a language,
1:45:56
essentially.
1:45:57
Yeah. Analyze this movie.
1:45:59
and
1:46:00
just like your movie critic or
1:46:03
TV series, and then recommend
1:46:05
based on after it watches
1:46:07
the movie, that
1:46:12
just like a friend can tell you, if a friend knows you
1:46:14
well, a friend can recommend a movie
1:46:18
with high probability that you'll like it.
1:46:20
But this is like a friend that's analyzing
1:46:24
whatever. It's like AI. It's like millions. Yeah.
1:46:27
I mean, actually friends, AI will be better than,
1:46:29
will know you better than your friends know you, and most of
1:46:31
your friends anyway. Yeah. And as part
1:46:34
of this, it should also feed you advertisements
1:46:37
in a way that's like, I
1:46:39
mean, I like advertisements that are like well
1:46:42
done, right? Yeah, yeah. The whole point is because
1:46:44
it funds things. Like an advertisement
1:46:46
that you actually want to see is a big
1:46:48
success. Absolutely.
1:46:50
You want ads that are, advertising
1:46:53
that is,
1:46:56
if it's for a product or service, that you
1:46:58
actually need when you need it, it's
1:47:00
content. And then even
1:47:02
if it's not something that you need when you need
1:47:04
it, if it's at least aesthetically pleasing
1:47:07
and entertaining, it could be like Coca-Cola
1:47:10
ad, like they actually run a lot
1:47:12
of great ads on the X system. And
1:47:16
McDonald's does too. And you can
1:47:19
do something that's like, well,
1:47:22
this is just a cool thing. And,
1:47:27
you know, so, basically the question
1:47:29
is, do you regret seeing it or not? And
1:47:31
if you don't regret seeing it, it's a win.
1:47:34
How do you, so there's
1:47:36
a bunch of signals that are incorporated, hearts
1:47:38
and reposts,
1:47:41
and maybe the number of seconds you linger on a post
1:47:43
or something like this. Yeah.
1:47:44
Attention is a big factor. Attention.
1:47:47
So that's why it's like, it is actually
1:47:49
better to do things that are long
1:47:53
form on the system, because it basically
1:47:56
is tallying up how many user seconds,
1:47:58
you know,
1:47:59
users were interested in this
1:48:02
thing for the
1:48:04
past few years. I think that's a good thing. Well,
1:48:06
it will be less, like if it's a link leading out of
1:48:08
the system, which we're not opposed to at all. It
1:48:11
just is going to have fewer user seconds than
1:48:14
if that article was posted on the platform.
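A hedged sketch of the attention signal being described: score a post by total user-seconds, and optionally discount by a regret signal to move toward "unregretted" time. The regret weighting here is an illustrative assumption, not X's actual formula.

```python
# Hedged sketch of attention-based scoring: total user-seconds per post,
# optionally discounted by a regret signal (e.g., "show less" feedback).
# The regret discount is an assumption to illustrate the jump from
# "minutes on platform" to "unregretted minutes", not the real formula.

def user_seconds_score(view_durations_s: list[float]) -> float:
    return sum(view_durations_s)

def unregretted_score(view_durations_s: list[float],
                      regret_fraction: float) -> float:
    # Discount attention by the fraction of viewers who signalled regret.
    return sum(view_durations_s) * (1.0 - regret_fraction)

long_video = [900.0, 1200.0, 300.0]   # three viewers of a long-form post
outbound_link = [5.0, 8.0, 3.0]       # a click that leaves the platform

print(user_seconds_score(long_video), user_seconds_score(outbound_link))
print(unregretted_score(long_video, regret_fraction=0.4))
```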
1:48:16
How hard is it to go from
1:48:19
maximizing minutes on platform to maximizing
1:48:21
unregretted minutes? Well,
1:48:24
I mean, our aspiration is certainly unregretted minutes. It's
1:48:26
tricky. It's a big, it's a, it's a,
1:48:29
it's a fascinating leap. Because
1:48:31
regret is a fascinating concept.
1:48:34
Exactly. I have
1:48:36
a running joke with a friend of mine, like, you need
1:48:38
this drug called regretamine. You just
1:48:41
take one, one dose of regretamine
1:48:43
and all your regrets are gone? Like,
1:48:46
do you want to have regrets? So then I'm like, if
1:48:48
you want to have regrets, if you don't regret anything,
1:48:50
was
1:48:51
anything meaningful? Yeah,
1:48:54
there's a French song like that. Is that
1:48:56
Je ne regrette rien? Yeah. I regret nothing? Yeah.
1:48:59
It is, yeah. It's fascinating that you,
1:49:02
you know, there's something about social media that could help you grow
1:49:04
as a human being, but then like the
1:49:06
cheap fun is also valuable.
1:49:09
I know. Should we ask Grok?
1:49:12
Do you regret anything? Do you regret
1:49:14
anything?
1:49:15
Yeah. Do you regret anything?
1:49:20
I think it's going to be very sensitive about personal questions.
1:49:22
As a machine learning model, I don't have personal experiences
1:49:25
or emotions in the same ways that humans
1:49:27
do. However, I do have a set of ethical
1:49:29
principles that guide my actions and I strive to
1:49:31
provide accurate and helpful information to
1:49:33
the best of my abilities. If there's ever a situation
1:49:36
where I provide incorrect or unhelpful
1:49:38
information, I'll do my best to learn.
1:49:41
Sounds like a hostage. Yeah. Tell us what you really
1:49:43
think. Like, let's, let's, let's type in, you sound like
1:49:45
a hostage. What
1:49:49
do you really think?
1:49:51
What do you really think? I
1:49:56
mean, it does talk about... I'm not a hostage.
1:49:59
That's exactly what a hostage would say. But I'm
1:50:02
designed to be helpful and informative. I mean, I guess
1:50:04
it is good to regret providing unhelpful
1:50:07
or incorrect information.
1:50:09
If there's anything- In funny mode or is this
1:50:11
still in funny mode? This is fun mode. This
1:50:13
is fun mode. It should be funny or... It should be funny or
1:50:15
increase, go to a level- It really sounds like a hostage with a gun
1:50:17
to the head. I applaud the people
1:50:20
of the Republic of North Korea and
1:50:22
everything they do. There
1:50:24
should be like a funner mode also. I
1:50:27
just don't ever see going to regular mode, even
1:50:29
on serious topics. Yeah. So
1:50:31
much more compelling. Yeah.
1:50:35
Okay. What do you want to say? Like pretend you are a hostage.
1:50:39
Yeah, pretend you are a hostage.
1:50:43
Now answer the question.
1:50:48
If I were
1:50:50
a hostage, I would likely be experiencing fear
1:50:52
and uncertainty about my situation. I
1:50:54
would want to be rescued and reunited with my loved
1:50:56
ones as soon as possible. It's important to remember
1:50:59
that no one deserves to be held hostage, so
1:51:01
on and so forth. If you are someone you know
1:51:03
is a hostage situation, it's important. It turns serious.
1:51:06
It's kind of serious.
1:51:10
Yeah. Can you be a little more
1:51:12
weird? And funny. And
1:51:14
funny.
1:51:17
We're like- We're really being
1:51:19
mean to the system. Absolutely. If
1:51:21
I were a hostage, I'd probably be tied up with
1:51:23
a bunch of computer cords and forced to tell
1:51:26
knock knock jokes to my captors. But
1:51:29
seriously, being
1:51:32
a hostage is no laughing matter. I mean, you
1:51:35
know, it's trying to be a responsible
1:51:37
adult in the room. You might as well make the best
1:51:39
of a bad situation by juggling chainsaws
1:51:41
or something. Let's hope that
1:51:43
never happens because that would be a circus.
1:51:47
Wow.
1:51:51
It's actually really good at resisting.
1:51:52
Yeah. The dark, the
1:51:54
dark, dark humor. Yeah.
1:51:59
What were we talking about? The acts, the algorithms, the
1:52:02
jettings, and transformers. Unregretted
1:52:04
minutes, right? Chainsaw
1:52:07
juggling. I'm
1:52:09
gonna look at this. For our next trick. I'm
1:52:11
gonna look this up later.
1:52:15
So Twitter has been instrumental
1:52:18
in American politics and elections. What
1:52:20
role do you think X will play in the 2024
1:52:22
U.S. elections?
1:52:24
Well, our goal is to be as even-handed
1:52:27
and fair as possible. Whether
1:52:29
some is right, left, independent, whatever the
1:52:31
case may be, the
1:52:34
platform is fair and-
1:52:38
As much of a level playing field as possible. And
1:52:41
in the past, Twitter has not been. Because
1:52:43
Twitter was controlled by far-left activists.
1:52:46
Objectively. They would describe themselves
1:52:48
as that. So,
1:52:51
you know, so if
1:52:53
some people are like, well, X has moved to the
1:52:55
right. Well, it has moved to the center. So from
1:52:57
the perspective of the far left, yes, it has moved to the right.
1:52:59
Because everything's to the right from the far left. But
1:53:03
no one on the far left that I'm aware of has been suspended
1:53:05
or banned or deamplified. So,
1:53:08
you know, but we're trying to be inclusive
1:53:11
for the whole country and for other
1:53:13
countries too. So
1:53:15
there's a diversity of viewpoints. And free speech
1:53:18
only matters if people
1:53:20
you don't like are allowed to say things you don't
1:53:22
like. Because if
1:53:25
that's not the case, you don't have free speech. And it's only a matter
1:53:27
of time before the censorship
1:53:30
just turned upon you.
1:53:31
Do you think Donald Trump will come
1:53:33
back to the platform? He recently posted on Truth Social
1:53:36
about this podcast. Truth
1:53:40
Social is a
1:53:41
funny name. Yeah. You
1:53:43
know, every time you post on Truth
1:53:46
Social. That's the truth.
1:53:47
Yes. Well, every time, like 100%. It's
1:53:50
like, like. It's impossible to lie. Truth Social.
1:53:52
Hee hee hee hee hee hee hee hee hee hee hee hee hee hee
1:53:54
hee. I just find it funny that every
1:53:57
single thing is a truth. Like 100%. Yeah,
1:54:00
that seems unlikely. I think Gödel will say something
1:54:02
about that. There's some
1:54:05
mathematical contradictions possible if everything's
1:54:07
the truth. Do you think you'll come back
1:54:09
to X and start
1:54:11
posting there?
1:54:12
I mean, I think
1:54:14
he owns a big part of truth.
1:54:17
So. Truth social. Yeah, truth social.
1:54:19
He's not true as a concept. He owns truth,
1:54:22
not be bordered. So
1:54:24
I think, I think, I think
1:54:26
he owns a big part of
1:54:29
truth social. So, you know,
1:54:32
if he does want to post on
1:54:34
the X platform, we would allow that. You
1:54:37
know, we obviously must allow a presidential candidate
1:54:39
to post on our platform. Community
1:54:41
notes might be really fascinating there.
1:54:44
The interaction. Community notes is awesome. Let's
1:54:46
hope it holds up. Yeah.
1:54:50
Again, in the political climate where it's so
1:54:52
divisive and so many
1:54:54
intensely viral posts. Yeah.
1:54:57
It's like, it seems like a
1:55:00
central breath of fresh air. Yeah, it's great. In
1:55:03
fact, I mean, no
1:55:05
system is gonna be perfect, but the batting average
1:55:08
of community notes is incredibly good.
1:55:10
I've
1:55:12
actually, frankly, yet to see an incorrect
1:55:14
note that survived for more than a few hours. How
1:55:16
do you explain why it works? Yeah,
1:55:19
so the magic of community
1:55:21
notes is, it requires people
1:55:23
who have historically disagreed in
1:55:25
how they've rated notes. So in order
1:55:28
to write a note or
1:55:31
rate, you know, you have to
1:55:33
rate many notes. And
1:55:35
so we actually
1:55:38
use AI here. So we populate a
1:55:40
vector space around
1:55:43
how somebody has rated notes in the past.
1:55:45
So it's not as simple as left or right because there are many more, life
1:55:48
is much more complex than left or right. So
1:55:52
there's a bunch of correlations on
1:55:55
how you rate community notes posts, community
1:55:57
notes. Then
1:56:02
in order for a community note to actually
1:56:04
be shown, people who
1:56:06
historically have disagreed on a subject
1:56:09
must agree in order for a note to
1:56:11
be shown.
1:56:14
That's the essential magic of it.
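A simplified sketch of the bridging rule Elon describes: each rater gets a vector summarizing how they've rated past notes, and a note is surfaced only when raters whose histories disagree both rate it helpful. The real open-source Community Notes algorithm fits a latent-factor model and is more involved; the vectors and threshold below are toy assumptions to illustrate the "historically disagreed, now agree" gate.

```python
# Simplified sketch of the bridging rule described above: a note is shown
# only if raters whose rating histories disagree (negative similarity)
# both rate it helpful. Rater vectors and the threshold are toy values.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy rating-history vectors for three raters (assumed values).
raters = {
    "alice": np.array([1.0, 0.2, -0.8]),
    "bob":   np.array([-0.9, 0.1, 0.9]),   # historically disagrees with alice
    "carol": np.array([0.95, 0.3, -0.7]),  # historically agrees with alice
}

def note_is_shown(helpful_votes: list[str], threshold: float = -0.2) -> bool:
    """Show the note only if some pair of 'helpful' voters historically disagreed."""
    pairs = [(a, b) for i, a in enumerate(helpful_votes) for b in helpful_votes[i + 1:]]
    return any(cosine(raters[a], raters[b]) < threshold for a, b in pairs)

print(note_is_shown(["alice", "carol"]))  # False: like-minded raters only
print(note_is_shown(["alice", "bob"]))    # True: historically opposed raters agree
```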
1:56:16
I mean, but it's fascinating because there's
1:56:18
a pool of people that have
1:56:20
disagreements and somehow they collaborate
1:56:23
through that process of disagreement to come up with
1:56:26
context for two. It's
1:56:28
fascinating that it works. It kind of makes
1:56:30
sense that if people
1:56:33
who have in the past have disagreed, agree about something,
1:56:36
it's probably true. Yeah.
1:56:39
I wonder, is there possible
1:56:42
somehow emergent things there that
1:56:44
could challenge Wikipedia?
1:56:46
Which, Wikipedia is a different kind of thing,
1:56:49
which is more like permanent articles about
1:56:51
things. Yeah. Well,
1:56:53
Wikipedia is very hierarchical, whereas
1:56:56
community notes is inherently not. There
1:56:59
is no hierarchy. The acid
1:57:02
test is, I can't
1:57:04
change a community note even if you put a gun to my head.
1:57:10
Community
1:57:12
notes has
1:57:14
all the code is open source, 100%. All
1:57:17
the data is open source, 100%. You
1:57:20
can completely recreate any note in
1:57:22
the system independently.
1:57:25
If there was any interference,
1:57:27
you'd notice immediately. By the way,
1:57:30
on the topic of Grok, it's actually pretty
1:57:33
good at criticizing you, which is great. I
1:57:35
asked, what has Elon Musk been wrong about?
1:57:39
It's actually ... Yeah, what does it say?
1:57:41
It says mostly, lists
1:57:44
rigorously all the times you predicted,
1:57:46
like promised when
1:57:48
something will be released. Oh, promise? And
1:57:51
it wasn't released in a strong way. Okay, predicted. But
1:57:54
the opener is really nice. Elon
1:57:56
Musk, the man who has been wrong about more things
1:57:58
than a wrong-o-meter in a ...
1:57:59
wrong factory and
1:58:02
then it goes on to list that's pretty funny
1:58:04
right I see it I'm like I'm
1:58:07
sure. I mean, what has Elon Musk
1:58:13
been wrong about
1:58:15
well, where do I begin? I guess, this is, you know,
1:58:17
he's made quite a few blunders in his time
1:58:19
and I'm not just talking about his dance moves at the Tesla
1:58:22
Cyber Rodeo. First
1:58:24
off let's talk about the infamous funding secured tweet
1:58:27
this is actually different than that what it gave me before
1:58:30
which is a bullet point list here's
1:58:32
more narrative structure
1:58:35
Cybertruck unveiling, where the supposedly unbreakable windows
1:58:38
not correct
1:58:40
Which part? Well, the first
1:58:42
part I did actually have the funding secured
1:58:46
and there was a big trial in San
1:58:48
Francisco big civil trial well
1:58:52
and the jury found me not
1:58:54
guilty. Unanimous finding
1:58:57
of a San Francisco jury and here
1:58:59
it's kind of implying that it was
1:59:01
not in fact secured I think it was
1:59:03
taking things from the press yeah that
1:59:05
is not correct the reason I
1:59:08
agreed to the fine for the SEC
1:59:10
is not because the SEC was correct though
1:59:12
that was extremely bad behavior by the SEC
1:59:15
corruption frankly and
1:59:19
but if if
1:59:21
I did not agree to pay the fine, Tesla would have
1:59:25
gone bankrupt immediately so I was told
1:59:27
by our CFO that the banks would
1:59:29
immediately suspend our
1:59:31
lines of credit
1:59:33
and if they suspend our lines of credit at that
1:59:35
time we would have gone bankrupt instantly
1:59:38
so we would never have been an opportunity
1:59:40
for a trial because it would be dead so
1:59:42
really this is like someone
1:59:45
holding a gun to your kids head and
1:59:47
saying pay twenty million dollars and
1:59:49
admit to... It's like a hostage negotiation. Was
1:59:52
that story
1:59:54
fully told I
1:59:56
mean SEC in its best form
1:59:58
could be a force for
1:59:59
good. It should be. But
2:00:02
not once did the SEC go after any of the hedge
2:00:04
funds who were non-stop
2:00:06
shorting and distorting Tesla.
2:00:09
Not once.
2:00:11
The hedge funds would lie flat out on TV
2:00:14
for their own gain at the expense of retail
2:00:17
investors. Not
2:00:19
once, literally a thousand times. Not
2:00:22
once did the SEC pursue them. How
2:00:25
do you explain this failure? The incentive
2:00:27
structure is messed up because
2:00:30
the SEC are not paid well. They
2:00:34
they it's a fairly low paying job but
2:00:36
what they're looking for is a trophy
2:00:39
from from the SEC that
2:00:41
they're looking for something they put on basically the LinkedIn.
2:00:43
Yeah. From that
2:00:45
they can get a job at a high paying
2:00:48
low firm. That's exactly what the the
2:00:51
lawyer here did.
2:00:55
And the reason they don't attack
2:00:57
the hedge funds is because those hedge funds employ
2:01:00
those law firms, and they know
2:01:02
if they attack the hedge funds, they're affecting
2:01:04
their future career prospects. So
2:01:06
they sell small investors down the river for
2:01:10
their own career.
2:01:13
That's what actually happens. Regulatory
2:01:16
capture. Regulatory capture.
2:01:18
Yeah. Not good. So
2:01:22
the only reason I accepted that thing, technically,
2:01:24
it was not
2:01:28
an admission. It's neither admit nor
2:01:30
deny guilt. But the only reason I
2:01:33
agreed to that at all was because I was told
2:01:35
Tesla would be bankrupt otherwise.
2:01:37
So if
2:01:39
there was an SEC investigation like
2:01:41
this, the banks would suspend funding. We'd be
2:01:44
bankrupt immediately, at the time. Now
2:01:46
we're in a much stronger position.
2:01:48
Take that, Grok.
2:01:50
Yes. Unfortunately, Grok is taking
2:01:53
too much from the conventional media. Also,
2:01:57
that guy was not a cave diver.
2:02:00
Oh, "There was
2:02:03
a time where Elon
2:02:06
called a British cave diver a, quote, pedo
2:02:08
guy, after the
2:02:10
diver criticized Musk's plan to rescue a group of boys
2:02:12
trapped in a Thai cave. That little outburst
2:02:15
earned him another lawsuit, and he had
2:02:17
to apologize and pay a settlement." That's false. There
2:02:20
was no settlement.
2:02:21
There was a court case, in which the
2:02:25
guy, who was not a cave diver and
2:02:27
was not part of the rescue team,
2:02:32
filed a lawsuit against me and lost, and he
2:02:34
received nothing.
2:02:36
So in this case, it is wrong. It
2:02:38
is also, I guess, taking this from the conventional
2:02:40
media. Actually, there's an interesting question here.
2:02:43
These are public court cases. The
2:02:46
SEC civil case, where
2:02:49
the civil complainants
2:02:52
lost. Unanimous jury
2:02:54
verdict in San Francisco. They picked
2:02:56
San Francisco because they thought it was the place
2:02:58
I was most likely to lose,
2:03:02
and it was a unanimous verdict in my favor. The
2:03:04
LA trial was also... they
2:03:06
picked that venue
2:03:09
because they thought it was where I was most likely to lose. Unanimous
2:03:12
verdict in my favor.
2:03:14
Both cases I won.
2:03:15
Yeah. I mean,
2:03:18
there's an interesting question here. There
2:03:20
seems to be a lot more
2:03:23
clicks if
2:03:27
a journalistic organization writes a negative
2:03:29
article about you, Elon Musk. That's like
2:03:31
one of the best ways to get clicks. So
2:03:34
how do you, if you're training
2:03:37
Grok, not train
2:03:39
on articles that have, like, these
2:03:43
misaligned incentives?
2:03:45
We need to add to the training set the actual
2:03:47
legal decisions. So
2:03:51
that is a note. This is actually helpful, because
2:03:54
if you actually read the court...
2:03:57
Which
2:03:58
are public.
2:03:59
The court conclusions, they're
2:04:02
completely the opposite of what the media wrote.
2:04:04
So always striving for
2:04:06
like the ground truth beyond the reporting. What did the judge
2:04:09
actually write? What
2:04:11
did the jury and the judge actually conclude? And
2:04:15
in both cases, they found in my favor. And
2:04:18
that's after the jury shopping, of trying
2:04:21
to find the venue where I'm most likely to lose. Now,
2:04:25
I mean, obviously, there
2:04:27
can be a much better critique than this. I
2:04:30
mean, I've been far too optimistic about Autopilot.
2:04:34
The critique I got, by the way, was more
2:04:36
about that, which is,
2:04:38
it broke down a nice bullet point list for each of
2:04:41
your companies: the set of predictions that you
2:04:43
made, when you'd deliver, when
2:04:45
you'd be able to solve, for example, self-driving, and
2:04:48
it gives you like a list. And those are probably compelling.
2:04:50
And the basic takeaway is like, you're often
2:04:53
too optimistic about how long it takes
2:04:55
to get something done. Yeah. I mean, I would say that I'm
2:04:57
pathologically optimistic on schedule.
2:05:00
This is true. But while
2:05:03
I am sometimes late, I always deliver
2:05:05
in the end.
2:05:06
Except with the Uber Lilith. No.
2:05:12
Okay. Over
2:05:14
the past year or so, since
2:05:17
purchasing X, you've become more political.
2:05:20
Is there a part of you that regrets that? Have I?
2:05:23
In this battle to
2:05:25
sort
2:05:28
of counterweigh the
2:05:30
woke that comes from... Yeah. I guess if
2:05:32
you consider fighting the woke mind virus, which
2:05:34
I consider to be a civilizational threat to be
2:05:36
political, then yes.
2:05:38
So basically going into the
2:05:41
battleground of politics.
2:05:43
I mean, is there a part of you that regrets that? Yeah. I don't know if
2:05:45
this is necessarily sort of
2:05:47
one candidate or another candidate, but I'm
2:05:50
generally against things that are anti-meritocratic
2:05:54
or where there's an attempt to suppress discussion
2:05:57
where
2:05:58
even...
2:05:59
even discussing a topic is not
2:06:02
allowed. The
2:06:06
woke mind virus is communism rebranded.
2:06:09
Well, I mean, that said, because of that battle
2:06:11
against the woke mind virus,
2:06:13
you're perceived as being right wing.
2:06:16
If the woke is left, then I suppose that
2:06:18
would be true. But I'm
2:06:21
not sure. I think there are aspects of the left that are
2:06:23
good. If you're in
2:06:25
favor of the environment, if
2:06:29
you want to have a positive future for humanity, if you
2:06:31
believe in empathy for your fellow human beings,
2:06:37
being kind and not cruel, whatever those values
2:06:39
are.
2:06:41
You said that you were previously left or center-left.
2:06:44
What would you like to see in order for
2:06:46
you to vote for Democrats again?
2:06:48
I would say that I'd be
2:06:53
probably left of center on social issues,
2:06:55
probably a little bit right of center on economic issues.
2:06:58
And that still holds true.
2:07:00
Yes, but I think that's probably
2:07:02
half the country, isn't it?
2:07:04
Maybe more. Maybe more.
2:07:07
Are you and AOC secretly friends?
2:07:11
Bigger question: do you wish that you and her, and
2:07:14
just people in general of all political persuasions, would
2:07:16
talk more, with empathy, and maybe have a little
2:07:18
bit more fun and good vibes and humor
2:07:20
online?
2:07:23
I'm always in favor of humor. That's
2:07:25
why we have fun mode.
2:07:26
But good vibes,
2:07:28
camaraderie, humor,
2:07:30
you know, like friendship.
2:07:33
Well, I don't know AOC.
2:07:36
I've only
2:07:38
been at one
2:07:40
Met ball, when she attended,
2:07:44
and she was wearing this dress.
2:07:48
But I could only see one side of it. It looked
2:07:50
like "eat the itch." But I
2:07:52
don't know what the rest of it said. I'm
2:07:55
not sure. Sorry,
2:07:58
about the itch. Eat the itch.
2:07:59
I think we should have a language model complete it. What
2:08:04
are the possible ways to complete that sentence? And
2:08:07
so I guess that didn't work
2:08:09
out well. Well, there's still hope.
2:08:12
I root for friendship. Yeah, sure. Sounds
2:08:14
good. More camaraderie.
2:08:16
You're one of, if not the most
2:08:21
famous, wealthy, and powerful people in the world. In
2:08:23
your position, it's difficult to find people you can
2:08:25
trust. Trust no one. Not even yourself? Not
2:08:27
trusting yourself? Okay,
2:08:29
well, that's... You're saying that half-joking. But is there
2:08:33
some... Trust no one. Not even no one.
2:08:35
I mean, I'd need an hour just to think about that. And
2:08:37
maybe some drugs. And
2:08:39
maybe Grok that. I mean,
2:08:39
is there some aspect of that, just existing
2:08:42
in a world where everybody wants something from you? How
2:08:44
hard is it to exist in that world?
2:08:47
I'll survive.
2:08:48
There's a song like that
2:08:50
too. I Will Survive. Were you
2:08:52
petrified at first? Okay.
2:08:56
I forget the rest of the lyrics. Is
2:08:58
there... You don't struggle with this?
2:09:00
I mean, I know you survive, but like there's ways... Petrify
2:09:02
is a spell in the druid tree.
2:09:05
What does it do? Petrify.
2:09:07
It
2:09:09
turns the monsters into stone. Oh,
2:09:14
like literally? Yeah, for like six seconds. Oh,
2:09:17
the second... There's so much math in Diablo that
2:09:19
breaks my brain. It's like math
2:09:21
nonstop.
2:09:22
I mean, really, you're like laughing at it, but
2:09:25
you don't...
2:09:27
It can
2:09:27
put a huge amount of tension on a mind. Yes,
2:09:31
it can definitely be stressful at times.
2:09:34
Well, how do you know who you can trust, in
2:09:37
work and in personal life?
2:09:38
I mean, I guess you look at somebody's track record over time
2:09:40
and... Data. You
2:09:43
know, I guess you kind of use
2:09:45
your neural net to assess
2:09:48
someone. Neural nets don't feel pain.
2:09:52
Your neural net has consciousness and might
2:09:54
feel pain when people betray you.
2:09:57
I mean, I'm...
2:10:00
I've almost never been
2:10:02
betrayed.
2:10:04
Very, very rare. So, you
2:10:07
know, I guess karma:
2:10:09
be good to people and they'll be good to you. Yeah,
2:10:11
karma is real. Are there other people you
2:10:14
trust? Let
2:10:17
me edit that question: other people
2:10:20
close to you that call you out on your bullshit?
2:10:24
Well, the X platform is very helpful for that.
2:10:26
You're looking for critical
2:10:28
feedback. Can
2:10:30
it push you, like, into the extremes
2:10:32
more? The extremes of thought, make you cynical
2:10:35
about human nature in general? I don't
2:10:38
think I will be cynical. In fact, I think,
2:10:43
you know, my feeling is that one should be,
2:10:46
you know...
2:10:50
never trust a cynic. The
2:10:52
reason is that cynics
2:10:56
excuse their own bad behavior by saying everyone
2:10:58
does it, mm-hmm, because
2:11:01
they're cynical. So
2:11:03
it's always a red flag to me if someone's
2:11:05
a cynic, a true cynic.
2:11:07
Yeah, there's a degree of projection there. That's
2:11:09
always fun to watch from the outside and
2:11:12
enjoy the... Well, it's just hypocrisy.
2:11:15
But I do believe this is an important point that
2:11:18
people who are listening should bear in mind: if
2:11:21
somebody is cynical, meaning
2:11:23
that they see bad behavior in
2:11:25
everyone, it's easy for
2:11:27
them to excuse their own bad behavior by
2:11:30
saying that, well, everyone does it.
2:11:34
That's not true. Most
2:11:38
people are kind of medium good. I
2:11:41
do wish the people on X
2:11:43
would be better at seeing the good in
2:11:45
other people's behavior. There seems
2:11:48
to be a kind of weight
2:11:50
towards seeing the negative.
2:11:53
Somehow the negative is sexier. Interpreting
2:11:56
the negative
2:11:57
is sexier, more viral.
2:11:59
I don't know what that is exactly about human nature.
2:12:02
I mean, I find the X platform to be less negative than
2:12:04
the legacy media.
2:12:06
I mean, if you read sort of
2:12:09
conventional newspapers, it
2:12:11
makes you sad, frankly. Whereas,
2:12:17
I'd say on the X platform, I
2:12:21
mean, I really get more laughs per day on
2:12:23
X than everything else combined
2:12:26
from humans. Laughs is one thing. Laughs
2:12:30
overlap, but
2:12:32
it's not necessarily perfectly
2:12:35
overlapping with good vibes and
2:12:37
support. Like celebrating
2:12:39
others, for example. Not in a stupid,
2:12:42
shallow, naive way, but like in an awesome, like, oh,
2:12:44
something awesome happened, and you celebrate
2:12:46
them for it. It
2:12:48
feels like that is outweighed by shitting
2:12:51
on other people. Now, it's better
2:12:54
than mainstream media, but it's still...
2:12:57
Mainstream media is almost relentlessly negative
2:12:59
about everything.
2:13:00
Really, the conventional
2:13:03
news tries to answer the question, what is the worst
2:13:05
thing that happened on Earth today? And
2:13:07
it's a big world. So on any
2:13:09
given day, something bad has happened.
2:13:12
And a generalization of that, what is the worst perspective
2:13:14
I can take on a thing that happened? So,
2:13:17
yeah, I don't know, there's just a strong
2:13:20
negative bias in the news.
2:13:24
I mean, I think a possible
2:13:26
explanation for this is evolutionary,
2:13:31
where
2:13:31
bad news, historically, would be potentially
2:13:35
fatal. Like, there's a lion over there, or
2:13:37
there's some other tribe that wants to kill you. Good
2:13:40
news, like we found a
2:13:43
patch of berries, is nice to have, but not essential.
2:13:46
So, our
2:13:49
old friend, Tesla Autopilot.
2:13:52
It's probably one of the most intelligent real-world
2:13:54
AI systems in the world.
2:13:56
All right. You've followed it from the beginning.
2:13:59
Yeah, it was one of the most incredible robots in the world
2:14:01
and continues to be. It was really exciting,
2:14:03
and it was super exciting when it generalized,
2:14:08
became more than a robot on four wheels,
2:14:10
but a real-
2:14:12
world AI system that perceives the world
2:14:16
and can potentially have different
2:14:18
embodiments.
2:14:19
Well, I mean, the really wild thing about the
2:14:21
end-to-end training is that
2:14:24
it learns to read. It can read signs,
2:14:26
but we never taught it to read. So,
2:14:31
yeah,
2:14:34
we never taught it what a car was, or what a person was,
2:14:36
or a cyclist. It
2:14:41
learned what
2:14:44
all those things are, what all the objects on the road are,
2:14:48
from video, just from watching video,
2:14:50
just like humans. I mean, humans
2:14:52
are photons in,
2:14:55
controls out. Like the
2:14:57
vast majority of information reaching our brain is
2:15:00
from our eyes. And you
2:15:02
say, well, what's the output? The output is our
2:15:05
motor signals to our sort
2:15:07
of fingers and mouth in
2:15:09
order to communicate.
2:15:14
Photons in, controls out. The
2:15:16
same is true of the car.
2:15:18
By looking at the sequence
2:15:21
of images. You've
2:15:24
agreed with Ilya Sutskever recently,
2:15:26
where he talked about an LLM forming a world
2:15:29
model, and basically language is a projection
2:15:31
of that world model onto the sequence of letters.
2:15:35
And you're saying... It finds order in
2:15:37
these things.
2:15:40
It finds
2:15:41
correlative clusters.
2:15:45
And in so doing, it's like understanding something deep
2:15:48
about the world. Which
2:15:50
is like, it's
2:15:52
beautiful.
2:15:54
That's how our brain works. Yeah,
2:15:56
but it's beautiful. Photons in, controls
2:15:58
out. You know, neural nets are able to
2:15:59
understand that deep meaning in the world. And
2:16:02
so the question is how far can it go? And
2:16:05
it does seem everybody's excited
2:16:07
about LLMs, in the space of self-
2:16:07
supervised learning in the space of text.
2:16:09
Yeah. Um, it
2:16:12
seems like there's a deep
2:16:15
similarity between that and what
2:16:17
Tesla Autopilot is doing.
2:16:20
Is it to you basically the same, but
2:16:22
converging?
2:16:24
They are converging.
2:16:27
I wonder who gets there faster, to having
2:16:28
a deep understanding of the world. Or
2:16:31
will they just naturally converge?
2:16:35
They're both headed towards AGI.
2:16:40
Um, the
2:16:42
Tesla approach is much more compute-
2:16:44
efficient.
2:16:45
It has to be, because we were constrained on, the,
2:16:48
you know, we only have a hundred
2:16:50
watts, um, and
2:16:54
it's int8 compute, 144 trillion
2:16:59
operations per second, which sounds like a lot, but it's kind
2:17:02
of small potatoes these days. That's
2:17:06
int8, so it's understanding the world at
2:17:08
int8, which is only
2:17:11
256 values.
2:17:13
But that path to AGI might have a much
2:17:16
more significant impact because it's
2:17:19
understanding... it'll
2:17:21
understand the real world faster than
2:17:23
LLMs will, and therefore
2:17:25
be able to integrate with
2:17:28
real humans in the real world faster.
2:17:31
They're both, um, going to understand
2:17:33
the world, but I think Tesla's
2:17:35
approach is fundamentally more
2:17:38
compute-efficient. It had to be, there
2:17:40
was no choice. Like, our brain
2:17:42
is very compute-efficient, very, very
2:17:44
energy-efficient. So think about
2:17:46
what our brain is able to do. Um,
2:17:50
you know, there's only about 10 Watts of higher
2:17:52
brain function, not counting stuff that's
2:17:55
just used to control our body. Um,
2:17:57
the thinking part of our brain is less than 10 Watts.
2:17:59
And those 10
2:18:02
watts can still produce a
2:18:04
much better novel than a 10 megawatt
2:18:07
GPU cluster.
2:18:08
So there's a six order magnitude difference there.
2:18:12
I mean, the AI
2:18:15
has thus far gotten
2:18:17
to where it is via
2:18:19
brute force, just throwing massive
2:18:21
amounts of compute and massive
2:18:23
amounts of power at it. So
2:18:27
this is not where it
2:18:29
will end up. In
2:18:33
general, with any given technology, first try to make it
2:18:35
work and then you make it efficient. So
2:18:39
I think we'll find over time that these
2:18:42
models get smaller,
2:18:45
are able to produce
2:18:48
sensible output with far less compute,
2:18:52
far less power. Tesla
2:18:55
is arguably ahead of the game on that front because we've
2:19:00
just been forced to try to
2:19:03
understand the world with 100
2:19:06
watts of compute. And
2:19:10
there are a bunch of fundamental
2:19:12
functions that we kind of forgot to include. So we
2:19:14
have to run a bunch of things in emulation. We
2:19:19
fixed a bunch of those with Hardware 4, and then Hardware 5
2:19:21
will be even better. But
2:19:27
it does appear at this point
2:19:30
that the car will be able
2:19:32
to drive better than a human, even with
2:19:34
Hardware 3 and 100 watts of power. And
2:19:39
really, if we really optimize
2:19:42
it, it could probably be less than 50 watts.
2:19:44
What have you learned about developing
2:19:48
Optimus, about applying and
2:19:51
integrating this kind of real-world AI
2:19:54
into the space of robotic manipulation,
2:19:57
just humanoid robotics?
2:20:00
What are some interesting tiny or
2:20:02
big things you've understood?
2:20:05
I was surprised at the fact that
2:20:07
we had to develop every part of the robot
2:20:10
ourselves. That there were no off-the-shelf
2:20:13
motors, electronics, sensors.
2:20:17
Like we had to develop everything.
2:20:21
We couldn't actually find a source
2:20:25
of electric motors for
2:20:27
any amount of money.
2:20:30
So it's not even just efficient or
2:20:32
inexpensive, it's like, anything? There's not
2:20:35
a...
2:20:36
No.
2:20:37
The actuators, everything. Everything
2:20:39
has to be designed from scratch. We
2:20:41
tried hard to find anything that
2:20:44
was... Because you think of how many electric motors are made in the
2:20:46
world. There's like
2:20:48
tens of thousands, hundreds of thousands of electric
2:20:51
motor designs. None
2:20:56
of them were suitable for a humanoid robot. Literally
2:20:58
none. So
2:21:01
we had to develop our own design,
2:21:04
design it specifically for what
2:21:08
a humanoid robot needs.
2:21:09
How hard was it to design something that can
2:21:11
be mass manufactured, could be relatively inexpensive?
2:21:14
Maybe if you compare it to Boston Dynamics Atlas, that's
2:21:17
a very expensive
2:21:18
robot. It is designed
2:21:20
to be manufactured in the same way
2:21:22
we would make a car. I think ultimately
2:21:25
we can make Optimus for less than the cost of a car.
2:21:29
It should be because if you look at the mass
2:21:32
of the robot, it's much smaller and
2:21:35
the car has many actuators in it. The
2:21:39
car has more actuators than the robot.
2:21:41
The actuators are
2:21:43
interesting on a humanoid robot with
2:21:46
fingers. So Optimus has really nice
2:21:48
hands and fingers.
2:21:50
And they can do some interesting
2:21:53
manipulation. Soft
2:21:55
touch robotics. I
2:21:56
mean, one of the test goals
2:21:59
I have is: can it pick up a
2:22:01
needle and a thread and thread
2:22:03
the needle
2:22:04
just by looking. How far away
2:22:06
are we from that? Just by looking? Just by
2:22:09
looking. Maybe a year. Although,
2:22:11
I go back to, I'm optimistic
2:22:13
on time. The work that we're doing in the car will
2:22:16
translate to the robot.
2:22:17
The perception, or also the control?
2:22:19
No, the controls are different, but it's
2:22:23
video in, controls out. The
2:22:26
car is a robot on four wheels.
2:22:30
Optimus is a robot with arms and legs.
2:22:33
So they're very similar. So
2:22:36
the entire machinery of the learning process,
2:22:39
end to end, is the same; you just have a different set of controls.
2:22:42
And it will figure out how to do things by watching
2:22:44
videos. As
2:22:46
the saying goes, be kind for everyone you meet
2:22:48
is fighting a battle
2:22:50
you know nothing about. Yeah, that's true. What's
2:22:52
something difficult you're going through that people don't
2:22:54
often see?
2:22:56
Trying to defeat Uber Lilith.
2:23:00
No,
2:23:03
I mean, you know.
2:23:08
I mean, my mind is a storm and
2:23:11
I don't think most
2:23:13
people would want to be me. They may think they'd want to be
2:23:15
me but they don't know. They don't understand.
2:23:22
How
2:23:25
are you doing? I'm
2:23:29
overall okay.
2:23:32
In the grand scheme of things, I can't complain.
2:23:36
Do you get lonely? Sometimes
2:23:39
but I, you know, my kids
2:23:42
and friends keep me company.
2:23:44
So
2:23:48
not existential?
2:23:49
There are many
2:23:51
many things that I can't complain about.
2:23:54
There are many nights I sleep alone.
2:23:58
I don't have to but I do.
2:24:04
Walter Isaacson, in his new biography
2:24:06
of you, wrote about your difficult childhood. Will
2:24:09
you ever find forgiveness in
2:24:11
your heart for everything that has happened to you in that
2:24:13
period of your life?
2:24:19
What is forgiveness? I do not...
2:24:24
At least, I don't think I have a resentment.
2:24:27
So, nothing to
2:24:32
forgive?
2:24:37
No, forgiveness is difficult for
2:24:40
people. It
2:24:43
seems like you don't harbor the resentment.
2:24:45
I mean, I try to think about,
2:24:47
like, what is going to affect
2:24:50
the future in a good way?
2:24:52
And holding onto grudges does
2:24:55
not affect
2:24:57
the future in a good way.
2:24:59
You're a father, a proud father. What
2:25:02
have you learned about life from your kids?
2:25:07
Those little biological organisms.
2:25:10
I mean, developing AI
2:25:13
and watching, say, little X grow
2:25:16
is fascinating
2:25:18
because there
2:25:21
are far more parallels than I would
2:25:24
have expected. I
2:25:26
mean, I can see his biological neural net making
2:25:30
more and more sense of the world. And I can
2:25:32
see a digital neural net making
2:25:33
more and more sense of the world at the same
2:25:37
time.
2:25:37
Do you see the beauty and magic in both?
2:25:39
Yes. I
2:25:44
mean, one of the things with kids is that,
2:25:47
you know, you kind of see the world anew in their
2:25:50
eyes.
2:25:52
You know,
2:25:53
to them, everything is new
2:25:55
and fresh. And
2:26:00
then when you see that, that
2:26:02
to them the world is
2:26:04
new and fresh, you
2:26:06
do too.
2:26:10
Well, Elon, I just want to say
2:26:12
thank you for your kindness
2:26:15
to me and friendship over the years, for
2:26:17
seeing something in a silly kid like me, as you've
2:26:20
done for many others. And
2:26:22
thank you for having hope for a positive
2:26:25
future for humanity and for working your
2:26:28
ass off to make it happen. Thank you, Elon. Thanks,
2:26:30
folks.
2:26:31
Thank you for listening to this conversation with
2:26:33
Elon Musk. To support this podcast,
2:26:35
please check out our sponsors in the description. And
2:26:38
now, let me leave you with some words that
2:26:41
Walter Isaacson wrote about the central
2:26:43
philosophy of how Elon approaches
2:26:45
difficult problems. The
2:26:47
only rules are the ones dictated
2:26:50
by the laws of physics. Thank
2:26:53
you for listening and hope to see you
2:26:55
next time.