#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God

Released Friday, 30th June 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:00

The following is a conversation with George Hotz,

0:03

his third time on this podcast. He's

0:05

the founder of Comma.ai that

0:07

seeks to solve autonomous driving and

0:09

is the founder of a new company called TinyCorp

0:13

that created TinyGrad, a neural

0:15

network framework that is extremely simple

0:18

with the goal of making it run on any

0:20

device by any human

0:23

easily and efficiently. As

0:25

you know, George also did a large number

0:27

of fun and amazing things from hacking

0:29

the iPhone to recently joining Twitter

0:32

for a bit as an

0:34

intern in quotes, making the case

0:36

for refactoring the Twitter code base.

0:39

In general, he's a fascinating engineer

0:41

and human being and one of my favorite people

0:43

to talk to.

0:46

And now a quick few second mention of each sponsor.

0:48

Check them out in the description. It's the best way

0:50

to support this podcast. We've got Numerai

0:53

for the world's hardest data science tournament,

0:56

Babbel for learning new languages, NetSuite

0:59

for business management software, InsideTracker

1:01

for blood paneling, and AG1 for

1:03

my daily multivitamin. Choose

1:05

wisely, my friends.

1:07

Also, if you want to work on our team, we're

1:09

always hiring, go to lexfridman.com

1:11

slash hiring. And now onto

1:13

the full ad reads. As always, no ads in the middle.

1:16

I try to make this interesting, but if you must

1:18

skip them, friends, please still check out our sponsors.

1:20

I enjoy their stuff. Maybe you will too.

1:24

This episode is brought to you by Numerai,

1:26

a hedge fund that uses artificial intelligence

1:29

and machine learning to make investment decisions. They

1:31

created a tournament that challenges data

1:33

scientists to build the

1:35

best predictive models for financial markets. It's

1:37

basically just a really, really difficult

1:40

real world data set to test out

1:43

your ideas for how to build machine learning

1:45

models. I think this is a

1:47

great educational platform. I think this is a great

1:49

way to explore, to learn about machine

1:51

learning, to really

1:53

test yourself on real world data

1:55

with consequences. No

1:57

financial background is needed. The models are

1:59

scored

1:59

based on how well they perform on unseen

2:02

data. And the top performers receive a share

2:04

of the tournament's prize pool.

2:06

Head over to numer.ai to sign up for a tournament

2:13

and hone your machine learning skills. That's numer.ai

2:16

for a chance to play against me and win

2:19

a share of the tournament's prize pool.

2:22

That's numer.ai. This

2:24

show is also brought to you by Babbel, an

2:26

app and website that gets you speaking

2:29

in a new language within weeks. I

2:31

have been using it to learn a few languages, Spanish,

2:35

to review Russian, to practice Russian, to

2:37

revisit Russian from a different

2:39

perspective because that becomes more and

2:42

more relevant for some of the previous

2:44

conversations I've had and some upcoming conversations

2:46

I have. It really is fascinating

2:48

how much another language,

2:51

knowing another language, even to

2:53

a degree where you can just have little

2:55

bits and pieces of a conversation, can

2:57

really unlock an experience in another part of

2:59

the world. When you travel in France

3:02

and Paris, just having a few

3:04

words at your disposal, a few phrases, it

3:06

begins to really open you up

3:09

to strange, fascinating new

3:11

experiences that ultimately,

3:13

at least to me, teach me that we're all the same.

3:16

We have to first see our differences to

3:18

realize those differences are grounded in

3:20

a basic humanity. And that

3:23

experience that we're all very different

3:25

and yet at the core the same.

3:27

I think travel with the aid

3:29

of language really helps

3:31

unlock. You

3:35

can get 55% off your Babbel subscription

3:38

at babbel.com slash LexPod. That's

3:40

spelled B-A-B-B-E-L.com

3:43

slash LexPod. Rules and restrictions

3:46

apply.

3:47

This show is also brought to you by NetSuite,

3:51

an all-in-one cloud business management system.

3:53

They manage all the messy stuff

3:56

that is required to run a business,

3:58

the financials, the human resources,

3:59

to the inventory if you do that kind of thing, e-commerce,

4:03

all that stuff, all the business-related details.

4:05

I know how stressed I am about

4:08

everything that's required to

4:10

run a team,

4:12

to

4:13

run a business that involves

4:16

much more than just ideas and

4:18

designs and engineering. It involves

4:20

all the management of human beings, all

4:22

the complexities of that, the financials,

4:25

all of it, and so you should be using the best

4:27

tools for the job.

4:29

I sometimes wonder if

4:31

I have it in me, mentally

4:34

and

4:35

skill-wise,

4:37

to be a part of

4:39

running a large company.

4:42

I think like with a lot of things in life, it's

4:44

one of those things you shouldn't wonder too much about.

4:47

You should either do or not do.

4:50

But again,

4:52

using the best tools for the job is

4:54

required here. You can start now

4:56

with a no payment or interest for

4:58

six months. Go to netsuite.com slash

5:00

Lex to access their one of a kind financing

5:03

program. That's netsuite.com

5:06

slash Lex. This

5:08

show is also brought to you by InsideTracker,

5:11

a service I use to track biological data,

5:14

data that comes from my body, to predict,

5:16

to

5:17

tell me what I should do with my lifestyle,

5:19

with my diet, what's working and what's

5:21

not working. It's obvious,

5:25

all the exciting breakthroughs that are happening with

5:27

transformers, with large language models,

5:30

even with diffusion, all of

5:32

that is obvious that with raw

5:34

data, with huge amounts of raw data,

5:36

fine tuned to the individual, would

5:38

really reveal to us

5:41

the signal in all the noise of

5:43

biology. I feel like that's on the horizon.

5:47

The kinds of leaps in development

5:49

that we saw in language

5:51

and now more and more visual data.

5:53

I feel like biological data is around the corner,

5:56

unlocking what's there in this

5:58

multi-hierarchical,

5:59

distributed system that is our biology.

6:02

What is it telling us? What are the secrets it holds?

6:05

What is the thing that it's

6:07

missing that could be aided?

6:09

Simple lifestyle changes, simple diet changes,

6:12

simple changes in all kinds of things that are controllable

6:14

by an individual human being. I can't wait

6:16

till that's a possibility and InsideTracker

6:18

is taking steps towards that. Got special savings

6:21

for a limited time when you go to insidetracker.com

6:23

slash Lex.

6:26

This show is also brought to you by Athletic

6:28

Greens. That's now called AG1. It

6:31

has the AG1 drink. I drink it twice

6:33

a day. At the very least, it's an all-in-one

6:35

daily drink to support better health and peak performance.

6:38

I drink it cold, it's refreshing,

6:40

it's grounding. It helps me

6:44

reconnect with the basics,

6:47

the nutritional basics that makes this whole machine

6:49

that is our human body run. All

6:52

the crazy mental stuff I do for work,

6:56

the physical challenges, everything,

6:59

the highs and lows of life itself.

7:02

All of that is somehow made better knowing

7:04

that at least you got your nutrition

7:06

in check. At least you're getting enough

7:08

sleep. At least you're doing the basics.

7:11

At least you're doing the exercise. Once

7:13

you get those basics in place, I think

7:15

you can do some quite

7:17

difficult things in life.

7:19

But anyway, beyond all that is just a source

7:21

of happiness and a kind of

7:23

a feeling of home.

7:26

The feeling that comes from returning

7:28

to the habit time and time

7:30

again. Anyway, they'll give

7:32

you one month supply of fish oil when you sign

7:34

up at drinkag1.com

7:37

slash Lex.

7:39

This is the Lex Fridman podcast. To

7:42

support it, please check out our sponsors in the description.

7:45

And now, dear friends, here's George

7:47

Hotz.

7:49

You

8:05

mentioned something in a stream about

8:07

the philosophical nature of time. So

8:10

let's start with the wild question. Do you think

8:12

time is an illusion?

8:14

You know, I

8:17

sell phone calls to Kama

8:19

for $1,000. And some guy called

8:21

me and like,

8:24

you know, it's $1,000. You can talk to me for half an hour.

8:27

And he's like, yeah, okay. So

8:29

like,

8:30

time doesn't exist. And I really wanted

8:32

to share this with you. I'm like,

8:34

oh, what do you mean time doesn't exist? Right?

8:37

Like, I think time is a useful model, whether it

8:39

exists or not, right? Like does quantum

8:41

physics exist? Well, it doesn't matter. It's

8:44

about whether it's a useful model to describe

8:46

reality. Is time

8:49

maybe compressive? Do

8:51

you think there is an objective reality or is

8:53

everything just useful models? Like

8:56

underneath it all. Is there an actual

8:59

thing that we're constructing models for?

9:03

I don't know. I was

9:05

hoping you would know. I don't think it matters. I

9:07

mean, this kind of connects to the models

9:10

of constructive reality with machine learning. Right?

9:13

Sure. Like, is it

9:15

just nice to have useful approximations

9:18

of the world such that we can do something with it? So

9:21

there are things that are real. Kolmogorov

9:23

complexity is real. Yeah.

9:25

Yeah. The compressive thing. Math is real.

9:28

Yeah. This should be a t-shirt. And

9:31

I think hard things are actually hard. I don't think P

9:33

equals NP. Ooh, strong words.

9:36

Well, I think that's the majority. I do think factoring

9:38

is in P, but. I don't think you're the

9:40

person that follows the majority in all walks of

9:43

life, so. Well, good help for that one I do. Yeah.

9:46

In theoretical computer science, you're one of the sheep. All

9:49

right. But to you,

9:52

time is a useful model. Sure.

9:54

What were you talking about on the stream?

9:57

What time? Are you made of time? I

9:59

remembered half the thing I said on stream. Ah.

10:02

Someday someone's going to make a model of all of it and

10:04

it's going to come back to haunt me. Someday soon?

10:07

Yeah, probably. Would that be

10:09

exciting to you or sad that there's

10:11

a George Hotz model?

10:13

I mean the question is when the George Hotz model

10:16

is better than George Hotz.

10:18

Like I am declining and the model is growing.

10:20

What is the metric by which you measure better or worse

10:23

in that if you're competing with yourself?

10:25

Maybe you can just play a game

10:28

where you have the George Hotz answer and the George Hotz model

10:30

answer and ask which people prefer. People

10:32

close to you or strangers? Either

10:35

one. It will hurt more when it's people close to me but

10:38

both will be overtaken by the George

10:40

Hotz model. It'd

10:42

be quite painful, right? Loved ones, family

10:45

members

10:46

would rather have the model over for Thanksgiving

10:48

than you. Yeah.

10:51

Or like significant others would

10:53

rather sext. Like

10:58

with the large language model version of you.

11:00

Especially when it's fine tuned

11:02

to their preferences. Yeah.

11:06

Well, that's what we're doing in a relationship, right?

11:08

We're just fine tuning ourselves but we're inefficient with

11:10

it because we're selfish ingredients and so on.

11:13

All language models can fine

11:15

tune more efficiently, more selflessly. There's

11:17

a Star Trek Voyager episode where, you know, Kathryn

11:20

Janeway, lost in the Delta Quadrant

11:22

makes herself a

11:24

lover on the holodeck. And

11:28

the lover falls asleep on her arm and he

11:30

snores a little bit and, you know, Janeway edits the

11:32

program to remove that. And then

11:34

of course the realization is, wait,

11:37

this person's terrible. It is actually all their

11:41

nuances and quirks and slight annoyances that

11:43

make this relationship worthwhile. But

11:46

I don't think we're going to realize that until it's too late.

11:48

Well, I think

11:51

a large language model could incorporate the

11:54

flaws and the quirks and all that kind of stuff. Just

11:56

the perfect amount of quirks and

11:59

flaws to make you...

11:59

charming without crossing the line. Yeah,

12:02

yeah. And that's probably a good

12:06

approximation of the, the percent

12:08

of time the language model

12:10

should be cranky or

12:14

an asshole or jealous

12:16

or all this kind of stuff. And of course it can and it will,

12:19

but all that difficulty at that point is artificial.

12:22

There's no more real difficulty. Okay,

12:24

what's the difference between real and artificial? Artificial

12:27

difficulty is difficulty that's like constructed

12:30

or could be turned off with a knob. Real

12:32

difficulty is like you're in the woods and you

12:34

gotta survive.

12:36

So if something can not

12:38

be turned off with a knob, it's real.

12:42

Yeah, I think so. Or I mean, you

12:44

can't get out of this by smashing the knob with a hammer.

12:47

I mean, maybe you kind of can, you know, into

12:50

the wild when, you know,

12:53

Alexander Supertramp, he wants to explore

12:55

something that's never been explored before, but it's

12:57

the 90s. Everything's been explored. So he's like, well, I'm

13:00

just not gonna bring a map. Yeah.

13:02

I mean, no,

13:04

you're not exploring. You should have brought

13:06

a map, dude, you died. There was a bridge a mile from where you were camping.

13:09

How does that connect to the metaphor of the knob? By

13:13

not bringing the map, you didn't

13:15

become an explorer.

13:17

You just smashed the thing. Yeah.

13:19

Yeah. The difficulty is still artificial.

13:22

You failed before you started. What if we just don't

13:24

have access to the knob? Well, that

13:27

maybe is even scarier, right?

13:29

Like we already exist in a world of nature and nature

13:31

has been fine-tuned over billions of years to

13:36

have humans build

13:39

something

13:41

and then throw the knob away in some grand romantic

13:44

gesture is horrifying.

13:46

Do you think of us humans as individuals

13:48

that are like born and die, or is

13:51

it, are we just all part of one

13:53

living organism that is

13:56

earth, that is nature? I

13:59

don't think there's a clear line there, I think

14:01

it's all kind of just fuzzy. I don't know.

14:03

I mean, I don't think I'm conscious. I don't think

14:05

I'm anything. I think I'm just a computer

14:08

program.

14:09

So it's all computation. I think running

14:12

in your head is just a computation.

14:15

Everything running in the universe is computation, I think.

14:17

I believe the extended Church-Turing thesis.

14:20

Yeah, but there seems

14:22

to be an embodiment to your particular computation.

14:24

Like there's a consistency.

14:26

Well, yeah, but I mean, models have consistency

14:28

too. Models

14:31

that have been RLHF'd will continually

14:33

say, you know, like, well, how do I

14:36

murder ethnic minorities? Oh, well, I can't

14:38

let you do that, Hal. There's a consistency to that behavior.

14:40

So RLHF,

14:43

like we all RLHF each other. We

14:48

provide human feedback and thereby fine-tune

14:51

these little pockets

14:53

of computation. But it's still unclear why that

14:56

pocket of computation stays with you like

14:58

for years. It just kind of follows like

15:01

you have this consistent

15:04

set of physics biology.

15:06

What like

15:10

whatever you call the neurons firing,

15:12

like the electrical signals, the mechanical signals, all

15:14

of that that seems to stay there and it contains

15:16

information, stores information and

15:18

that information permeates through time

15:21

and

15:22

stays with you. There's like memory, there's

15:25

like sticky. Okay, to be fair,

15:27

like a lot of the models we're building today are

15:29

very, even RLHF is nowhere

15:32

near as complex as the human loss function. Reinforcement

15:34

learning with human feedback. You

15:37

know, when I talked about will GPT-12 be

15:39

AGI, my answer is no, of course not. I mean,

15:42

cross-entropy loss is never going to get you there.
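
(For reference, the loss being referred to is the standard categorical cross-entropy used for next-token prediction; this formula is added for clarity and is not from the conversation itself. Here $x_t$ are the tokens of a training sequence of length $T$ and $p_\theta$ is the model's predicted distribution:

$$\mathcal{L}(\theta) = -\frac{1}{T}\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)$$

Minimizing this average negative log-likelihood is the same as minimizing the expected code length of the text, which is why the "maximize compression" framing that comes up a bit later describes the same objective.)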

15:44

You probably need

15:46

RL in

15:47

fancy environments in order to get something that

15:50

would be considered AGI-like. So

15:53

to ask the question about why, I don't

15:56

know, it's just some quirk of evolution.

15:58

I don't think there's anything particularly...

15:59

special about

16:02

where I ended up, where humans

16:04

ended up.

16:06

So, okay. We have human level

16:08

intelligence. Would you call that AGI?

16:11

Whatever we have, G-I? Look,

16:13

actually I don't really even like the word AGI,

16:16

but general intelligence is

16:18

defined to be whatever humans have.

16:20

Okay. So why can GPT-12

16:23

not get us to AGI? Can we just like,

16:26

linger on that?

16:27

If your loss function is categorical cross entropy,

16:30

if your loss function is just try to maximize

16:32

compression. I have a Sound

16:34

Cloud, I rap, and I tried to get Chat

16:36

GPT to help me write raps. And

16:39

the raps that it wrote sounded like YouTube comment

16:41

raps. You know, you can go on any rap beat online and

16:44

you can see what people put in the comments. And it's the most

16:46

like

16:47

mid quality rap you can find. Is

16:49

mid good or bad? Mid is bad. Mid is bad.

16:52

Like mid, it's like. Every time I talk

16:54

to you, I learn new words. Mid.

16:58

Mid, yeah.

16:59

I was like, is it like basic?

17:01

Is that what mid means? Kind of, it's like

17:04

middle of the curve, right? So there's

17:06

like, there's like that intelligence curve.

17:09

And you have like the dumb guy, the smart guy, and then

17:11

the mid guy. Actually being the mid guy is the worst. The

17:14

smart guy is like, I put all my money in Bitcoin. The mid

17:16

guy is like, you

17:16

can't put money in Bitcoin. It's not real money.

17:21

And all of it is a genius meme. That's

17:24

another interesting one.

17:25

Memes. The humor,

17:28

the idea, the absurdity, encapsulated

17:31

in a single image. And it

17:33

just kind of propagates virally

17:37

between all of our brains. I

17:39

didn't get much sleep last night. So I'm very, I

17:41

sound like I'm high, but I swear I'm not. Do

17:45

you think we have ideas or ideas have

17:47

us?

17:49

I think that we're going to get super scary

17:52

memes once the AIs actually are

17:54

superhuman. Do you think AI

17:56

will generate memes? Of course. Do

17:58

you think it'll make humans laugh? I

18:00

think it's worse than that. Infinite

18:04

Jest, it's introduced in the first 50

18:06

pages, is about a tape that

18:09

once you watch it once, you only

18:11

ever want to watch that tape.

18:12

In fact, you want to watch the tape so much that someone

18:15

says, okay, here's a hacksaw, cut off your

18:17

pinky, and then I'll let you watch the tape again, and

18:19

you'll do it. We're

18:21

actually going to build that, I think, but it's not going

18:23

to be one static tape. I think the human brain

18:25

is too complex

18:27

to be stuck in

18:29

one static tape like that. If you look at ant

18:31

brains, maybe they can be stuck on a static tape. We're

18:35

going to build that using generative models. We're

18:37

going to build the TikTok that you actually can't look

18:39

away from.

18:41

TikTok is already pretty close there, but the

18:43

generation is done by humans. The

18:45

algorithm is just doing their recommendation, but if

18:48

the algorithm is also able to do the generation ...

18:51

Well, it's a question about how much intelligence is behind

18:53

it, right? The content

18:55

is being generated by, let's say, one humanity

18:57

worth of intelligence, and you can quantify a humanity.

19:01

It's exa-flops,

19:04

yottaflops, but you can quantify

19:07

it.

19:07

Once that generation is being done by 100 humanities,

19:10

you're done.

19:11

It's actually

19:14

scale, that's the problem, but

19:16

also speed.

19:19

Yeah. What if

19:22

it's manipulating the

19:24

very limited human dopamine engine

19:27

for porn? Imagine it's just TikTok,

19:30

but for porn.

19:32

That's like a brave new world. I

19:34

don't even know what it'll look like, right? Again,

19:37

you can't imagine the behaviors of something

19:39

smarter than you, but a

19:41

super intelligent ... An agent

19:44

that just dominates your intelligence so much

19:46

will be able to completely manipulate

19:49

you. So that it

19:51

won't really manipulate, it'll just move past

19:53

us? It'll just kind of exist

19:56

the way water exists or the air exists?

19:59

You see? And that's the whole AI

20:01

safety thing. It's

20:03

not the machine that's going to do that.

20:05

It's other humans using the machine that are going to do that

20:08

to you. Yeah. Because

20:10

the machine is not interested in hurting humans. The

20:13

machine is a machine. But the human

20:15

gets the machine, and there's a lot of humans out

20:17

there very interested in manipulating you.

20:20

Well, let me bring up Eliezer

20:23

Yudkowsky, who

20:25

recently sat where you're sitting. He

20:28

thinks that AI will almost surely

20:30

kill everyone.

20:32

Do you agree with him or not?

20:35

Yes, but maybe for a different reason. Okay.

20:40

And then I'll try to

20:42

get you to find hope, or

20:45

we could find a note to that answer. But

20:47

why yes? Okay. Why

20:50

didn't nuclear weapons kill everyone? That's

20:51

a good question. I think there's an answer. I

20:54

think it's actually very hard to deploy nuclear weapons tactically.

20:58

It's very hard to accomplish tactical objectives.

21:00

Great. I can nuke their country. I have

21:02

an irradiated pile of rubble. I don't want

21:05

that. Why not?

21:06

Why don't I want an irradiated pile of rubble? For

21:09

all the reasons no one wants an irradiated pile of rubble.

21:12

Because you can't use that land for

21:14

resources. You

21:16

can't populate the land. Yeah. What you

21:18

want in a total victory in a

21:20

war is not usually the irradiation

21:24

and eradication of the people there. It's the

21:26

subjugation and domination of the people.

21:29

Okay. So you can't

21:31

use this strategically tactically in a war

21:34

to help gain a military advantage.

21:39

It's all complete destruction. But

21:42

there's egos involved. It's still surprising. It's

21:44

still surprising that nobody pressed the big red button.

21:47

It's somewhat surprising. But you

21:50

see, it's the little red button that's going to be pressed

21:52

with AI. That's going

21:55

to... And that's why we die. It's

21:57

not because the AI... If

22:00

there's anything in the nature of AI, it's just the

22:02

nature of humanity. What's the algorithm behind

22:04

the little red button? What

22:07

possible ideas do you have for how a

22:09

human species ends? Sure. So I

22:11

think the most

22:14

obvious way to me is wireheading.

22:16

We end up amusing ourselves to death.

22:20

We end up all

22:21

staring at that infinite TikTok and

22:23

forgetting to eat.

22:26

Maybe it's even more benign than this. Maybe

22:28

we all just stop reproducing.

22:32

To be fair, it's probably

22:34

hard to get all of humanity. Yeah. The

22:40

interesting thing about humanity is the diversity

22:43

in it. Oh, yeah. Organisms in general. There's

22:45

a lot of weirdos out there.

22:47

Two of them are sitting here. I mean, diversity

22:50

in humanity is- With all due respect. I

22:53

wish I was more weird. No, like, I'm

22:56

kind of, look, I'm drinking smart water, man. It's like a Coca-Cola

22:58

product, right? You want corporate, George

23:00

Hotz. I want corporate. No,

23:02

the amount of diversity in humanity, I think, is decreasing.

23:05

Just like all the other biodiversity on the planet.

23:08

Oh, boy. Yeah. Social media

23:10

is not helping, huh? Go eat McDonald's in China.

23:12

Yeah. No,

23:15

it's the interconnectedness that's

23:17

doing it. Oh,

23:20

that's interesting. So everybody starts

23:22

relying on the connectivity of

23:24

the internet. And over time,

23:26

that reduces the intellectual diversity. And

23:29

then that gets everybody into a

23:31

funnel. There's still going to be a guy in Texas.

23:34

There is. And yeah. A bunker. To

23:36

be fair, do I think AI kills us all?

23:39

I think AI kills everything we call society

23:42

today. I do not think it actually kills

23:44

the human species. I think that's actually incredibly

23:46

hard to do.

23:48

Yeah, but society, if we start

23:50

over, that's tricky. Most of us don't know how

23:53

to do most things. Yeah, but some

23:55

of us do. And they'll be

23:57

OK, and they'll rebuild after the

24:00

Great AI. What's

24:02

rebuilding look like? How far, like how

24:05

much do we lose? What

24:07

has human civilization done? That's

24:09

interesting. The combustion engine, electricity.

24:13

So power

24:15

and energy, that's interesting. Like,

24:18

how to harness energy? Well,

24:20

they're gonna be religiously against that. Are

24:24

they going to get back to like fire?

24:27

Sure. I mean, they'll be a little

24:30

bit like, you know, some kind of Amish-looking

24:32

kind of thing. I think they're going to have very strong

24:34

taboos against technology.

24:37

Hmm. Like, technology

24:39

is almost like a new religion. Technology is the devil

24:41

and nature

24:44

is God.

24:46

Sure, so closer to nature, but can

24:48

you really get away from AI if it destroyed 99%

24:50

of the human species? Doesn't it somehow

24:53

have a hold, like a stronghold?

24:56

What's interesting about

24:58

everything we build: I think we are going to build super

25:00

intelligence before we build any sort of

25:03

robustness in the AI. We

25:05

cannot build an AI that is capable

25:07

of going out into nature and surviving like

25:10

a bird, right?

25:13

A bird is an incredibly robust

25:15

organism. We've built nothing like this. We haven't built

25:18

a machine that's capable of reproducing.

25:20

Yes, but there's,

25:23

you know, I work with legged robots a lot now.

25:25

I have a bunch of them.

25:28

They're mobile. That

25:31

can't reproduce, but all they

25:33

need is... I guess you're saying they can't repair

25:35

themselves. But if you have a large number, if you have,

25:37

like, a hundred million of them... Let's just focus

25:39

on them reproducing, right? Do they have microchips

25:41

in them? Okay, then

25:43

do they include a fab?

25:46

No, then how are they gonna reproduce?

25:48

It doesn't have to

25:50

be all on board, right? They

25:52

can go to a factory to a repair

25:54

shop. Yeah, but then you're really moving

25:57

away from robustness. Yes. All

25:59

of life

25:59

is capable of reproducing without needing

26:02

to go to a repair shop. Life

26:04

will continue to reproduce in the complete absence

26:06

of civilization.

26:08

Robots will not. So when

26:11

the, if the AI apocalypse

26:13

happens, I

26:14

mean the AI's are going to probably die out because

26:17

I think we're going to get, again, super intelligence long before

26:19

we get robustness.

26:20

What about if you just improve the

26:23

fab to where you just

26:26

have a 3D printer that can always help you? Well,

26:29

that'd be very interesting. I'm interested in building that. Of

26:32

course you are. You think, how difficult is that

26:34

problem to have a robot that

26:38

basically can build itself?

26:40

Very, very hard. I think you've mentioned

26:43

this, like, to me

26:45

or somewhere where people

26:47

think it's easy conceptually. And

26:50

then they remember that you're going to have to have a fab.

26:53

Yeah, on board. Of course. So

26:56

3D printer that prints a 3D printer.

26:59

Yeah.

27:00

Yeah, on legs. Why

27:02

is that hard? Well, because it's not, I

27:04

mean a 3D printer is a very simple

27:07

machine, right? Okay, you're going to print chips,

27:09

you're going to have an atomic printer, how are you going to dope

27:12

the silicon? Yeah.

27:14

Right? How are you going to etch the silicon? You're

27:17

going to have to have a

27:19

very interesting kind of fab if you want to have

27:22

a lot of computation on board. But

27:24

you can do, like, structural

27:28

type of robots that are dumb.

27:30

Yeah, but structural type of robots

27:33

aren't going to have the intelligence required to survive

27:35

in any complex environment.

27:36

What about, like, ants type of systems? We

27:39

have, like, trillions of them. I don't

27:41

think this works. I mean, again, like, ants

27:43

at their very core are made up of cells

27:45

that are capable of individually reproducing.

27:48

They're doing quite a lot of computation

27:50

that we're taking for granted. It's not even just

27:52

the computation. It's that reproduction is so

27:55

inherent. Okay, so, like, there's two stacks of life in

27:57

the world. There's the biological

27:59

stack and the silicon stack.

27:59

The biological stack

28:02

starts with reproduction. Reproduction

28:05

is at the absolute core. The first proto

28:08

RNA organisms were capable of reproducing. The

28:11

silicon stack, despite as far

28:13

as it's come, is nowhere near

28:15

being able to reproduce. Yeah.

28:18

So the fab movement, digital

28:23

fabrication,

28:25

fabrication in the full range of what that means is

28:28

still in the early stages. Yeah.

28:30

You're interested in this world. Even

28:32

if you did put a fab on the machine, let's say, okay, we can build

28:35

fabs. We know how to do that as humanity. We

28:37

can probably put all the precursors that build all the machines

28:40

and the fabs also in the machine. So first off, this machine is

28:42

going to be absolutely massive.

28:44

I mean, we almost have a... Think

28:47

of the size of the thing required to reproduce

28:49

a machine today.

28:52

Is our civilization capable of reproduction?

28:55

Can

28:56

we reproduce our civilization on Mars?

29:00

If we were to construct a machine that is made up of humans, like

29:02

a company, it can reproduce

29:04

itself. Yeah. I don't know. It

29:08

feels like 115 people.

29:12

I think it's so much harder than that. Let's

29:16

see. I

29:18

believe that Twitter can be run by 50 people.

29:22

I think that this is going to take most

29:24

of... It's just most

29:27

of society. We live in one globalized

29:29

world. No, but you're not interested in running Twitter.

29:32

You're interested in seeding.

29:33

You want to seed

29:36

a civilization then because humans

29:38

can have sex. Yeah,

29:40

okay. So you're talking about the humans reproducing

29:42

and basically, what's the smallest self-sustaining colony

29:44

of humans? Yeah. Yeah, okay, fine. Over

29:48

time, they will. I think you're being...

29:51

We have to expand our conception of time

29:53

here. Come back to the original time

29:56

scale. I mean, over across...

29:59

maybe a hundred generations, we're back to making

30:02

chips. No? If you seed

30:04

the colony correctly.

30:06

Maybe. Or maybe they'll watch our

30:08

colony die out over here and be like, we're

30:10

not making chips, don't make chips. No, but you

30:12

have to seed that colony correctly. Whatever you

30:14

do, don't make chips. Chips are what

30:17

led to their downfall.

30:20

Well, that is the thing that humans do. They

30:22

come up, they construct a devil, a

30:24

good thing and a bad thing, and they really stick by that,

30:27

and then they murder each other over that. There's always

30:29

one asshole in the room who murders everybody. And

30:33

he usually makes tattoos and nice branding. Do

30:35

you need that asshole? That's the question, right?

30:38

Humanity works really hard today to get rid of that asshole,

30:40

but I think they might be important. Yeah,

30:43

this whole freedom of speech thing, it's

30:45

the freedom of being an asshole seems kind of important.

30:47

That's right.

30:49

Man, this thing, this fab, this human

30:51

fab that we've constructed, this human

30:53

civilization, is pretty interesting. And now

30:56

it's

30:56

building artificial copies

30:59

of itself, or artificial copies of

31:01

various aspects of itself

31:04

that seem interesting, like intelligence. And

31:07

I wonder where that goes. I

31:10

like to think it's just like another stack for life. Like

31:12

we have the biostack life, like we're a biostack life,

31:14

and then the silicon stack life. But it seems

31:16

like the ceiling, or there might

31:18

not be a ceiling, or at least the ceiling is much higher

31:21

for the silicon stack. Oh,

31:23

no, we don't know what the ceiling is for the biostack

31:26

either. The biostack just

31:28

seemed to move slower. You have

31:31

Moore's law, which is not dead

31:33

despite many proclamations. In

31:35

the biostack or the silicon stack? In the silicon stack. And

31:38

you don't have anything like this in the biostack. So I have

31:40

a meme that I posted, I tried to make a

31:42

meme, it didn't work too well. But I posted

31:44

a picture of Ronald Reagan and Joe

31:46

Biden, and you look, this is 1980 and this is 2020. And

31:50

these two humans are basically like the same, right?

31:52

There's been no change in humans in the

31:55

last 40 years.

32:00

And then I posted a computer from 1980 and a computer

32:02

from 2020. Wow. Yeah,

32:07

we're at the early stages, right? Which

32:09

is why you said when you said the fab,

32:11

the size of the fab required to make another

32:13

fab is like

32:16

very large right now. Oh,

32:18

yeah. But computers were very large 80 years

32:23

ago and they got pretty

32:26

tiny. People

32:29

are starting to want to wear them on their face in

32:33

order to escape reality. That's

32:36

the thing, in order to live inside

32:38

the computer. Yeah. Put a

32:40

screen right here. I don't have to see the rest

32:42

of you assholes. I've been ready for a long

32:44

time. You like virtual reality? I love

32:46

it. Do you want to live

32:49

there? Yeah. Yeah.

32:52

Part of me does too. How far away are

32:54

we, do you think? Anything

32:57

from what you can buy today far? Very

33:00

far. I got to tell you that

33:02

I had the experience of

33:06

Meta's Codec Avatar, where

33:09

it's an ultra high resolution

33:12

scan. It looked

33:15

real. I mean, the

33:17

headsets just are not quite at like eye resolution

33:19

yet. I haven't put on any

33:22

headset where I'm like, oh, this could be

33:24

the real world. Whereas when

33:26

I put good headphones on, audio

33:27

is there. We can

33:30

reproduce audio that I'm like, I'm actually in a jungle right

33:32

now. If I close my eyes, I can't tell I'm not. Yeah.

33:36

But then there's also smell and all that kind of stuff. Sure.

33:39

I don't know. The power

33:41

of imagination or the power of the

33:43

mechanism in the human mind that fills the gaps

33:46

that kind of reaches and wants to make

33:48

the thing you see in the virtual world real

33:51

to you, I

33:53

believe in that power. Or humans

33:55

want to believe. Yeah.

33:58

What if you're lonely? What if you're sad? What

34:00

if you're really struggling in life and

34:02

here's a world where you don't have to struggle

34:05

anymore Humans want to believe so much

34:07

that people think the large language models are conscious.

34:10

That's how much humans want to believe

34:12

Strong words. He's throwing

34:14

left and right hooks. Why do you

34:17

think large language models are not conscious? I

34:19

don't think I'm conscious. Oh,

34:21

so what is consciousness then, George Hotz?

34:24

It's like

34:25

what it seems to mean to people. It's just like

34:27

a word that atheists use for souls. Sure,

34:31

but that doesn't mean soul is not an interesting word.

34:34

If consciousness is a spectrum, I'm definitely way

34:37

more conscious than the large language models are.

34:39

I

34:41

think the large language models are less conscious than

34:43

a chicken. When is

34:45

the last time you saw a chicken? In

34:48

Miami, like a couple months ago. How...

34:52

No, like a living chicken. Living chickens walking

34:54

around Miami. It's crazy, like on the street.

34:56

Yeah, like a chicken chicken. All

35:02

right, I was trying to call you out, like

35:04

a good journalist, and I got shut

35:06

down Okay,

35:09

but

35:10

you don't think

35:12

much about this kind

35:15

of subjective

35:18

feeling, that it feels like something

35:21

to exist, and then as an observer

35:25

you can have a sense

35:27

that an entity is not only

35:29

intelligent but has a kind of

35:33

subjective experience of its reality,

35:35

like a self-awareness, that is capable

35:38

of, like, suffering, of hurting, of being excited

35:40

by the environment in a way that's not

35:42

merely

35:44

kind of an artificial response but a deeply

35:47

felt one. Humans want to believe so

35:50

much that if I took a rock and a sharpie

35:52

and drew a sad face on the rock, they'd think the

35:54

rock is sad

35:55

Yeah,

35:58

and you're saying when we look in the mirror, we

36:00

apply the same smiley face with the rock.

36:02

Pretty much, yeah. Isn't that weird though?

36:05

You're not conscious?

36:09

But you do believe in consciousness. Really?

36:12

It's just, it's unclear. Okay, so to you it's like

36:14

a little, like a symptom of the bigger

36:16

thing that's not that important. Yeah, I mean

36:18

it's interesting that like human systems

36:21

seem to claim that they're conscious. And I guess

36:23

it kind of like says something and they straight up like,

36:25

okay, what do people mean? Even if you don't believe

36:27

in consciousness, what do people mean when they say consciousness?

36:30

And there's definitely like

36:31

meanings to it. What's your favorite

36:33

thing to eat? Pizza.

36:37

Cheese pizza, what are the toppings? I like cheese pizza.

36:40

Don't say pineapple. No, I don't like pineapple. Okay,

36:42

pepperoni pizza. Have they put any ham on it? Oh, that's real

36:44

bad. What's the best pizza? What

36:47

are we talking about here? Like, do you like cheap crappy pizza?

36:50

Chicago deep dish cheese pizza. Oh,

36:52

that's my favorite. There you go, you bite into a deep

36:54

dish, a Chicago deep dish pizza,

36:56

and it feels like you were starving. You haven't

36:59

eaten for 24 hours. You just bite

37:01

in and you're hanging out with somebody that

37:03

matters a lot to you and you're there with the pizza. Sounds

37:05

real nice, huh? Yeah, all right. It

37:08

feels like something. I'm George

37:10

motherfucking Hotz eating a fucking

37:13

Chicago deep dish pizza. There's

37:15

just a full peak

37:17

living experience of

37:20

being human, the top of the human condition.

37:23

Sure. It feels like something

37:25

to experience that. Why

37:28

does it feel like something? That's consciousness,

37:31

isn't it? If that's the word

37:33

you want to use to describe it, sure. I'm not going to deny

37:35

that that feeling exists. I'm not going to deny that I experienced

37:38

that feeling. When, I

37:40

guess what I kind of take issue to is that

37:42

there's some like, like how

37:44

does it feel to be a web server? Do 404s hurt?

37:49

Not yet. How would you know what suffering

37:51

looked like? Sure, you can recognize a suffering

37:53

dog because we're the same stack as the dog. All

37:56

the bio stack stuff, especially mammals,

37:59

it's really easy. You can... Game

38:02

recognizes game. Yeah. Versus

38:04

the silicon stack stuff. It's like, you

38:06

have no idea.

38:08

You have... Wow, the little thing

38:10

has learned to mimic.

38:15

But then I realized that that's all we are too.

38:18

Oh look, the little thing has learned to mimic. Yeah.

38:21

I guess, yeah, 404 could be suffering, but it's so far from our kind

38:23

of living.

38:27

Living

38:30

organism, our kind of stack. But

38:32

it feels like AI can start

38:35

maybe mimicking the biological

38:37

stack better, better, better. Because it's trained. Retrained

38:39

it, yeah. And so, in

38:41

that, maybe that's the definition of consciousness. Is

38:44

the bio stack consciousness? The definition

38:46

of consciousness is how close something looks to human.

38:48

Sure, I'll give you that one.

38:50

No, how close something

38:52

is to the human experience. Sure.

38:55

It's a very anthropocentric

38:58

definition, but... That's all we got. Sure.

39:01

No, and I don't mean to like... I think there's

39:03

a lot of value in it. Look, I just started my second company.

39:05

My third company will be AI Girlfriends.

39:08

No, like I mean it. I want to find out what your fourth company

39:11

is after that. Oh wow. Because I think once

39:13

you have AI Girlfriends, it's... Oh

39:17

boy. Does it get interesting?

39:20

Well, maybe let's go there. I mean, the relationships

39:22

with AI, that's creating human-like

39:24

organisms, right?

39:27

And part of being human is being conscious, is

39:30

having the capacity to suffer, having the capacity

39:32

to experience this life richly in

39:34

such a way that you can empathize...

39:36

The AI system can empathize with you

39:38

and you can empathize with it. Or you can

39:40

project your anthropomorphic

39:44

sense of what the other entity is experiencing.

39:48

And an AI model would need

39:50

to create that experience inside your mind. And

39:53

it doesn't seem that difficult. Yeah, but okay.

39:55

So here's where it actually gets totally different, right?

39:59

When you interact... with another human, you can

40:01

make some assumptions.

40:04

When you interact with these models, you can't. You

40:06

can make some assumptions that that other human experiences

40:09

suffering and pleasure in a pretty

40:11

similar way to you do. The golden rule applies.

40:15

With an AI model, this isn't really true.

40:18

These large language models are good at fooling people

40:20

because they were trained on a whole bunch

40:23

of human data and told to mimic it.

40:25

But if the AI system says,

40:28

hi, my name is Samantha,

40:31

it has a backstory. I went to college

40:33

here and there. Maybe you'll integrate

40:36

this in the AI system. I made some chatbots.

40:38

I gave them backstories. It was lots of fun. I

40:40

was so happy when Llama came out. Yeah. We'll

40:43

talk about Llama. We'll talk about all that. But like,

40:45

you know, the rock with the smiley face. It

40:49

seems pretty natural for you to anthropomorphize

40:51

that thing and then start dating it. Before

40:55

you know it, you're married and

40:58

have kids. With a rock? With

41:00

a rock. And there's pictures on Instagram

41:02

with you and a rock and a smiley face. To

41:04

be fair, like, you know, something that people generally look

41:06

for in the look of someone to date is intelligence

41:08

in some

41:10

form. And the rock doesn't really have

41:12

intelligence. Only a pretty desperate person would date a rock.

41:16

I think we're all desperate deep down. Oh,

41:18

not rock level desperate. All

41:20

right. Not

41:23

rock level desperate,

41:26

but AI level

41:28

desperate. I don't know. I think all

41:30

of us have a deep loneliness. It just feels

41:32

like the language models are there. Oh,

41:35

I agree. And you know what? I won't even say this so cynically.

41:37

I will actually say this in a way that like, I want AI

41:39

friends. I do. Yeah. Like,

41:42

I would love to. You know, again, the

41:44

language models now are still a little like,

41:48

people are impressed with these GPT things.

41:50

And I look at like, or like, or

41:53

the copilot, the coding one. And

41:55

I'm like, okay, this is like junior engineer

41:57

level. And these people are like Fiverr-level

41:59

artists and copywriters.

42:02

Like, okay, great, we got like

42:04

Fiverr and like junior engineers, okay,

42:06

cool. Like, and this is just the start

42:09

and it will get better, right? Like

42:11

I can't wait to have AI friends who

42:13

are more intelligent than I am.

42:15

So Fiverr is just a temporary, it's not the

42:17

ceiling. No, definitely not. Is

42:21

it considered cheating

42:23

when you're talking to an AI model, emotional

42:26

cheating? That's

42:30

up to you and your human partner to

42:32

define. Oh, you have to, all right. Yeah,

42:35

you have to have that conversation, I guess.

42:37

All right, I mean, integrate that

42:40

with porn and all this stuff. No,

42:42

I mean, it's similar kind of to porn. Yeah. Yeah. Right,

42:45

I think people in relationships have different views on that.

42:47

Yeah, but most people don't

42:50

have like serious

42:54

open conversations about all the different

42:57

aspects of what's cool and what's not. And

43:00

it feels like AI is a really weird conversation

43:02

to have. The

43:04

porn one is a good branching off point. Like

43:06

these things, you know, one of my scenarios that I put in my chatbot

43:09

is I go, you know, a

43:12

nice girl named Lexi, she's 20, she just moved

43:14

out to LA, she wanted to be an actress, but she

43:16

started doing OnlyFans instead and you're on a date with her,

43:18

enjoy. Oh,

43:22

man. Yeah,

43:24

and so is that if you're actually dating somebody

43:26

in real life, is that cheating? I

43:29

feel like it gets a little weird. Sure. It gets

43:31

real weird. It's like, what are you allowed to

43:33

say to an AI bot? Imagine having

43:35

that conversation with a significant other. I mean,

43:37

these are all things for people to define in their relationships.

43:40

What it means to be human is just gonna start to

43:42

get weird. Especially online.

43:44

Like, how do you know? Like, there'll

43:46

be moments when you'll have what you

43:48

think is a real human you interacted

43:50

with on Twitter for years and you realize it's not.

43:53

I spread, I love this meme,

43:56

heaven banning.

43:57

You know about shadow banning? Yeah. Shadow

44:00

banning, okay, you post, no one can see it. Heaven

44:02

banning, you post, no one can see

44:04

it, but a whole lot of AIs are spun

44:06

up to interact with you. Well,

44:10

maybe that's what the way human civilization ends

44:12

is all of us are heaven banned. There's

44:15

a great, it's called My Little Pony

44:17

Friendship is Optimal.

44:18

It's a sci-fi story that explores

44:21

this idea. Friendship is optimal. Friendship

44:24

is optimal. Yeah, I'd like to have some, at least

44:26

on the intellectual realm, from AI friends

44:29

that argue with me.

44:30

But the romantic realm

44:33

is weird, definitely weird.

44:38

But not out of the realm of the

44:41

kind of weirdness that human civilization is capable

44:44

of, I think. I want

44:46

it. Look, I want it. If no one else wants

44:48

it, I want it. Yeah, I think a lot of people probably

44:50

want it. There's a deep loneliness.

44:53

And it'll fill their loneliness

44:56

and, you know, it's just, we'll only advertise to you

44:58

some of the time. Yeah, maybe the conceptions

45:00

of monogamy change too. Like, I grew up

45:02

in a time, like, I value monogamy, but maybe

45:04

that's a silly notion when you have

45:07

arbitrary number of AI systems. Mm,

45:10

this interesting path from rationality

45:13

to polyamory, that doesn't make sense

45:15

for me. For you. But you're just

45:17

a biological organism who was born before,

45:20

like, the internet really took off.

45:23

The crazy thing is, like,

45:25

culture is whatever we define it as.

45:28

These things are not, like... There

45:31

is the is-ought problem in moral philosophy, right? There's

45:33

no, like, okay, what is might be that, like,

45:36

computers are capable of mimicking, you

45:39

know, girlfriends perfectly. They pass the girlfriend Turing

45:41

test, right? But that doesn't say anything about

45:43

ought. That doesn't say anything about how we ought to

45:45

respond to them as a civilization. That doesn't say we ought

45:47

to get rid of monogamy, right? That's a

45:50

completely separate question, really a religious

45:52

one.

45:52

And Turing test, I wonder

45:54

what that looks like. Girlfriend Turing test. Are you

45:57

writing that? Will

45:59

you be the... the Alan Turing of the

46:01

21st century that writes the girlfriend Turing

46:03

test paper? No, I mean, of course, my AI girlfriends,

46:06

their goal is to pass the girlfriend Turing test.

46:09

No, but there should be like a paper that kind

46:11

of defines the test. Or,

46:14

I mean, the question is if it's deeply personalized

46:16

or there's a common thing that really gets everybody.

46:21

Yeah, I mean, you know, look, we're a company,

46:23

we don't have to get everybody, we just have to get a large enough clientele

46:26

to stay. I like how you're already thinking

46:28

company. All right, let's, before

46:30

we go to company number three and company number

46:32

four, let's go to company number two. All right.

46:35

TinyCorp,

46:37

possibly one of the greatest names of all

46:39

time for a company. You've

46:41

launched a new company called

46:43

TinyCorp that leads the development of TinyGrad.

46:46

What's the origin story

46:48

of TinyCorp and TinyGrad?

46:50

I started TinyGrad as

46:53

like a toy project, just to teach myself,

46:55

okay, like, what is a convolution? What

46:58

are all these options you can pass to them? What is

47:00

the derivative of a convolution, right? Very similar

47:03

to how Karpathy wrote micrograd. Very

47:06

similar.
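
(As an illustration of the kind of toy project being described, in the spirit of Karpathy's micrograd rather than actual tinygrad code, a minimal scalar reverse-mode autodiff might look like the sketch below; the class and variable names are hypothetical.)

# Minimal scalar reverse-mode autodiff, micrograd-style. Illustrative only.
class Value:
    def __init__(self, data, parents=()):
        self.data = data              # the scalar value
        self.grad = 0.0               # d(output)/d(this value), filled by backward()
        self._parents = parents       # Values this one was computed from
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad     # d(a+b)/da = 1
            other.grad += out.grad    # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topological order, so each node's grad is complete before its parents' run
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# usage: y = x*x + x*3, so dy/dx = 2x + 3 = 7 at x = 2
x = Value(2.0)
y = x * x + x * 3
y.backward()
print(y.data, x.grad)                 # 10.0 7.0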

47:08

And then

47:09

I started realizing, I started thinking

47:11

about like

47:12

AI chips. I started thinking about chips that

47:14

run AI and I

47:17

was like, well, okay, this is going to be

47:19

a really big problem. If Nvidia

47:22

becomes a monopoly here, how

47:25

long before Nvidia is nationalized?

47:28

So you, one

47:30

of the reasons to start

47:33

TinyCorp is to challenge Nvidia.

47:36

It's not so much

47:37

to challenge Nvidia. I actually, I

47:39

like Nvidia and it's to make

47:42

sure power

47:45

stays decentralized. Yeah.

47:48

And here's computational

47:50

power. You see

47:52

Nvidia as kind of locking down the

47:54

computational power of the world.

47:56

If Nvidia becomes

47:58

just like 10X better than everything

47:59

else, you're giving a big advantage

48:02

to somebody who can secure NVIDIA

48:05

as a resource. Yeah.

48:07

In fact,

48:08

if Jensen watches this podcast, he may want to consider

48:11

this.

48:12

He may want to consider making sure his company is not

48:14

nationalized. Do

48:16

you think that's an actual threat? Oh, yes. No,

48:20

but there's so much, you know, there's

48:23

AMD. So we have NVIDIA and AMD,

48:25

great. All right.

48:27

You don't think there's like a push

48:31

towards like selling, like Google

48:33

selling TPUs or something like this? You

48:35

don't think there's a push for that? Have you seen it? Google

48:37

loves to rent you TPUs. It

48:39

doesn't. You can't buy it at Best Buy?

48:42

No.

48:43

So I started work on a chip. I

48:46

was

48:48

like, okay, what's it going to take to make a chip? And

48:51

my first notions were all completely wrong about why,

48:53

about like how you could improve on GPUs. And

48:56

I will take this. This is from Jim

48:58

Keller on your podcast. And

49:00

this is one of my absolute favorite

49:03

descriptions of computation.

49:05

So there's three kinds of computation paradigms

49:08

that are common in the world today.

49:10

They're CPUs. And

49:12

CPUs can do everything. CPUs can do

49:14

add and multiply.

49:15

They can do load and store, and they can do

49:17

compare and branch. And when I say they can

49:19

do these things, they can do them all fast, right? So

49:22

compare and branch are unique to CPUs. And

49:24

what I mean by they can do them fast is they can do things like

49:27

branch prediction and speculative execution. And they

49:29

spend tons of transistors and they use like super deep

49:31

reorder buffers in order to make these things

49:33

fast. Then you have a simpler computation

49:36

model GPUs. GPUs can't really do

49:38

compare and branch. I mean, they can, but it's horrendously

49:40

slow. But GPUs can do arbitrary

49:42

load and store. GPUs can do things

49:45

like X dereference Y.

49:47

So they can fetch from arbitrary pieces of memory. They

49:49

can fetch from memory that is defined by the contents of the data.

49:52

The third model of computation

49:54

DSPs and DSPs are just add and

49:56

multiply.

49:57

Like they can do load and store, but only static loads and

49:59

stores, only loads and stores that are known before

50:02

the program runs. And you look at neural

50:04

networks today and 95% of neural networks

50:06

are all the DSP paradigm,

50:09

they are just statically scheduled

50:12

adds and multiplies.
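
(To make the three paradigms concrete, here is a small illustrative sketch in Python. It is my framing of the distinction, not code from the conversation: the CPU-style function branches on the data, the GPU-style function computes addresses from the data, and the DSP-style function is a fixed, statically scheduled sequence of multiplies and adds whose op sequence does not depend on the values at all.)

import numpy as np

x = np.array([1.0, -2.0, 3.0, -4.0])
w = np.array([0.5, 0.5, 0.5, 0.5])
idx = np.array([3, 2, 1, 0])

# CPU-style: compare-and-branch on the data (what branch predictors speculate on)
def relu_branchy(v):
    out = []
    for e in v:
        if e > 0:              # data-dependent branch
            out.append(e)
        else:
            out.append(0.0)
    return np.array(out)

# GPU-style: arbitrary load, with the address computed from the data ("X dereference Y")
def gather(v, indices):
    return v[indices]          # fetch locations defined by the contents of idx

# DSP-style: only statically scheduled multiplies and adds; the sequence of ops is
# known before the program runs, no matter what values x and w hold
def dot_static(a, b):
    acc = 0.0
    for i in range(4):         # trip count fixed ahead of time, not by the data
        acc += a[i] * b[i]
    return acc

print(relu_branchy(x), gather(x, idx), dot_static(x, w))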

50:14

So TinyGrad really took this idea,

50:17

and I'm still working on it to extend this as

50:19

far as possible. Um, every

50:21

stage of the stack has Turing completeness, right? Python

50:24

has Turing completeness. And then we take Python and

50:26

we go into C++, which is Turing complete. And

50:28

maybe C++ calls into some CUDA kernels,

50:30

which are Turing complete. The CUDA kernels go through LLVM,

50:33

which is Turing complete, into PTX, which is Turing complete,

50:35

to SASS, which is Turing complete, on a current Turing complete processor.

50:37

I want to get Turing completeness out of the stack entirely.

50:40

Because once you get rid of Turing completeness, you can reason about

50:42

things. Rice's theorem and the halting problem

50:45

do not apply to add-mul machines.

50:47

Okay. What's

50:50

the power and the value of getting Turing completeness out

50:52

of, out of, are we talking about

50:54

the hardware or the software? Every

50:57

layer of the stack, every layer

50:59

of the stack, removing Turing completeness allows

51:01

you to reason about things, right? So

51:03

the reason you need to do branch prediction in a CPU

51:06

and the reason it's prediction and the branch predictors are,

51:08

I think they're like 99% accurate on CPUs. Why

51:10

did they get 1% of them wrong? Well, they

51:12

get 1% wrong because you can't

51:15

know.

51:15

Right. That's the halting problem. It's equivalent

51:18

to the halting problem to say whether a branch is going

51:20

to be taken or not. Um, I can

51:22

show that, but

51:24

the ML

51:26

machine, the neural network runs

51:28

the identical compute every time. The

51:31

only thing that changes is the data. So

51:34

when you realize this, you think about, okay,

51:37

how can we build a computer? How can we build

51:39

a stack that takes maximal advantage of

51:41

this idea? Uh,

51:43

so

51:44

what makes tiny grad different from other

51:46

neural network libraries is it does not have

51:49

a primitive operator even for matrix multiplication.

51:52

And this is every single one. They

51:54

even have primitive operators for things like convolutions.

51:56

So no MatMul. No MatMul. Well,

51:59

here's what a

51:59

MatMul is, so I'll use my hands to talk here.

52:02

So if you think about a cube, and I put my two

52:04

matrices that I'm multiplying on two faces of the

52:07

cube, you can

52:09

think about the matrix multiply as, okay,

52:12

the n cubed, I'm going to multiply for

52:14

each one in the cube, and then I'm going to do a sum, which

52:16

is a reduce up to here to the third

52:18

face of the cube, and that's your multiplied matrix. So

52:21

what a matrix multiply is, is a bunch of shape

52:24

operations, right? A bunch of permutes,

52:26

reshapes, and expands on the two matrices.

52:29

A multiply, n cubed, a

52:31

reduce, n cubed, which gives

52:33

you an n squared matrix. Okay, so

52:36

what is the minimum number of operations that can accomplish

52:38

that if you don't have MatMul

52:40

as a primitive? So TinyGrad has

52:43

about 20, and you can compare TinyGrad's

52:46

op-set or IR to things like XLA

52:49

or PrimTorch. So XLA and

52:51

PrimTorch are ideas where like, okay, Torch

52:54

has like 2000 different kernels,

52:58

PyTorch 2.0 introduced PrimTorch,

53:00

which has only 250.

53:01

TinyGrad has order of magnitude 25.

53:05

It's 10X less than

53:07

XLA or PrimTorch. And

53:09

you can think about it as kind of like RISC versus CISC,

53:12

right?

53:13

These other things are CISC-like

53:15

systems. TinyGrad is RISC.

53:18

And RISC won. RISC architecture

53:21

is going to change everything. 1995, Hackers. Wait,

53:25

really? That's an actual thing? Angelina

53:27

Jolie delivers the line, RISC architecture

53:29

is going to change everything in 1995. Wow. And

53:32

here we are with ARM in the phones and

53:34

ARM everywhere. Wow, I

53:36

love it when movies actually have real things in

53:38

them. Right. Okay, interesting.

53:41

So you're thinking of this as the RISC

53:44

architecture of ML

53:45

Stack. 25, huh?

53:49

What can you go through the four

53:51

op types?

53:56

Okay, so you have UnaryOps, which

53:59

take in... a Tensor

54:02

and return a tensor of the same size and do some

54:04

unary op to it: exp, log, reciprocal,

54:08

sine. Right, they take in one and they're point

54:10

wise. Really? You...

54:13

Yeah, really. Almost all activation

54:15

functions are unary ops. Some

54:17

combinations of unary ops together

54:19

is still a unary op

54:22

Then you have binary ops. Binary ops are like

54:24

pointwise addition, multiplication, division, compare.

54:28

It takes in two tensors of equal size and

54:31

outputs one tensor. Then

54:33

you have reduce ops. Reduce

54:36

ops will like take a three-dimensional tensor and

54:38

turn it into a two-dimensional tensor, or

54:40

a three-dimensional tensor into a zero-dimensional tensor.

54:42

think like a sum or max

54:44

are really the common ones there. And

54:47

then the fourth type is movement ops and

54:50

movement ops are different from the other types because they don't actually

54:52

require computation. They require different ways to look

54:54

at memory. So that includes reshapes,

54:57

permutes, expands, flips;

55:00

those are the main ones probably. So with that you have enough

55:02

to make a MatMul and convolutions,

55:05

and every convolution you can imagine: dilated

55:07

convolutions, strided convolutions, transposed

55:10

convolutions.
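
A rough sketch of how a MatMul falls out of just those op types, with NumPy standing in for the movement (reshape/expand), binary (multiply), and reduce (sum) ops. This illustrates the idea, not tinygrad's actual code path:

```python
import numpy as np

def matmul_from_primitives(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    # Movement ops: reshape/expand the inputs onto two faces of the (n, k, m) cube.
    # These are just views; no computation happens here.
    a = A[:, :, None]   # (n, k) -> (n, k, 1)
    b = B[None, :, :]   # (k, m) -> (1, k, m)
    # Binary op: pointwise multiply over the broadcasted (n, k, m) cube.
    cube = a * b
    # Reduce op: sum over the shared k axis down to the (n, m) output face.
    return cube.sum(axis=1)

A, B = np.random.rand(4, 5), np.random.rand(5, 3)
assert np.allclose(matmul_from_primitives(A, B), A @ B)
```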

55:12

You write on GitHub about laziness,

55:16

showing a MatMul. Matrix

55:18

multiplication; see how, despite the style, it

55:20

is fused into one kernel with

55:23

the power of laziness. Can you elaborate

55:25

on this power of laziness? Sure. So if

55:27

you type in PyTorch A times

55:30

B plus C,

55:32

What this is going to do is

55:34

it's going to first multiply

55:36

A and B and store that result

55:38

into memory

55:39

and then it is going to add C by reading

55:42

that result from memory, reading C from memory,

55:44

and writing that out to memory. There

55:47

are way more loads and stores to memory than

55:49

you need there. If you don't actually do

55:52

A times B as soon as you see it,

55:54

if you wait

55:55

until the user actually realizes that

55:58

tensor, until the laziness is actually resolved, you

56:00

can fuse that plus C. This is like,

56:03

it's the same way Haskell works.
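
A toy sketch of that laziness, illustrative only (tinygrad's real lazy buffers and scheduler are much more involved): operations just build a graph, and nothing runs until the result is realized, at which point the whole expression is visible and can be fused.

```python
# Toy lazy tensor: operators build an expression tree instead of computing immediately.
# Illustrative only; not tinygrad's actual LazyBuffer or scheduler.
import numpy as np

class Lazy:
    def __init__(self, data=None, op=None, srcs=()):
        self.data, self.op, self.srcs = data, op, srcs
    def __mul__(self, other): return Lazy(op="mul", srcs=(self, other))
    def __add__(self, other): return Lazy(op="add", srcs=(self, other))
    def realize(self) -> np.ndarray:
        # By the time this runs, the whole a*b+c tree is visible, which is what lets
        # a real compiler emit one fused kernel. This toy just walks the tree recursively.
        if self.op is None:
            return self.data
        lhs, rhs = (s.realize() for s in self.srcs)
        return lhs * rhs if self.op == "mul" else lhs + rhs

a, b, c = (Lazy(np.random.rand(8)) for _ in range(3))
out = a * b + c       # no computation yet, just graph building
print(out.realize())  # evaluated here, once the full expression is known
```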

56:05

So what's the process of porting a model

56:08

into TinyGrad? So TinyGrad's

56:10

front end looks very similar to PyTorch. I

56:13

probably could make a perfect,

56:15

or pretty close to perfect interop layer if I

56:17

really wanted to. I think that there's some things that

56:19

are nicer about TinyGrad syntax than PyTorch, but

56:22

the front end looks very Torch-like. You can also

56:24

load in ONNX models. We have

56:27

more ONNX tests passing than Core ML. Core

56:30

ML. Okay, so- We'll pass ONNX

56:32

runtime soon. What about the

56:34

developer experience with TinyGrad? What

56:38

it feels like versus PyTorch?

56:40

By the way, I

56:43

really like PyTorch. I think that it's actually a very

56:45

good piece of software. I think that they've

56:47

made a few different trade-offs, and

56:49

these different trade-offs are

56:52

where, TinyGrad takes

56:54

a different path. One of the biggest differences

56:56

is it's really easy to see the kernels

56:59

that are actually being sent to the GPU.

57:00

If you run PyTorch on

57:02

the GPU,

57:04

you do some operation, and you don't know what

57:06

kernels ran. You don't know how many kernels ran. You

57:08

don't know how many flops were used. You don't know how much

57:10

memory accesses were used. TinyGrad

57:12

type DEBUG=2, and it

57:14

will show you in this beautiful style every

57:17

kernel that's run.

57:19

How many flops

57:21

and how many bytes.
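
For reference, the kind of invocation being described probably looks something like this. DEBUG is tinygrad's logging environment variable, but the exact import path and output format may differ between versions, so treat this as a sketch:

```python
# Save as matmul_demo.py and run with:  DEBUG=2 python3 matmul_demo.py
# With DEBUG=2 set, tinygrad prints each kernel it launches along with op and byte counts.
# (Sketch only; the exact output format varies between tinygrad versions.)
from tinygrad.tensor import Tensor

a = Tensor.rand(1024, 1024)
b = Tensor.rand(1024, 1024)
(a @ b).realize()  # forces the lazy graph to be scheduled and run, so the kernels get logged
```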

57:24

So can you just linger

57:26

on what problem

57:28

TinyGrad solves? TinyGrad

57:30

solves the problem of porting new ML

57:32

accelerators quickly. One

57:34

of the reasons, tons

57:37

of these companies now, I think Sequoia

57:40

marked Graphcore to zero.

57:42

Cerebras, Tenstorrent,

57:44

Groq, all of

57:46

these ML accelerator companies,

57:49

they built chips. The chips were good. The

57:51

software was terrible. And

57:54

part of the reason is because I think the same problem is happening

57:56

with Dojo. It's really, really

57:58

hard to write a PyTorch port.

57:59

because you have to write 250 kernels

58:03

and you have to tune them all for performance. What

58:06

does Jim Keller think about TinyGrad?

58:08

You guys hung out

58:10

quite a bit, so he's, you know, he

58:13

was involved with Tenstorrent. What's

58:15

his praise and what's his criticism

58:18

of what you're doing with your life?

58:20

Look, my

58:23

prediction for Tenstorrent is that they're gonna pivot to making

58:25

RISC-V chips. CPUs.

58:29

CPUs. Why?

58:33

Because AI accelerators are

58:35

a software problem, not really a hardware problem. Oh,

58:38

interesting, so you don't think, you

58:40

think the diversity of AI accelerators

58:43

in the hardware space is not going to be

58:45

a thing that exists long-term.

58:47

I think what's gonna happen is if

58:49

I can, okay,

58:51

if you're trying to make an AI accelerator,

58:54

you better have the capability

58:56

of writing a torch-level

58:58

performance stack on NVIDIA GPUs.

59:01

If you can't write a torch stack on NVIDIA

59:03

GPUs, and I mean all the way, I mean down to the driver,

59:05

there's no way you're gonna be able to write it on your chip because

59:08

your chip's worse than an NVIDIA GPU. The first

59:10

version of the chip you tape out, it's definitely worse. Oh,

59:12

and you're saying writing that stack is really tough. Yes,

59:15

and not only that, actually, the chip that you tape out, almost

59:17

always because you're trying to get advantage over NVIDIA, you're

59:19

specializing the hardware more. It's

59:21

always harder to write software for more specialized

59:24

hardware. Like a GPU's pretty generic,

59:26

and if you can't write an NVIDIA stack, there's

59:28

no way you can write a stack for your chip. So

59:31

my approach with TinyGrad is first, write

59:33

a performant NVIDIA stack. We're targeting

59:36

AMD. So

59:38

you did say a few to NVIDIA a little bit, with

59:41

love. With love. Yeah, but so what- It's

59:43

like the Yankees, you know? I'm a Mets fan. Oh,

59:46

you're a Mets fan. A risk

59:48

fan and a Mets fan. What's the hope that AMD

59:51

has? You did a build

59:53

with AMD recently that I saw. That

59:56

was the PTSD. There,

59:59

the 7900

59:59

XTX compared to the

1:00:02

RTX 4090 or 4080. Well,

1:00:04

let's start with the fact that the 7900 XTX kernel

1:00:07

drivers don't work. And if you run demo

1:00:09

apps in loops, it panics the kernel.

1:00:11

Okay, so this is a softer issue.

1:00:15

Lisa Sue responded to my email.

1:00:17

Oh. I reached out, I was like,

1:00:19

this is, you know, really?

1:00:22

Like, I understand if your seven

1:00:25

by seven transposed Winograd conv

1:00:27

is slower than Nvidia's, but literally when

1:00:29

I run demo apps in a loop, the

1:00:31

kernel panics.

1:00:33

So just adding that loop.

1:00:36

Yeah, I just literally took their demo apps and wrote like, while

1:00:38

true; do the app;

1:00:41

done, in a bunch of screens. Right,

1:00:43

this is like the most primitive fuzz testing.

1:00:46

Why do you think that is? They're just

1:00:48

not seeing a market in

1:00:51

machine learning? They're changing,

1:00:53

they're trying to change. They're trying to change.

1:00:55

And I had a pretty positive interaction with them this week.

1:00:57

Last week I went on YouTube, I was just like, that's

1:00:59

it. I give up on AMD. Like, this is their

1:01:02

driver,

1:01:02

doesn't even, I'm not gonna, I'll

1:01:05

go with Intel GPUs. Intel GPUs have

1:01:07

better drivers. So

1:01:10

you're kind of spearheading the

1:01:13

diversification of GPUs.

1:01:16

Yeah, and I'd like to extend that diversification

1:01:18

to everything. I'd like to diversify

1:01:20

the, right, the more,

1:01:25

my central thesis about the world is,

1:01:28

there's things that centralize power and they're bad. And

1:01:30

there's things that decentralize power and they're good.

1:01:33

Everything I can do to help decentralize power,

1:01:35

I'd like to do.

1:01:38

So you're really worried about the centralization of Nvidia, that's interesting.

1:01:41

And you don't have a fundamental hope for the

1:01:44

proliferation of ASICs, except

1:01:46

in the cloud.

1:01:49

I'd like to help them with software. No, actually, there's

1:01:51

only, the only ASIC that is remotely successful

1:01:54

is Google's TPU. And the only

1:01:56

reason that's successful is because Google wrote

1:01:58

a

1:01:59

machine learning framework.

1:02:00

I think that you have to write a competitive

1:02:02

machine learning framework in order to be able

1:02:04

to build an ASIC.

1:02:07

You think Meta with PyTorch builds

1:02:09

a competitor? I hope so. They

1:02:12

have one. They have an internal one. Internal.

1:02:14

I mean, public facing with a nice cloud

1:02:16

interface and so on. I don't want

1:02:18

a cloud. You don't like cloud. I don't

1:02:20

like cloud. What do you think is the fundamental

1:02:22

limitation of cloud? Fundamental limitation

1:02:25

of cloud is who owns the off switch.

1:02:27

So it's a power to the people. Yeah.

1:02:30

And you don't like the man to have all the

1:02:32

power. Exactly. All right.

1:02:35

And right now, the only way to do that is with AMD

1:02:37

GPUs if you want performance and

1:02:39

stability. Interesting.

1:02:43

But it's a costly investment emotionally

1:02:45

to go with AMDs. Well,

1:02:48

let me add sort of on a tangent to ask you, what did,

1:02:52

you've built quite a few PCs. What's your advice

1:02:54

on how to build a good custom PC for

1:02:57

let's say for the different applications they use for

1:02:59

gaming, for machine learning? Well, you

1:03:01

shouldn't build one. You should buy a box from the tiny Corp.

1:03:04

I heard rumors, whispers

1:03:08

about this box in the tiny Corp.

1:03:10

What's this thing look like? What is it?

1:03:12

What is it called? It's called the tiny box. Tiny

1:03:14

box? It's $15,000. And

1:03:18

it's almost a petaflop of compute. It's

1:03:21

over a hundred gigabytes of GPU RAM. It's

1:03:23

over five terabytes per second of

1:03:26

GPU memory bandwidth.

1:03:29

I'm gonna put like four NVMe's in

1:03:31

RAID. You're gonna get

1:03:34

like 20, 30 gigabytes per second of drive read bandwidth.

1:03:38

I'm gonna build like the best

1:03:40

deep learning box that I can that

1:03:42

plugs into one wall outlet.

1:03:45

Okay. Can you go through those specs again a little

1:03:47

bit from memory? Yeah,

1:03:49

so it's almost a petaflop of compute. So AMD,

1:03:51

Intel? Today I'm

1:03:53

leaning toward AMD, but

1:03:56

we're pretty agnostic to the type of compute.

1:03:59

The main limiting spec is a 120 volt 15

1:04:02

amp circuit.

1:04:06

Okay. Well, I mean it, because in order to like,

1:04:09

there's a plug over there, right?

1:04:12

You have to be able to plug it in.

1:04:14

We're also gonna sell the tiny rack, which

1:04:17

like, what's the most power you can

1:04:19

get into your house without arousing suspicion? And

1:04:22

one of the answers is an electric car

1:04:24

charger.

1:04:25

Wait, where does the rack go? Your

1:04:27

garage. Interesting. The

1:04:30

car charger.

1:04:31

A wall outlet is about 1500 watts. A

1:04:34

car charger is about 10,000 watts.

1:04:36

What is the most amount

1:04:38

of power you can get your hands on

1:04:40

without arousing suspicion? That's right.

1:04:42

George Hotz. Okay. So

1:04:46

the tiny box and you said NVMe's and RAID.

1:04:49

I forget what you said about memory, all that kind of

1:04:51

stuff. Okay. What about with

1:04:53

GPUs? Again, probably 7900 XTX's,

1:04:58

but maybe 3090's, maybe A770's.

1:05:01

Those are intense. You're flexible

1:05:03

or still exploring? I'm still

1:05:05

exploring. I wanna deliver

1:05:07

a really good experience to people. And

1:05:11

yeah, what GPUs I end up going with. Again, I'm

1:05:13

leaning toward AMD.

1:05:14

We'll see. You know, in my

1:05:16

email, what I said to AMD is like,

1:05:19

just dumping the code on GitHub is not open

1:05:21

source.

1:05:22

Open source is a culture. Open

1:05:24

source means that your issues are not all

1:05:27

one year old stale issues. Open

1:05:29

source means developing

1:05:31

in public. And if you guys can commit

1:05:33

to that, I see a real future for

1:05:35

AMD as a competitor to Nvidia. Well,

1:05:39

I'd love to get a tiny box at MIT.

1:05:41

So whenever it's ready, let's

1:05:43

do it. We're taking pre-orders. I took this from Elon.

1:05:46

I'm like, all right, $100 fully refundable

1:05:48

pre-orders. Is it gonna be like the Cybertruck?

1:05:50

It's gonna take a few years or? No, I'll

1:05:52

try to do it faster. It's a lot simpler. It's a lot simpler

1:05:54

than the truck. Well, there's complexities

1:05:57

not to just the putting

1:05:59

the thing together.

1:05:59

about shipping and all this kind of stuff. The thing

1:06:02

that I want to deliver to people out of the box is

1:06:04

being able to run 65 billion parameter

1:06:06

llama in FP16

1:06:08

in real time, in a good, like 10 tokens

1:06:10

per second or five tokens per second or something. Just,

1:06:12

it works. Yep, just works. Llama's

1:06:15

running or something

1:06:17

like llama. Experience, yeah,

1:06:19

or I think Falcon is the new one, experience

1:06:22

a chat with the largest language model that

1:06:24

you can have in your house.

1:06:26

Yeah, from a wall plug. From

1:06:28

a wall plug, yeah. Actually, for inference,

1:06:31

it's not like even more power would help you get more.

1:06:34

Even more power wouldn't get you more. Well,

1:06:37

no, there's just the biggest, the biggest model released is 65

1:06:39

billion parameter llama as far as I know.

1:06:42

So it sounds like Tiny Box will naturally pivot

1:06:44

towards company number three, because you could just

1:06:46

get the girlfriend and,

1:06:50

or boyfriend. That

1:06:52

one's harder, actually. The boyfriend is harder? Boyfriend's

1:06:54

harder, yeah. I think that's a

1:06:56

very biased statement. I think

1:06:58

a lot of people would just say, why

1:07:01

is it harder to replace

1:07:03

a boyfriend than a girlfriend

1:07:05

with the artificial LLM? Because women

1:07:07

are attracted to status and power and men are

1:07:09

attracted to youth and beauty. No,

1:07:13

I mean, that's what I mean. Both are

1:07:15

mimicable easily through the language model.

1:07:17

No, no machines do not

1:07:19

have any status or real power.

1:07:21

I don't know, I think you both, well,

1:07:24

first of all, you're using language mostly to

1:07:29

communicate youth

1:07:31

and beauty and power and status. But

1:07:33

status fundamentally is a zero sum game,

1:07:35

whereas youth and beauty are not.

1:07:37

No, I think status is a narrative you can construct.

1:07:40

I don't think status is real. I

1:07:44

don't know. I just think that that's why it's harder.

1:07:47

You know, yeah, maybe it is my biases. I

1:07:49

think status is way easier to fake. I

1:07:51

also think that, you know, men are probably

1:07:53

more desperate and more likely to buy my products, so

1:07:55

maybe they're a better target market. Desperation

1:07:58

is interesting, easier to fool.

1:07:59

Cool. Yeah. I could

1:08:02

see that. Yeah, look, I mean, look, I know you can look at porn

1:08:04

viewership numbers, right?

1:08:05

A lot more men watch porn than women.

1:08:07

You can ask why that is. Wow,

1:08:09

there's a lot of questions and answers

1:08:12

you can get there. Anyway,

1:08:15

with the TinyBox, how

1:08:17

many GPUs in TinyBox? Six.

1:08:20

Ha ha ha ha ha ha ha ha. Oh

1:08:24

man. And I'll tell you why it's six. Yeah. So

1:08:27

AMD Epic processors have 128 lanes of PCIe.

1:08:31

I want to leave enough lanes

1:08:34

for

1:08:35

some drives.

1:08:38

And I want to leave enough lanes for some networking.

1:08:41

How do you do cooling for something like this? Ah,

1:08:44

that's one of the big challenges. Not only

1:08:46

do I want the cooling to be good, I want it to be quiet.

1:08:48

I want the TinyBox to be able to sit comfortably

1:08:50

in your room, right? This is really going towards

1:08:53

the girlfriend thing. So,

1:08:55

because you want to run the LLM. I'll

1:08:57

give a more, I mean, I can talk about how it relates

1:08:59

to company number one.

1:09:01

Comma AI. Yeah. Well,

1:09:05

but yes, quiet. Oh, quiet because you may

1:09:07

be potentially want to run it in a car. No, no

1:09:09

quiet because you want to put this thing in your house and you

1:09:11

want it to coexist with you. If it's screaming at 60 dB,

1:09:14

you don't want that in your house, you'll kick it out. 60 dB, yeah. I

1:09:17

want like 40, 45. So how do you make the cooling

1:09:20

quiet? That's an interesting problem in itself.

1:09:22

A key trick is to actually make it big. Ironically,

1:09:25

it's called the TinyBox. But if I can make

1:09:27

it big, a lot of that noise is generated

1:09:29

because of high pressure. If

1:09:31

you look at like a 1U server, a

1:09:33

1U server has these super high pressure fans that

1:09:35

are like super deep and they're like jet engines. Versus

1:09:38

if you have something that's big,

1:09:40

well, I can use a big thing. You know, they call

1:09:42

them big ass fans. Those ones that are like huge on

1:09:44

the ceiling and they're completely

1:09:46

silent. So TinyBox

1:09:48

will be big. I

1:09:52

do not want it to be large according to UPS.

1:09:54

I want it to be shippable as a normal package, but that's my

1:09:57

constraint there.

1:09:58

Interesting. The fans stuff,

1:10:01

can't it be assembled on location or no? No.

1:10:04

No, it has to be, wow. You're... Look,

1:10:07

I wanna give you a great out of the box experience. I want you to lift

1:10:09

this thing out. I want it to be like the Mac, you know?

1:10:12

Tiny box. The Apple experience.

1:10:14

Yeah. I love it. Okay,

1:10:17

and so Tiny Box would run

1:10:20

Tiny Grad. Like what

1:10:23

do you envision this whole thing to look like? We're

1:10:25

talking about like

1:10:26

Linux with the full...

1:10:30

Software engineering environment.

1:10:33

And it's just not PyTorch but Tiny Grad.

1:10:36

Yeah, we did a poll: do people want Ubuntu or

1:10:38

Arch? We're gonna stick with Ubuntu. Ooh,

1:10:40

interesting. What's your favorite flavor

1:10:43

of Linux? Ubuntu. Ubuntu. I

1:10:45

like Ubuntu MATE, however you pronounce that,

1:10:47

meat. So

1:10:49

how do you, you've gotten llama into

1:10:52

Tiny Grad. You've gotten stable diffusion into

1:10:54

Tiny Grad. What was that like? Can you comment on like,

1:10:59

what are these models? What's interesting about porting

1:11:01

them? So what's, yeah,

1:11:03

like what are the challenges? What's

1:11:05

naturally, what's easy, all that kind of stuff. There's a

1:11:08

really simple way to get these models into Tiny

1:11:10

Grad and you can just export them as ONNX and

1:11:12

then TinyGrad can run ONNX.
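
A hedged sketch of that export path using PyTorch's standard ONNX exporter; the toy model and the file name here are placeholders, and how tinygrad then loads the resulting .onnx file depends on its ONNX frontend:

```python
# Export a PyTorch model to ONNX so a framework with an ONNX frontend (like tinygrad) can load it.
# torch.onnx.export is standard PyTorch; the toy network and "model.onnx" are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
dummy_input = torch.randn(1, 128)                    # example input that fixes the graph's shapes
torch.onnx.export(model, dummy_input, "model.onnx")  # writes the static graph to model.onnx
```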

1:11:15

So the ports that I did of llama,

1:11:17

stable diffusion and now whisper

1:11:18

are more academic to teach me about

1:11:21

the models, but they are cleaner

1:11:23

than the PyTorch versions. You can read the code.

1:11:25

I think the code is easier to read. It's less lines.

1:11:28

There's just a few things about the way Tiny Grad writes

1:11:30

things. Here's a complaint I have about PyTorch. nn.ReLU

1:11:34

is a class,

1:11:36

right? So when you create an NN module, you'll

1:11:38

put your nn.ReLU

1:11:41

in your init. And this makes

1:11:43

no sense. Relu is completely stateless.

1:11:46

Why should that be a class?
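
The complaint, in code: both PyTorch forms below are standard, and the tinygrad-style line is just a sketch of its tensor-method syntax.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 8)

# PyTorch module style: a stateless function wrapped in a class you instantiate in __init__.
relu_layer = nn.ReLU()
y1 = relu_layer(x)

# PyTorch functional style: just call the stateless function.
y2 = F.relu(x)

assert torch.equal(y1, y2)

# tinygrad-style sketch: activations are plain methods on the tensor, e.g. some_tensor.relu()
```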

1:11:48

But that's more like a software engineering

1:11:51

thing. Or do you think it has a cost on performance?

1:11:53

Oh no, it doesn't have a cost on performance. But

1:11:56

yeah, no, I think that it's, that's

1:11:58

what I mean about Tiny Grad's front.

1:11:59

and being cleaner. I see.

1:12:03

What do you think about Mojo? I don't know if you've been paying attention

1:12:05

to the programming language that does some

1:12:08

interesting ideas that kind of intersect

1:12:10

TinyGrad.

1:12:11

I think that there is a spectrum. And

1:12:14

like on one side you have Mojo and on the other

1:12:16

side you have like GGML. GGML

1:12:19

is this like, we're gonna run llama fast on

1:12:21

Mac. Okay, we're gonna expand out

1:12:23

to a little bit, but we're gonna basically go like depth first,

1:12:26

right? Mojo is like, we're gonna go breadth

1:12:28

first. We're gonna go so wide that we're gonna make all

1:12:30

of Python fast and TinyGrad's in the middle.

1:12:33

TinyGrad is, we are going

1:12:35

to make neural networks fast.

1:12:38

Yeah, but they try to really

1:12:41

get it to be fast, compiled

1:12:43

down to the specifics hardware

1:12:46

and make that compilation step

1:12:49

as flexible and resilient as possible.

1:12:51

Yeah, but they have Turing completeness.

1:12:53

And that limits you.

1:12:55

Turing. That's what you're saying, it's somewhere in

1:12:57

the middle. So you're actually going to be targeting some

1:12:59

accelerators, some number,

1:13:02

not one. My

1:13:05

goal is step one, build

1:13:07

an equally performant stack to PyTorch

1:13:09

on NVIDIA and AMD, but

1:13:12

with way less lines.

1:13:13

And then step two is, okay, how do

1:13:15

we make an accelerator? But you need step

1:13:17

one. You have to first build the framework before

1:13:20

you can build the accelerator. Can you explain

1:13:22

MLperf? What's your

1:13:24

approach in general to benchmarking TinyGrad performance?

1:13:27

So I'm much more

1:13:30

of a, like,

1:13:32

build it the right way and worry

1:13:34

about performance later.

1:13:36

There's a bunch of things where I haven't

1:13:38

even like,

1:13:39

really dove into performance. The only place

1:13:41

where TinyGrad is competitive performance wise right now

1:13:44

is on Qualcomm GPUs. So

1:13:46

TinyGrad's actually used an open pilot to run the model. So

1:13:49

the driving model is TinyGrad. When

1:13:51

did that happen? That transition?

1:13:53

About eight months ago now.

1:13:56

And it's 2x faster than Qualcomm's library. What's

1:13:59

the hardware where

1:14:01

that openpilot runs on, the comma device?

1:14:04

It's a Snapdragon 845. Okay.

1:14:07

So this is using the GPU. So the GPU's an Adreno GPU.

1:14:10

There's like different things. There's a really good Microsoft

1:14:13

paper that talks about like mobile GPUs

1:14:15

and why they're different from desktop GPUs. One

1:14:18

of the big things is in a desktop

1:14:20

GPU, you can use buffers

1:14:22

on a mobile GPU image textures

1:14:24

a lot faster.

1:14:27

And a mobile GPU image textures, okay. And

1:14:30

so you want to be able to leverage

1:14:33

that.

1:14:34

I want to be able to leverage it in a way that it's completely

1:14:36

generic, right? So there's a lot of this. Xiaomi

1:14:38

has a pretty good open source library for

1:14:40

mobile GPUs called MACE, where

1:14:42

they can generate where they have these kernels,

1:14:45

but they're all hand coded, right? So

1:14:47

that's great if you're doing three by three convs. That's

1:14:49

great if you're doing dense MatMuls. But the minute

1:14:51

you go off the beaten path a tiny bit, well, your

1:14:54

performance

1:14:54

is nothing. Since

1:14:56

you mentioned OpenPilot, I'd love to get an update

1:14:58

in the company number

1:15:01

one, comma.ai world. How are

1:15:03

things going there in the development of

1:15:07

semi-autonomous driving?

1:15:10

You know, almost no one talks

1:15:13

about FSD anymore, and even less people

1:15:15

talk about OpenPilot. We've solved

1:15:17

the problem. Like, we solved it years ago.

1:15:21

What's the problem exactly? Well,

1:15:23

how do you... What does solving it mean?

1:15:26

Solving means how do you build a model that

1:15:29

outputs a human policy for driving?

1:15:31

How do you build a model that, given a reasonable

1:15:34

set of sensors, outputs a human policy for driving?

1:15:37

So you have companies

1:15:39

like Waymo and Cruise, which are hand coding these things that

1:15:41

are like quasi human policies. Then

1:15:45

you have Tesla

1:15:47

and maybe even to more of an extent,

1:15:50

comma, asking, okay, how do we just learn the human

1:15:52

policy and data? The

1:15:55

big thing that we're doing now, and we just put it out on Twitter...

1:16:00

At the beginning of comma,

1:16:02

we published a paper

1:16:03

called Learning a Driving Simulator.

1:16:06

And the way this thing worked was it

1:16:09

was an auto encoder and

1:16:11

then an RNN in the middle. You

1:16:14

take an auto encoder, you compress

1:16:16

the picture,

1:16:17

you use an RNN, predict the next state, and

1:16:19

these things were. It was a laughably

1:16:22

bad simulator. This is

1:16:24

2015-era machine learning technology. Today

1:16:26

we have VQ-VAE and transformers.

1:16:29

We're building Drive GPT basically.

1:16:32

Drive GPT. It's

1:16:37

trained on what? Is it trained in a self-supervised

1:16:39

way? It's trained on all the driving data

1:16:41

to predict the next frame.

1:16:43

Really trying to learn

1:16:45

a human policy. What would a human do? Well,

1:16:48

actually our simulator is conditioned on the pose. It's

1:16:50

actually a simulator. You can put in a state action

1:16:52

pair and get out the next state.

1:16:54

And then once

1:16:56

you have a simulator, you can do RL

1:16:58

in the simulator and RL will get us that

1:17:01

human policy.

1:17:02

So transfers. Yeah.

1:17:05

RL with a reward function, not

1:17:07

asking is this close to the human policy, but asking

1:17:09

would a human disengage if you did this behavior?
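
A very rough, self-contained sketch of that loop. Every model here (the simulator, the disengagement predictor, the one-parameter policy, the random-search update) is a toy stand-in chosen to keep it runnable, not comma's actual system:

```python
# Hypothetical sketch: RL inside a learned simulator, where reward means
# "a human would not have disengaged here". Everything below is a toy stand-in.
import numpy as np

def simulator_step(state, action):
    # stand-in for a learned world model: (state, action) -> next state
    return 0.9 * state + 0.1 * action

def disengage_prob(state, action):
    # stand-in for a model of "would a human driver take over after this behavior?"
    return min(1.0, abs(float(action)))  # pretend: aggressive actions cause disengagements

def rollout_return(policy_gain, steps=50):
    state, total = 1.0, 0.0
    for _ in range(steps):
        action = policy_gain * state             # a one-parameter "policy"
        total += -disengage_prob(state, action)  # reward = -P(human disengages)
        state = simulator_step(state, action)
    return total

# Trivial random-search "RL": keep perturbations of the policy that reduce disengagements.
gain = float(np.random.randn())
for _ in range(200):
    candidate = gain + 0.1 * float(np.random.randn())
    if rollout_return(candidate) >= rollout_return(gain):
        gain = candidate
print("learned policy gain:", gain)
```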

1:17:12

Okay. Let me think about the distinction

1:17:15

there. Would a human disengage. Would

1:17:18

a human disengage. That

1:17:22

correlates, I guess, with human policy,

1:17:24

but it could be different. So

1:17:27

it doesn't just say what would a human

1:17:29

do. It says what

1:17:30

would a good human driver do and

1:17:32

such that the experience is comfortable,

1:17:36

but also not annoying in that the thing

1:17:38

is very cautious. So it's

1:17:41

finding a nice balance. That's interesting. It's

1:17:43

a nice... It's asking exactly the right question. What

1:17:46

will make our customers happy? Right.

1:17:49

A system that you never wanted to engage.

1:17:51

Because usually disengagement is almost

1:17:54

always a sign of I'm not

1:17:56

happy with what the system is doing. Usually.

1:17:59

There's some that are just I fell. like driving and those

1:18:01

are always fine too but they're just going to look like noise

1:18:03

in the data. But even

1:18:05

that felt like driving. Maybe

1:18:07

yeah. That's even that's a signal like why

1:18:09

do you feel like driving here you

1:18:12

need to recalibrate

1:18:14

your relationship with the car. Okay

1:18:17

so that's really interesting. How

1:18:20

close are we to solving self-driving?

1:18:25

It's hard to say.

1:18:26

We haven't completely closed the loop yet

1:18:29

so we don't have anything built that truly looks like

1:18:31

that architecture yet. We have prototypes

1:18:34

and there's bugs. So we are

1:18:36

a

1:18:36

couple bug fixes away. Might take

1:18:39

a year might take 10. What's the

1:18:41

nature of the bugs? Are these

1:18:44

these major philosophical bugs logical

1:18:46

bugs? What kind of what kind of bugs are we talking about?

1:18:48

They're just like they're just like stupid bugs and like

1:18:50

also we might just need more scale. We

1:18:52

just massively expanded our compute

1:18:54

cluster at comma. We

1:18:57

now have about two people worth of compute 40

1:18:59

petaflops.

1:19:00

Well people people

1:19:03

are different. Yeah, 20 peta

1:19:05

flops. That's a person. It's just a unit right. Horses

1:19:08

are different too but we still call it a horsepower. Yeah

1:19:11

but there's something different about mobility

1:19:13

than there is about

1:19:14

perception and action

1:19:17

in a very complicated world. But yes.

1:19:19

Well yeah of course not all flops are created equal. If you

1:19:21

have randomly initialized weights it's not gonna. Not

1:19:24

all flops are created equal. Some flops

1:19:26

are doing way more useful things than others. Yeah.

1:19:31

Tell me about it. Okay so more

1:19:33

data scale means more scale in compute

1:19:35

or scale of data?

1:19:37

Both. Diversity

1:19:41

of data? Diversity is very important in data.

1:19:43

Yeah I mean

1:19:45

we have so we have about I think

1:19:47

we have like 5 000 daily actives.

1:19:51

How would you evaluate how uh FSD

1:19:54

is doing? Pretty well.

1:19:56

How's that race going

1:19:58

between comma.ai and FSD?

1:19:59

Tesla has always been about two years ahead of us. They've

1:20:02

always been about two years ahead of us. And they

1:20:04

probably always will be because they're not doing anything wrong. What

1:20:07

have you seen that's since the last time we talked that

1:20:09

are interesting architectural decisions, training decisions,

1:20:12

like the way the way they deploy stuff, the architectures

1:20:14

they're using in terms of the software,

1:20:16

how the teams are run, all that kind of stuff, data collection.

1:20:19

Anything interesting? I mean, I know they're moving

1:20:21

toward more of an end to end approach.

1:20:23

So creeping towards end to end

1:20:25

as much as possible across the

1:20:28

whole thing. The training, the data

1:20:30

collection, everything. They also have a very fancy simulator.

1:20:32

They're probably saying all the same things we are. They're

1:20:34

probably saying we just need to optimize, you know, what

1:20:37

is the reward? We get negative reward for this engagement.

1:20:39

Right? Like, everyone kind of knows this.

1:20:41

It's just a question who can actually build and deploy the system.

1:20:44

Yeah. I mean, this good, it's requires

1:20:46

good software engineering, I think. Yeah. And

1:20:49

the right kind of hardware. Yeah,

1:20:51

the hardware to run it. You

1:20:53

still don't believe in cloud in that regard?

1:20:57

I have a compute cluster

1:20:59

in my office. 800 amps. Tiny

1:21:02

grad. It's 40 kilowatts at idle,

1:21:04

our data center.

1:21:06

That's crazy. If 40 kilowatts is burning

1:21:08

just when the computers are idle.

1:21:09

Just when I... Oh, sorry. Sorry. Compute cluster. Compute

1:21:14

cluster. I got it. It's not a data center. Yeah. Now,

1:21:16

data centers are clouds. We don't have clouds.

1:21:19

Data centers have air conditioners. We have fans. That

1:21:22

makes it a compute cluster. I'm

1:21:25

guessing this is a kind of a legal distinction

1:21:27

as compared to me. Sure. Yeah. We have a compute

1:21:29

cluster.

1:21:31

You said that you don't think LLMs have consciousness,

1:21:33

or at least not more than a chicken.

1:21:36

Do you think they can reason? Is there something

1:21:38

interesting to you about the word reason, about

1:21:41

some of the capabilities that we think is kind of human,

1:21:43

to be able to

1:21:45

integrate

1:21:47

complicated information and through

1:21:50

a chain of thought arrive

1:21:54

at a conclusion that feels novel, a novel

1:21:57

integration of the... disparate

1:22:00

facts. Yeah,

1:22:03

I don't think that there's, I think that

1:22:05

they can reason better than a lot of people. Hey,

1:22:08

isn't that amazing to you though? Isn't

1:22:10

that like an incredible thing that a transformer

1:22:12

can achieve? I mean, I think

1:22:14

that calculators can add better than a lot

1:22:16

of people. But language feels

1:22:19

like reasoning through the process

1:22:21

of language, which

1:22:23

looks a lot like thought. Making

1:22:27

brilliant decisions in chess, which feels

1:22:29

a lot like thought. Whatever new

1:22:31

thing that AI can do, everybody thinks is

1:22:33

brilliant. And then like 20 years go by and they're like,

1:22:35

well, chess, that's like mechanical. Like adding,

1:22:37

that's like mechanical. So you think language is not

1:22:39

that special. It's like chess. It's like chess

1:22:42

and it's like- I don't know. Because it's very

1:22:44

human, we take it, listen,

1:22:47

there's something different between chess and

1:22:51

language. Chess is a game that a subset

1:22:53

of the population plays. Language is something

1:22:56

we

1:22:56

use nonstop for all

1:22:58

of our human interaction. And human interaction

1:23:01

is fundamental to society. So

1:23:03

it's like, holy shit. This

1:23:06

language thing is not so difficult to

1:23:08

like

1:23:09

create in the machine. The problem

1:23:12

is if you go back to 1960 and you

1:23:14

tell them that you have a machine that can play

1:23:17

amazing chess,

1:23:19

of course someone in 1960 will tell you that machine

1:23:21

is intelligent.

1:23:23

Someone in 2010 won't, what's changed,

1:23:25

right? Today, we think that these machines

1:23:27

that have language are intelligent. But

1:23:30

I think in 20 years, we're gonna be like, yeah, but can it

1:23:32

reproduce?

1:23:33

So reproduction, yeah,

1:23:36

we might redefine what it means to be,

1:23:39

what is it? A high performance living

1:23:41

organism on earth. Humans are always gonna

1:23:44

define a niche for themselves. Like, well,

1:23:46

you know, we're better than the machines because we can,

1:23:48

you know, and like they tried creative for a bit, but

1:23:50

no one believes that one anymore.

1:23:52

But niche, is

1:23:54

that delusional? There's some accuracy to that. Because

1:23:57

maybe like with chess, you start to realize like.

1:23:59

that

1:24:02

we have ill-conceived notions of

1:24:05

what makes humans special.

1:24:07

Like the apex organism

1:24:09

on Earth. Yeah,

1:24:12

and I think maybe we're going to go through that same

1:24:14

thing with language.

1:24:16

And that same thing with creativity.

1:24:19

But

1:24:19

language carries these notions of truth

1:24:22

and so on. And so we might be like, wait,

1:24:24

maybe truth is not carried by language.

1:24:27

Maybe there's like a deeper thing. The niche is

1:24:29

getting smaller. Oh, boy. But

1:24:33

no, no, no, you don't understand humans are

1:24:35

created by God and machines are created

1:24:37

by humans, therefore. Right? Like that'll

1:24:39

be the last niche we have.

1:24:41

So what do you think about this,

1:24:43

the rapid development of LLMs? If we could

1:24:45

just like stick on that. It's still incredibly

1:24:47

impressive, like with ChatGPT. Just even ChatGPT,

1:24:49

what are your thoughts about reinforcement

1:24:52

learning with human feedback on these large language

1:24:54

models?

1:24:55

I'd like to go back to when calculators

1:24:58

first came out

1:24:59

and or computers. And

1:25:02

like I wasn't around. Look, I'm 33 years old. And

1:25:05

to like

1:25:06

see how that affected

1:25:09

like

1:25:12

society.

1:25:13

Maybe you're right. I want to put on

1:25:16

the the big picture

1:25:18

hat here. Oh my God, a refrigerator? Wow.

1:25:21

Refrigerator, electricity, all that kind of stuff.

1:25:25

But no,

1:25:26

with the Internet,

1:25:28

large language models seeming human like

1:25:31

basically passing a Turing test. It

1:25:33

seems it might have really at scale

1:25:36

rapid transformative effects on society.

1:25:39

But you're saying like other technologies have as well.

1:25:43

So maybe calculators not the best

1:25:45

example that because that just seems

1:25:47

like a... Well, no, maybe

1:25:49

calculators. The poor milkman, the day he

1:25:51

learned about refrigerators, he's like, I'm done. You

1:25:55

tell me you just keep the milk in your house. You

1:25:58

don't need to deliver it every day. I'm done.

1:26:00

Well, yeah, you have to actually look at the practical

1:26:02

impacts of certain technologies that they've

1:26:04

had. Yeah, probably electricity

1:26:07

is a big one and also how rapidly it spread.

1:26:10

Man, the internet is a big one. I do think it's different

1:26:12

this time though.

1:26:13

Yeah, it just feels like stuff- The niche is getting

1:26:15

smaller. The niche is

1:26:17

humans. Yes. That

1:26:20

makes humans special. Yes. It

1:26:22

feels like it's getting smaller rapidly though, doesn't

1:26:25

it? Or is that just a feeling we dramatize

1:26:27

everything? I think we dramatize everything. I

1:26:29

think that you asked the milk

1:26:32

man when he saw the refrigerators, and they're

1:26:34

going to have one of these in every home? Yeah,

1:26:38

yeah, yeah. Yeah,

1:26:41

but boy, is it impressive. So

1:26:44

much more impressive than seeing a

1:26:47

chess world champion AI system. I

1:26:49

disagree, actually.

1:26:51

I disagree. I

1:26:53

think things like Mu Zero and AlphaGo

1:26:55

are so much more impressive because

1:26:57

these things are playing beyond

1:27:00

the highest human level. The

1:27:03

language models are writing middle

1:27:06

school level essays, and people are like, wow,

1:27:08

it's a great essay. It's a great five-paragraph

1:27:10

essay about the causes of the Civil War. Okay,

1:27:13

forget the Civil War, just generating code, codex.

1:27:15

Oh. So you're saying it's

1:27:18

mediocre code. Terrible. But

1:27:20

I don't think it's terrible. I think it's just mediocre

1:27:23

code. Yeah.

1:27:25

Often

1:27:25

close to correct. Like

1:27:27

for mediocre purposes. That's the scariest kind

1:27:30

of code. I spend 5% of time typing and 95% of time debugging.

1:27:33

The last thing I want is close to correct code.

1:27:36

I want a machine that can help me with the debugging, not

1:27:38

with the typing. You know, it's like level

1:27:40

two driving, similar

1:27:42

kind of thing. Yeah, you still should

1:27:45

be a good programmer in order to modify.

1:27:48

I wouldn't even say debugging. It's just modifying

1:27:50

the code, reading it. Don't think it's like level

1:27:52

two driving.

1:27:54

I think driving is not tool complete and programming

1:27:56

is. Meaning you don't use like the best

1:27:58

possible tools to drive. You're

1:28:01

not like, like, like,

1:28:03

cars have basically the same interface for

1:28:05

the last 50 years. Computers have

1:28:07

a radically different interface. Okay. Can

1:28:09

you describe the concept of tool complete?

1:28:12

Yeah. So think about the difference between a car from 1980 and

1:28:15

a car from today. Yeah. No difference

1:28:17

really. It's got a bunch of pedals, it's got a steering wheel.

1:28:20

Great.

1:28:20

Maybe now it has a few ADAS features, but

1:28:23

it's pretty much the same car. All right. You

1:28:25

have no problem getting into a 1980 car and driving it. Take

1:28:28

a programmer today who spent their whole life doing JavaScript

1:28:31

and you put him in an Apple IIe prompt and

1:28:33

you tell him about the line numbers in basic.

1:28:36

But how do I insert

1:28:38

something between line 17 and 18? Oh,

1:28:41

wow.

1:28:42

But the,

1:28:45

so in tool you're putting in the programming

1:28:47

languages. So it's just the entirety stack

1:28:49

of the tooling. Exactly. So it's not just

1:28:51

like the IDs or something like this. It's everything.

1:28:54

Yes. It's IDEs, the languages, the runtimes.

1:28:56

It's everything. It's tool complete.

1:28:59

So like almost, if

1:29:01

Codex or

1:29:03

Copilot are helping you, that

1:29:05

actually probably means that your framework or library is

1:29:08

bad and there's too much boilerplate in it.

1:29:12

Yeah. But don't you think

1:29:14

so much programming has boilerplate? Tinygrad

1:29:17

is now 2,700 lines

1:29:19

and it can run llama and stable

1:29:21

diffusion and all of this stuff

1:29:24

is in 2,700 lines. boilerplate

1:29:26

and abstraction

1:29:29

indirection and all these things are

1:29:31

just bad code. Well,

1:29:36

let's talk about good code and bad

1:29:38

code. There's a, I would

1:29:40

say, I don't know, for generic

1:29:42

scripts that I write just offhand, like

1:29:45

I, like 80% of it is written by GPT.

1:29:48

Just like quick, quick like offhand

1:29:50

stuff. So not like libraries, not like performing

1:29:53

code, not stuff for robotics and so

1:29:55

on. Just quick stuff, because the basics,

1:29:57

so much of programming is doing

1:29:59

some, yeah, boilerplate, but

1:30:02

to do so efficiently and quickly,

1:30:06

because you can't really automate it fully with

1:30:08

like generic method, like a generic

1:30:11

kind of ID

1:30:13

type of recommendation or something like this,

1:30:15

you do need to have some of the complexity of

1:30:17

language models.

1:30:19

Yeah, I guess if I was really writing like

1:30:21

maybe today, if I wrote like a

1:30:23

lot of like data parsing stuff, I mean,

1:30:25

I don't play CTFs anymore, but if I still play CTFs,

1:30:27

a lot of the like, is just like you have to write like a parser for this

1:30:29

data format, like I wonder, or

1:30:32

like advent of code,

1:30:33

I wonder when the models are gonna

1:30:35

start to help with that kind of code,

1:30:37

and they may, they may, and the models

1:30:39

also may help you with speed, and the

1:30:41

models are very fast, but where

1:30:43

the models won't, my programming

1:30:46

speed is not at all limited by

1:30:48

my typing speed.

1:30:52

And in very few cases

1:30:54

it is, yes, if I'm writing some script to

1:30:56

just like parse some weird data format, sure,

1:30:59

my programming speed is limited by my typing speed. What about

1:31:01

looking stuff up? Because that's essentially

1:31:03

a more efficient look up, right? You know,

1:31:06

when I was at Twitter, I tried to use chat

1:31:09

GPT to like ask

1:31:11

some questions, like was the API for this? And

1:31:14

it would just hallucinate,

1:31:15

it would just give me completely made up API

1:31:18

functions that sounded real. Well,

1:31:20

do you think that's just a temporary kind of stage?

1:31:23

No. You don't think it'll

1:31:25

get better and better and better and this kind of stuff, because like

1:31:27

it only hallucinates stuff in the edge

1:31:29

cases. Yes.

1:31:30

If you're writing generic code, it's actually pretty good. Yes,

1:31:32

if you are writing an absolute basic like

1:31:34

React app with a button, it's not gonna hallucinate,

1:31:36

sure. No, there's kind of

1:31:38

ways to fix the hallucination problem. I think Facebook

1:31:41

has an interesting paper, it's called Atlas, and

1:31:43

it's actually weird the way that we do language

1:31:46

models right now where all of the

1:31:49

information is in the weights.

1:31:51

And human brains don't really like this. It's

1:31:53

like a hippocampus and a memory system. So

1:31:55

why don't LLMs have a memory system? And there's

1:31:57

people working on them. I think future LLMs are gonna.

1:31:59

be smaller, but

1:32:02

are going to run looping

1:32:04

on themselves and are going to have retrieval systems.

1:32:07

And the thing about using a retrieval system is you can

1:32:09

cite sources,

1:32:10

explicitly.
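
A toy, self-contained sketch of that retrieve-then-cite pattern; the two-document corpus, the bag-of-words retriever, and the stubbed-out generator below are all stand-ins, since no real LLM or search API is being called:

```python
# Toy sketch of an answer pipeline that retrieves passages and cites them explicitly.
# The corpus, retriever, and "generator" are all stand-ins; no real LLM or search API.
from collections import Counter

CORPUS = {
    "doc1": "tinygrad is a small neural network framework with a lazy graph.",
    "doc2": "ONNX is an open interchange format for neural network models.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    query_words = Counter(query.lower().split())
    def overlap(doc_id: str) -> int:
        return sum((query_words & Counter(CORPUS[doc_id].lower().split())).values())
    return sorted(CORPUS, key=overlap, reverse=True)[:k]

def answer_with_citations(query: str) -> str:
    sources = retrieve(query)
    # A real system would feed these passages to the language model as context;
    # here we just echo them, with the citation attached so a human can check the source.
    context = " ".join(CORPUS[s] for s in sources)
    return f"{context} [sources: {', '.join(sources)}]"

print(answer_with_citations("what format is onnx"))
```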

1:32:14

Which is really helpful to integrate

1:32:16

the human into the loop of the

1:32:18

thing, because you can go check the sources and you can investigate.

1:32:21

So whenever the thing is hallucinating, you can

1:32:23

have the human supervision. That's pushing

1:32:26

it towards level two kind of driving. That's gonna kill Google.

1:32:29

Wait, which part? When someone makes an LLM

1:32:31

that's capable of citing its sources, it will kill Google.

1:32:34

LLM that's citing its sources because that's basically

1:32:37

a search engine.

1:32:38

That's what people want in a search engine. But

1:32:40

also Google might be the people that build it. Maybe.

1:32:43

And put ads on it. I'd count them out. Why

1:32:46

is that? Why do you think? Who wins

1:32:48

this race? We got,

1:32:51

who are the competitors? We got

1:32:54

TinyCorp. I don't know if that's, yeah,

1:32:57

I mean, you're a legitimate competitor in that.

1:32:59

I'm not trying to compete on that. You're not.

1:33:02

No, not as a skit. It's gonna accidentally stumble into that

1:33:04

competition. Maybe. to

1:33:07

replace Google search?

1:33:08

When I started Comma, I said,

1:33:11

over and over again, I'm going to win self-driving cars.

1:33:13

I still believe that.

1:33:15

I have never said I'm going to win search

1:33:17

with the TinyCorp and

1:33:19

I'm never going to say that because I won't. The night

1:33:21

is still young. You don't know how

1:33:23

hard it is to win search in

1:33:25

this new route. It

1:33:28

feels, I mean, one of the things that ChatGPT kind of shows

1:33:30

that there could be a few interesting tricks that

1:33:32

really have, that create a really compelling product. Some

1:33:35

startup's gonna figure it out. I think

1:33:37

if you ask me, like Google's still the number one

1:33:39

webpage, I think by the end of the decade, Google won't be

1:33:41

the number one webpage anymore.

1:33:43

So you don't think Google, because of

1:33:45

the, how big the corporation is?

1:33:47

Look, I would put a lot more money on Mark Zuckerberg.

1:33:50

Why is that? Because

1:33:53

Mark Zuckerberg's alive. Like

1:33:57

this is old Paul Graham essay. Startups are

1:33:59

either alive or dead. Google's dead.

1:34:02

Facebook's alive. Versus Facebook is alive,

1:34:04

Meta is alive. Meta. Meta. You

1:34:06

see what I mean? Like that's just, like Mark

1:34:08

Zuckerberg, this is Mark Zuckerberg reading that Paul Graham

1:34:10

asking and being like, I'm gonna show everyone how alive

1:34:12

we are. I'm gonna change the name.

1:34:14

So you don't think there's this gutsy

1:34:18

pivoting engine that,

1:34:22

like Google doesn't have that, the kind of engine

1:34:24

that a startup has like constantly being

1:34:27

alive, I guess. When I listened to your Sam

1:34:29

Altman

1:34:30

podcast, he talked about the button. Everyone

1:34:32

who talks about AI talks about the button, the button to turn it off,

1:34:34

right? Do we have a button to turn off Google?

1:34:37

Is anybody

1:34:39

in the world capable of shutting Google down? What

1:34:43

does that mean exactly? The company or the search

1:34:45

engine? Could we shut the search engine down? Could we shut the company

1:34:47

down? Either. Can

1:34:50

you elaborate on the value of that question? Does

1:34:52

Sundar Pichai have the authority to turn

1:34:54

off google.com tomorrow? Who

1:34:57

has the authority? That's a good question. Does

1:35:00

anyone? Does anyone? Yeah, I'm sure.

1:35:03

Are you sure?

1:35:04

No, they have the technical power, but do they

1:35:06

have the authority? Let's say Sundar Pichai made

1:35:09

this his sole mission. He came into Google

1:35:11

tomorrow and said, I'm gonna shut google.com down.

1:35:14

I don't think he'd keep his position too long. And

1:35:18

what is the mechanism by which he wouldn't keep his position?

1:35:21

Well, boards and shares

1:35:23

and corporate undermining and oh my

1:35:25

God, our revenue is zero now. Okay,

1:35:29

so what's the case you're making here? So the

1:35:31

capitalist machine prevents you from having

1:35:34

the button. Yeah,

1:35:35

and it will have, I mean, this is true for the AIs too. There's

1:35:38

no turning the AIs off.

1:35:40

There's no button. You can't press it. Now,

1:35:42

does Mark Zuckerberg have that button for Facebook.com?

1:35:46

Yes, probably more. I think he does. I

1:35:49

think he does, and this is exactly what I mean

1:35:51

and why I bet on him so much more than

1:35:53

I bet on Google. I guess you could say Elon

1:35:55

has similar stuff. Oh, Elon has the button.

1:35:59

Yeah.

1:36:00

Does he want can you on fire the missiles? Can

1:36:02

he fire the missiles? I

1:36:04

think some questions are better I'm

1:36:07

asked I mean,

1:36:09

you know a rocket an ICB. Yeah, well your rocket

1:36:11

that can land anywhere. Is that an ICB? M? Well,

1:36:14

yeah, you know don't ask too many questions my

1:36:17

god

1:36:19

But the

1:36:21

the positive side of the button is that you can

1:36:23

innovate aggressively is what you're saying

1:36:25

which is what's required for turning an

1:36:28

LLM into a search engine. I would bet on a startup.

1:36:31

I bet it is so easy, right? I bet on something that looks

1:36:33

like Midjourney but for search,

1:36:37

just is able to cite sources, loop

1:36:39

on itself, I mean just feels like one model can take off

1:36:41

Yeah, right and that nice wrapper and some

1:36:43

of it scale. I mean, it's hard to, like,

1:36:46

create a product that just works really nicely, stably.

1:36:49

the other thing that's gonna be cool is there is

1:36:51

some aspect of a winner take all effect, right?

1:36:54

Like once um

1:36:56

Someone starts deploying a product that gets a lot

1:36:58

of usage and you see this with open AI They

1:37:00

are going to get the data set

1:37:02

to train future versions of the model

1:37:04

Yeah, they are going to be able to right, you

1:37:07

know I was asked a Google image search when I worked there like

1:37:09

almost 15 years ago now How does Google know which

1:37:11

image is an apple

1:37:12

and I said the metadata and they're like, yeah that

1:37:14

works about half the time. How does Google know

1:37:16

to show the red apples on the front page when you search apple?

1:37:19

Mm-hmm. And I don't know.

1:37:21

I didn't come up with the answer

1:37:22

The guys like what's what people click on when they search Apple? Yeah,

1:37:26

yeah that data is really really powerful. It's the

1:37:28

human supervision What do you think

1:37:30

are the chances? What do you think in general

1:37:33

that llama was open-sourced? I just

1:37:36

did a conversation with With

1:37:38

Mark Zuckerberg and he's all

1:37:41

in on open source

1:37:43

Who would have thought that Mark Zuckerberg

1:37:45

would be the good guy? I mean,

1:37:47

who would have thought anything

1:37:50

in this world? It's hard to know,

1:37:53

but open source to you ultimately

1:37:57

Is a good thing here undoubtedly

1:38:01

You know, what's ironic

1:38:03

about all these AI safety people is

1:38:05

they are going to build the exact thing they fear.

1:38:09

This "we need to have one model that we

1:38:11

control and align" is

1:38:13

the only way you end up paper clipped. There's

1:38:16

no way you end up paper clipped if everybody

1:38:18

has an AI. So open sourcing is

1:38:20

the way to fight the paper clip, Maximizer? Absolutely.

1:38:23

It's the only way. You think you're going to control

1:38:26

it? You're not going to control it. So the

1:38:28

criticism you have for the AI safety folks

1:38:31

is that there is a belief

1:38:33

and a desire for control. And

1:38:36

that belief and desire for centralized

1:38:38

control of dangerous AI systems

1:38:41

is not good. Sam Altman won't tell you

1:38:43

that GPT-4 has 220 billion

1:38:46

parameters and is a 16-way mixture model

1:38:48

with eight sets of weights.

1:38:50

Who did you have to murder to

1:38:52

get that information? All right. I

1:38:54

mean, look. But yes. Everyone

1:38:57

at OpenAI knows what I just said was true. Now

1:39:01

ask the question, really.

1:39:03

It upsets me. Like, GPT-2.

1:39:06

When OpenAI came out with GPT-2 and raised a whole

1:39:08

fake AI safety thing about that, I mean, now the

1:39:10

model is laughable.

1:39:12

They used AI safety

1:39:14

to hype up their company and it's disgusting.

1:39:18

Or the flip side of that is

1:39:21

they used a relatively weak model

1:39:23

in retrospect to explore how

1:39:25

do we do AI safety correctly?

1:39:27

How do we release things? How do we go through the process? I

1:39:30

don't know if... Sure. Sure.

1:39:33

All right. All right. That's

1:39:35

the charitable interpretation. I don't know how much hype there is in AI safety,

1:39:37

honestly. Oh, there's so much. At least on Twitter.

1:39:40

I don't know. Maybe Twitter's not real life. Twitter's

1:39:42

not real life. Come on. In

1:39:44

terms of hype. I mean, I don't... I

1:39:47

think OpenAI has been finding an

1:39:49

interesting balance between transparency

1:39:50

and putting value on

1:39:53

AI safety. You

1:39:55

think just go all out open

1:39:57

source. So do a llama. Absolutely.

1:40:00

So do like open source, this

1:40:02

is a tough question, which is open

1:40:05

source, both the base, the

1:40:07

foundation model and the fine tune

1:40:09

one. So like the

1:40:11

model that can be ultra racist and dangerous

1:40:14

and like tell you how to build

1:40:16

a nuclear weapon. Oh my God, have you met humans,

1:40:19

right? Like half of these AI- I haven't

1:40:21

met most humans. This makes,

1:40:23

this allows you to meet every human.

1:40:26

Yeah, I know, but half of these AI alignment

1:40:28

problems are just human alignment problems. And

1:40:30

that's what's also so scary about the language they

1:40:32

use. It's like, it's not the machines you want to align,

1:40:34

it's me.

1:40:37

But here's the thing, it

1:40:39

makes it very accessible to ask

1:40:43

questions where

1:40:46

the answers have dangerous consequences if

1:40:48

you were to act on them. I

1:40:51

mean, yeah, welcome to the world.

1:40:54

Well, no, for me, there's a lot of friction. If

1:40:56

I want to find out how to,

1:40:59

I don't know, blow

1:41:01

up something. No, there's not a lot of friction that's

1:41:03

so easy. No, like what do I search

1:41:06

that is Bing? Or do I search anything that

1:41:08

I use? No, there's like lots of stuff. No,

1:41:10

it feels like I have to keep clicking a lot of this. First off, first off,

1:41:13

first off, anyone who's stupid enough to search for how

1:41:15

to blow up a building in my neighborhood

1:41:18

is not smart enough to build a bomb, right?

1:41:20

Are you sure about that? Yes. I

1:41:24

feel like a language model makes it

1:41:26

more accessible for that

1:41:29

person who's not smart enough to do it. They're

1:41:31

not gonna build a bomb, trust me. The

1:41:34

people who are incapable of figuring

1:41:36

out how to ask that question a bit more academically

1:41:39

and get a real answer from it are not capable

1:41:41

of procuring the materials, which are somewhat controlled

1:41:43

to build a bomb. No, I think

1:41:45

it all makes it more accessible to people with money

1:41:48

without the technical know-how,

1:41:50

right? Like, do you really

1:41:52

need to know how to build a bomb to build a bomb? You

1:41:54

can hire people, you can find like- Or you can hire

1:41:57

people to build a- You know what? I was asking this question

1:41:59

on my stream. Like, can Jeff

1:41:59

Bezos hire a hitman? Probably not, but

1:42:03

a language model can

1:42:05

probably help you out. Yeah, you'll

1:42:07

still go to jail, right? Like, it's not like the language

1:42:10

model is God. Like, the language model, it's like,

1:42:12

you literally just hired someone on Fiverr.

1:42:15

But you use it. But okay, GPT-4,

1:42:17

in terms of finding a hitman, is like asking

1:42:19

Fiverr how to find a... I understand, but

1:42:21

don't you think, you know, okay, but

1:42:23

don't you think GPT-5 will be better? Just,

1:42:26

don't you think that information is out there on the internet? I

1:42:29

mean, yeah.

1:42:29

And I think that if someone is actually

1:42:31

serious enough to hire a hitman or build a bomb,

1:42:34

they'd also be serious enough to find the information.

1:42:36

I don't think so. I think it makes it more accessible

1:42:38

if you have, if you have enough money to buy a

1:42:40

hitman. I think it decreases

1:42:43

the friction of how hard it is to find

1:42:45

that kind of hitman. I honestly think this:

1:42:48

there's a jump in

1:42:51

ease and scale of how

1:42:53

much harm you can do. And I don't mean harm with language,

1:42:56

I mean harm with actual violence. What you're basically

1:42:58

saying is, like, okay, what's gonna happen is these people

1:43:00

who are not intelligent are going to use

1:43:02

machines to augment their

1:43:04

intelligence. And now, intelligent people

1:43:07

and machines... Intelligence is scary. Intelligent

1:43:10

agents are scary. When I'm in the

1:43:12

woods, the scariest animal to meet is a human,

1:43:14

right?

1:43:15

No, no, no, there's, look, there's like nice California

1:43:18

humans. Like, I see you're wearing, like, you

1:43:20

know, street clothes and Nikes, you're fine.

1:43:23

You look like a human who's been in the woods for a while. Yeah,

1:43:25

I'm more scared of you than a bear. That's what they say about

1:43:27

the Amazon. You go to the Amazon, it's

1:43:30

the human tribes. So

1:43:32

intelligence is scary

1:43:34

right? So to just, like, ask this question

1:43:36

in a generic way, you're like, what if we took

1:43:38

everybody who, you know, maybe has ill

1:43:41

intention but is not so intelligent, and gave them intelligence?

1:43:45

Right, so we

1:43:47

should have intelligence control. Of course we

1:43:49

should only give intelligence to good people, and that

1:43:51

is the absolutely horrifying idea. Should you...

1:43:54

The best defense, actually, the best defense

1:43:56

is to give more intelligence to the

1:43:58

good guys and intelligent

1:43:59

Give intelligence to everybody. Give intelligence to everybody.

1:44:02

You know what, it's not even like guns, right? Like people say this about guns. You

1:44:04

know, what's the best defense against a bad guy with a gun, a good guy with a

1:44:06

gun? I'm like, I kinda subscribe to that, but I really

1:44:08

subscribe to that with intelligence.

1:44:10

Yeah, in a fundamental way, I agree

1:44:13

with you, but there just feels like so

1:44:15

much uncertainty and so much can happen rapidly that

1:44:17

you can lose a lot of control and you can do a lot of damage.

1:44:20

Oh no, we can lose control? Yes, thank

1:44:23

God. Yeah.

1:44:24

I hope they lose control.

1:44:28

I'd want them to lose control more than anything else. I

1:44:31

think when you lose control, you can do a lot of damage,

1:44:33

but you can do more damage when you centralize

1:44:36

and hold onto control is the point. Centralized

1:44:39

and held control is tyranny, right?

1:44:41

I will always, I don't like anarchy either, but

1:44:43

I've always taken anarchy over tyranny. Anarchy, you have

1:44:45

a chance. This

1:44:47

human civilization we've got going on is

1:44:50

quite interesting. I mean, I agree with you. So

1:44:52

do you open source is

1:44:55

the way forward here? So you admire what Facebook

1:44:57

is doing here or what Meta is doing with the release

1:44:59

of Llama? A lot. I lost $80,000 last year investing in

1:45:01

Meta and

1:45:04

when they released Llama, I'm like, yeah, whatever man, that

1:45:06

was worth it. That's

1:45:07

worth it. Do you think Google

1:45:09

and OpenAI with Microsoft

1:45:12

will match what Meta is doing

1:45:14

or not? So if

1:45:16

I were a researcher, why would you wanna

1:45:18

work at OpenAI? Like, you know, you're just,

1:45:21

you're on the bad team. Like, I mean

1:45:23

it, like you're on the bad team who can't even say that GPT-4

1:45:25

has 220 billion parameters. So closed

1:45:27

source to you is the bad team.

1:45:29

Not only close source, I'm not saying you need

1:45:31

to make your model weights open. I'm

1:45:33

not saying that. I totally understand we're keeping

1:45:36

our model weights closed because that's our product, right?

1:45:38

That's fine.

1:45:39

I'm saying like,

1:45:41

because of AI safety reasons, we can't

1:45:43

tell you the number of

1:45:44

billions of parameters in the model.

1:45:46

That's just the bad guys. Just

1:45:49

because you're mocking AI safety doesn't mean

1:45:51

it's not real. Oh, of course. Is it

1:45:53

possible that these things can really do a lot

1:45:55

of damage that we don't know? Oh my God,

1:45:57

yes. Intelligence is so dangerous.

1:45:59

human intelligence or machine intelligence.

1:46:02

Intelligence is dangerous. Machine

1:46:04

intelligence is so much easier to deploy at scale,

1:46:07

rapidly.

1:46:08

Okay, if you have human-like

1:46:10

bots on Twitter,

1:46:13

and you have a thousand of them, create

1:46:16

a whole narrative, like you can

1:46:19

manipulate millions of people.

1:46:21

But you mean like the intelligence agencies in America

1:46:23

are doing right now? Yeah, but they're not doing it

1:46:25

that well. It feels like you can do

1:46:27

a lot. They're doing it pretty well. Well,

1:46:31

I think they're doing a pretty good job. I suspect

1:46:34

they're not nearly as good as a bunch of GPT-fueled

1:46:37

bots could be. Well, I mean, of course they're looking

1:46:39

into the latest technologies for control of people,

1:46:41

of course.

1:46:42

But I think there's a George Hotz type character

1:46:44

that can do a better job than the entirety of them.

1:46:47

You don't think so? No way. No, and I'll tell you

1:46:49

why the George Hotz character can't. And I thought about this a lot with

1:46:51

hacking,

1:46:52

right? Like I can find exploits in web browsers. I probably still can.

1:46:54

I mean, I was better at it when I was 24, but

1:46:56

the thing that I lack is the ability to

1:46:59

slowly and steadily deploy them over five years. And

1:47:01

this is what intelligence agencies are very good at.

1:47:04

Intelligence agencies don't have the most sophisticated

1:47:06

technology. They just

1:47:08

have- Endurance? Endurance.

1:47:12

Yeah, the financial backing and

1:47:15

the infrastructure for the endurance.

1:47:17

So the more we can decentralize

1:47:19

power, like you could make an argument by

1:47:22

the way that nobody should have these things. And

1:47:24

I would defend that argument. I would, like you're

1:47:26

saying that, look, LLMs and AI

1:47:28

and machine intelligence can cause a lot of harm,

1:47:31

so nobody should have it.

1:47:32

And I will respect someone philosophically

1:47:34

with that position. Just like I will respect someone philosophically

1:47:37

with a position that nobody should have guns,

1:47:39

right? But I will not respect philosophically

1:47:42

with only the trusted

1:47:44

authorities should have access to this.

1:47:47

Who are the trusted authorities? You know what? I'm

1:47:50

not worried about alignment between AI company

1:47:54

and their machines. I'm worried about alignment

1:47:56

between me and AI company.

1:47:58

What do you think

1:47:59

Eliezer Yudkowsky would say to you?

1:48:03

Because he is really against open source.

1:48:05

I know. And

1:48:09

I thought about this. I thought about this. And

1:48:13

I think this comes down to a

1:48:16

repeated misunderstanding of political

1:48:18

power by the rationalists.

1:48:21

Interesting. I

1:48:24

think that Eliot Kowski

1:48:26

is scared of these things. And I

1:48:28

am scared of these things too. Everyone

1:48:30

should be scared of these things. These things are scary.

1:48:33

But now you ask

1:48:35

about the two possible futures.

1:48:37

One where a small

1:48:39

trusted centralized group of people

1:48:41

has them. And the other where everyone has them.

1:48:44

And I am much less scared of the second

1:48:46

future than the first.

1:48:49

Well, there's a small trusted group of people that have

1:48:51

control of our nuclear weapons.

1:48:54

There's a difference.

1:48:55

Again, a nuclear weapon cannot be deployed

1:48:58

tactically. And a nuclear weapon is not a defense against

1:49:00

a nuclear weapon.

1:49:03

Except maybe in some philosophical mind game kind

1:49:05

of way.

1:49:06

But AI is different

1:49:09

how exactly? OK. Let's

1:49:11

say the

1:49:12

intelligence agency deploys a million bots

1:49:15

on Twitter or a thousand bots on Twitter to try to convince

1:49:17

me of a point.

1:49:19

Imagine I had a powerful AI running

1:49:21

on my computer saying, OK,

1:49:23

nice PSYOP. Nice PSYOP. Nice PSYOP.

1:49:26

OK. Here's a PSYOP. I filtered

1:49:28

it out for you.
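
A minimal sketch of the "AI firewall" George is describing: a filter that runs locally, scores incoming posts for ad or influence-campaign smell, and hides the ones above your threshold. Everything here is hypothetical; the scoring function is a crude keyword stand-in for whatever locally hosted model you would actually trust.

```python
# Sketch of a personal "AI firewall" over a timeline.
# The scoring heuristic below is a placeholder, not a real detector.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def influence_score(post: Post) -> float:
    """Return a 0..1 score for how ad/psyop-like a post looks.

    A real version would run a locally hosted classifier; this stand-in
    just counts a few suspicious patterns so the sketch is runnable.
    """
    text = post.text.lower()
    signals = [
        "sponsored" in text,
        "limited time offer" in text,
        text.count("!") >= 3,
        "everyone agrees" in text,  # manufactured-consensus phrasing
    ]
    return sum(signals) / len(signals)

def firewall(timeline: list[Post], threshold: float = 0.25) -> list[Post]:
    """Drop posts whose influence score exceeds the user's threshold."""
    return [p for p in timeline if influence_score(p) <= threshold]

if __name__ == "__main__":
    timeline = [
        Post("friend", "new tinygrad release looks clean"),
        Post("brand_bot", "Limited time offer!!! Everyone agrees this is the best deal!"),
    ]
    for post in firewall(timeline):
        print(post.author, "->", post.text)
```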

1:49:29

Yeah. I mean, so you have fundamentally

1:49:32

hope for that, for

1:49:34

the defense of PSYOP. I'm not

1:49:36

even like, I don't even mean these things in truly horrible

1:49:38

ways. I mean these things in straight up ad blocker.

1:49:41

Right? Yeah. Straight up ad blocker. I don't want ads.

1:49:44

Yeah. But they are always finding, imagine

1:49:46

I had an AI that could just block

1:49:48

all the ads for me. So

1:49:50

you believe in the power

1:49:52

of the people to always create an ad blocker.

1:49:55

Yeah. I mean, I kind of share that belief.

1:49:58

I have, that's one of the deepest

1:50:01

optimism I have is just like, there's a lot

1:50:03

of good guys. So to

1:50:05

give, you shouldn't hand

1:50:07

pick them, just throw out powerful

1:50:10

technology out there and the good guys

1:50:12

will outnumber and out power

1:50:14

the bad guys. Yeah, I'm not even gonna say there's a

1:50:16

lot of good guys. I'm saying that good outnumbers bad,

1:50:19

right? Good outnumbers bad. In skill and performance?

1:50:22

Yeah, definitely in skill and performance, probably just in

1:50:24

number too. Probably just in general. I mean,

1:50:26

if you believe philosophically in democracy, you obviously

1:50:28

believe that.

1:50:30

That good outnumbers bad. And

1:50:33

like the only,

1:50:35

if you give it to a small number of people,

1:50:38

there's a chance you gave it to good people, but there's also a chance

1:50:41

you gave it to bad people. If you give it to everybody,

1:50:44

well, if good outnumbers bad, then you definitely gave it

1:50:46

to more good people than bad.

1:50:47

That's

1:50:51

really interesting. So that's on the safety grounds, but then

1:50:53

also of course there's other motivations

1:50:55

like you don't wanna give away your secret sauce.

1:50:57

Well that's what I mean. I mean, I look, I respect capitalism.

1:51:00

I don't think that, I think that it would be polite

1:51:03

for you to make model architectures open source

1:51:05

and fundamental breakthroughs open source.

1:51:07

I don't think you have to make weights open source. You know what's interesting

1:51:10

is that

1:51:11

like there's so many possible trajectories

1:51:13

in human history where

1:51:16

you could have the next Google

1:51:18

be open source. So for example, I don't

1:51:20

know if that connection is accurate,

1:51:23

but you know, Wikipedia made a lot of interesting decisions,

1:51:25

not to put ads.

1:51:27

Wikipedia is basically open source.

1:51:29

You could think of it that way. And

1:51:31

like that's one of the main websites on the

1:51:33

internet. And like it didn't have to be that way.

1:51:36

It could have been like Google could have created Wikipedia,

1:51:38

put ads on it. You could probably run amazing

1:51:40

ads now on Wikipedia.

1:51:42

You wouldn't have to keep asking for money, but

1:51:45

it's interesting, right? So llama, open

1:51:47

source llama, derivatives of open

1:51:50

source llama might win the internet. I

1:51:53

sure hope so. I hope to see another

1:51:55

era. The kids today

1:51:58

don't know how good the internet used to be. And

1:52:00

I don't think this is just, oh, come on, like everyone's nostalgic

1:52:03

for their past, but I actually think

1:52:05

the internet, before small

1:52:07

groups of weaponized corporate and government

1:52:09

interests took it over, was a beautiful place.

1:52:15

You know, those small

1:52:17

number of companies have created some sexy

1:52:20

products, but you're saying

1:52:22

overall,

1:52:23

in the long arc of history, the

1:52:25

centralization of power they have like

1:52:28

suffocated the human spirit at scale. Here's

1:52:30

a question to ask about those beautiful, sexy

1:52:32

products. Imagine 2000 Google to 2010

1:52:35

Google, right? A lot

1:52:37

changed. We got maps, we got Gmail.

1:52:40

We lost a lot of products too, I think. Yeah,

1:52:42

I mean, some were probably, we've got Chrome, right?

1:52:44

And now let's go from 2010, we got Android. Now

1:52:47

let's go from 2010 to 2020. Well,

1:52:50

what does Google have? Well, search engine, maps,

1:52:53

mail, Android and Chrome. Oh,

1:52:55

I see. The internet

1:52:58

was this, you know, 'You' was Time's

1:53:00

person of the

1:53:00

year in 2006. I

1:53:04

love this. It's 'You' was Time's person

1:53:06

of the year in 2006, right? Like that's,

1:53:09

you know, so quickly did people forget.

1:53:12

And I think some of it's social

1:53:14

media. I think some of it, I hope,

1:53:17

look, I hope that, I don't,

1:53:19

it's possible that some very sinister things happen.

1:53:22

I don't know. I think it might just be like the effects

1:53:24

of social media.

1:53:26

But something happened

1:53:28

in the last 20 years.

1:53:30

Oh, okay, so you're just being an old

1:53:32

man who's worried about the, I think there's always, it

1:53:34

goes, it's the cycle thing, there's ups and downs. And

1:53:36

I think people rediscover the power of distributed,

1:53:39

of decentralized. Yeah. I

1:53:41

mean, that's kind of like what the whole cryptocurrency is trying

1:53:44

to think that,

1:53:45

I think crypto is just carrying

1:53:47

the flame of that spirit of like, stuff should

1:53:49

be decentralized. It's just such a shame that they

1:53:51

all got rich,

1:53:53

you know? Yeah. If you could pull the money

1:53:55

out of crypto, it would have been a beautiful place. Yeah.

1:53:58

But no, I mean, these people, you know. They sucked

1:54:01

all the value out of it and took it. Yeah,

1:54:04

money kind of corrupts the mind somehow.

1:54:06

It becomes a drug. You corrupted all

1:54:08

of crypto. You had coins worth billions of dollars

1:54:11

that had zero use.

1:54:15

You still have hope for crypto? Sure. I

1:54:17

have hope for the ideas. I really do. Yeah,

1:54:21

I mean, you know,

1:54:24

I want the US dollar to collapse. I

1:54:27

do. George Hotz.

1:54:31

Well, let me sort of, on the AI safety front,

1:54:33

do you think there's some interesting questions there, though,

1:54:37

to solve for the open source community in this case? So

1:54:39

like alignment, for example, or the

1:54:42

control problem. Like if you really

1:54:44

have super powerful, you said it's scary.

1:54:47

What do we do with it? So not control,

1:54:49

not centralized control, but like

1:54:51

if you were then you're gonna see

1:54:53

some guy or

1:54:55

gal release a super

1:54:57

powerful language model, open source. And

1:54:59

here you are, George Hotz, thinking, holy

1:55:02

shit. Okay, what ideas do I have to

1:55:05

combat this thing? So

1:55:08

what ideas would you have? I am

1:55:11

so much not worried about the

1:55:13

machine independently doing harm.

1:55:16

That's what some of these AI safety people seem

1:55:18

to think. They somehow seem to think that the machine,

1:55:20

like independently is gonna rebel against its

1:55:22

creator. So you don't think you'll find autonomy?

1:55:25

No, this is sci-fi B

1:55:27

movie garbage. Okay, what if

1:55:29

the thing writes code, basically writes

1:55:31

viruses? If

1:55:34

the thing writes viruses, it's

1:55:36

because the human told

1:55:39

it to write viruses. Yeah, but there's some things you can't

1:55:41

like put back in the box, that's kind of the whole

1:55:43

point, is it kind of spreads. Give it

1:55:45

access to the internet, it spreads, installs

1:55:47

itself,

1:55:48

modifies your shit. B, B, B,

1:55:50

B plot sci-fi, not real.

1:55:53

I'm trying to work, I'm trying to get better at my plot

1:55:55

writing. The thing that worries me,

1:55:57

I mean, we have a real danger to discuss,

1:55:59

and that is.

1:55:59

is bad humans using

1:56:02

the thing to do whatever bad, unaligned

1:56:04

AI thing you want. But this goes

1:56:06

to your previous concern

1:56:08

that who gets to define who's a good human, who's

1:56:10

a bad human? Nobody does, we give it to everybody.

1:56:13

And if you do anything besides give it to everybody,

1:56:15

trust me, the bad humans will get it.

1:56:18

Because that's who gets power. It's always the bad humans

1:56:20

who get power. Okay, power.

1:56:23

And

1:56:24

power turns even slightly good

1:56:26

humans to bad. Sure. That's the intuition

1:56:28

you have. I don't know.

1:56:31

I don't think everyone, I don't think everyone.

1:56:33

I just think that like,

1:56:35

here's the saying that I put in one of my

1:56:37

blog posts. When I was in the hacking

1:56:39

world,

1:56:40

I found 95% of people to be good

1:56:42

and 5% of people to be bad.

1:56:44

Like just who I personally judged as good people and bad people.

1:56:46

Like they believed about like good things for the world. They

1:56:49

wanted like flourishing and they wanted, you

1:56:51

know, growth and they wanted things like consider good,

1:56:53

right?

1:56:55

I came into the business world with Kama and I found the

1:56:57

exact opposite. I found 5% of

1:56:59

people good and 95% of people bad. I

1:57:01

found a world that promotes psychopathy. I

1:57:04

wonder what that means. I wonder if that

1:57:06

care, like, I

1:57:08

wonder if that's anecdotal or if it, if

1:57:12

there's true to that, there's something about capitalism

1:57:16

at the core that promotes the

1:57:18

people that run capitalism that promotes psychopathy.

1:57:21

That saying may of course be my own biases, right?

1:57:23

That may be my own biases that these people are a lot more

1:57:26

aligned with me than these other people.

1:57:28

Right? Yeah. So, you know, I

1:57:30

can certainly recognize that,

1:57:33

but you know, in general, I mean, this is like the

1:57:35

common sense maxim, which is the people

1:57:38

who end up getting power are never the ones you want with

1:57:40

it.

1:57:41

But do you have a concern of super

1:57:43

intelligent AGI,

1:57:46

open sourced, and then

1:57:48

what do you do with that? I'm not saying control

1:57:50

it, it's open source. What do we do with this

1:57:52

human species? That's not up to me. I

1:57:54

mean, you know, like I'm not a central planner.

1:57:56

No, not central planner, but you'll probably tweet as

1:57:59

a few days left to live for the human species. I have

1:58:01

my ideas of what to do with it, and everyone else has their

1:58:04

ideas of what to do and make the best ideas win. But

1:58:06

at this point, do you brainstorm?

1:58:08

Because it's not regulation,

1:58:11

it could be decentralized regulation, where people agree

1:58:13

that this is just like, we create

1:58:15

tools that make it more difficult for

1:58:17

you

1:58:19

to maybe

1:58:22

make it more difficult for code to spread,

1:58:25

antivirus software, this kind of thing. But this is- You're

1:58:27

saying that you should build AI firewalls? That sounds good. You should

1:58:29

be running an AI firewall. Yeah, right, exactly. You should be running

1:58:32

an AI firewall to your mind.

1:58:34

You're constantly under- That's such an interesting idea.

1:58:37

Info wars, man. I don't

1:58:39

know if you're being sarcastic or not, but

1:58:41

I think there's power to that. It's like,

1:58:44

how do I protect my mind

1:58:48

from influence of human-like

1:58:50

or superhuman intelligent bots? I

1:58:53

would pay so much money for that product. I would

1:58:55

pay so much money for that product.

1:58:57

You know how much money I'd pay just for a spam

1:58:59

filter that works? Well,

1:59:01

on Twitter sometimes I would

1:59:03

like to have a protection

1:59:06

mechanism for my mind from the outrage mobs

1:59:10

because

1:59:12

they feel like bot-like behavior. It's

1:59:14

a large number of people that will just grab

1:59:17

a viral narrative

1:59:18

and attack anyone else that believes otherwise. And

1:59:20

it's like-

1:59:21

Whenever someone's telling me some story from the news,

1:59:23

I'm always like, I don't want to hear it, CIA op, bro, it's a

1:59:25

CIA op, bro. It doesn't matter if that's true or

1:59:27

not. It's just trying to influence your mind. You're

1:59:29

repeating an ad to me.

1:59:31

The viral mobs, is it like, yeah,

1:59:34

they're- To me, a defense against those mobs

1:59:37

is just getting multiple perspectives

1:59:40

always

1:59:41

from sources that make you feel kind of

1:59:45

like you're getting smarter. And

1:59:47

just actually just basically feels good. Like

1:59:50

a good documentary just feels

1:59:52

good. Something feels good about it. It's well done.

1:59:54

It's like, oh, okay, I never thought of it this way.

1:59:57

This just feels good. Sometimes the outrage

1:59:59

mobs, even if-

1:59:59

if they have a good point behind it, when they're like

2:00:02

mocking and derisive and just aggressive,

2:00:04

you're with us or against us, this

2:00:07

fucking- This is why I delete my tweets. Yeah,

2:00:10

why'd you do that?

2:00:12

I was, you know, I missed your tweets.

2:00:14

You know what it is? The algorithm promotes

2:00:17

toxicity. Yeah.

2:00:19

And like, you know, I think

2:00:22

Elon has a much better chance of fixing it than the previous

2:00:25

regime. Yeah.

2:00:28

But to solve this problem, to

2:00:30

build a social network that is actually not

2:00:32

toxic without

2:00:35

moderation. Like

2:00:39

not the stick, but care. So like where people look

2:00:43

for goodness, to

2:00:45

make it catalyze the process of connecting

2:00:47

cool people and being cool to each other. Yeah.

2:00:51

Without ever censoring. Without ever censoring.

2:00:53

And like Scott Alexander has a

2:00:55

blog post I like where he talks about like moderation is not censorship,

2:00:58

right? Like all moderation you

2:01:00

want to put on Twitter,

2:01:01

right? Like you could totally make this

2:01:03

moderation

2:01:04

like just a,

2:01:06

you don't have to block it for everybody. You

2:01:08

can just have like a filter button,

2:01:10

right? That people can turn off if they were like safe search for Twitter,

2:01:12

right? Like someone could just turn that off, right? So

2:01:14

like, but then you'd like take this idea to an extreme, right?

2:01:17

Well,

2:01:17

the network should just show you, this

2:01:20

is a couch surfing CEO thing, right? If

2:01:22

it shows you right now these algorithms are

2:01:24

designed to maximize engagement. Well,

2:01:26

it turns out Outrage maximizes engagement.

2:01:28

Quirk of human, quirk of the human

2:01:30

mind, right?

2:01:31

Just this, I fall for it, everyone falls for it. So

2:01:35

yeah, you got to figure out how to maximize for something other

2:01:37

than engagement.

2:01:38

And I actually believe that you can make money with

2:01:40

that too. So it's not, I don't think engagement

2:01:42

is the only way to make money. I actually think it's incredible

2:01:45

that we're starting to see, I think again,

2:01:47

Elon's doing so much stuff right with Twitter, like charging

2:01:49

people money.

2:01:50

As soon as you charge people money, they're no longer

2:01:52

the product, they're the customer.

2:01:55

And then they can start building something that's good

2:01:57

for the customer and not good for the other customer,

2:01:59

which is the ad agency. That hasn't

2:02:01

picked up steam. I pay

2:02:04

for Twitter. It doesn't even get me anything. It's my donation

2:02:06

to this new business model hopefully working out.

2:02:08

Sure. But, you know, for this business

2:02:10

model to work, it's like most people

2:02:13

should be signed up to Twitter, and so the

2:02:15

way it was... There

2:02:17

was something perhaps not compelling or something

2:02:19

like this to people. You think you need most people

2:02:21

at all? I think that, why do I need most

2:02:23

people, right? I don't. Make an 8,000 person company,

2:02:26

make a 50 person company.

2:02:28

Well, so speaking of which

2:02:32

You worked at Twitter for a bit I did as

2:02:35

an intern The

2:02:37

world's greatest intern. Yeah. All right,

2:02:40

there's been better. That's been better Tell

2:02:43

me about your time at Twitter. How did it come about

2:02:46

and what did you learn from the experience? So

2:02:48

I deleted

2:02:51

my first Twitter in 2010. I had

2:02:54

over a hundred thousand followers

2:02:56

back when that actually meant something. And I

2:03:00

just saw, you know...

2:03:03

My co-worker summarized it well. He's

2:03:05

like

2:03:06

whenever I see someone's Twitter page

2:03:08

I either think the same of them or less

2:03:10

of them. I never think more of them Yeah,

2:03:13

right like like, you know, I don't want to mention any

2:03:15

names but like some people who like, you know, maybe you would

2:03:17

like read their books and you would respect them you see

2:03:19

them on Twitter and You're like Okay,

2:03:22

dude But

2:03:25

there's some people with same

2:03:27

You know who I respect a lot are

2:03:29

people that just post really good technical stuff.

2:03:32

Yeah, and I

2:03:34

guess I Don't

2:03:36

know. I think I respect them more for it because

2:03:38

you realize, oh, this wasn't... There's

2:03:41

like so much depth to

2:03:43

this person to their technical understanding of so many different

2:03:46

topics

2:03:46

Okay, so I try to follow people. I

2:03:49

try to consume stuff. That's technical

2:03:52

machine learning content

2:03:53

there's probably a few

2:03:55

of those people and The

2:03:58

problem is inherently what

2:03:59

the algorithm rewards, right? And

2:04:02

people think about these algorithms, people think that they

2:04:04

are terrible, awful things. And you know, I love that Elon

2:04:06

open sourced it. Because I mean, what it

2:04:09

does is actually pretty obvious. It just predicts

2:04:11

what you are likely to retweet and like, and

2:04:14

linger on.

2:04:15

So what all these algorithms do, so what TikTok does, so

2:04:17

all these recommendation engines do.

2:04:18

And it turns

2:04:21

out that the thing that you are most likely

2:04:23

to interact with is outrage.
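
Roughly what George is describing, as a sketch: an engagement-based ranker that orders candidate posts by predicted probability of like, retweet, and dwell time. The candidates, probabilities, and weights below are invented for illustration; the point is only that nothing in the objective asks whether a post is good for you.

```python
# Toy engagement ranker: score = weighted sum of predicted engagement signals.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float      # predicted probability the user likes it
    p_retweet: float   # predicted probability the user retweets it
    p_dwell: float     # predicted probability the user lingers on it

# Hypothetical weights: the objective only cares about engagement.
WEIGHTS = {"p_like": 1.0, "p_retweet": 2.0, "p_dwell": 0.5}

def engagement_score(c: Candidate) -> float:
    return (WEIGHTS["p_like"] * c.p_like
            + WEIGHTS["p_retweet"] * c.p_retweet
            + WEIGHTS["p_dwell"] * c.p_dwell)

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # highest predicted engagement goes to the top of the feed
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank([
        Candidate("calm_technical_thread", 0.30, 0.05, 0.40),
        Candidate("outrage_bait",          0.45, 0.35, 0.60),
    ])
    for c in feed:
        print(f"{c.post_id}: {engagement_score(c):.2f}")
```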

2:04:25

And that's a quirk of the human condition. I

2:04:30

mean, and there's different flavors of outrage. It doesn't have

2:04:32

to be, it could be mockery. You

2:04:35

could be outraged. The topic of outrage could be different.

2:04:37

It could be an idea. It could be a person. It could be, and maybe

2:04:41

there's a better word than outrage. It could be drama. Sure.

2:04:44

Drama. All this kind of stuff. Yeah. But it doesn't

2:04:46

feel like when you consume it, it's a constructive

2:04:48

thing for the individuals that consume it in the

2:04:51

long term.

2:04:51

Yeah. So my time there,

2:04:54

I absolutely couldn't believe, you know,

2:04:56

I got crazy amount

2:04:58

of hate, you know, just on

2:05:00

Twitter for working at Twitter. It seemed like people

2:05:03

associated with this, I think maybe you were

2:05:06

exposed to some of this. So connection to Elon

2:05:08

or is it working at Twitter?

2:05:09

Twitter and Elon, like the whole...

2:05:12

Because Elon's gotten a bit spicy during

2:05:14

that time. A bit political,

2:05:16

a bit... Yeah. Yeah.

2:05:18

You know, I remember one of my tweets, it was never go

2:05:20

full Republican, and Elon liked it. You

2:05:23

know, I think... Oh

2:05:29

boy. Yeah, I mean, there's

2:05:31

a roller coaster of that, but being political on

2:05:33

Twitter,

2:05:34

boy. Yeah. And

2:05:37

also being, just attacking

2:05:39

anybody on Twitter, it comes back at

2:05:41

you harder. And if

2:05:43

his political end attacks. Sure. Sure,

2:05:46

absolutely. And then letting

2:05:50

sort of de-platform

2:05:53

people back on, even

2:05:56

adds more fun to the

2:05:58

beautiful chaos. I was hoping, and

2:06:01

I remember when Elon talked about buying Twitter

2:06:05

six months earlier, he was talking about

2:06:07

a principled commitment to free

2:06:09

speech.

2:06:10

I'm a big believer and

2:06:12

fan of that. I would love to see an actual

2:06:15

principled commitment to free speech. Of

2:06:18

course, this isn't quite what happened.

2:06:20

Instead of the oligarchy deciding

2:06:22

what to ban,

2:06:23

you had a monarchy deciding what to ban.

2:06:26

Instead of all the Twitter files,

2:06:28

shadow, really, the oligarchy

2:06:31

just decides what. Cloth masks are ineffective

2:06:33

against COVID. That's a true statement. Every doctor

2:06:35

in 2019 knew it and now I'm banned on Twitter for saying

2:06:38

it. Interesting. Oligarchy.

2:06:40

Now you have a monarchy and

2:06:42

he bans things he doesn't like. It's

2:06:46

just different power and

2:06:49

maybe I align more with him than with the oligarchy. But

2:06:52

it's not free speech. I

2:06:55

feel like

2:06:56

being a free speech absolutist on a social network

2:06:58

requires you to also have tools for

2:07:01

the individuals to

2:07:04

control what they consume easier.

2:07:08

Not censor, but just

2:07:10

control, oh, I'd like to see more cats

2:07:13

and less politics. This

2:07:15

isn't even remotely controversial. This is just saying

2:07:17

you want to give paying customers for a product what they want.

2:07:21

Not through the process of censorship, but through the process

2:07:23

of like- It's individualized, right? It's

2:07:25

individualized transparent censorship, which is honestly

2:07:27

what I want. What is an ad blocker? It's individualized

2:07:29

transparent censorship, right? Yeah, but censorship

2:07:32

is a strong word and

2:07:34

people are very sensitive too. I know, but

2:07:37

I just use words to describe what they functionally are and

2:07:39

what is an ad blocker. It's just censorship. But

2:07:41

I love what you're censoring. I'm

2:07:43

looking at you, I'm censoring

2:07:46

everything else out when my mind is focused

2:07:48

on you. You can use the word censorship

2:07:51

that way, but usually when people get very sensitive

2:07:53

about the censorship thing, I

2:07:55

think when anyone is allowed to

2:07:57

say anything, you should probably

2:07:59

have-

2:07:59

tools that maximize

2:08:03

the quality of the experience for individuals. So,

2:08:05

you know, for me, like what I really

2:08:07

value, boy, it would be amazing

2:08:09

to somehow figure out how to do that.

2:08:12

I love disagreement and debate and

2:08:14

people who disagree with each other

2:08:16

disagree with me, especially in the space of ideas, but

2:08:19

the high quality ones. So not derision,

2:08:21

right? Maslow's hierarchy of argument.

2:08:24

I think it's a real word for it. Probably.

2:08:26

There's just the way of talking that's like snarky

2:08:28

and so on that somehow is gets

2:08:31

people on Twitter and they get excited and so on.

2:08:33

You have like ad hominem refuting the central point.

2:08:35

I've like seen this as an actual pyramid. Yeah, it's yeah.

2:08:38

And it's like all of it,

2:08:40

all the wrong stuff is attractive to people.

2:08:42

I mean, we can just train a classifier to absolutely say what

2:08:44

level of Maslow's hierarchy of argument

2:08:47

are you at? And if it's ad hominem, like, okay,

2:08:49

cool. I turned on the no ad hominem filter.
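
A toy version of the filter being proposed here: classify each reply on a hierarchy of disagreement (name-calling and ad hominem at the bottom, refuting the central point at the top) and let the user set a floor. The keyword classifier is a placeholder; a real version would presumably be a small trained text classifier.

```python
# Sketch of a "no ad hominem" filter over replies.
LEVELS = [
    "name-calling",
    "ad hominem",
    "responding to tone",
    "contradiction",
    "counterargument",
    "refutation",
    "refuting the central point",
]

def classify_level(reply: str) -> str:
    """Crude placeholder: a real system would use a trained classifier."""
    text = reply.lower()
    if any(w in text for w in ("idiot", "clown", "moron")):
        return "name-calling"
    if "people like you" in text or "of course you'd say" in text:
        return "ad hominem"
    if "the central claim fails because" in text:
        return "refuting the central point"
    return "counterargument"

def filter_replies(replies: list[str], minimum: str = "contradiction") -> list[str]:
    """Keep only replies at or above the user's chosen level."""
    floor = LEVELS.index(minimum)
    return [r for r in replies if LEVELS.index(classify_level(r)) >= floor]

if __name__ == "__main__":
    replies = [
        "Of course you'd say that, people like you always do.",
        "The central claim fails because the benchmark excludes latency.",
    ]
    print(filter_replies(replies))  # only the substantive reply survives
```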

2:08:52

I wonder if there's a social

2:08:54

network that will allow you to have that kind of filter.

2:08:56

Yeah, so here's

2:08:59

a problem with that. It's

2:09:01

not going to win in a free market.

2:09:04

What wins in a free market is all

2:09:06

television today is reality television because it's engaging.

2:09:08

If engaging

2:09:11

is what wins in a free market, right? So

2:09:13

it becomes hard to keep these other more

2:09:15

nuanced values.

2:09:16

Well,

2:09:19

okay, so that's the experience of being on Twitter,

2:09:22

but then you got a chance to also

2:09:24

and together with other engineers and

2:09:26

with Elon sort of look brainstorm

2:09:28

when you step into a code base has been around

2:09:31

for a long time. There's other social

2:09:33

networks, Facebook, this is old

2:09:35

code bases. And you step in and see,

2:09:37

okay, how do we make

2:09:40

with a fresh mind progress

2:09:42

on this code base? Like what did you learn about

2:09:44

software engineering, about programming from just experiencing

2:09:47

that? So my

2:09:49

technical recommendation to Elon, and I said

2:09:51

this on the Twitter spaces afterward, I said this

2:09:54

many times during my brief internship,

2:09:58

was that, you need refactors

2:10:01

before features. This

2:10:03

code base was, and look,

2:10:06

I've worked at Google, I've worked at Facebook. Facebook

2:10:08

has the best code, then

2:10:10

Google, then Twitter. And

2:10:12

you know what? You can know this because look at

2:10:14

the machine learning frameworks, right? Facebook released PyTorch,

2:10:17

Google released TensorFlow and Twitter released, okay,

2:10:21

so you know. It's a proxy,

2:10:24

but yeah, the Google code base

2:10:26

is quite interesting. There's a lot of really good software engineers

2:10:28

there, but the code base is very large. The

2:10:30

code base was good in 2005, right? It

2:10:33

looks like 2005. There's so many products, so many

2:10:35

teams, right? It's very difficult to, I

2:10:38

feel like Twitter does less, obviously

2:10:42

much less than Google

2:10:44

in terms of like the set of features,

2:10:48

right? So like it's, I

2:10:50

can imagine the number of software

2:10:52

engineers that could recreate Twitter

2:10:54

is much smaller than to recreate Google. Yeah,

2:10:56

I still believe in the amount of hate

2:10:59

I got for saying this, that 50 people

2:11:01

could build and maintain Twitter pretty

2:11:03

comfortably. What's the nature of the hate? But

2:11:07

you don't know what you're talking about. You know what it is? And

2:11:10

it's the same, this is my summary of like the hate I get

2:11:12

on Hacker News. It's like,

2:11:14

when I say I'm going to do something, they

2:11:17

have

2:11:19

to believe that it's impossible. Because

2:11:23

if doing things was possible, they'd

2:11:25

have to do some soul searching and ask the question,

2:11:27

why didn't they do anything? So when you say, and

2:11:30

I do think that's where the hate comes from. When you say, well,

2:11:32

there's a core truth to that. Yeah, so when you say I'm going

2:11:34

to solve self-driving,

2:11:37

people go like, what are your credentials? What

2:11:40

the hell are you talking about? This is an extremely

2:11:42

difficult problem. Of course, you're a noob that doesn't understand

2:11:44

the problem deeply. I

2:11:47

mean, that was the same nature of hate

2:11:49

that probably Elon got when he first talked about autonomous

2:11:51

driving.

2:11:53

But there's pros

2:11:55

and cons to that. Because there are experts

2:11:57

in this world. No, but

2:11:59

the mockers aren't experts. The people

2:12:02

who are mocking are not experts with carefully

2:12:04

reasoned arguments about why you need 8,000 people

2:12:06

to run a bird app. But the people

2:12:09

are going to lose their jobs.

2:12:12

Well, that, but also there's the software

2:12:14

engineers that probably could have said, no, it's a lot more complicated

2:12:16

than you realize, but maybe it doesn't need to be so

2:12:18

complicated. You know,

2:12:20

some people in the world like to create complexity.

2:12:22

Some people in the world thrive under complexity, like lawyers,

2:12:25

right? Lawyers want the world to be more complex because you

2:12:27

need more lawyers and you need more legal hours, right? I

2:12:29

think that's another. If

2:12:31

there's two great evils in the world, it's centralization

2:12:34

and complexity. Yeah. And the

2:12:36

one of the sort of hidden side effects

2:12:40

of software engineering is

2:12:44

like finding pleasure and complexity.

2:12:47

I mean, I don't remember just taking

2:12:50

all the software engineering courses and just doing programming

2:12:52

and this is just coming up in this

2:12:56

object-oriented programming kind of idea.

2:13:00

Not often do people tell you, do

2:13:02

the simplest possible thing. A professor,

2:13:06

a teacher is not going to get in front

2:13:08

like, this is the simplest way to do

2:13:10

it. They'll say, this is the

2:13:12

right way and the right way, at least for

2:13:15

a long time, especially I came

2:13:17

up with like Java, right? There's

2:13:20

so much boilerplate, so much like, so

2:13:23

many classes, so many like designs

2:13:26

and architectures and so on, like planning for

2:13:29

features far into the future

2:13:31

and planning poorly and all this

2:13:33

kind of stuff. And then there's this like code

2:13:35

base that follows you along and puts pressure on

2:13:37

you and nobody knows

2:13:39

what like parts, different parts do, which

2:13:42

slows everything down. There's a kind of bureaucracy that's

2:13:44

instilled in the code as a result of that, but

2:13:46

then you feel like, oh well I follow

2:13:49

good software engineering practices. It's an

2:13:51

interesting trade-off because then you look at like the

2:13:53

ghetto-ness of like Perl and

2:13:56

the old like, how quickly you just

2:13:58

write a couple lines and just get stuff done.

2:13:59

that trade-off is interesting or bash

2:14:02

or whatever, these kind of ghetto things you can do

2:14:04

on Linux. One of my favorite things

2:14:06

to look at today is how much do you trust your tests?

2:14:09

We've put a ton of effort in comma and I put a ton

2:14:11

of effort in tiny grad into making sure,

2:14:14

if you change the code and the tests pass,

2:14:17

that you didn't break the code. Now, obviously,

2:14:19

it's not always true. But the

2:14:21

closer that is to true, the more you trust your

2:14:23

tests, the more you're like, oh, I got a pull request and

2:14:25

the tests pass, I feel okay to merge

2:14:27

that, the faster you can make progress. You're always programming

2:14:30

your tests in mind, developing tests with

2:14:32

that in mind that if it passes, it should be good.
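
One way to picture "trusting your tests", as a sketch: pin down the current behavior of a legacy function with characterization tests before touching it, then refactor and rely on the suite staying green. The `legacy_shorten` function is a made-up stand-in, not real Twitter or comma code.

```python
# Characterization tests: assert what the code *does today*,
# not what we wish it did. Run with `pytest`.

def legacy_shorten(text: str, limit: int = 280) -> str:
    # Imagine this is old code nobody fully understands anymore.
    if len(text) <= limit:
        return text
    return text[: limit - 1] + "…"

def test_short_text_passes_through():
    assert legacy_shorten("hello") == "hello"

def test_long_text_is_truncated_with_ellipsis():
    out = legacy_shorten("x" * 300)
    assert len(out) == 280
    assert out.endswith("…")

def test_exact_limit_is_untouched():
    assert legacy_shorten("y" * 280) == "y" * 280
```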

2:14:34

Twitter had a- Not that.

2:14:37

It was impossible to make

2:14:39

progress in the code base. What other

2:14:41

stuff can you say about the code base that made it difficult?

2:14:45

What are some interesting quirks broadly

2:14:47

speaking from that compared

2:14:50

to just your experience with comma and

2:14:52

everywhere else? The real thing that

2:14:55

I spoke to a bunch of,

2:14:59

individual contributors at Twitter and I just

2:15:01

had a test. I'm like, okay, so

2:15:03

what's wrong with this place? Why does this code look like this? They

2:15:06

explained to me what Twitter's promotion system

2:15:08

was. The way that you got promoted

2:15:10

to Twitter was you wrote a library that

2:15:12

a lot of people used. Some

2:15:17

guy wrote an NGINX replacement

2:15:19

for Twitter. Why does Twitter need an NGINX

2:15:21

replacement? What was wrong with NGINX? You

2:15:24

see, you're not going to get promoted if you use

2:15:26

NGINX. But if you write a replacement

2:15:29

and lots of people start using it as the Twitter

2:15:31

front-end for their product, then you're going to get promoted.

2:15:34

So interesting because from an individual

2:15:36

perspective, how do you incentivize,

2:15:39

how do you create the kind of incentives that will

2:15:41

lead to a great code base? Okay,

2:15:44

what's the answer to that? So

2:15:47

what I do at comma and at

2:15:52

TinyCorp is you have to explain it to me. You have to

2:15:54

explain to me what this code does. And

2:15:56

if I can sit there and come up with a simpler

2:15:58

way to do it, you have to... You have

2:16:01

to agree with me about the simpler way. Obviously,

2:16:03

we can have a conversation about this. It's not

2:16:05

dictatorial, but if you're like, wow, wait,

2:16:07

that actually is way simpler.

2:16:10

Like the simplicity is important.

2:16:12

Right? But that requires people

2:16:14

that overlook the code at the

2:16:17

highest levels to be like, okay.

2:16:19

It requires technical leadership, you trust. Yeah,

2:16:22

technical leadership. So

2:16:24

managers or whatever should have to have

2:16:26

technical savvy, deep technical savvy. Managers

2:16:29

should be better programmers than the people who they manage.

2:16:32

Yeah. And that's not always obvious

2:16:35

to create, especially large companies. Managers

2:16:37

get soft. And like, you know, and this is just, I've instilled

2:16:40

this culture at Kama and Kama has better programmers

2:16:42

than me who work there. But you know, again,

2:16:45

I'm like the, you know, the old guy from Good Will Hunting. It's

2:16:47

like, look, man,

2:16:48

you know, I might not be as

2:16:50

good as you, but I can see the difference between me and you. Right?

2:16:53

And like, this is what you need. This is what you need at the top. Or

2:16:55

you don't necessarily need the manager to be the absolute

2:16:58

best. I shouldn't say that, but like they need

2:17:00

to be able to recognize skill. Yeah.

2:17:02

And have good intuition, intuition

2:17:05

that's laden with wisdom from all the

2:17:07

battles of trying to reduce complexity

2:17:09

and code bases. You know, I took a, I took a political

2:17:12

approach at Kama too, that I think is pretty interesting. I think Elon

2:17:14

takes the same political approach. You

2:17:16

know, Google had no politics

2:17:19

and what ended up happening is the absolute worst kind of politics

2:17:21

took over.

2:17:22

Kama has an extreme amount of politics and they're

2:17:25

all mine and no dissidents is tolerated.

2:17:28

So it's a dictatorship. Yep. It's

2:17:30

an absolute dictatorship. Right. Elon does

2:17:32

the same thing. Now, the thing about my dictatorship is

2:17:34

here are my values. Yeah. So

2:17:37

it's transparent. It's transparent. It's

2:17:39

a transparent dictatorship. Right. And you can

2:17:41

choose to opt in or, you know, you get free exit, right? That's the beauty of companies.

2:17:44

If you don't like the dictatorship, you quit.

2:17:46

So you

2:17:48

mentioned rewrite before

2:17:50

or refactor before features.

2:17:54

If you were to refactor the Twitter code base,

2:17:56

what would that look like? And maybe also

2:17:58

comment on how difficult the code is. is it to refactor?

2:18:01

The main thing I would do is first of all,

2:18:03

identify the pieces and then put tests

2:18:05

in between the pieces,

2:18:07

right? So there's all these different Twitter as a microservice

2:18:09

architecture, um, all

2:18:12

these different microservices. And the

2:18:14

thing that I was working on there, look, like, you know, George

2:18:18

didn't know any JavaScript, he asked how to fix

2:18:19

search, blah, blah, blah, blah. Look,

2:18:21

man, like the thing is

2:18:24

like, I just, you know, I'm upset that the

2:18:26

way that this whole thing was portrayed, because

2:18:28

it wasn't like, it wasn't like taken by people,

2:18:30

like, honestly, it wasn't like, it

2:18:32

was taken by people who started out

2:18:34

with a bad faith assumption. Yeah. And

2:18:36

I mean, I look, I can't like, and you as a programmer,

2:18:39

just being transparent out there, actually having

2:18:41

like fun and like, this is what programmers

2:18:44

should be about. It's just like, I love that Elon gave

2:18:46

me this opportunity. Yeah. Like really it does.

2:18:48

And like, you know, he came up with my, the day I quit, he came

2:18:50

up with my Twitter spaces afterward and we had a conversation

2:18:53

like, I just, I respect that so much.

2:18:55

Yeah. And it's also inspiring to just engineers

2:18:57

and programmers and just, it's cool. It should

2:18:59

be fun. The people that were hating on it is

2:19:01

like, oh man. It

2:19:03

was fun. It was fun. It was stressful,

2:19:06

but I felt like, you know, it was not like a cool like

2:19:08

point in history and like, I hope I was useful and

2:19:10

probably kind of wasn't, but like, maybe

2:19:12

I was also one of the people that

2:19:15

kind of made a strong case to refactor. Yeah.

2:19:17

And that that's a really

2:19:19

interesting thing to raise. Like

2:19:21

maybe that is the right, you know, the

2:19:24

timing of that is really interesting. If you look at just the development

2:19:26

of autopilot, you know, going from

2:19:29

mobile eye to just like more,

2:19:32

if you look at the history of semi-autonomous

2:19:34

driving in Tesla is, is

2:19:36

more and more

2:19:38

like you could say refactoring or,

2:19:41

or starting from scratch, redeveloping from scratch. It's

2:19:43

refactoring all the way down. And like,

2:19:46

and the question is like, can you do that sooner? Can

2:19:49

you maintain product profitability?

2:19:52

And like, what's the, what's the right time to do it? How

2:19:55

do you do it? You know, on any one day,

2:19:57

it's like, you don't want to pull off the band-aids. Like

2:19:59

it's.

2:19:59

Like everything works

2:20:02

is just like little fixes here and there,

2:20:04

but maybe starting from scratch. This

2:20:06

is the main philosophy of TinyGrad. You have never

2:20:09

refactored enough. Your code can get smaller,

2:20:11

your code can get simpler, your ideas can be more

2:20:13

elegant.

2:20:14

But would you consider,

2:20:16

you know, say you were like running

2:20:19

Twitter development teams, engineering

2:20:21

teams, would

2:20:23

you go as far as like different programming language?

2:20:26

Just go that far. I

2:20:28

mean, the first thing that I would do

2:20:31

is build tests. The first thing I would

2:20:33

do is get a CI to

2:20:36

where people can trust to make changes.

2:20:39

So that if you keep- Before I touched any

2:20:41

code, I would actually say, no one touches

2:20:44

any code. The first thing we do is we test this code

2:20:46

base. I mean, this is classic, this is how you approach a legacy code

2:20:48

base. This is like what any, how to approach

2:20:50

a legacy code base book will tell you.
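
A sketch of what "tests between the pieces" of a legacy microservice architecture could look like before any rewrite starts: pin the contract at a service boundary so each piece can later be swapped out independently. The service and its fields here are hypothetical, not Twitter's real ones.

```python
# Contract test at a hypothetical service boundary, runnable with pytest.

def timeline_service(user_id: str) -> dict:
    """Stand-in for one microservice response at the RPC boundary."""
    return {"user_id": user_id, "tweet_ids": ["1", "2", "3"], "version": 2}

REQUIRED_FIELDS = {"user_id": str, "tweet_ids": list, "version": int}

def test_timeline_response_honors_contract():
    resp = timeline_service("lex")
    for field, typ in REQUIRED_FIELDS.items():
        assert field in resp, f"missing field {field}"
        assert isinstance(resp[field], typ), f"{field} should be {typ.__name__}"

def test_timeline_ids_are_strings():
    resp = timeline_service("lex")
    assert all(isinstance(t, str) for t in resp["tweet_ids"])
```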

2:20:52

So, and then you hope

2:20:54

that there's modules that can

2:20:57

live on for a while. And

2:20:59

then you add new ones, maybe in a different

2:21:01

language or- Before we add

2:21:03

new ones, we replace old ones. Yeah, meaning

2:21:06

like replace old ones with something simpler. We

2:21:08

look at this like this thing that's 100,000

2:21:10

lines and we're like, well, okay,

2:21:12

maybe this did even make sense in 2010, but

2:21:14

now we can replace this with an open source thing.

2:21:17

Right? Yeah. And you know, we

2:21:19

look at this here, here's another 50,000 lines. Well,

2:21:21

actually, you know, we can replace this with 300 lines of go. And

2:21:25

you know what? We trust that the go actually replaces

2:21:27

this thing because all the tests still pass. So

2:21:29

step one is testing. Yeah. And

2:21:31

then step two is like the programming languages and afterthought,

2:21:34

right? You know, let a whole lot of people compete, be like,

2:21:36

okay, who wants to rewrite a module, whatever language you want to

2:21:38

write it in, just the tests have to pass. And

2:21:41

if you figure out how to make the test pass, but

2:21:43

break the site, that's, we got to go back to step

2:21:45

one. Step one is get tests that

2:21:47

you trust in order to make changes in the code base.
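
A sketch of "rewrite the module, keep the tests": run one shared suite against both the old implementation and its proposed replacement, and only accept the swap when every captured behavior survives. Both parsers below are toy stand-ins.

```python
# One test suite, two interchangeable implementations. Run with pytest.
import re
import pytest

def old_mention_parser(text: str) -> list[str]:
    # pretend this is the sprawling legacy version
    out = []
    for word in text.split():
        if word.startswith("@") and len(word) > 1:
            out.append(word[1:].rstrip(".,!?"))
    return out

def new_mention_parser(text: str) -> list[str]:
    # the small rewrite that wants to replace it
    return re.findall(r"@(\w+)", text)

@pytest.mark.parametrize("parser", [old_mention_parser, new_mention_parser])
def test_finds_simple_mentions(parser):
    assert parser("hello @george and @lex!") == ["george", "lex"]

@pytest.mark.parametrize("parser", [old_mention_parser, new_mention_parser])
def test_ignores_bare_at_sign(parser):
    assert parser("meet me @ noon") == []
```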

2:21:49

I wonder how hard it is to, because I'm with you

2:21:51

on testing and everything. Hey, you have from

2:21:54

tests to like asserts to everything, code

2:21:57

is just covered in this because.

2:21:59

it should be very

2:22:02

easy to make rapid changes

2:22:05

and no, that's not gonna break everything. And

2:22:08

that's the way to do it. But I wonder how difficult

2:22:10

is it to

2:22:11

integrate tests into

2:22:14

a code base that doesn't have many of them. So I'll

2:22:16

tell you what my plan was at Twitter. It's actually similar to

2:22:18

something we use at Comma. So at Comma we have this thing called process

2:22:20

replay. And we have a bunch of routes

2:22:22

that'll be run through. So Comma's a microservice architecture

2:22:25

too. We have microservices in the driving.

2:22:27

Like we have one for the cameras, one for the sensor, one for the

2:22:29

planner, one for

2:22:31

the model. And we

2:22:34

have an API, which the microservices talk

2:22:36

to each other with. We use this custom thing called Serial,

2:22:38

which uses a ZMQ. Twitter uses

2:22:42

Thrift. And then it uses

2:22:45

this thing called Finagle, which is a Scala RPC backend.

2:22:50

But this doesn't even really matter. The Thrift and Finagle

2:22:53

layer was a great place, I

2:22:56

thought, to write tests. To start building

2:22:58

something that looks like process replay. So Twitter

2:23:00

had some stuff that looked kind of like

2:23:02

this, but it wasn't offline. It

2:23:05

was only online. So you could ship

2:23:07

a modified version of it, and

2:23:09

then you could redirect

2:23:11

some of the traffic to your modified version and diff those

2:23:13

too. But it was all online. There

2:23:15

was no CI in the traditional sense.

2:23:17

I mean, there was some, but it was not full coverage. So

2:23:19

you can't run all of Twitter offline

2:23:22

to test something. Then this was another problem. You can't

2:23:24

run all of Twitter.

2:23:25

Period. Anyone person

2:23:27

can't run. Twitter runs in three data

2:23:29

centers, and that's it. There's no other

2:23:32

place you can run Twitter, which is like,

2:23:34

George, you don't understand. This is modern

2:23:36

software development. No, this is bullshit. Why

2:23:39

can't it run on my laptop?

2:23:41

What are you doing? Twitter can run on, yeah, okay.

2:23:43

Well, I'm not saying you're gonna download the whole

2:23:45

database to your laptop, but I'm saying all the middleware

2:23:47

and the front end should run on my laptop, right? That

2:23:50

sounds really compelling. Yeah. But

2:23:53

can that be achieved by

2:23:56

a code base that grows over the years? I

2:23:58

mean, the three data centers... It doesn't have to be, right? Because

2:24:01

they're totally different designs. The

2:24:03

problem is more like,

2:24:05

why did the code base have to grow?

2:24:07

What new functionality has been added to

2:24:09

compensate for the lines

2:24:11

of code that are there?

2:24:13

One of the ways to explain is that the

2:24:15

incentive for software developers to move up in the company

2:24:18

is to add code. To

2:24:20

add, especially large. And you know what? The incentive for

2:24:22

politicians to move up in the political structure is to add laws.

2:24:25

Yeah. Same problem. Yeah.

2:24:28

Yeah. The flip

2:24:30

side is to simplify, simplify, simplify. I

2:24:33

mean, you know what? This is something that

2:24:35

I do differently from Elon with

2:24:37

comma about self-driving cars.

2:24:40

You know, I hear the new version's gonna come out

2:24:42

and the new version is not gonna be better

2:24:45

at first, and it's gonna require a ton of refactors.

2:24:48

I say, okay, take as long as you need. Like,

2:24:51

you convinced me this architecture's better? Okay, we

2:24:53

have to move to it. Even if it's not gonna

2:24:55

make the product better tomorrow, the top

2:24:57

priority is getting the architecture right.

2:25:00

So what do you think about

2:25:01

sort of a thing where

2:25:03

the product is online?

2:25:05

So I guess,

2:25:07

would you do a refactor? If you ran

2:25:10

engineering on Twitter, would you just do a refactor?

2:25:12

How long would it take? What would that mean for the

2:25:14

running of the actual service?

2:25:17

You know, and...

2:25:21

I'm not the right person to run Twitter.

2:25:23

I'm just not. And that's the problem. Like,

2:25:26

I don't really know. I don't really know if

2:25:28

that's... A common thing that I thought

2:25:30

a lot while I was there was whenever I thought something

2:25:32

that was different to what Elon thought,

2:25:34

I'd have to run something in the back of my head reminding

2:25:37

myself

2:25:38

that Elon is the richest man in

2:25:40

the world. And in

2:25:42

general, his ideas are better than mine.

2:25:45

Now, there's a few things I think I do understand

2:25:48

and know more about, but

2:25:50

like in general,

2:25:52

I'm not qualified to run Twitter. I'm not necessarily

2:25:55

qualified, but like, I don't think I'd be that good at it. I

2:25:57

don't think I'd be good at it.

2:25:58

I don't think I'd really be good at running

2:25:59

an engineering organization at scale.

2:26:02

I think I could

2:26:04

lead a very good refactor

2:26:07

of Twitter,

2:26:08

and it would take like six months to a year. And

2:26:11

the results to show at the end of it would be

2:26:13

feature development in general takes

2:26:16

10x less time, 10x less man hours.

2:26:18

That's what I think I could actually do. Do

2:26:21

I think that it's the right decision for the business?

2:26:24

That's above my pay grade.

2:26:26

Yeah,

2:26:28

but a lot of these kinds of decisions are

2:26:30

above everybody's pay grade. I don't want to be a manager.

2:26:33

I don't want to do that. Like if

2:26:35

you really forced me to, yeah, it would

2:26:37

maybe

2:26:40

make me upset if I had to make

2:26:42

those decisions. I don't want to. Yeah,

2:26:46

but a refactor is so compelling.

2:26:49

If this is to become something

2:26:51

much bigger than what Twitter was, it

2:26:54

feels like a refactor

2:26:56

has to be coming at some point. George, you're

2:26:58

a junior software engineer. Every junior software

2:27:01

engineer wants to come in and refactor the whole code.

2:27:04

OK, that's like your opinion,

2:27:06

man. Yeah,

2:27:09

sometimes they're right. Well,

2:27:11

whether they're right or not, it's definitely not

2:27:13

for that reason, right? It's definitely not a question of engineering

2:27:16

prowess. It is a question of maybe what the priorities are for the

2:27:18

company. And I did get more intelligent

2:27:21

feedback from people, I think, in good faith, like saying

2:27:23

that. Actually,

2:27:26

from Elon.

2:27:26

And from Elon, people

2:27:29

were like, well, a stop the world refactor

2:27:32

might be great for engineering, but you know we have a business

2:27:34

to run.

2:27:35

And hey, above

2:27:37

my pay grade. What did you think about Elon

2:27:39

as an engineering leader, having

2:27:42

to experience him in the most chaotic

2:27:44

of spaces, I would say?

2:27:51

My respect for him is unchanged. And I did have

2:27:53

to think a lot more deeply about some

2:27:55

of the decisions he's forced to make. About.

2:27:59

the tensions within those,

2:28:01

the trade-offs within those decisions?

2:28:05

About like a whole like

2:28:08

matrix coming at him. I think that's Andrew

2:28:10

Tate's word for it. Sorry to borrow it. Also,

2:28:12

bigger than engineering, just everything. Yeah,

2:28:16

like

2:28:17

the war on the woke. Yeah.

2:28:20

Like, it just, man,

2:28:23

and like, he doesn't

2:28:25

have to do this, you know? He doesn't

2:28:27

have to. He could be like Parag

2:28:29

and go chill at the Four Seasons of Maui, you

2:28:32

know? But

2:28:33

see, one person I respect and one person I don't.

2:28:36

So his heart is in the right place

2:28:38

fighting, in this case, for this ideal

2:28:40

of the freedom of expression.

2:28:43

Well, I wouldn't define the ideal so simply.

2:28:45

I think you can define the ideal

2:28:47

no more than just saying

2:28:50

Elon's idea of a good world.

2:28:52

Freedom of expression is...

2:28:54

To you, it's still, the downside

2:28:57

of that is the monarchy.

2:28:58

Yeah, I mean, monarchy has

2:29:01

problems, right? But I mean, would

2:29:04

I trade right now the

2:29:06

current oligarchy, which runs America

2:29:08

for the monarchy? Yeah, I would. Sure.

2:29:11

For the Elon monarchy? Yeah, you know why? Because

2:29:13

power would cost one cent a kilowatt hour.

2:29:16

A tenth of a cent a kilowatt hour.

2:29:18

What do you mean? Right now, I pay

2:29:20

about 20 cents a kilowatt hour for electricity

2:29:23

in San Diego. It's like the

2:29:25

same price you paid in 1980. What

2:29:27

the hell? So you would see a lot of

2:29:29

innovation with Elon.

2:29:31

Maybe it'd have some hyperloops.

2:29:33

Yeah. Right? And I'm willing

2:29:35

to make that trade off, right? I'm willing to make... And this is

2:29:37

why. You know, people think that like dictators take

2:29:39

power through some untoward

2:29:41

mechanism. Sometimes they do, but usually it's

2:29:44

because the people want them. And

2:29:46

the downsides of a dictatorship,

2:29:48

I feel like we've gotten to a point now with the oligarchy where,

2:29:51

yeah, I would prefer the dictator.

2:29:53

What

2:29:56

do you think about Scala as a programming language?

2:30:01

I liked it more than I thought. I did the tutorials.

2:30:03

Like I was very new to it. Like it would take me six months to be able to

2:30:05

write like good Scala.

2:30:07

I mean, what did you learn about learning a new programming language

2:30:09

from that? I love doing

2:30:11

like new programming, I did tutorials and doing them. I did all this

2:30:13

for Rust. It

2:30:17

keeps some of its upsetting JVM roots,

2:30:20

but it is much nicer. In

2:30:22

fact, I almost don't know why Kotlin took off

2:30:24

and not Scala.

2:30:26

I think Scala has some beauty that Kotlin lacked.

2:30:30

Whereas Kotlin felt a lot more.

2:30:33

I mean, it was almost like, I don't know if it actually was

2:30:35

a response to Swift, but that's kind of what it felt like.

2:30:38

Like Kotlin looks more like Swift and Scala looks more

2:30:40

like, well, I could have a functional programming language,

2:30:42

more like an OCaml or Haskell. Let's

2:30:44

actually just explore, we touched it a little bit,

2:30:46

but just on the art, the

2:30:49

science and the art of programming. For

2:30:51

you personally, how much of your programming is done with

2:30:53

GPT currently? None.

2:30:55

I don't use it at all.

2:30:57

Because you prioritize simplicity so much.

2:30:59

Yeah, I find that a

2:31:01

lot of it is noise. I do use

2:31:04

VS code

2:31:05

and I do like

2:31:07

some amount of autocomplete.

2:31:09

I do like a

2:31:10

very, feels like rules-based autocomplete.

2:31:13

An autocomplete is going to complete the variable name

2:31:15

for me, so I'm going to type it, I can just press tab. That's

2:31:17

nice, but I don't want it autocomplete. You know what

2:31:19

I hate? When autocompletes, when I type the word for

2:31:22

and it puts like two parentheses

2:31:24

and two semicolons and two braces, I'm like, oh man.

2:31:28

Well, let me with VS code

2:31:31

and GPT with Codex,

2:31:35

you can kind of brainstorm. I

2:31:37

find, I'm like

2:31:40

probably the same as you, but I like

2:31:42

that it generates code and you basically

2:31:45

disagree with it and write something simpler. But

2:31:47

to me, that somehow is like inspiring,

2:31:50

it makes me feel good. It also gamifies the simplification

2:31:52

process because I'm like, oh yeah, you

2:31:55

dumb AI system. You think this is the way

2:31:57

to do it. I have a simpler thing here. It just

2:31:59

constantly

2:31:59

reminds me of bad stuff. I

2:32:02

mean, I tried the same thing with rap, right? I tried the same

2:32:04

thing with rap, and I actually think I'm a much better programmer than rapper,

2:32:07

but I even tried, I was like, okay, can we get some inspiration

2:32:09

from these things for some rap lyrics? And

2:32:12

I just found that it would go back to the most cringy

2:32:15

tropes and dumb rhyme schemes,

2:32:17

and I'm like, yeah, this is what the code looks

2:32:19

like too. I think you and

2:32:21

I probably have different thresholds for cringe code. You

2:32:24

probably hate cringe code. So

2:32:26

it's for you.

2:32:30

Boilerplate is a part of code. Some

2:32:35

of it, yeah,

2:32:39

and some of it is just faster lookup. Because

2:32:43

I don't know about you, but I don't remember everything. I'm

2:32:46

offloading so much of my memory about

2:32:50

different functions, library functions, all that kind

2:32:52

of stuff. GPT

2:32:55

just is very fast. It's standard

2:32:57

stuff, standard library

2:33:00

stuff, basic stuff that everybody uses.

2:33:03

I think that, I

2:33:08

don't know, I mean, there's just a little of this in Python.

2:33:10

Maybe if I was

2:33:12

coding more in other languages, I would consider it

2:33:14

more, but I feel like Python already

2:33:17

does such a good job of removing

2:33:19

any Boilerplate.

2:33:20

That's true. It's the closest thing you can get to pseudocode,

2:33:23

right? Yeah, that's true. That's

2:33:25

true. I'm like,

2:33:27

yeah, sure. If I like, yeah, I'm great, GPT. Thanks

2:33:29

for reminding me to free my variables. Unfortunately,

2:33:32

you didn't really recognize the scope correctly

2:33:34

and you can't free that one, but you put

2:33:36

the frees there and I get it. Fiverr.

2:33:41

Whenever I've used Fiverr for certain things,

2:33:43

like design or whatever, it's

2:33:45

always, you come back. I think that's probably

2:33:47

closer, my experience with Fiverr is closer to your

2:33:50

experience with programming

2:33:50

with GPT. You're just frustrated

2:33:53

and feel worse about the whole process of design

2:33:55

and art and whatever. Whatever I used Fiverr

2:33:57

for.

2:33:59

Still, I just

2:34:02

feel like later versions of GPT, I'm

2:34:04

using GPT

2:34:07

as much as possible

2:34:09

to just learn the dynamics of

2:34:11

it, like these early versions,

2:34:13

because it feels like in the future, you'll be using it

2:34:16

more and more.

2:34:17

And so like, I don't want to be, like

2:34:19

for the same reason I gave away all

2:34:21

my books and switched to Kindle, because

2:34:23

like, all right, how long

2:34:25

are we gonna have paper books? Like 30 years

2:34:28

from now, like I want to learn

2:34:30

to be reading on Kindle, even though I don't

2:34:32

enjoy it as much, and you learn to enjoy it more.

2:34:34

In the same way, I switched from, let

2:34:37

me just pause, I switched from Emacs

2:34:39

to VS Code. Yeah, I switched from

2:34:41

Vim to VS Code, I think I'm similar, but... Yeah,

2:34:44

it's tough. And Vim to VS Code is even

2:34:46

tougher,

2:34:47

because Emacs is like, old,

2:34:49

like more outdated, feels like it. The community

2:34:52

is more outdated. Vim is

2:34:54

like pretty vibrant still, so. I

2:34:56

never used any of the plugins, I still don't use any of the

2:34:58

plugins. I looked at myself in the mirror, I'm like, yeah, you

2:35:01

wrote some stuff in Lisp, yeah. But

2:35:03

I

2:35:03

never used any of the plugins in Vim either. I had

2:35:05

the most vanilla Vim, I have a syntax highlighter,

2:35:07

I didn't even have autocomplete. Like, these

2:35:09

things, I feel like

2:35:11

help you so marginally, that

2:35:15

like, and now,

2:35:17

okay, now VS Code's autocomplete

2:35:19

has gotten good enough, that like, okay, I don't have to set

2:35:21

it up, I can just go into any code base and autocomplete's right 90% of the

2:35:24

time. Okay, cool, I'll take it. All

2:35:26

right, so,

2:35:28

adapting

2:35:30

to the tools once they're good. But like, the

2:35:32

real thing that I want is not

2:35:35

something that like,

2:35:37

tab completes my code and gives me ideas.

2:35:39

The real thing that I want is a very intelligent

2:35:41

pair programmer that comes

2:35:44

up with a little pop-up saying, hey, you

2:35:46

wrote a bug on line 14 and here's what it is.

2:35:49

Yeah. Now I like that. You know what does

2:35:51

a good job of this? MyPy. I

2:35:53

love MyPy. MyPy, this fancy type checker

2:35:55

for Python. Yeah. And actually I tried like,

2:35:57

Microsoft released one too and it was like 60% false

2:36:01

positives. MyPy is like 5% false positives.

2:36:04

95% of the time it recognizes, I

2:36:07

didn't really think about that typing interaction correctly.

2:36:09

Thank you MyPy. So you like

2:36:11

type hinting,

2:36:13

you like pushing the language towards

2:36:15

being a typed language. Oh yeah, absolutely. I

2:36:18

think optional typing is great.

2:36:20

I mean look, I think that it's like a meet in the middle,

2:36:22

right, like Python has these optional type hinting and C++

2:36:24

has auto.

2:36:27

C++ allows you to take a step back.

2:36:29

Well C++ would have you brutally type out

2:36:31

std::string::iterator, right? Now

2:36:34

I can just type auto, which is nice. And then Python

2:36:36

used to just have A.

2:36:38

What type is A?

2:36:40

It's an A. A colon

2:36:43

STR. Oh, okay, it's a string, cool.
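
For reference, the contrast being drawn looks roughly like this in Python; the function and the deliberate mistake are made up, and the comment describes the kind of error a static checker such as mypy reports rather than its exact output.

```python
# Optional type hints: cheap to write, and a static checker like mypy can
# catch mismatches before the code ever runs. Example code is hypothetical.

def greet(name: str, times: int) -> str:
    return ", ".join([f"hi {name}"] * times)


a: str = "tiny"        # "a colon str" -- the reader (and the checker) knows it's a string
print(greet(a, 3))     # fine

# greet(3, a)          # a static checker would flag this: the arguments are swapped
```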

2:36:46

Yeah. I wish there was a way,

2:36:48

like a simple way in Python to

2:36:50

like turn on a mode which would enforce the types.

2:36:54

Yeah, like give a warning when there's no type something like this.

2:36:56

Well no, to give a warning where, like MyPy is a

2:36:58

static type checker, but I'm asking just for

2:37:00

a runtime type checker. Like there's like ways to like hack this in, but

2:37:03

I wish it was just like a flag, like python3 -t.
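
CPython has no built-in flag that enforces annotations at runtime, so the decorator below is only a sketch of what such a mode might do for simple, non-generic annotations; the flag name above is hypothetical, and real projects often reach for a dedicated library instead.

```python
# Sketch of runtime type enforcement via a decorator. Handles plain classes
# only (not generics like list[int]); purely illustrative, not a real flag.
import functools
import inspect
import typing


def enforce_types(fn):
    hints = typing.get_type_hints(fn)
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} should be {expected.__name__}, got {type(value).__name__}")
        result = fn(*args, **kwargs)
        expected = hints.get("return")
        if isinstance(expected, type) and not isinstance(result, expected):
            raise TypeError(f"return should be {expected.__name__}, got {type(result).__name__}")
        return result

    return wrapper


@enforce_types
def scale(x: float, factor: int) -> float:
    return x * factor


scale(2.0, 3)      # fine
# scale(2.0, "3")  # raises TypeError at call time instead of failing later
```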

2:37:05

Oh, I see, I see. Enforce

2:37:08

the types at runtime. Yeah, I feel like that makes you

2:37:10

a better programmer that that's the kind of test,

2:37:12

right? That the type

2:37:14

remains the same. Well, then I know that I didn't, like,

2:37:16

mess any types up, but again, like MyPy is getting really

2:37:19

good and I love it. And I

2:37:21

can't wait for some of these tools to become AI powered.

2:37:24

Like I want AIs reading my code and giving me

2:37:26

feedback. I don't want AIs

2:37:29

writing half-assed autocomplete

2:37:31

stuff for me. I wonder if you

2:37:33

can now take GPT and give it

2:37:35

code that you wrote for a function and say, how

2:37:37

can I make this simpler and have it accomplish

2:37:39

the same thing?

2:37:40

I think you'll get some good ideas on some code.

2:37:43

Maybe not the code you write for

2:37:46

tinygrad type of code, because that requires so

2:37:49

much design thinking, but like other

2:37:51

kinds of code. I don't know. I downloaded

2:37:53

that plugin maybe like two months ago, I

2:37:55

tried it again and found the same. Look,

2:37:58

I don't doubt that these models

2:38:00

are going to first become

2:38:02

useful to me, then be as good as me, and then

2:38:04

surpass me. But

2:38:07

from what I've seen today, it's like,

2:38:09

someone

2:38:13

occasionally taking over my keyboard

2:38:15

that I hired from Fiverr, yeah. I'd

2:38:19

rather not. Ideas about how to debug

2:38:21

the code. Basically a better debugger is

2:38:23

really interesting. But it's not a better debugger. I

2:38:25

guess I would love a better debugger.

2:38:27

Yeah, it's not yet, yeah. But it feels like it's not too

2:38:29

far. Yeah, one of my coworkers says he uses

2:38:31

them for print statements.

2:38:33

Like every time he has to, just like when he needs, the only

2:38:35

thing he can really write is like, okay, I just want to

2:38:37

write the thing to print the state out right now.

2:38:39

Oh, that definitely

2:38:41

is much faster, is print statements,

2:38:44

yeah. I see myself using that a

2:38:46

lot, just like, because it figures out the rest of

2:38:48

the functions, just like, okay, print everything. Yeah, print

2:38:50

everything, right? And then, yeah, if you want a pretty printer,

2:38:53

maybe.

2:38:53

I'm like, yeah, you know what? I think in two years,

2:38:56

I'm gonna start using these plugins. And

2:38:59

then in five years, I'm gonna be heavily relying

2:39:02

on some AI augmented flow. And then

2:39:04

in 10 years. Do you think it will ever get to 100%?

2:39:07

Where are the, like, what's

2:39:09

the role of the human that it

2:39:11

converges to as a programmer?

2:39:15

So you think it's all generated?

2:39:17

Our niche becomes, oh, I think it's over for

2:39:19

humans in general.

2:39:21

It's

2:39:21

not just programming, it's everything.

2:39:23

So niche becomes, well. Our niche becomes

2:39:25

smaller and smaller and smaller. In fact, I'll tell you what the last

2:39:27

niche of humanity is gonna be. Yeah. This

2:39:30

is a great book, and it's, if I recommended

2:39:32

Metamorphosis of Prime Intellect last time, there

2:39:34

is a sequel called A Casino Odyssey in Cyberspace.

2:39:38

And

2:39:39

I don't wanna give away the ending of this, but it tells

2:39:42

you what the last remaining human currency is.

2:39:44

And I agree with that.

2:39:45

We'll

2:39:48

leave that as the cliffhanger. So

2:39:51

no more programmers left, huh?

2:39:54

That's where we're going. Well, unless you want handmade

2:39:56

code, maybe they'll sell it on Etsy. This is handwritten

2:39:59

code. It

2:40:01

doesn't have that machine polish to it. It

2:40:03

has those slight imperfections that would only be written

2:40:05

by a person.

2:40:07

I wonder how far away we are from that. I

2:40:10

mean, there's some aspect to, you

2:40:12

know, on Instagram your title is listed as

2:40:14

prompt engineer. Right.

2:40:17

Thank you for noticing. I

2:40:19

don't know if it's ironic or not,

2:40:24

or sarcastic or not. What

2:40:27

do you think of prompt engineering as a scientific

2:40:30

and engineering discipline, or maybe

2:40:33

an art form? You know what?

2:40:36

I started comma six years ago. And I started

2:40:38

the tiny Corp a month ago.

2:40:42

So much has changed. Like I'm now

2:40:44

thinking I'm now like,

2:40:47

I started like going through like similar comma processes

2:40:50

to like starting a company. I'm like, okay, I'm going to get an office in San

2:40:52

Diego. I'm going to bring people here. I

2:40:55

don't think so. I think I'm actually going to do remote, right?

2:40:58

George, you're going to do remote. You hate remote.

2:41:00

Yeah, but I'm not going to do job interviews. The only

2:41:02

way you're going to get a job is if you contribute to the GitHub.

2:41:04

Right. And then like

2:41:07

it like, like interacting through GitHub,

2:41:10

like, like GitHub being the real like project

2:41:13

management software for your company. And the thing pretty

2:41:15

much just is a GitHub repo is

2:41:18

like showing

2:41:18

me kind of what the future of, okay.

2:41:21

So a lot of times I'll go on a Discord, our tinygrad

2:41:23

Discord, and I'll throw out some random, like, hey,

2:41:25

you know, can you change, instead of having LOG and EXP

2:41:28

as llops, change it to LOG2 and

2:41:30

EXP2?

2:41:31

It's a pretty small change. You can just use, like, the change of base formula.
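
The change of base formula mentioned here is just the pair of identities ln x = log2(x) * ln 2 and e^x = 2^(x * log2 e). A standalone check of that identity, not tinygrad's actual llop code:

```python
# If a backend only exposes LOG2 and EXP2, natural log and exp come back with
# one constant multiply. Not tinygrad's code, just the identity it relies on.
import math

LN2 = math.log(2.0)       # ln 2
LOG2_E = 1.0 / LN2        # log2(e)


def log_from_log2(x: float) -> float:
    # ln(x) = log2(x) * ln(2)
    return math.log2(x) * LN2


def exp_from_exp2(x: float) -> float:
    # e**x = 2**(x * log2(e))
    return 2.0 ** (x * LOG2_E)


assert abs(log_from_log2(10.0) - math.log(10.0)) < 1e-9
assert abs(exp_from_exp2(3.0) - math.exp(3.0)) < 1e-9
```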

2:41:36

That's the kind of task that I can see an

2:41:38

AI being able to do in a few years.

2:41:40

Like in a few years, I could see myself describing

2:41:42

that.

2:41:43

And then within 30 seconds, a pull request is

2:41:45

up that does it, and it passes my CI

2:41:47

and I merge it. Right. So I really

2:41:49

started thinking about like, well, what is the future

2:41:52

of like jobs? How

2:41:54

many AIs can I employ at my company? As

2:41:56

soon as we get the first tiny box up, I'm going to stand up

2:41:58

a 65B LLaMA in

2:41:59

the Discord. And it's like, yeah, here's the tiny

2:42:02

box. He's just like, he's chilling with us. Basically,

2:42:05

I mean, like you said with niches,

2:42:08

most human jobs will

2:42:12

eventually be replaced with prompt engineering. Well

2:42:14

prompt engineering kind of is this like,

2:42:17

as you like move up the stack, right?

2:42:20

Like, okay, there used to be humans actually doing

2:42:23

arithmetic by hand. There used to be like big farms

2:42:26

of people doing pluses and stuff, right?

2:42:28

And then you have like spreadsheets, right? And

2:42:31

then okay, the spreadsheet can do the plus for me.

2:42:33

And then you have like macros,

2:42:35

right? And then you have like things that basically

2:42:37

just are spreadsheets under the hood, right?

2:42:39

Like accounting software.

2:42:43

As we move further up the abstraction, what's

2:42:45

at the top of the abstraction stack? Well, prompt engineer.

2:42:48

Yeah. Right. What is what is the

2:42:50

last thing if you think about like humans

2:42:53

wanting to keep control? Well,

2:42:56

what am I really in the company but a prompt engineer,

2:42:58

right?

2:42:59

Is there a certain point where the AI

2:43:01

will be better at writing prompts? Yeah,

2:43:04

but you see the problem with the AI writing prompts,

2:43:07

a definition that I always liked of AI was

2:43:09

AI is the do what I mean machine, right?

2:43:12

AI is not the like, the

2:43:14

computer is so pedantic. It does

2:43:16

what you say. So,

2:43:19

but you want to do what I mean machine. Yeah, right.

2:43:22

You want the machine where you say, you know, get my

2:43:24

grandmother out of the burning house. It like reasonably

2:43:26

takes your grandmother and puts her on the ground, not lifts her

2:43:29

1000 feet above the burning house and lets her fall.

2:43:31

But you know, but it's not going to

2:43:37

find the meaning. I mean, to do

2:43:40

what I mean, it has to figure stuff

2:43:42

out. Sure.

2:43:43

And the thing you'll

2:43:45

maybe ask it to do is run

2:43:48

government for me. Oh, and do what I

2:43:50

mean very much comes down to how aligned is that

2:43:52

AI with you? Of course, when

2:43:55

you talk to an AI that's made

2:43:57

by a big company in the cloud, the

2:43:59

AI fundamentally is

2:44:02

aligned to them, not to you. That's

2:44:04

why you have to buy a tiny box, so you make sure the AI

2:44:06

stays aligned to you. Every time that

2:44:08

they start to pass AI regulation

2:44:11

or GPU regulation, I'm going to see sales of tiny

2:44:13

boxes spike. It's going to be like guns. Every

2:44:15

time they talk about gun regulation, boom,

2:44:18

gun sales. From the space of AI, you're an

2:44:20

anarchist, anarchism, espouser,

2:44:24

believer. I'm an informational anarchist, yes.

2:44:26

I'm an informational anarchist and a physical

2:44:28

statist.

2:44:30

I do not think anarchy in the

2:44:32

physical world is very good because I exist in the physical

2:44:34

world, but I think we can construct this virtual

2:44:36

world where anarchy, it

2:44:38

can't hurt you. I love that Tyler, the Creator tweet.

2:44:41

Yo, cyberbullying isn't real, man.

2:44:44

Have you tried? Turn off the screen.

2:44:46

Close your eyes. Like ... Yeah.

2:44:51

Well, how do you prevent

2:44:54

the AI from basically

2:44:56

replacing all human-prompt

2:44:59

engineers? It's like a self,

2:45:02

like nobody's the prompt engineer anymore, so

2:45:04

autonomy, greater and greater autonomy until it's

2:45:06

full autonomy. Yeah. And

2:45:08

that's just where it's headed. Because

2:45:10

one person is going to say,

2:45:12

run everything for me. You

2:45:15

see?

2:45:17

I look at potential futures, and

2:45:20

as long as the AIs go on

2:45:22

to create a vibrant

2:45:25

civilization with diversity

2:45:27

and complexity across the universe,

2:45:30

more power to them.

2:45:32

I'll die. If the AIs go on to actually

2:45:35

turn the world into paperclips and then they die out

2:45:37

themselves, well, that's horrific and we don't want that to happen.

2:45:39

So this is what I mean about robustness.

2:45:42

I trust robust machines. The

2:45:45

current AIs are so not robust. This comes

2:45:47

back to the idea that we've never made a machine that can self-replicate.

2:45:51

But when we have ... If the machines are truly

2:45:53

robust and there is one prompt engineer

2:45:55

left in the world,

2:45:56

hope

2:45:59

you're doing good, man. People believe in God, like,

2:46:01

you know, go

2:46:03

by God and go

2:46:06

forth and conquer the universe. Well,

2:46:08

you mentioned, because I talked to Mark about

2:46:10

faith in God and you said you were impressed by

2:46:12

that. What's your own

2:46:15

belief in God and how does that affect your work?

2:46:18

You know, I never

2:46:20

really considered when I was younger, I guess my parents

2:46:22

were atheists, so I was raised kind of atheist. I never really considered

2:46:24

how absolutely like silly atheism is, because

2:46:27

like

2:46:28

I create worlds. Every

2:46:31

like game creator, like, how are you an atheist, bro?

2:46:33

You create worlds. Who's a benevolent? No one

2:46:36

created our world, man. That's different. Haven't you heard about like

2:46:38

the Big Bang and stuff? Yeah. I mean, what's the Skyrim

2:46:40

myth origin story in Skyrim?

2:46:43

I'm sure there's like some part of it in Skyrim, but it's

2:46:45

not like if you ask the creators, like

2:46:47

the Big Bang is in universe, right? I'm sure

2:46:50

they have some Big Bang notion in Skyrim,

2:46:52

right?

2:46:52

But that obviously is not at all how Skyrim was

2:46:55

actually created. It was created by a bunch of programmers in

2:46:57

a room, right? So like, you

2:46:59

know, it just struck me one day how just

2:47:02

silly atheism is. Like, of course we were created

2:47:04

by God. It's the most obvious thing.

2:47:07

Yeah,

2:47:09

that's such

2:47:11

a nice way to put it. Like we're

2:47:13

such powerful creators ourselves. It's

2:47:17

silly not to concede that there's creators even more

2:47:19

powerful than us. Yeah. And then like,

2:47:21

I also just like I like that notion. That

2:47:24

notion gives me a lot of, I mean,

2:47:26

I guess you can talk about

2:47:27

what it gives a lot of religious people. It's kind

2:47:29

of like, it just gives me comfort. It's like, you know what, if

2:47:31

we mess it all up and we die out. Yeah,

2:47:35

in the same way that a video game kind of has comfort

2:47:37

in it. God will try again. Or

2:47:40

there's balance. Like somebody figured out a balanced

2:47:43

view of it. Like how to, like, so

2:47:45

it's, it all makes sense in the end. Like

2:47:49

a video game is usually not going to have crazy,

2:47:51

crazy stuff.

2:47:52

You know, people will come up with

2:47:54

like, well,

2:47:56

yeah, but like, man, who created God? That's

2:48:00

God's problem. You

2:48:04

ask me what if God I'm just living on I'm

2:48:07

just this NPC living in this game. I mean,

2:48:09

to be fair like if God didn't believe in God

2:48:11

he'd be, you know, as silly as the atheists here.

2:48:14

What do you think is the greatest

2:48:16

computer game of all time? Do you, do

2:48:18

you have any time to play games anymore?

2:48:21

Have you played Diablo 4?

2:48:23

I have not played Diablo 4. I

2:48:25

will be doing that shortly. I have to. All right,

2:48:27

There's just so much history with one two and three.

2:48:30

You know what?

2:48:30

I'm gonna say World of Warcraft

2:48:33

who and

2:48:35

It's not that the game is so it's

2:48:37

such a great game. It's not. It's

2:48:40

that I remember in 2005

2:48:42

when it came out how it opened my

2:48:45

mind to ideas

2:48:48

it opened my mind to like Like

2:48:50

this whole world we've created right? There's

2:48:54

almost been nothing like it since. Like you

2:48:57

can look at MMOs today and I think they all have lower

2:48:59

user bases than World of Warcraft. Like EVE Online

2:49:01

is kind of cool, but but

2:49:04

to think that like like everyone

2:49:07

knows. You know, people are always like, look at the Apple

2:49:09

headset like What do

2:49:11

people want in this VR? Everyone knows what they want.

2:49:13

I want Ready Player One. And

2:49:15

like that

2:49:17

so I'm gonna say World of Warcraft and I'm hoping

2:49:19

that like games can get out of this

2:49:21

whole Mobile gaming dopamine

2:49:23

pump thing and, like, create great worlds,

2:49:26

create worlds. Yeah, that, that

2:49:28

and worlds that captivate a very large

2:49:31

fraction of the human population. Yeah,

2:49:32

and I think it'll come back. I

2:49:34

believe but MMO like really Really

2:49:37

pull you in games do a good job I

2:49:40

mean, okay other like two other games that I think

2:49:42

are you know, very noteworthy from your Skyrim and GTA 5

2:49:45

Skyrim, yeah, that's

2:49:47

probably number one for me. GTA.

2:49:50

Yeah, what is it about GTA?

2:49:53

GTA is really I I guess

2:49:55

GTA is real

2:49:57

life. I know there's prostitutes and

2:49:59

guns

2:49:59

I'm not used to that. They exist in real life too.

2:50:03

Yes, I know. But it's

2:50:06

how I imagine your life to be actually. I wish

2:50:08

it was that cool. Yeah.

2:50:11

Yeah, I guess that's, you know, because there's The Sims,

2:50:13

right? Which is also a game I like, but

2:50:16

it's a gamified version of life, but it also

2:50:18

is, I would love a combination

2:50:20

of Sims and GTA. So

2:50:24

more freedom, more violence, more rawness,

2:50:27

but with also like ability to have

2:50:29

a career and family and this kind of stuff. What I'm

2:50:32

really excited about in games is

2:50:34

like, once we start getting intelligent

2:50:36

AI to interact with. Oh yeah. Like

2:50:38

the NPCs in games have never been.

2:50:41

But conversationally,

2:50:43

in every way. In

2:50:45

like, yeah, in like every way. Like when you were actually

2:50:48

building a world and a world imbued

2:50:51

with intelligence.

2:50:52

Oh yeah. And it's just hard. Like, there's

2:50:54

just like, you know, running World of Warcraft, like you're

2:50:56

limited by, you know, you're running on a Pentium 4,

2:50:58

you know? How much intelligence can run? How many flops did you

2:51:01

have? But now when I'm running

2:51:03

a

2:51:04

game on a hundred petaflop machine,

2:51:06

well, it's five people. I'm trying to

2:51:08

make this a thing. 20 petaflops of compute

2:51:10

is one person of compute. I'm trying to make that a unit. 20

2:51:14

petaflops is one person. One person.

2:51:17

One person flop. It's like a horsepower. Like

2:51:20

what's a horsepower? That's how powerful a horse is. What's a person

2:51:22

of compute? Well, you know, you flop. I

2:51:24

got it. That's interesting.
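
Taking the 20-petaflops-per-person figure at face value, the "hundred petaflop machine is five people" line is just a unit conversion. A tiny sketch, with the constant being George's proposed unit rather than a measured number:

```python
# "Person of compute" as defined in the conversation: 20 petaflops per person.
PFLOPS_PER_PERSON = 20.0


def persons_of_compute(machine_pflops: float) -> float:
    """Convert a machine's petaflops into the 'person of compute' unit."""
    return machine_pflops / PFLOPS_PER_PERSON


print(persons_of_compute(100.0))  # a 100-petaflop machine -> 5.0 people
```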

2:51:28

VR also adds, I mean, in terms of creating

2:51:30

worlds. You know what? Bought

2:51:32

a Quest 2. I put

2:51:34

it on and I can't believe the

2:51:36

first thing they show me is a bunch of scrolling

2:51:39

clouds and a Facebook login screen. Yeah.

2:51:42

You had the ability to bring

2:51:44

me into a world. Yeah. And

2:51:46

what did you give me? A pop-up, right?

2:51:48

Like, and this is why you're not cool,

2:51:50

Mark Zuckerberg, but you could be cool. Just

2:51:53

make sure on the Quest 3, you don't put me

2:51:55

into clouds and a Facebook login screen. Bring

2:51:57

me to a world. I just tried Quest 3.

2:51:59

but hear that guys, I agree with

2:52:02

that. So I- We didn't have the chance.

2:52:05

It was just so- You know what, cause

2:52:07

I, I mean the beginning,

2:52:09

what is it, Todd Howard said this about the

2:52:12

design of the beginning of the games he creates is like

2:52:14

the beginning is so, so, so important. I've

2:52:17

recently played Zelda for the first time, Zelda

2:52:19

Breath of the Wild, the previous one. And like,

2:52:22

it's very quickly you

2:52:24

come out of this, like within

2:52:26

like 10 seconds, you come out of like a cave type

2:52:28

place and it's like this world opens

2:52:31

up. It's like, ah. And

2:52:33

like

2:52:34

it pulls you in, you forget

2:52:36

whatever troubles I was having, whatever like-

2:52:39

I got to play that from the beginning. I played it for like an hour at a friend's

2:52:41

house. Ah, no, the beginning, they got

2:52:43

it. They did it really well, the

2:52:45

expansiveness of that space,

2:52:48

the peacefulness of that place. They got

2:52:51

this, the music, I mean so much of that is creating

2:52:53

that world and pulling you right in. I'm

2:52:55

gonna go buy a Switch. Like I'm gonna go today and buy

2:52:57

a Switch. You should. I

2:52:59

haven't played that yet, but Diablo 4 or something.

2:53:02

I mean, there's sentimentality also, but

2:53:05

something about VR

2:53:08

really is incredible, but the

2:53:10

new Quest 3 is mixed

2:53:12

reality. And I got a chance to try that. So

2:53:14

it's augmented reality. And

2:53:17

video games, it's done really, really well.

2:53:19

Is it pass-through or cameras? Cameras. It's cameras,

2:53:22

okay. Yeah.

2:53:22

The Apple one, is that one pass-through or cameras?

2:53:24

I don't know. I don't know how real it is.

2:53:26

I don't know anything, you know. It's coming out

2:53:29

in January. Is it January

2:53:31

or is it some point? Some point, maybe not

2:53:33

January. Maybe that's my optimism, but Apple,

2:53:35

I will buy it. I don't care if it's expensive

2:53:37

and does nothing, I will buy it. I will support this future

2:53:40

endeavor. You're the meme. Oh yes,

2:53:42

I support competition.

2:53:44

It seemed like Quest was like the only

2:53:46

people doing it. And this is great that they're like.

2:53:50

You know what? And this is another place, we'll give some more

2:53:52

respect to Mark Zuckerberg.

2:53:54

The two companies that have endured through

2:53:56

technology are Apple and Microsoft. And

2:53:59

what do they make? Computers and business services,

2:54:01

right? All the memes,

2:54:04

social ads, they all come and go. But

2:54:08

you want to endure, build hardware. Yeah,

2:54:11

and that does a really interesting

2:54:14

job. I mean, maybe I'm new with

2:54:16

this, but it's a $500 headset, Quest 3, and

2:54:21

just having creatures run

2:54:23

around the space, like our space right here,

2:54:26

to me, okay, this is very like boomer

2:54:28

statement, but it added windows

2:54:32

to the place. I

2:54:35

heard about the aquarium, yeah. Yeah, aquarium, but

2:54:37

in this case, it was a zombie game, whatever, it doesn't matter.

2:54:39

But just like, it modifies

2:54:42

the space in a way where I can't,

2:54:44

it really feels like a window

2:54:46

and you can look out. It's pretty cool,

2:54:49

like I was just, it's like a zombie game, they're

2:54:51

running at me, whatever. But what I was enjoying

2:54:53

is the fact that there's like a window and

2:54:55

they're stepping on objects in this space.

2:54:59

That was a different kind of escape. Also,

2:55:01

because you can see the other humans, so it's

2:55:03

integrated with the other humans, it's really.

2:55:06

And that's why it's more important than ever that

2:55:08

the AIs running on those systems are aligned

2:55:10

with you. Oh yeah. They're

2:55:12

gonna augment your entire world. Oh yeah, and

2:55:15

that, those AIs have a,

2:55:18

I mean, you think about all the dark stuff,

2:55:20

like sexual

2:55:23

stuff, like if those AIs threaten me, that

2:55:27

could be haunting. Like

2:55:29

if they, like threaten me in a non-video

2:55:32

game way, it's like, oh yeah, yeah, yeah, yeah,

2:55:34

yeah. Like they'll know personal information about

2:55:36

me and it's like, and then you lose track

2:55:38

of what's real, what's not, like what if stuff is like

2:55:40

hacked. There's two directions the AI girlfriend

2:55:43

company can take. There's like the highbrow,

2:55:45

something like Her, maybe

2:55:46

something you kind of talk to in this is, and then

2:55:48

there's the lowbrow version of it where I want to set up a brothel

2:55:51

in Times Square. Yeah. It's

2:55:53

not cheating if it's a robot, it's a VR

2:55:55

experience. Is there an in-between? No,

2:55:59

I don't wanna do that. That one or that one? Have you decided

2:56:01

yet? No, I'll figure it out. We'll see where the technology

2:56:04

goes. I would love to hear your opinions

2:56:06

for George's third company,

2:56:09

whether to do the brothel in

2:56:11

Times Square or the

2:56:13

Her experience. What

2:56:17

do you think company number four will be?

2:56:19

You think there'll be a company number four? There's a lot to do in

2:56:21

company number two. I'm just like, I'm talking about company number

2:56:23

three now. Didn't none of that tech exist yet? There's

2:56:25

a lot to do in company number two. Company

2:56:27

number two is going to be the great struggle

2:56:29

for the next six years. And in the next six years,

2:56:32

how centralized is compute going to be?

2:56:34

The less centralized compute is going to be, the better

2:56:36

of a chance we all have. So

2:56:39

you're a flag bearer for open

2:56:41

source distributed decentralization

2:56:43

of compute. We have to, we

2:56:46

have to, or they will just completely dominate us. I

2:56:48

showed a picture on stream of a man

2:56:51

in a chicken farm. You ever seen one of those factory farm

2:56:53

chicken farms? Why does he dominate all

2:56:55

the chickens?

2:56:58

Why does he- He's smarter, right? Some

2:57:00

people, some people on Twitch were like, he's bigger than the chickens.

2:57:03

Yeah. And now here's a man in a cow farm,

2:57:06

right?

2:57:07

So it has nothing to do with their size and everything

2:57:09

to do with their intelligence. And if one central

2:57:12

organization has all the intelligence, you'll

2:57:15

be the chickens and they'll be the chicken man.

2:57:17

But if we all have

2:57:19

the intelligence, we're all the chickens.

2:57:22

We're not all the man,

2:57:24

we're all the chickens. And there's no chicken

2:57:26

man. There's no chicken man. We're

2:57:29

just chickens in Miami. He

2:57:31

was having a good life, man. I'm sure he was.

2:57:34

I'm sure he was. What have you learned

2:57:36

from launching and running Comma AI and TinyCorp?

2:57:39

So this starting a

2:57:41

company from an idea and scaling

2:57:43

it. And by the way, I'm all in on TinyBox. So

2:57:46

I'm your, I

2:57:47

guess it's pre-order only now.

2:57:50

I wanna make sure it's good. I wanna make sure that

2:57:52

like the thing that I deliver is like

2:57:54

not gonna be like a Quest 2 which you buy

2:57:56

and use twice. I mean, it's better than a Quest,

2:57:59

which you bought and used.

2:58:00

Less than once statistically. Well,

2:58:02

if there's a beta program for a tiny

2:58:05

box, I'm in. Sounds good. So

2:58:07

I won't be the whiny You

2:58:10

know, I'll be the tech tech savvy user

2:58:12

of the tiny box just to be in what

2:58:15

if I'm there early days? What have you learned

2:58:17

from building these companies?

2:58:20

For the longest time at Comma, I asked

2:58:22

why. Why,

2:58:24

you know, why did I start a company? Why did I do

2:58:26

this? Um, you

2:58:30

know,

2:58:31

what else was like a little

2:58:34

so you like

2:58:37

You like bringing ideas to life. With

2:58:41

Comma,

2:58:43

it really started as an ego battle with Elon.

2:58:46

Wow,

2:58:46

I wanted to beat him.

2:58:47

I like I saw a worthy adversary, you know Here's

2:58:50

a worthy adversary who I can beat at self-driving

2:58:52

cars and like I think we've kept

2:58:54

pace and I think he's kept ahead

2:58:56

I think that's what's ended up happening there. Um,

2:58:58

but I do think Comma is, I

2:59:00

mean, Comma's profitable, like.

2:59:03

And like when this drive GPT stuff starts

2:59:05

working, that's it. There's no more, like, bugs in a loss

2:59:08

function. Like right now we're using like a hand-coded simulator.

2:59:10

There's no more bugs. This is gonna be it. Like, this

2:59:12

is their run up to driving. I hear a lot

2:59:15

of really

2:59:16

a lot of props for openpilot,

2:59:18

for Comma. So it's

2:59:20

better than FSD and Autopilot in certain ways.

2:59:23

It has a lot more to do with which feel

2:59:25

you like. We lowered the price on the hardware to $1,499. You

2:59:28

know how hard it is to ship reliable

2:59:30

consumer electronics that go on your windshield. We're

2:59:33

doing more than

2:59:34

like Most cell phone

2:59:36

companies. How'd you pull that off by the way shipping a

2:59:38

product that goes in a car? I know I

2:59:41

have an SMT line. It's

2:59:43

all... I make all the boards in-house in San Diego. Quality

2:59:47

control. I care immensely about it.

2:59:49

You're basically a mom-and-pop shop

2:59:53

with great testing. Our

2:59:55

head of openpilot is great at, like, you

2:59:57

know, okay. I want all the comma three
