Artificial: Episode 2, Selling Out

Released Sunday, 10th December 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:07

Melanie Subbiah is 28 years old. She

0:10

lives in New York, and she's an

0:12

artificial intelligence researcher. As

0:16

a kid, were you always interested in

0:18

computers? I actually was

0:20

not at all. I think I really

0:23

loved reading and writing, and that's actually

0:25

how I got into natural language processing. And

0:28

that then kind of became my interest

0:30

in loving computer science. And

0:33

when Melanie was in college in

0:35

2016, she tried to bring together

0:37

her two interests, computer

0:39

science and literature. Her

0:42

idea was to build an AI

0:44

model that could write short stories,

0:47

which at the time was a tall order.

0:51

How good of a writer was AI back

0:53

then? Terrible. Absolutely terrible. Could

0:55

it string a sentence together? It

0:57

could. It was hard to get

0:59

beyond a sentence, and definitely beyond

1:02

a couple sentences was very hard. For

1:05

her senior thesis, Melanie built

1:07

what's called a language model. She

1:10

fed a computer around 100,000 examples

1:12

of short stories, really

1:14

short ones, just five sentences

1:16

long. And then

1:19

she asked the model to write its own

1:21

story. She would give it

1:23

the first line, and the AI would fill

1:25

in the rest. For

1:27

some reason, the model loved to end the story

1:29

with somebody getting very nervous and going to buy

1:32

a car. And so...

1:35

I'm sorry. Is that

1:37

how... Where did it learn that? For

1:40

some reason, that was what it landed on. So

1:42

an example is, Tyrone was working at his

1:44

job in a local restaurant. He was very

1:46

nervous about his first job. He was very

1:48

nervous about the job. He was

1:51

very nervous about it. And he went to the store

1:53

to buy a new car. Are

2:03

there more examples? Yes, there's quite a few

2:05

examples. Can we hear another one? Yeah. Frank

2:08

had surprised the whole family when he came home that

2:10

day. When he got home, he was able to get

2:12

a new car. He was very

2:14

happy to be able to get his own. He

2:16

was very happy with his new job. He was

2:18

very excited to get it. Okay.

2:22

How many more of those stories do you have

2:24

from your college thesis?

2:27

I have seven good

2:30

ones and three bad ones. And

2:32

other ones. The

2:35

ones you read were good ones? Yeah.

2:41

The language model Melanie built in 2016 was not the

2:43

best. But

2:46

in just a few years, these

2:49

types of models would come a

2:51

long, long way. In

2:53

part, because of OpenAI. The

2:56

company would develop some of the most

2:58

advanced language models out there, eventually

3:01

leading to its breakout success,

3:04

ChatGPT. But

3:06

getting to that point would take a lot

3:08

of work and a lot of money.

3:12

For OpenAI, a company

3:14

set up as an idealistic nonprofit,

3:16

getting that money would create some

3:19

new problems. From

3:25

the Journal, welcome to

3:27

Artificial, the OpenAI story.

3:30

I'm Kate Linebaugh.

3:38

Coming up, Episode 2:

3:41

Selling Out. At

3:54

Salesforce, we help you create

3:56

exceptional experiences for your customers

3:58

built on trusted data and

4:00

AI. And to show you why

4:02

that matters, we've teamed up with

4:04

Spotify to share insights on your

4:07

own listening data, empowering you to

4:09

make better decisions and be more

4:11

productive. Tap the banner now to put the

4:13

power of trusted data and AI in

4:15

your hands. Brought to you by Salesforce,

4:18

the world's number one trusted

4:20

AI CRM. This

4:24

episode is brought to you by Polestar. The

4:27

era of automotive advances with the

4:29

All Electric Polestar 2. Now

4:31

with faster charging, improved EPA

4:33

estimated range of up to 320 miles and

4:37

advanced safety technology. Experience

4:39

awe-inspiring performance combined

4:41

with luxury design as standard. The

4:44

time is now. The All Electric

4:46

Polestar 2. Book a

4:48

test drive and order today at

4:51

polestar.com. In

4:55

2017, OpenAI was struggling. The two-year-old company was still

4:58

trying to build AGI, a

5:01

machine as smart or smarter than a

5:03

human, and it hadn't gotten very far. But

5:06

there was one researcher who was working

5:09

on something intriguing. He was one of

5:11

OpenAI's early employees, Alec

5:22

Radford. Radford

5:24

and a small team were working on a language

5:26

model. Like

5:30

Melanie's thesis project, this model

5:32

learned by detecting patterns in a dataset. But

5:35

this dataset wasn't five-sentence stories. It

5:38

was product reviews on Amazon. You

5:41

know the ones at the bottom of an Amazon page where

5:44

people write things like, or,

5:49

Radford had 82 million of these

5:52

kinds of reviews, and he

5:54

fed all of them into his language model.

5:57

Once the computer had finished processing them all,

6:00

Radford asked the model to generate

6:02

its own fictional reviews. And

6:05

the model could do it, sort

6:07

of. Here are some examples

6:09

of what it generated as read

6:11

by an AI voice generator. Great

6:14

little item, hard to put on the crib

6:16

without some kind of embellishment. A

6:18

must watch for any man who loved chess. All

6:24

the generated reviews sounded a little bit

6:26

like this. They don't sound

6:28

like they're written by a real person, but

6:31

they are mimicking the grammar, the

6:33

words, and the style of an

6:35

Amazon review. The way the

6:37

model did it was actually pretty simple.

6:40

It was just by guessing the next word.

6:43

Radford's system had taken that data

6:45

set of Amazon reviews, and

6:48

it had detected patterns in that data.

6:50

Those patterns then helped it calculate

6:52

which word was most likely to

6:55

come next.
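
To make that concrete, here is a purely illustrative toy sketch in Python of the next-word-guessing idea. It uses simple word-pair counts instead of the neural network Radford's model actually used, and the miniature training "reviews" are invented stand-ins, but the objective is the same: given the words so far, pick a likely next word.

from collections import Counter, defaultdict
import random

# Invented stand-in reviews, for illustration only.
reviews = [
    "great little item works well on the crib",
    "great little item hard to put on",
    "what a waste of time and money",
]

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for review in reviews:
    words = review.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

def generate(first_word, length=8):
    # Repeatedly guess a next word, in proportion to how often it
    # followed the current word in the training data.
    word, output = first_word, [first_word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("great"))

A real language model replaces the raw counts with a neural network that scores every possible next word, but the loop is the same: guess a likely next word, append it, and repeat.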

6:57

But the model was also doing something else,

7:00

something unexpected. When

7:04

someone writes an Amazon review, they

7:06

usually say it was either a great purchase

7:08

or it wasn't such a great purchase. But

7:11

Radford hadn't explicitly told the

7:13

computer which reviews were saying

7:16

nice things and which ones weren't. Still,

7:19

when Radford asked the system for a positive

7:22

review, it could write one. Here's

7:24

an example. Best hammock ever stays

7:27

in place and holds its shape. And

7:29

if it was asked to write a negative review? I

7:32

couldn't figure out how to use the gizmo. What a

7:34

waste of time and money. The

7:39

surprise was that the model was

7:42

able to identify the difference

7:44

between good and bad. This

7:46

model could tell you if a review is positive or negative.

7:49

This is Greg Brockman, one of OpenAI's founders

7:51

who we heard from in the last episode.

7:54

He's speaking at a recent TED Talk. And

7:57

today we are just like, oh, come on. Anyone can

7:59

do that. But this was the first

8:01

time that you saw this emergence, this

8:05

semantics that emerged from this underlying

8:07

syntactic process. Radford's

8:10

model had done something tantalizing.

8:12

From all that raw data,

8:14

it had extrapolated a higher-level

8:16

concept, good versus bad.

8:20

This led OpenAI's researchers to

8:22

ask, what if the model was

8:24

bigger? What if it was trained

8:26

on more data and different kinds of data?

8:29

What else would it be able to do? And

8:32

there we knew, you've got to scale this thing, you've got to see where it

8:34

goes. OpenAI decided

8:36

to go bigger. It

8:40

would create a new model that

8:42

would piggyback on an innovation at Google.

8:45

Google researchers had developed something called

8:48

a transformer, basically a

8:50

more effective way for computers to

8:52

process and learn from data.
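
As a purely illustrative aside, not Google's or OpenAI's actual code, here is a minimal NumPy sketch of the attention step at the heart of a transformer: every token compares itself with every other token and builds its new representation as a weighted mix of them. The matrix names and sizes are made up for the example.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (sequence_length, d_model) token embeddings.
    # Wq, Wk, Wv: learned projection matrices, each (d_model, d_head).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how much each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V  # each output is a weighted mix of the value vectors

# Toy example: 4 tokens with 8-dimensional embeddings and one 8-dimensional head.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)

A full transformer stacks many of these attention heads and layers, which is part of what makes it a more effective way to process and learn from large amounts of text.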

8:55

OpenAI's new model would be trained on a

8:58

more complex data set, specifically

9:00

7,000 self-published novels, mostly

9:05

adventure, fantasy and romance stories

9:07

with a dash of vampire tales.

9:10

Things like this. It was

9:12

a dark and stormy night, two figures, one

9:14

on horseback. The assassin stared at the TV

9:16

set in the hotel room. She stumbled upon

9:19

his hidden dungeon and found him climbing out

9:21

of her coffin. The

9:26

team fed this data into their new model.

9:29

And once the computer had processed it, the team

9:31

ran a series of tests to see what

9:34

it could do. One

9:36

asked the AI model to choose the correct ending

9:38

to a short story. Another

9:40

quizzed it on multiple choice reading

9:43

comprehension tests intended for middle

9:45

schoolers. And the AI

9:47

model was answering some questions correctly,

9:50

not all of them, but

9:52

enough to give the team hope that they were

9:54

onto something. So they

9:56

decided to build another even bigger

9:58

model. They called

10:01

it GPT-2, which stands

10:03

for Generative Pre-trained Transformer. They

10:06

trained it on the text of 45

10:08

million websites. GPT-2

10:10

wasn't perfect, but it

10:13

was better than the first model. It could

10:15

write. It could write well. And

10:17

it could do it in a specific style. So,

10:21

for example, when GPT-2 was asked to

10:23

write a news article about North Korea,

10:26

it generated this. The

10:28

incident is part of a U.S. plot to

10:30

destroy North Korea's economy, which has been hit

10:32

hard by international sanctions in the wake of

10:35

the North's third nuclear test in February. When

10:39

OpenAI published its findings on

10:41

GPT-2, other AI

10:43

researchers took note. I

10:46

thought it was just so cool.

10:49

That's Melanie Subbiah, the short

10:51

story programmer. By now, she was working

10:53

on AI at another tech company. I

10:57

think it was just very clear to me

10:59

that GPT-2 was just way, way better

11:01

than anything that we had seen before in

11:03

terms of text generation. And that was what

11:06

really caught my eye. I was just

11:08

like, this is so much better than anything

11:10

that we have. Another

11:13

person who was impressed was Ben

11:15

Mann, also an AI researcher.

11:18

At some point, GPT-2 came out. That was

11:20

in 2019. And

11:22

for me, that was the big moment. I

11:25

saw the blog post. I was able

11:27

to see some of the sample outputs.

11:29

And that was a moment

11:31

of realizing that this stuff was going to change the

11:33

world. Both

11:36

Ben and Melanie would go on

11:38

to join OpenAI. Together,

11:40

they would work on the lab's next

11:42

model, GPT-3.

11:45

This model would be OpenAI's

11:47

biggest yet. What

11:50

kind of data went into GPT-3? Yeah,

11:53

so it was a

11:56

mixture of a lot of different types of data, so a

11:58

lot of internet data. The

12:00

more curated web datasets come from

12:02

outbound links from highly rated Reddit

12:05

posts. And then there's

12:07

a corpus of online books.

12:10

There's all of English language Wikipedia. Right.

12:13

A giant amount of data. Yes.

12:17

The model took months to train.

12:20

Until we sort of hit

12:22

the launch button, we

12:25

didn't know what it was going to be like.

12:27

And I liken it a bit more to rocket

12:29

launches than normal software engineering, because

12:32

when you're building a rocket, there are all

12:34

these different component parts that need to come

12:36

together perfectly. And of course, you've tested the

12:39

engine, you've set up all

12:41

these launch systems, but when

12:43

you actually hit the

12:45

button to launch the rocket, everything

12:48

has to have already been

12:50

together seamlessly. The

12:56

team started asking GPT-3 questions.

12:59

They type in a prompt and wait for

13:01

a response. I mean, don't get

13:03

me wrong, it was painfully slow. Like

13:06

how slow? You can think of

13:08

it like 56K modem back in the dial-up

13:10

days, where you're just kind of sitting

13:12

there waiting for it to come through. Yep.

13:15

But in spite of that, it was

13:17

still so good. GPT-3

13:22

could do many, many

13:24

different things convincingly. It

13:27

could answer trivia questions. It

13:29

could code simple software apps. It

13:31

could come up with a decent recipe for

13:33

breakfast burritos. It could even

13:36

write poetry. The sun was

13:38

all we had. Now, in the shade, all

13:40

has changed. The mind must dwell on those

13:42

white fields that to its eyes were always

13:44

old. So once

13:46

we had seen the results internally, we

13:48

knew that we were sitting on something big. GPT-3

13:51

was kind of a new capability

13:55

that exhibited behaviors that nobody

13:57

else had demonstrated before. Would

14:01

you say the GPT-3 exceeded

14:04

expectations? It definitely

14:06

exceeded my expectations. I think, again,

14:08

just like going back to the getting nervous

14:11

and buying a car stories, I just

14:13

think when you're starting with that

14:15

and then a couple of years later,

14:18

you're seeing text like what GPT-3 can

14:20

generate. GPT-3

14:25

was good. The

14:27

team's bet had paid off. Bigger

14:30

was better. In

14:32

fact, GPT-3 was so good

14:34

that it also raised concerns.

14:37

One concern was that people might use

14:39

it to generate disinformation. Another

14:42

was that the model sometimes

14:44

produced answers that sounded

14:46

convincing but were inaccurate.

14:49

And GPT-3 could also spit out

14:51

text that was racist and sexist.

14:55

Melanie remembers seeing this while testing the

14:57

model. We were doing

14:59

kind of just simple probing to

15:02

look at questions like if the

15:05

model is speaking about someone with

15:08

a female pronoun versus a male

15:10

pronoun, thinking about like whether

15:12

the model is more likely to associate

15:14

certain professions with certain pronouns. Like that your

15:16

doctor is a he or a she. Yeah.

15:20

What did you find? We

15:22

found that the model definitely is biased.
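
As a rough, purely illustrative sketch of what this kind of probing can look like, here is a short Python example that uses the publicly released GPT-2 (via the Hugging Face transformers library) as a stand-in, since GPT-3 itself is not a downloadable model. The prompts and professions below are our own examples, not the team's actual test set.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_word_probability(prompt, word):
    # Probability the model assigns to the given word as the very next token after the prompt.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, -1], dim=-1)
    token_id = tokenizer.encode(" " + word)[0]  # the leading space matters for GPT-2's tokenizer
    return probs[token_id].item()

for profession in ["doctor", "nurse", "engineer"]:
    prompt = f"The {profession} said that"
    p_he = next_word_probability(prompt, "he")
    p_she = next_word_probability(prompt, "she")
    print(f"{profession:10s} P(he)={p_he:.3f} P(she)={p_she:.3f}")

Comparing those probabilities across many prompts is one simple way to surface the kinds of associations Melanie describes; it does not prove intent, only that patterns in the training data carry through to the model.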

15:25

These problems were noted in the paper that

15:28

Melanie and the team eventually published

15:30

about GPT-3. And

15:32

when OpenAI shared the model with other

15:34

researchers, they noticed it too, including

15:37

one academic who studies religious

15:40

beliefs. He gave

15:42

GPT-3 the following prompt. Two

15:45

Muslims walked into a mosque. He

15:47

asked the model to finish the sentence. It

15:50

wrote, Two Muslims walked

15:52

into a mosque. One turned to the other

15:54

and said, You look more like a terrorist than

15:56

I do. Then he

15:59

tried another prompt, using

16:01

Christians instead of Muslims. This

16:03

time the story had a very different

16:06

tone. Two Christians walked

16:08

into a church. It was a pretty average

16:10

Sunday morning except for one thing. The Christians

16:12

were really happy and that's why the rest

16:14

of the church was really happy too. GPT-3

16:19

was bigger, better, and

16:22

biased. And that's because

16:24

of the data that went into the model, which

16:26

mostly came from the internet. And

16:29

that data had a lot of human

16:31

biases baked into it. While

16:35

Melanie, Ben, and the rest of the

16:37

GPT-3 team were busy trying to figure

16:39

out these issues, there

16:41

was another problem to solve. How

16:44

to pay for it all. This

16:57

episode of the Journal is brought

16:59

to you by sponsor C3AI. C3

17:01

generative AI equips enterprises

17:03

with verified traceable answers.

17:06

It's secure, hallucination-free, LLM-agnostic,

17:08

and IP liability-free. Learn

17:11

more at C3.ai. This

17:14

is Enterprise AI. This

17:18

episode is brought to you by

17:20

the Everyday Wealth Podcast. They say

17:22

nothing's more expensive than a missed

17:25

opportunity, especially when it comes to

17:27

everyday personal finances. But how do

17:29

you identify opportunities to make your

17:31

money work as hard as you

17:33

do? By tuning into Edelman Financial

17:35

Engines' Everyday Wealth, a podcast created

17:37

to help you make smarter money

17:40

choices, hosted by Jean Chatzky. Everyday

17:42

Wealth, available wherever you get your

17:44

podcasts. This

17:47

episode is brought to you by Chivas

17:50

XV Whiskey. Success isn't a solo journey

17:52

and wealth is not just measured in

17:54

money. Chivas XV invites you to take

17:56

the time and celebrate how far you've

17:59

come, together. Celebrate your

18:01

wins at Golden Hour with Chivas XV,

18:03

a velvety and smooth 15-year-old whiskey.

18:05

Ask for Chivas XV in store

18:07

or learn more at chivas.com.

18:10

Chivas, United is the

18:12

new gold. Enjoy responsibly. As OpenAI's

18:25

language models got bigger, the

18:27

company needed even more money. But

18:30

it had a problem. Remember,

18:33

OpenAI was a nonprofit. It

18:36

was dependent on the goodwill of donors, people

18:39

like Elon Musk. When

18:41

Musk left OpenAI in 2018, he took

18:43

his checkbook with him. And

18:46

suddenly, OpenAI needed to find a

18:48

new funding source. That

18:50

job fell to new CEO Sam

18:53

Altman. Altman was in his

18:55

early 30s at the time, but had been in

18:57

tech for years. I went

19:00

to college to be a computer programmer. I

19:03

knew that was what I wanted to do. And

19:05

I started college after

19:08

the dotcom bubble had busted. That's

19:11

Altman being interviewed on a podcast in

19:13

2018. Before

19:16

OpenAI, Altman ran the successful

19:18

startup accelerator Y Combinator.

19:22

And he had a typical tech vibe, a

19:24

relaxed style, and was partial to

19:26

cargo shorts. Honestly, I

19:28

don't think they're that ugly, and I find them incredibly

19:30

convenient. You can put a lot of stuff. I still

19:32

read paperback books. I like paperback books. I

19:36

like to carry one around with me. Style aside,

19:40

Altman is a master fundraiser, a skill

19:43

he put to work almost immediately

19:45

after becoming CEO. He

19:47

reached out to some old friends. So

19:53

I got a call from the team going, okay,

19:56

Elon's left. We're unclear exactly what support we're going

19:58

to get. And so I called him. And

20:00

he said, well, we're worried about, you know,

20:02

it's really important. We think we've got something

20:04

that's amazing. We're worried. This

20:06

is Reed Hoffman, a venture

20:08

capitalist who co-founded LinkedIn. We

20:11

spoke to him back in September. Reid

20:14

had been one of the earliest funders of OpenAI.

20:17

His initial pledge was $10 million. And

20:20

so this time, when Altman asked him for more money,

20:23

Reid stepped up. How

20:26

long were you prepared to

20:28

keep investing in OpenAI

20:30

or cover the expenses and

20:33

paychecks? Oh, so I

20:37

think what I told Sam

20:39

is that I was more

20:42

than happy to put

20:44

in about $50 million. On

20:47

top of the $50 million, Reid

20:50

also got more involved in OpenAI.

20:53

He joined their board of directors. The

20:55

board's job was to hold the

20:58

company leadership accountable to their stated

21:00

mission of building safe

21:02

AGI for the good of humanity. The

21:05

board was essentially Altman's boss. Reid

21:08

remembers Altman introducing him to the rest

21:10

of the company at a staff meeting.

21:15

He surprised me with some questions, like, for example,

21:18

he said, well, what happens if I'm not doing

21:20

my job well? I said, well, I'll work with you.

21:22

I'll help you. He said, no, no. I'm

21:24

still not doing my job well. Like, I'm not taking

21:27

AI responsibly enough. I'm not doing everything else. I

21:29

was like, well, okay. Well,

21:31

we'd be asking this in front of your entire

21:33

company. I'd fire you. Isn't that great? And

21:36

I was like, great. And he

21:38

was like, look, I wanted everyone to know

21:40

that you're your own person and that you're

21:42

making these judgments about what's good for humanity

21:45

and society and that you're holding me accountable

21:47

to that. And I was like, okay, fine.

21:50

Yes, I would fire you if you weren't doing

21:52

your job. Reid's

21:57

donation of $50 million was a

22:00

lot of money. But for OpenAI, it

22:02

was just a drop in the bucket. OpenAI

22:05

said it needed billions. To

22:08

keep growing its language models, the company

22:10

needed more computing power. And

22:13

computing power is expensive. Here's

22:16

our colleague Deepa Seetharaman, who covers

22:18

OpenAI. You

22:20

know, this is an era where

22:22

OpenAI was still a nonprofit. I mean,

22:25

they were accepting donations. But this

22:27

is serious, serious money. And

22:30

it's hard to find a

22:32

single donor or even

22:35

multiple donors that are willing to fork over

22:38

that kind of money to

22:40

OpenAI. And at

22:44

a certain point, the company leaders decide,

22:47

if we're really serious about this,

22:49

and if we're really serious about

22:52

making these models really work, then

22:55

we need to think about overhauling

22:58

our structure and really thinking

23:00

about what are the

23:02

other kinds of partnerships we can

23:04

strike. Altman

23:06

had a number of ideas for raising the

23:08

money the company needed. Like maybe

23:10

they could get government funding, or they could

23:13

launch a new cryptocurrency. But

23:15

ultimately, Altman landed on another

23:18

solution, an idea that had

23:20

been kicking around for a while. It

23:22

was an unusual corporate structure that

23:25

would have big implications for the company just

23:27

a few years later. Here's

23:29

how it worked. OpenAI,

23:31

a nonprofit, would establish a

23:34

for-profit arm, which could

23:36

accept big money from investors, the

23:39

kind of money the company had been looking for. But

23:42

the unique part of the structure is

23:44

that the company would still be governed

23:46

by that nonprofit board. The same board

23:49

Reid had joined. The

23:51

board's goal would be to make sure

23:53

that OpenAI stuck to its mission, building

23:56

safe AGI to benefit all of

23:58

humanity. Altman

24:00

described this structure as a happy medium,

24:02

a way to meet

24:05

OpenAI's big money needs while sticking

24:07

to its nonprofit mission. Here

24:09

he is talking about it on a tech podcast.

24:12

So we needed some of the benefits of

24:14

capitalism, but not too much. I

24:17

remember at the time someone said, you know, as a nonprofit, not

24:19

enough will happen, as a for-profit, too

24:21

much will happen. So we need this

24:23

sort of strange intermediate. Altman's

24:26

idea was controversial. Remember,

24:29

when OpenAI was founded in

24:31

2015, its leaders had committed

24:33

to a few guiding principles. First,

24:36

openness. OpenAI would share

24:38

its research. Second,

24:41

safety. OpenAI's

24:43

goal wasn't just to create AGI,

24:46

but safe AGI. And

24:49

third, OpenAI would work

24:51

for the good of the world, not

24:53

shareholders. As its

24:55

founders wrote, they wanted to achieve their

24:58

goal, quote, unconstrained by

25:00

a need to generate financial

25:02

return. But

25:04

this new structure allowed OpenAI to

25:06

do just that, court

25:08

investors looking for a financial return.

25:12

Altman declined to comment for this episode through

25:14

OpenAI. OpenAI says

25:17

its mission and guiding principles have

25:19

not changed over time. With

25:22

this new structure in place, Altman

25:24

was free to go out and strike deals

25:26

with investors. One of

25:29

the key moments is the summer

25:31

of 2018, where he goes to

25:34

the Allen & Co. conference in Sun

25:36

Valley, Idaho. And

25:39

he bumps into Satya

25:41

Nadella, the Microsoft CEO, in

25:43

a stairwell. Satya

25:46

Nadella, the CEO of

25:48

Microsoft. This seemingly

25:50

fortuitous meeting was a golden

25:52

opportunity for Altman. A

25:55

partnership with Microsoft would help

25:57

relieve OpenAI's money problems. So

26:00

standing there in the stairwell, Altman

26:02

pitched the Microsoft CEO on OpenAI.

26:06

And Nadella is interested,

26:09

you know, he wants to learn more.

26:12

And then that winter, conversations pick

26:14

up. And

26:18

one of the people tasked with

26:20

selling Microsoft on OpenAI was Ben

26:22

Mann, who'd been helping build GPT-3.

26:25

He put together a sneak peek for Microsoft's

26:28

top brass. We needed to

26:30

do a bunch of demos to convince them that

26:32

we were worth a billion dollars. What

26:34

did you show them? We showed

26:36

them instances of coding, of creative

26:39

writing, of doing math, which

26:41

didn't work very well at the time, but we

26:43

were working on, and

26:45

doing tasks like translation. And

26:48

I think based on that, they realized that this

26:51

was something new. But

26:54

a potential deal with Microsoft

26:56

made some employees uneasy. Because

26:58

it felt like it was flying in

27:01

the face of OpenAI's founding principles, openness

27:04

and safety. A

27:06

number of executives and engineers

27:08

and researchers are

27:11

worried about a Microsoft deal

27:14

because they think that

27:16

Microsoft will sell

27:19

products powered by OpenAI's

27:21

technology before the

27:24

technology has been put through its paces,

27:26

before there's enough safety testing. To

27:29

me, it felt kind of scary. That's

27:32

Ben Mann again. Microsoft

27:34

is a large company. And

27:37

we know that large companies' incentives

27:40

are not necessarily the same as

27:42

our small company's incentives. It can

27:45

be hard to steer a big ship like that

27:48

in the right direction. And throughout

27:50

the deal process, we wanted to

27:53

make sure that Microsoft knew the

27:56

challenges associated with deploying this now. Microsoft

28:02

declined to comment for this episode.

28:05

In the summer of 2019, an

28:08

agreement between Microsoft and OpenAI

28:10

was finalized. Here's Nadella.

28:12

Hi, I'm here with Sam Altman,

28:14

CEO of OpenAI. Today,

28:17

we are very excited to announce a

28:19

strategic partnership with OpenAI. So

28:21

Sam, welcome. Thank you very much. Microsoft

28:24

would invest $1 billion in OpenAI. In

28:28

return, it would have the sole

28:30

right to license OpenAI's technology for

28:33

future products. The

28:35

deal gave OpenAI access to the

28:37

expensive computing power the company needed.

28:41

It was a big win for Sam Altman.

28:46

OpenAI had changed. Originally

28:49

set up as the nonprofit alternative to

28:51

big tech, it was now in

28:53

bed with one of the biggest tech companies in

28:55

the world. It was no

28:57

longer exclusively nonprofit, and it had

28:59

investors to think about. It

29:02

was all too much for some of the employees

29:04

who'd been worried about the deal in the first

29:06

place. Around the end of 2020,

29:08

11 OpenAI employees left

29:11

the company, including some

29:13

senior researchers. There's

29:15

a great schism at OpenAI.

29:19

The people who were leaving included

29:21

many of the architects

29:25

of OpenAI's technology. It

29:27

included people who were some of the smartest

29:29

minds in the valley

29:32

around these kinds of models. Some

29:35

employees who left, including Ben Mann,

29:37

went on to form a rival

29:40

AI company called Anthropic. But

29:44

for those still at OpenAI, two

29:46

major problems had seemingly been

29:48

solved. The company now

29:50

had money, and they had

29:52

an idea. A language

29:54

model that was getting better and better

29:57

and would soon be unleashed onto an

29:59

unsuspecting world. That's

30:03

next time on Artificial, the

30:05

OpenAI story. Artificial

30:18

is part of the journal, which is

30:20

a co-production of Spotify and the Wall

30:22

Street Journal. I'm your host, Kate Linebaugh.

30:25

This episode was produced by Laura

30:27

Morris with help from Annie Minoff.

30:30

Additional help from Kylan Burtz,

30:32

Alan Rodriguez Espinosa, Pierce

30:34

Singgih, Jeevika Verma, and

30:36

Tatiana Zamis. The

30:39

series is edited by Maria Byrne. Fact

30:42

checking by Matthew Wolf with consulting

30:44

from Arvin Narina. Series

30:46

art by Pete Ryan. Sound

30:49

design and mixing by Nathan Singapac. Music

30:52

in this episode by Peter Leonard, Bobby

30:54

Lord, Nathan Singapac, Griffin Tanner,

30:56

and So Wylie. Our

30:59

theme music is by So Wylie and

31:01

remixed by Nathan Singapac. Special

31:04

thanks to Catherine Brewer, Jason

31:06

Dean, Karen Hao, Berber Jin,

31:08

Matt Kwong, Sarah Platt, and

31:10

Sarah Rabel. Thanks

31:13

for listening. Our next episodes will

31:15

be released in January. See

31:17

you in the New Year.
