What Next TBD: A Moral War for A.I.

Released Friday, 1st December 2023

Episode Transcript
0:07

I want you to think back to almost exactly

0:09

a year ago. All of

0:11

a sudden, one name, one product

0:14

really, was everywhere. It's

0:16

called ChatGPT, which stands for

0:18

Generative Pre-trained Transformer, and it's

0:20

fully powered through artificial intelligence.

0:22

This project from the OpenAI

0:24

research lab can write

0:26

essays and carry on convincing

0:28

written conversation. It took Netflix

0:30

more than three years to reach

0:32

one million users, but it took

0:34

ChatGPT just five days. Techies

0:37

everywhere short-circuiting with excitement. ChatGPT

0:39

is a disruptor and a

0:41

game changer for business communication.

0:44

Computers have achieved a sort of creativity.

0:46

Conversational AI is a tool to help us learn

0:49

faster, apply it in the right way, and there

0:51

are billions to be made. We

0:55

were talking about ChatGPT. You

0:57

were talking about ChatGPT. Your

0:59

not-very-online relatives were talking about

1:02

ChatGPT. And

1:04

according to Karen Hao, that moment was

1:06

an inflection point. When

1:08

ChatGPT came out, it was the first consumer-facing

1:11

demonstration of really powerful AI

1:14

capabilities that suddenly made everyone

1:16

in the public, all

1:18

policymakers, everyone's mom,

1:21

grandma, like you know, uncle, suddenly

1:23

come online to the idea that

1:25

this technology is a really big

1:27

deal and it's going to have

1:29

massive cascading effects all around society.

1:32

The reason I wanted to talk to

1:34

Karen is that she knows more about

1:36

AI than probably any other reporter in

1:38

the country. She's covered the

1:40

industry for years and is now writing a book

1:43

about it. She says

1:45

the release of ChatGPT changed everything

1:47

for the company that created it,

1:49

OpenAI, and for every

1:51

other AI company that hadn't yet brought something

1:53

like it to the public. And

1:56

in that moment, it was also

1:58

a huge, glaring red

2:01

flag to every other company within

2:04

the world that has the resources

2:06

to develop this technology to hurry

2:08

up quick and start doing something

2:10

similar. It also

2:12

helps explain why it was international

2:14

news when OpenAI's board ousted CEO

2:17

Sam Altman and he then

2:19

clawed his way back. Because

2:21

this company, more than any other,

2:23

has upended what Silicon Valley means

2:25

when it says AI. And

2:28

so in the moment when OpenAI

2:30

released ChatGPT, it basically changed the entire game,

2:33

it suddenly changed all

2:35

of the, basically the

2:37

orientation of the entire tech industry to

2:41

consolidate around this singular idea of

2:43

let's try to use this technology to build

2:45

this kind of ChatGPT-like thing or to build

2:47

so-called large language models. And

2:50

that's really not happened

2:52

before. We haven't seen in the

2:54

entire trajectory of AI development, we

2:56

haven't seen a moment that so

2:59

quickly made everyone

3:01

start doing the same exact

3:03

thing. But,

3:11

and this is important, there was a catch.

3:14

A contradiction baked into the founding

3:16

of OpenAI. The organization

3:18

began as a nonprofit with a

3:20

mission to help humanity. Only

3:22

a small part of the company was supposed to

3:25

make a profit, and a capped one at that. The

3:28

board, however, maintained the priorities

3:30

of the nonprofit. Karen

3:33

says a fight between the two sides was almost

3:35

inevitable. What's interesting

3:38

about this whole fiasco

3:40

that happened is I'm

3:42

of the belief that the board kind

3:44

of did its job, like that's exactly what

3:47

it was set up to do. The whole

3:50

reason for this kind of weird mechanism

3:52

at the time was essentially like an

3:54

elaborate way to try to self-regulate, and

4:00

the idea was, in fact, that if they

4:02

believed at some point that

4:05

Sam Altman or whoever the CEO was, was

4:09

leading the company astray from the original

4:11

mission to create technology that was beneficial

4:14

to everyone, that the board would

4:16

then have the right to fire them. And

4:19

it's sort of interesting. I think there's

4:21

a lot of rightful criticism about the

4:23

way that the board went about it,

4:25

but I do think that the fact that

4:27

they did it itself should

4:30

not be criticized because that's what they

4:32

signed up for. That's what everyone signed

4:34

up for. And in fact, Sam Altman

4:36

himself was a key author of this legal

4:38

structure that gave the board the power to do

4:40

this to him. But

4:46

now that the dust has settled, Sam Altman

4:48

clearly won the fight. He's

4:50

back in control. The original board is

4:52

gone. And this idea

4:54

of self-regulation in the AI business

4:56

seems almost painfully naive, considering the

4:59

amount of money at stake. Today

5:02

on the show, the OpenAI fiasco

5:04

is a morality play for the industry.

5:07

One with massive stakes for all of

5:09

us as AI development races ahead. I'm

5:12

Lizzie O'Leary, and you're listening to What Next TBD,

5:15

a show about technology, power, and how the

5:17

future will be determined. Stick around.

5:32

Karen told you about the structure of OpenAI,

5:35

a nonprofit with a for-profit wing.

5:38

That wing, the money-making side, brought

5:40

ChatGPT to market and

5:42

attracted $13 billion in investment

5:45

from Microsoft. But OpenAI

5:47

has more than just a split structure.

5:50

There's also an ideological split among the

5:52

people who work there and in the

5:54

field at large. You can

5:56

call them the techno optimists and the

5:58

AI doomers. so

7:32

extreme, like the polarization around

7:34

these two different camps has

7:36

become so extreme, is

10:00

the most aggressive company in terms

10:02

of commercialization. Now, the company

10:04

is launching product after product, not

10:07

just ChatGPT, but the image

10:09

maker DALL-E, and a whole host

10:11

of services for AI startups. They

10:13

recently hosted a conference for developers in

10:16

which they touted a platform to create

10:18

custom AI chatbots. Think of

10:20

it like an app store for your own ChatGPT.

10:22

We're thrilled to introduce

10:26

GPTs. GPTs

10:28

are tailored versions of ChatGPT for

10:31

a specific purpose. You

10:33

can build a GPT, a customized

10:36

version of ChatGPT, for almost anything,

10:38

with instructions, expanded knowledge,

10:41

and actions. And

10:43

then you can publish it for others to use. It's

10:45

sort of the most extreme

10:47

Silicon Valley of Silicon Valley

10:50

ideas. It's

10:53

sort of like the recreation of the Apple

10:56

App Store, or the turning of a product

10:58

into a platform. All of these really deep-seated

11:01

Silicon Valley methods.

11:03

Before the firing of Altman

11:06

happened, OpenAI

11:08

was starting to become widely criticized. What

11:10

is your nonprofit board even doing? What

11:13

is it there for? Because the

11:15

for-profit arm was essentially just taking

11:17

the lead and just seemed to

11:19

look exactly like any other startup,

11:22

any other company. And

11:24

now that Altman has been

11:26

reinstated, arguably,

11:29

he is going to make a huge effort

11:32

to try and make sure he never has

11:34

this kind of vulnerability again, where he can

11:36

get fired by the board again. And

11:38

so there were a lot of, clearly, the

11:40

board and the

11:43

OpenAI executives and Altman during

11:45

this window of time had

11:47

huge negotiations, very

11:50

tense negotiations, about how

11:52

to actually create some kind of

11:55

viable exit plan that everyone agreed with.

11:57

And ultimately, it was a huge effort.

12:00

I think Altman ended up giving quite a

12:02

lot of concessions in that he's no longer

12:04

on the board. So that is one sign

12:06

that he doesn't have, like, full power

12:10

in this scenario in like the new

12:12

kind of iteration of OpenAI. But

12:15

certainly he is going to

12:17

try like every other possible way in

12:20

addition to the board, to

12:22

continue pushing ahead his own vision

12:24

and making sure that he can

12:26

continue to pursue what he wants

12:28

whether that's commercialization or not. Do

12:31

you think it's fair to say that, in

12:35

this reorganization, he now

12:37

has more power? I would

12:39

guess yes. But

12:42

a lot of it is going to

12:44

come down to how much

12:46

these board members turn out

12:48

to ally with him, the new ones, the

12:51

new ones. And I think

12:53

Altman would not have agreed to

12:55

these board members and for himself

12:57

to step off the board if

13:00

he didn't think that they were big

13:03

potential allies, but we won't

13:05

really know. I mean, up until

13:08

the board fired Altman, I also

13:10

thought that the previous board was

13:12

pretty allied with Altman. So

13:15

it was pretty surprising to me that they would

13:17

end up doing an action as drastic

13:19

as this. Although, again, it was in their job

13:21

description. But

13:24

you know, we won't really know whether

13:26

these new board members might take

13:31

a similar approach until they actually act on

13:33

it. But for the time

13:35

being, the best guess is that Altman wouldn't have

13:37

allowed them on the board unless he personally felt

13:40

very confident that he would be able to turn

13:42

them into allies, if they aren't already. One

13:45

of the reasons I'm asking that question is because

13:49

we still don't know kind

13:51

of the fine-grained details of

13:53

why the original board fired

13:56

him. They put out a statement that seemed

13:58

sort of dramatic in the moment and

14:01

we have these competing camps that

14:03

you've illustrated, but

14:06

there's always the possibility

14:08

that more details

14:10

could drip out. And so

14:12

it makes me wonder whether we

14:15

should expect those little breadcrumbs to come in

14:17

the next few months, or if,

14:19

nope, door is closed, let's just move on

14:22

with this new version of the company. I

14:25

think if we see details coming out, it

14:27

will have to be from the previous board,

14:31

or the current board, actually, that made

14:33

the decisions, because ultimately, the previous

14:36

board never communicated to employees

14:40

why they made the decisions that they did.

14:42

The employees themselves are also in the dark.

14:44

And of course, I don't think Altman is

14:46

going to himself reveal, or

14:48

who knows, maybe he would reveal because it

14:50

might be strategically savvy, but

14:53

we're not gonna find out from employees.

14:55

There's only a small, a tiny

14:57

handful of people that truly know

14:59

the reasons, and it will

15:01

be up to them to speak. And I think

15:05

right now, my sense from

15:07

speaking with sources is that

15:09

the board and

15:12

executives, and Altman in particular,

15:14

are really trying to project

15:16

a sense of stability because

15:19

it's not just ultimately about his image

15:23

and what the public thinks of him, but also,

15:25

OpenAI obviously has Microsoft as a huge investor.

15:28

Right, they put $13 billion into this. Exactly,

15:32

so there's just a lot of incentives

15:34

right now for everyone to project stability

15:36

and unity, and to give this impression

15:38

that everything is fine, and we're gonna

15:40

continue chugging along, and that was a weird blip.

15:43

But, you know, ultimately, we

15:47

could see more details dripping out as

15:50

things maybe settle. When

15:56

we come back, what are the AI doomers

15:59

really afraid of? legal

18:00

structure was an

18:03

experiment in the best

18:05

possible way that you could self-regulate, and

18:08

it still fell apart. I mean,

18:10

that's a huge damning sign that

18:12

this whole thing... I mean, we've been talking

18:14

about this for so long that self-regulation

18:17

doesn't work, but this is

18:19

yet another example, a cherry

18:21

on the top of mounting evidence.

18:25

And I hope that policymakers

18:27

are aware of this. I hope that

18:29

consumers and the general public become more

18:31

aware that, oh, wait, this is actually

18:33

just, in a sense, another

18:37

manifestation of Silicon Valley. And

18:40

AI is not just this organic technology

18:43

that emerges, but it's actually created by

18:45

people with particular beliefs and particular

18:47

agendas, and that

18:50

can help inform them in making

18:53

better decisions as consumers

18:55

about how much they want to incorporate

18:58

this technology into their lives, into their

19:00

work, into their... For doctors,

19:02

into their medical practices or for teachers,

19:04

into their classrooms. And I

19:06

think that is the bigger lesson that I

19:08

hope people can all take away. I

19:11

wonder if you could articulate for someone who

19:13

is outside of Silicon Valley, who doesn't think

19:15

about this on a constant basis, what

19:19

those fears are about? What

19:22

does the existential risk look

19:25

like? What does the day-to-day risk

19:27

look like? I

19:29

think the existential risk fears

19:32

are based on this

19:34

hypothetical that if

19:37

we do believe that these

19:39

systems are intelligent, which

19:41

is already entering

19:44

kind of a quagmire because... Right, like

19:46

what is intelligence? Yeah,

19:48

there is no scientific consensus around what intelligence

19:50

is. But if we believe that

19:52

these systems are intelligent, digital

19:55

intelligences are just...

19:57

They're faster at learning, faster at

19:59

combining knowledge,

20:01

knowledge in quotes, than

20:04

humans. So humans, when

20:06

we have knowledge and we learn,

20:09

it's sort of a very inefficient process. It

20:11

takes us many, many years to get like

20:14

a proper education. And then when we talk

20:16

to each other, we don't really combine knowledge

20:18

very effectively because people will have disagreements or

20:20

different interpretations about things. But with

20:22

a digital, quote unquote,

20:24

intelligence, they would be able to

20:27

learn from data, like within a

20:29

few months, and then just transfer

20:32

the data that they trained on and

20:34

the things that they learned instantly. And

20:38

so under this premise or under this belief

20:40

that this is true, then

20:42

you could see why it could be

20:46

very scary because you would be

20:48

able to quickly create a

20:50

super intelligence that is smarter than humans

20:52

on many different tasks and could not

20:55

just be smarter, but outsmart

20:58

humans and start to manipulate

21:00

people and create sort of

21:02

a life of its own where its objective

21:04

function is to perpetuate its own existence. And

21:06

if that means humans get in the way

21:08

of that, then destroy humans. I mean, again,

21:12

this is like an extremely hypothetical

21:14

scenario and is based on a

21:16

lot of different assumptions about what

21:18

intelligence is, whether or not it's

21:21

successfully being created within these digital technologies

21:24

and also whether or not

21:26

these so-called digital intelligences could

21:29

even act in the physical

21:31

world because again, they're digital and part

21:33

of the reason why humans can

21:35

do things in the world and have

21:37

potentially dangerous consequences in the world is

21:39

because we have physical bodies and we

21:41

walk around in three dimensional space. The

21:44

other camp kind of of

21:47

people that are concerned about risks of

21:49

AI, they often call

21:52

it like short term risks or risks that are

21:54

in the here and the now. And

21:56

these people are concerned about the things that

21:58

AI technologies that we already have

22:01

and what we've already seen, like real

22:03

examples that we've already seen of ways

22:05

that they break down and can cause

22:07

harm to people. So for this

22:10

group, they're like, we shouldn't even be talking about

22:12

what intelligence is and whether or not

22:15

we're recreating it. We should just talk

22:17

about like the literal things that we

22:19

have and observe the fact that self-driving

22:21

cars have killed pedestrians and

22:24

observe that facial recognition systems

22:26

have been involved in the wrongful arrest of

22:29

black men in the U.S. and

22:32

observe that there have been hiring

22:34

algorithms developed that don't hire women

22:36

because they learn over time that

22:39

that is sort of the best

22:41

way to maximize for

22:43

certain types of traits that

22:45

don't necessarily represent what we

22:47

need to maximize as a society. And

22:51

so that kind of camp is

22:53

much more, I mean,

22:55

there's a lot of tension between these camps because you

22:57

could see that for the

22:59

existential risk people, they would

23:02

argue that like you need

23:05

to project further into the future than what

23:07

we see now and that

23:09

when you project into the future, of course, it's

23:11

going to be hypothetical. Whereas the

23:13

people that are focused on the short term

23:15

risks are like, actually, if we focused

23:18

on solving the problems with AI

23:20

today, that would naturally get us

23:23

to better AI in the

23:25

future and to overlook the

23:27

things that are happening today

23:29

and just project hypothetically, you're

23:31

not actually going to get

23:33

anywhere meaningful because you're not

23:35

actually recognizing how

23:37

AI literally interfaces with society.

23:40

A lot of what you're talking about reminds

23:44

me of the way

23:46

products, any tech product, is

23:50

a product of the people who

23:52

built it. Their smarts, their

23:55

biases, what

23:57

they're deeply versed in, what they're not.

24:00

These are phenomenally complex and

24:02

powerful systems. When

24:05

I think about the new board

24:07

of OpenAI, it

24:09

is all men. Does

24:11

that give you pause that there

24:14

is so much

24:16

power in the hands of such

24:18

a small group of people? It

24:21

gives me pause whether or not they're all men

24:23

or not. Say more. It's

24:26

interesting that they are

24:28

all men because I would say if

24:30

there had been one woman on the board,

24:32

I mean, even before, there were two women on the board. If

24:35

there had been one woman on the board, if there

24:37

had been one non-white person on the board, that

24:40

it almost would have given a false sense

24:42

of security and a false sense of representation

24:45

because it would belie the

24:47

fact that there are still only

24:49

three of them. And

24:53

that fundamentally is the bigger problem.

24:57

And for me, I do

24:59

think that obviously it is

25:02

problematic that these three people do not

25:04

seem to represent the diversity

25:07

of society, but could they have ever?

25:11

And I think that is the bigger question.

25:13

And I hope that actually the fact that there is

25:15

no representation on the board

25:17

kind of accelerates that bigger

25:20

discussion. You wrote

25:22

this other story that I have been thinking

25:24

about. It was back in October

25:26

and the headline was, we don't actually know

25:28

if AI is taking over everything. And

25:31

I wanted to talk about that a little bit because I recently

25:34

was in San Francisco, you get off

25:36

the plane, it's like every other poster

25:38

or billboard is for an AI

25:40

company, literally everything. And

25:43

if you step back from that, it seems harder

25:46

to define just how

25:49

transformative AI is or isn't

25:52

because as you wrote about, very

25:54

little of it is transparent about

25:57

how it does what it does.

26:01

How can we know how

26:03

much AI is impacting our lives already?

26:06

We don't. This

26:09

is a huge problem in

26:12

that it is very much driven by

26:14

the fact that AI development is now

26:18

happening primarily in secrecy

26:20

within these companies that have huge

26:23

incentives to continue keeping that secrecy

26:25

because of competitive advantages and also

26:27

because OpenAI says, oh, we can't

26:29

tell you because

26:31

it would be dangerous to society

26:33

for people to know this. I

26:36

mean, what we do know is that, OK,

26:38

I'll nuance it a bit in that

26:41

we do know that AI

26:43

is having a huge impact on society.

26:46

What we don't know is whose

26:49

AI is the

26:51

most pervasive and where they are

26:53

pervasive, which is a very

26:55

important detail because it

26:58

matters if OpenAI's

27:01

AI is everywhere or

27:03

if another kind of AI is everywhere because

27:06

that helps us scrutinize, understand,

27:08

and hold accountable ultimately

27:10

the developers of the technology that are

27:13

impacting us. If we don't know what

27:15

AI our doctor is using, what AI

27:17

our teacher is using, what AI our

27:19

lawyer is using, we can't

27:22

know how much to trust

27:25

a particular result. We

27:27

don't know who to contest

27:29

if something is wrong. And

27:33

regulators, fundamentally, then can't

27:35

create rules that make

27:37

sense and apply

27:39

to specific cases of how this

27:41

technology is used, the entire supply chain

27:44

of how it's used. So

27:46

that is, I think, the more fundamental

27:48

problem is the who. We just don't

27:50

know the who. And your

27:53

life could be entirely run by

27:55

OpenAI's algorithms. And you would not

27:57

know that. And that is the

27:59

problem. And yet

28:01

I think if you are a person

28:04

whose life does not directly touch the

28:06

tech industry or Silicon Valley, you might

28:08

be thinking, this is

28:10

really complicated. It seems

28:13

too hard to get my head around right

28:15

now. What

28:19

should you be thinking about

28:22

so that you don't find yourself

28:24

in a situation where AI

28:28

is deeply enmeshed in your life before you

28:30

realize it? I think

28:32

asking lots of questions and keeping

28:34

your eyes open to the possibility

28:37

that AI could be used in

28:39

many, many ways that you

28:41

don't necessarily know about. If you are

28:43

a parent, you can definitely

28:47

ask questions with the school, with

28:49

your kids' teachers about, are you using AI

28:51

in the classroom? How are you using it

28:53

in the classroom? Who are you using? And

28:57

if you are, I mean, anyone who

29:00

lives in a city with a local government

29:02

could be asking their local government

29:05

officials, like, how are you thinking

29:07

about AI? How are you

29:09

potentially going to acquire some of these

29:11

tools? In which agencies, in which

29:14

processes are you going to incorporate these

29:16

tools? And what kind of accountability mechanisms

29:18

are you going to put in place?

29:21

I think that kind of

29:23

just speaking up

29:25

and observing and asking

29:27

those questions to map out

29:30

for yourself how your

29:33

life might be

29:35

affected by these technologies is

29:37

a really critical first step to then figuring out

29:39

whether or not you want them to be a part

29:42

of your life. And

29:45

I think also just for

29:47

consumers, I mean, a lot of people now use

29:49

ChatGPT, a lot of people use Bard, a lot

29:51

of people use Anthropic's

29:53

Claude. I

29:56

also think just like thinking more about how

29:59

you actually want to incorporate this technology and, like

30:02

any other products that we use,

30:05

how we want to vote with our money, which

30:07

companies do we want to give that money to,

30:09

and to put

30:11

pressure with that money

30:14

when a company does not actually

30:17

align with our expectations of

30:19

being a good actor in the world. Karen

30:28

Hao, thank you so much for

30:30

your insight and your time. Thank

30:32

you so much, Lizzie. Karen

30:35

Hao is a contributing writer for The

30:37

Atlantic and is writing a book for

30:39

Penguin Press about the AI industry. And

30:42

that is it for the show today. What Next

30:45

TBD is produced by Evan Campbell and Anna Phillips.

30:48

Our show is edited by Jonathan Fisher. Alicia

30:50

Montgomery is vice president of audio for

30:52

Slate. TBD is part of

30:54

the larger What Next family, and we're

30:57

also part of Future Tense, a partnership

30:59

of Slate, Arizona State University, and New

31:01

America. And if you're a fan

31:03

of the show, I have a request for you. Join

31:06

Slate Plus. Just head

31:08

on over to slate.com/what next plus to

31:10

sign up. It also makes a great

31:12

holiday gift for the newshound in your

31:14

life. We will be

31:16

back on Sunday with another episode. I'm Lizzie

31:19

O'Leary. Thanks for listening.
