Episode Transcript
0:00
The year was 1956
0:02
and the place was Dartmouth College.
0:04
In a research proposal, a math professor used
0:06
a term that was then entirely new,
0:09
and entirely fanciful: artificial
0:13
intelligence. There's nothing fanciful
0:15
about AI anymore. The
0:17
directors of the Stanford
0:19
Institute for Human-Centered
0:21
Artificial Intelligence, John Etchemendy
0:23
and Fei-Fei Li, on Uncommon
0:25
Knowledge, now. Welcome
0:37
to Uncommon Knowledge. I'm Peter Robinson.
0:39
Philosopher John Etchemendy served from
0:41
2000 to 2017 as
0:43
provost here at Stanford University. Dr.
0:46
Etchemendy received his undergraduate degree
0:48
from the University of Nevada before
0:50
earning his doctorate in philosophy at
0:52
Stanford. He earned that doctorate in
0:55
1983 and became
0:57
a member of the Stanford philosophy department
0:59
the very next year. He's the author
1:01
of a number of books, including the
1:04
1990 volume The Concept of Logical
1:06
Consequence. Since stepping down as provost, Dr.
1:08
Etchemendy has held a number of positions at
1:11
Stanford, including, and for our purposes today
1:13
this is the relevant position, co-director
1:15
of the Stanford Institute
1:17
for Human-Centered Artificial Intelligence. Born
1:20
in Beijing, Dr. Fei-Fei Li
1:22
moved to this country at the
1:24
age of 15. She received her
1:27
undergraduate degree from Princeton and a
1:29
doctorate in electrical engineering from the
1:31
California Institute of Technology. Now
1:34
a professor of computer science here
1:36
at Stanford, Dr. Li is the founder,
1:39
once again, of the Stanford Institute for
1:41
Human-Centered Artificial Intelligence. Dr. Li's
1:43
memoir, published just last year: The
1:46
Worlds I See: Curiosity, Exploration, and
1:48
Discovery at the Dawn of
1:51
AI. John Etchemendy
1:53
and Fei-Fei Li, thank you for making the time to
1:55
join me. Now, I would
1:57
say that I'm about
1:59
to ask a dumb question, but
2:02
I'm actually going to ask a question that is right at
2:05
the top of my form. What
2:08
is artificial intelligence? I
2:10
have seen the term 100 times
2:12
a day for what, several years now.
2:15
I have yet to find a succinct
2:19
and satisfying explanation. Let's
2:22
see. Let's go to the philosopher. Here's a man who's
2:24
professionally rigorous. But here's a woman who actually
2:26
knows. Yeah. Let her
2:29
take the answer and then I will give you a
2:31
different answer. Oh, really? All right. Okay.
2:34
Peter used the word succinct and I'm sweating here. Because
2:37
artificial intelligence by today
2:39
is already a collection
2:41
of methods
2:45
and tools that summarizes
2:47
the overall area
2:50
of computer science that has to
2:52
do with data, pattern
2:56
recognition, decision making in
2:59
natural language, in images,
3:02
in videos, in robotics,
3:04
in speech. So
3:06
it's really a collection. At the
3:08
heart of artificial intelligence is
3:11
statistical modeling such as machine
3:13
learning using computer programs. But
3:17
today, artificial intelligence truly is
3:19
an umbrella term
3:21
that covers many things that
3:24
we're starting to feel familiar
3:26
with. For example, language intelligence,
3:28
language modeling, or speech,
3:31
or vision. John, you
3:34
and I both knew John McCarthy who
3:36
came to Stanford after he wrote
3:38
that, used the term, coined the term artificial
3:41
intelligence, now the late John McCarthy. And
3:43
I confess to you who knew him as I
3:46
did that I'm a little suspicious
3:48
of the term because I knew John. And
3:50
John liked to be provocative. And
3:52
I am thinking to myself, wait a moment, we're
3:55
still dealing with ones and zeros.
3:58
Computers are calculating machines. Artificial
4:01
intelligence is a... is
4:03
a marketing term. So, no:
4:07
it's... it's not really a marketing term,
4:09
so I will give it... give you
4:11
an answer that is more like what John
4:13
would have given, and that
4:16
is: it's the field, the subfield
4:18
of computer science, that attempts to
4:20
create machines that
4:22
can accomplish tasks that
4:25
seem to require intelligence.
4:28
From the early beginning,
4:31
early artificial intelligence built systems
4:33
that play chess or
4:35
checkers, even, in a very,
4:37
very simple way. And John,
4:40
who, as you know, who
4:42
knew him... ah, was... ah,
4:46
ambitious. And he thought that
4:48
in a summer conference at
4:51
Dartmouth they could solve
4:53
most of the problems.
4:56
OK. I'm going to name
4:58
a couple of events, a couple of very famous events,
5:00
what I'm looking for here. I'll
5:02
name the events. We have, in 1997,
5:05
a computer defeats Garry Kasparov at chess.
5:07
Big moment: for the first time Deep Blue,
5:09
an IBM project, defeats a
5:11
human being at chess, and not just a
5:14
human being, but Garry Kasparov, who by some
5:16
measures is one of the half dozen greatest
5:18
chess players who ever lived. And
5:22
as best I can tell computer science just
5:24
sailed on. Things
5:26
are getting faster, but still.
5:28
And then we have, in
5:30
2015, a computer
5:33
defeats Go expert Fan Hui.
5:36
And the following year it defeats
5:38
Go grandmaster Lee Sedol, I'm
5:40
not at all sure I'm pronouncing
5:43
that correctly, Sedol, in a five-game
5:45
match, and people say, whoa,
5:47
something just happened this time. So
5:49
what I'm looking for here is
5:52
something. Something. that
5:54
a layman like me can latch onto
5:56
the here's the discontinuity here's where we
5:58
entered a new moment here's artificial
6:00
intelligence. Am I looking for something that
6:02
doesn't exist? No,
6:05
no, I think you're not. So
6:09
the difference between Deep
6:11
Blue and which played
6:13
chess, which played chess, Deep Blue
6:15
was written using traditional
6:17
programming techniques, and
6:19
what Deep Blue did is it
6:22
would for each move, for each
6:24
position on the board, it would
6:26
look down to all the possible...
6:28
Every conceivable decision tree. Every decision
6:30
tree to a certain
6:32
depth. I mean, obviously you can't
6:34
go all the way. And
6:36
it would have ways of
6:38
weighing which ones are best. And so
6:40
then it would say, this is the
6:42
best move for me at this time.
6:45
That's why in some sense it
6:48
was not theoretically very interesting.
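What John is describing is a classic depth-limited minimax search. Here is a minimal sketch of the idea in Python, using a toy take-away game rather than chess; NimState, evaluate, and legal_moves are hypothetical stand-ins for a real engine's board position, "weighing" function, and move generator, not anything from Deep Blue itself:

```python
# A toy sketch of depth-limited minimax: for each move, look ahead to a
# fixed depth, "weigh" the resulting positions, and pick the move whose
# subtree guarantees the best outcome. Toy game: take 1-3 stones from a
# pile; whoever takes the last stone wins.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class NimState:
    stones: int   # stones left in the pile
    to_move: int  # +1 = maximizing player, -1 = minimizing player

    def legal_moves(self) -> List[int]:
        return [n for n in (1, 2, 3) if n <= self.stones]

    def apply(self, take: int) -> "NimState":
        return NimState(self.stones - take, -self.to_move)

    def is_terminal(self) -> bool:
        return self.stones == 0

    def evaluate(self) -> int:
        # The player who took the last stone has just won; score is from
        # the maximizing player's point of view. (A chess engine would put
        # its hand-tuned "weighing" heuristic here.)
        return -self.to_move

def minimax(state: NimState, depth: int) -> int:
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    scores = [minimax(state.apply(m), depth - 1) for m in state.legal_moves()]
    return max(scores) if state.to_move == 1 else min(scores)

def best_move(state: NimState, depth: int = 8) -> int:
    return max(state.legal_moves(),
               key=lambda m: state.to_move * minimax(state.apply(m), depth - 1))

print(best_move(NimState(stones=7, to_move=1)))  # 3: leaves 4, a lost position
```

Deep Blue layered a hand-tuned evaluation function and pruning on this same skeleton; the point John is making is that every step of the control flow is explicitly programmed.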
6:52
The Go program, AlphaGo... AlphaGo,
6:56
which was a Google project. It was a Google
6:58
project. This uses deep
7:03
learning. It's a neural net. It's
7:06
not explicit programming. We
7:09
don't know, we don't go into
7:11
it with an idea
7:13
of, here's the algorithm
7:15
we're going to use. Do this and then
7:17
do this and do this. So
7:20
it was actually quite a surprise, particularly
7:23
AlphaGo. Not
7:25
to me, but sure. To
7:28
the public, yes. To the public. But
7:31
our colleague, I'm going at this one more
7:33
time because I really want to understand this.
7:35
I really do. Our
7:37
colleague here at Stanford, Zhiyuan, who must be known
7:39
to both of you, a physicist here at Stanford,
7:41
and he said to me, Peter, what you need
7:43
to understand about the
7:45
moment when a computer defeated Go, which
7:48
is much
7:50
more complicated, at least in the decision space,
7:52
is much, much bigger, so to speak, than
7:55
chess. There are more pieces, more squares. And
7:58
Zhiyuan said to me... that
8:01
whereas chess just did more quickly what a
8:03
committee of grandmasters would have decided on, the
8:06
computer in Go was
8:09
creative. It was pursuing strategies
8:11
that human beings had never pursued before.
8:13
Is there something to that? Yeah,
8:15
so there's a famous... They think getting impatient
8:17
with me. I'm asking such a good question. No, no,
8:20
you're asking such a good question. So in the third
8:22
game of the... I think it was the third game
8:24
of the five games, there was a move, I think
8:26
it was move 32, 32 or 35. The
8:28
computer program made a move
8:35
that really surprised every single
8:37
Go masters. Not only Lee
8:39
Sedol himself, but everybody who was
8:42
watching. That's a very surprising
8:46
move.
8:48
In fact, even post-analyzing how
8:51
that move came about, the
8:54
human masters would say this
8:57
is completely unexpected. What
8:59
happens is that the computer,
9:02
like John says,
9:04
right, has the
9:07
learning ability and has the inference
9:09
ability to think about patterns or
9:12
to decide on certain
9:14
movements even outside of
9:19
the trained, familiar
9:21
human masters domain
9:24
of knowledge. May I expand on that? The
9:29
thing is these deep neural nets
9:32
are supremely good
9:35
pattern recognition systems,
9:39
but the patterns they recognize, patterns
9:42
they learn to recognize
9:44
are not necessarily exactly the
9:46
patterns that humans recognize.
9:49
So it was seeing something
9:51
about that position and it
9:53
made a move that because
9:55
of the patterns that
9:57
it recognized in the book, in the board
10:00
that made
10:02
no sense from a human standpoint. In
10:06
fact, all of
10:08
the lessons in how to play Go tell
10:11
you never make a move that close to
10:13
the edge that quickly. And
10:17
so everybody thought it made a mistake, and
10:19
then it proceeded to win. And
10:22
I think the way to understand that is
10:24
it's just seeing patterns that
10:26
we don't see. It's
10:28
computing patterns
10:31
that are not traditionally human, and
10:34
it has the capacity to compute.
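The contrast John is drawing, explicit rules versus learned pattern recognition, can be made concrete with a toy sketch, nothing like AlphaGo's actual networks: a single artificial neuron that learns to recognize the pattern "point above the line y = x" purely from labeled examples. The rule itself never appears in the code; it ends up encoded in learned weights.

```python
# A toy learned pattern recognizer (perceptron). We never program the rule
# "label = 1 when y > x"; the weights absorb it from examples.
import random

random.seed(0)
w1, w2, b = 0.0, 0.0, 0.0  # parameters to be learned

def predict(x, y):
    return 1 if w1 * x + w2 * y + b > 0 else 0

# Training examples labeled by the hidden rule the model must discover.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1 if y > x else 0 for (x, y) in data]

for _ in range(20):  # perceptron learning rule: nudge weights on mistakes
    for (x, y), target in zip(data, labels):
        error = target - predict(x, y)
        w1 += 0.1 * error * x
        w2 += 0.1 * error * y
        b += 0.1 * error

print(predict(0.2, 0.9))  # 1: recognized as "above the line"
print(predict(0.9, 0.2))  # 0: recognized as "below the line"
```

Scale this up by many layers and millions of weights and you get recognizers whose learned patterns need not coincide with the ones humans use.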
10:37
OK. I'm trying to... We're
10:39
already entering this territory, but
10:42
I am trying really hard to tease out
10:44
the, wait a
10:46
moment, these are still just machines
10:48
running zeros and ones, bigger
10:51
and bigger memory, faster and faster ability to
10:53
calculate, but we're still dealing with machines that
10:55
run zeros and ones. It's one strand. And
10:58
the other strand is, as you well know,
11:01
2001: A Space Odyssey, where the computer takes over
11:03
the ship. Open the pod bay
11:05
doors, Hal. I'm sorry,
11:08
Dave. I'm afraid I
11:10
can't do that. OK. We'll
11:13
come to this soon enough. Fei-Fei
11:16
Li, in your memoir, The Worlds I
11:18
See, quote, I believe our civilization stands
11:20
on the cusp of
11:22
a technological revolution with
11:25
the power to reshape life as we
11:27
know it. Revolution,
11:32
reshape life as we know it. Now
11:34
you're a man whose whole academic training
11:36
is in rigor. Are you going to
11:38
let her get away with this kind
11:40
of wild overstatement? No,
11:42
I don't think it's an overstatement. I
11:46
think she's right. He told me to write a book.
11:50
And, you know, Peter, it's a technology
11:53
that is extremely powerful,
11:55
that will allow us and is allowing us
11:58
to get to the point of
12:00
computers doing things we never
12:03
could have programmed them to do. And
12:06
it will change everything, but
12:08
it's like, what a lot of
12:10
people have said, it's like electricity, it's
12:12
like the steam revolution. It's
12:16
not something necessarily to be afraid of. It's
12:19
not that it's going to suddenly take over the
12:21
world. That's not what Fei-Fei was saying. Right.
12:25
It's a powerful tool
12:27
that will revolutionize industries
12:29
and the way we humans
12:31
live. But the word revolution
12:33
is not that it's a conscious
12:35
being. It's just a powerful tool
12:37
that changes things. I would find
12:39
that reassuring if a few pages later Fei-Fei
12:42
had not gone on to write. Oh no.
12:45
There's no separating the beauty of science
12:47
from something like, say, a
12:49
Manhattan project, close quote. Nuclear
12:52
science. We can produce
12:54
abundant energy, but it can also produce weapons
12:57
of indescribable horror. AI
13:01
has boogeymen of its own, whether
13:03
it's killer robots, widespread surveillance, or
13:05
even just automating all
13:07
eight billion of us out of our jobs. Now,
13:10
we could devote an entire program to each of those boogeymen,
13:13
and maybe at some point we should. But
13:17
now that you have scared me, even in
13:19
the act of reassuring me, and
13:21
in fact it throws me that you're so eager to reassure
13:23
me that I think maybe I really should be even more
13:26
scared than I am. Let me just go
13:28
right down. Here's the killer robots. Let me quote the
13:30
late Henry Kissinger. I'm just going to put these up
13:32
and let you... You
13:34
may calm me down if you can. Henry
13:37
Kissinger. If you imagine a war
13:39
between China and the United States, you have artificial
13:42
intelligence weapons. Nobody has
13:45
tested these things on a broad scale, and
13:48
nobody can tell exactly what will
13:50
happen when AI fighter planes
13:52
on both sides interact. So
13:55
you are then... I'm quoting Henry Kissinger, who is not
13:57
a fool, after all. So you are then
13:59
in a world of... potentially total
14:01
destructiveness." Fei-Fei?
14:05
So, like I said, I'm not denying
14:07
how powerful these tools are.
14:10
I mean, humanity, before AI,
14:12
has already created tools and
14:14
technology that are very destructive,
14:16
could be very destructive. We
14:18
talk about Mahatan Project, right?
14:21
But that doesn't mean that
14:24
we should collectively decide to use
14:26
this tool in this destructive way.
14:29
Okay, Peter, you know, think
14:31
back before you even
14:33
had heard about artificial intelligence. Which actually
14:35
was five years ago, maybe. No, I know.
14:38
This is all happening so fast. Just five
14:40
years ago. Or ten years ago. Remember
14:44
the tragic incident
14:46
where an Iranian
14:50
passenger plane was shot down flying
14:52
over the Persian Gulf by
14:55
an Aegis system? Yes, yes.
14:57
And one of our ships. One of
14:59
our ships, an automated system,
15:01
because it had to be automated
15:03
in order to be... Humans can't
15:05
react to that. Exactly.
15:09
And in this case, for reasons
15:11
that I think are quite understandable
15:13
now that you understand the incident, but it
15:17
did something that was horrible. That's
15:20
not different in kind from what you can do with AI,
15:22
right? So
15:24
we as creators
15:29
of these devices or as users
15:31
of AI have to be
15:34
vigilant about what
15:36
kind of use we put them to. And
15:39
when we decide to put them to one
15:41
particular use, and there may be uses,
15:45
the military has many good uses for them, we
15:47
have to be vigilant about
15:50
their doing what we intend
15:52
them to do rather than doing things
15:54
that we don't intend to do. So
15:56
you're announcing a great theme. And
15:59
that theme is... that what Dr.
16:01
Fei-Fei Li has invented makes
16:05
the discipline to which you have
16:07
dedicated your life, philosophy, even
16:10
more important, not less so. Yeah, that's why we're
16:13
co-directors. The power of the future makes the
16:15
human being more important, not less so. Am I
16:17
being glib? Or is that on to something
16:19
else? So let me tell you a
16:21
story about... So
16:24
Fei-Fei used to live next door to me, or close
16:26
to next door to me. And
16:29
I was talking to her... I'm not sure whether that would
16:31
make me feel more safe or more... And
16:34
I was talking to her... I was still
16:36
provost. And she
16:38
said to me, you and
16:40
John Hennessy started a lot of institutes that
16:44
brought technology into other
16:46
parts of the university. We
16:49
need to start an institute
16:51
that brings philosophy and ethics
16:54
and the social sciences into
16:56
AI. Because
16:58
AI is too dangerous to
17:01
leave it to the computer scientists alone.
17:06
Nothing wrong with it. There are many stories about how
17:08
hard it was to persuade him when he was provost,
17:10
and you succeeded. Can I... just
17:12
one more boogeyman briefly? Yeah.
17:14
And we'll return to that theme that you just gave
17:17
us there, and then we'll get back to the Stanford Institute.
17:21
I'm quoting you again. This is from your memoir. The
17:24
prospect of just automating all eight billion of us
17:26
out of our jobs. That's the phrase you
17:28
used. Well, it turns out that
17:31
it took me mere seconds using
17:33
my AI-enabled search algorithm,
17:36
search device, to find
17:38
a Goldman Sachs study from last year, predicting
17:41
that in the United States and Europe,
17:43
some two-thirds of all jobs could
17:46
be automated, at least to some degree.
17:49
So why shouldn't
17:51
we all be terrified? Henry
17:53
Kissinger, world apocal... All right, maybe that's a
17:55
bit too much. But my job!
18:00
I think job change is
18:02
real. Job change is real
18:04
with every single technological advances
18:07
that humanity, human civilization has
18:09
faced. That
18:11
is real and that's not to be taken
18:13
lightly. We also have to
18:16
be careful with the word job. Job
18:18
tends to describe a holistic profession,
18:21
one that a person attaches
18:25
his or her income, as well
18:27
as an identity, to. But there
18:29
is also within every job, pretty much
18:32
within every job, there are so many
18:34
tasks. It's hard to imagine
18:36
there's one job that has only
18:38
one singular task. Being
18:42
a professor, being a scholar, being a doctor,
18:44
being a cook, all
18:46
of these jobs have multiple tasks.
18:49
What we're seeing as technology is
18:52
changing how some of these tasks can be
18:55
done. And it's true as
18:57
it changes these tasks, some of them,
18:59
some part of them could
19:01
be automated. It's starting
19:03
to change how the jobs are and eventually
19:05
it's going to impact jobs. So this is
19:08
going to be a gradual process and it's
19:10
very important we stay on top
19:12
of this. This is why the Human-
19:15
Centered AI Institute was founded:
19:17
these questions are profound. They're
19:19
by definition multidisciplinary. Computer
19:23
scientists alone cannot do all
19:25
the economic analysis, but economists,
19:27
not understanding what these
19:30
computer science programs
19:33
do will not by themselves understand
19:35
the shift of the jobs. Okay,
19:38
John, may I tell you? Go ahead. But
19:40
let me just point something out. The
19:43
Goldman Sachs study said
19:45
that such and
19:47
such percentage of jobs will be
19:49
automated or can be automated at
19:51
least in part. Yes. Now
19:54
what they're saying is that a certain number
19:56
of the tasks that go into a particular
19:58
job. So
20:00
Peter, you said it only
20:03
took me a few seconds to
20:06
go to the computer and find
20:08
that article. Guess
20:10
what? That's one
20:13
of the tasks that would have taken you
20:15
a lot of time. So
20:17
part of your job has
20:20
been automated. Okay,
20:22
now let me tell you a story. But
20:24
also empowered. Empowered, okay fine, thank
20:27
you, thank you, thank you, you're making me feel good. Now
20:29
let me tell you a story. All
20:31
three of us live in California, which means all three
20:34
of us probably have some friends down in Hollywood. And
20:37
I have a friend who was involved in the writers
20:39
strike. Yeah. Okay,
20:41
and here's the problem. To
20:44
run a sitcom, you
20:46
used to run a writers room. And
20:49
the writers room would employ seven, a dozen,
20:52
on the Simpsons show, the cartoon show. They'd keep,
20:54
they'd had two, a couple of writers rooms running.
20:56
They were employing 20. And these
20:59
were the last kind of person you'd
21:01
imagine a computer could replace
21:03
because they were well educated and
21:05
witty and quick with words. And
21:09
you think of computers as just
21:11
running calculations. Maybe spreadsheets, maybe someday
21:13
they can eliminate accountants, but writers,
21:15
Hollywood writers. And
21:18
it turns out, and my friend illustrated
21:20
this for me by saying, doing
21:24
the artificial intelligence thing where it had
21:26
a prompt, draft a
21:29
skit for Saturday
21:31
Night Live in which
21:33
Joe Biden and Donald Trump are
21:35
playing beer pong. 15
21:39
seconds. Now professionals
21:41
could have tightened it up but
21:43
it was pretty funny and it was instantaneous.
21:45
And you know what that means? That
21:48
means you don't need four
21:50
or five of the seven writers. You need a senior
21:52
writer to assign the artificial
21:55
intelligence, and you need maybe one other writer
21:57
or two other writers to tighten it up, redraft
22:00
it. It is
22:02
upon us. And your artificial
22:04
intelligence is going to get bad press when
22:07
it starts eliminating the jobs of
22:09
the chattering classes, and that has
22:11
already begun. Tell me I'm wrong.
22:13
Do you know, before the
22:16
agricultural revolution, something
22:18
like 80, 90 percent of all the people
22:22
in the United States were
22:25
employed on farms. Now
22:29
it's down to 2 percent or 3 percent,
22:33
and those same farms, that same
22:35
land, is far, far more productive.
22:38
Now, would you say that your
22:41
life or anybody's life now
22:43
was worse off than it
22:45
was in the 1890s when
22:47
everybody was
22:51
working on the farm? No. So
22:53
yes, you're right. It
22:56
will change jobs, it will
22:58
make some jobs easier, it
23:00
will allow us to do things
23:02
that we could not do before, and yes,
23:05
it will allow fewer people to
23:07
do more of what they were doing before, and
23:09
consequently there will
23:17
be fewer people in that line of work.
23:20
That's true. I also want
23:22
to just point out two things. One is
23:24
that jobs are always changing,
23:26
and that change is always painful. And
23:28
as computer
23:30
scientists, as philosophers, also as citizens of
23:33
the world, we should be empathetic to
23:35
that, and nobody is saying we should
23:37
just ignore that change and
23:40
pain. So this is why we're studying
23:42
this, we're trying to talk to policymakers,
23:45
we're educating the population. In the
23:47
meantime, I think we should give
23:49
more credit to human creativity in
23:52
the face of AI. I
23:54
start to use this example
23:57
that's not even AI. Think
23:59
about the advances, speaking of
24:02
Hollywood, in graphics technology,
24:05
CGI and all that, right?
24:08
The video gaming industry, or animation and all that,
24:10
right? One of many
24:13
of our, including our children's,
24:16
favorite animation studios is
24:18
Studio Ghibli, you know:
24:21
Princess Mononoke, My Neighbor
24:24
Totoro, Spirited
24:26
Away. All of
24:28
these were made during
24:30
a period where computer graphics
24:32
technology was far more advanced
24:34
than these hand-drawn animations,
24:38
yet the beauty,
24:40
the creativity, the emotion, the
24:42
uniqueness in these films continue
24:44
to inspire and just entertain
24:47
humanity. So I think we
24:49
need to still
24:51
have that pride and also give
24:54
the credit to humans. Let's
24:56
not forget: our creativity
24:58
and our emotion and intelligence are unique;
25:01
they're not going to be taken away
25:03
by technology. Thank you. I feel
25:05
slightly reassured. I'm still
25:07
nervous about my job, but I feel slightly reassured. But
25:10
you mentioned government a moment ago, which
25:12
leads us to how we should regulate
25:15
AI. Let me
25:17
give you two quotations. I'll begin,
25:19
I'm coming to the quotation from the two of you,
25:21
but I'm going to start with
25:23
a recent article in the Wall Street Journal
25:25
by Senator Ted Cruz of Texas and former
25:28
Senator Phil Gramm, also of Texas. Quote: the
25:31
Clinton administration took a hands-off approach
25:34
to regulating the early
25:36
Internet. In so doing it unleashed
25:38
extraordinary economic growth and prosperity. The
25:42
Biden administration, by contrast, is
25:44
impeding innovation in
25:46
artificial intelligence with aggressive
25:48
regulation, close quote. That's
25:50
them. This is you, also
25:53
a recent article in the Wall Street
25:56
Journal, John Etchemendy and Fei-Fei Li, quote:
25:59
President Biden signed an executive
26:01
order on artificial intelligence that
26:03
demonstrates his administration's commitment to harness
26:06
and govern the technology. President Biden
26:08
has set the stage and now
26:10
it is time for Congress to
26:12
act. Cruz and Gramm,
26:15
less regulation. Etchemendy and
26:17
Li, the Biden administration has done well. Now
26:19
Congress needs to give us even more.
26:22
No. All right, John. No,
26:24
I don't agree with that. So I
26:27
believe regulating any kind of technology
26:29
is very difficult and
26:32
you have to be careful not
26:34
to regulate too soon or
26:38
not to regulate too late. Let
26:41
me give you another example. You talked
26:43
about the Internet and it's true. The
26:45
government really was quite hands off and
26:47
that's good. That's good. It worked out.
26:50
It worked out. But now
26:52
let's also think about social media. Social
26:55
media has not worked
26:58
exactly, worked out exactly the
27:00
way we want
27:03
it. We originally believed that we
27:05
were going to enter
27:08
a golden age in which
27:10
friendship, comity, well, and everybody
27:12
would have a voice and
27:15
we could all live
27:18
together, kumbaya and so forth. That's not
27:20
what happened. Jonathan
27:22
Haidt has a new book out on
27:24
the particular pathologies among young people from
27:26
all of these social media. It's not just an
27:28
argument; it's an argument based
27:30
on lots of data. So
27:34
it seems to me that I'm
27:37
in favor of very light
27:40
handed and
27:42
informed regulation
27:45
to try to put up sort of
27:47
bumpers, I don't know what
27:49
the analogy is, for the technology. I
27:53
am not for heavy
27:56
handed top down regulation
27:58
that stifles innovation. Okay,
28:00
here's another, let me get on
28:02
to this, I'm sure
28:04
you'll be able to adapt your answers to this question. Okay.
28:07
I'm continuing your Wall Street Journal piece. Big
28:10
tech companies can't be left to govern
28:12
themselves. Around here, Silicon
28:14
Valley, those are fighting words. Academic
28:17
institutions should play a leading role in
28:20
providing trustworthy assessments and benchmarking of
28:22
these advanced technologies. We
28:24
encourage an investment in
28:26
human capital to bring more talent to the field of
28:28
AI with academia and the government,
28:30
close quote. Okay, now, it
28:33
is mandatory for me to say this,
28:35
so please forgive me my fellow Stanford
28:39
employees, apart from anything else. Why
28:42
should academic institutions be trusted? Half
28:44
the country has lost faith in
28:46
academic institutions. DEI, the
28:49
whole woke agenda, anti-Semitism
28:52
on campus. We've got a Gallup, recent
28:54
Gallup poll showing the proportion of Americans who
28:56
expressed a great deal or quite a lot
28:58
of confidence in higher education. This
29:00
year came in at just 36%, and
29:04
that is down in the last eight years from 57%.
29:08
You are asking us to trust you
29:10
at the very moment when we believe we have good
29:12
reasons to say: knock it off. Trust
29:14
you? Okay, Fei-Fei. So,
29:16
I'll start with this first half
29:18
of the answer. I'm sure John has a
29:21
lot to say. I do want to make
29:23
sure, especially wearing the hats of co-directors
29:25
of HAI, when we
29:28
talk about the relationship between government
29:30
and technology, we tend to use
29:32
the word regulation. I really, really
29:34
want to double-click. I
29:36
want to use the word policy. And
29:39
policy and regulations are
29:41
related but not the same. When
29:44
John and I wrote that Wall
29:46
Street Journal opinion piece, we really
29:48
are focusing on a piece of
29:50
policy that is to resource
29:53
public sector AI, to resource academia,
29:55
because we believe that is
29:57
the only way to do it. AI is
30:00
such a powerful technology and
30:02
science, and academia and the public
30:04
sector still have a role
30:06
to play to create public
30:09
good. And public
30:11
goods are curiosity-driven
30:13
knowledge exploration, are cures
30:16
for cancers, are
30:18
the maps of biodiversity
30:20
of our globe, are
30:22
discovery of nanomaterials that
30:25
we haven't seen before,
30:27
are different ways of
30:29
expressing in theater,
30:31
in writing, in music. These
30:33
are public goods. And when
30:35
we are looking, when we are
30:37
collaborating with the government on policy,
30:40
we're focusing on that. So
30:42
I really want to make sure. On regulation
30:44
we all have personal opinions, but
30:46
there's more to policy than regulation.
30:48
Yeah. So, yeah. We,
30:51
yeah, I, let me make one last run
30:53
at you. In my theory
30:55
here, although I'm asking questions that
30:57
you'd, I'm quite sure you'd like
31:00
to take me out and swat me around at this point,
31:02
John. But this is
31:04
serious. You've got the Stanford Institute for
31:06
Human Centered Artificial Intelligence, and that's because
31:09
you really think this is important. But
31:12
we live in a democracy, and you're going
31:14
to have to convince a whole lot of people. So let me
31:16
take one more run at you and then hand it back to
31:18
you, John. Your article in the
31:21
Wall Street Journal, again, let me repeat this. We
31:23
encourage an investment in human capital to
31:25
bring more talent to the field of AI with academia
31:27
and the government. That means money.
31:29
An investment means money, and it means
31:32
taxpayers' money. Here's what Cruz
31:34
and Gramm say in the Wall Street Journal. The
31:36
Biden regulatory policy on AI has everything to do
31:38
with special interest rent seeking. Stanford
31:42
faculty make well above the national
31:44
average income. We are sitting at
31:46
a university with an endowment of
31:48
tens of billions of dollars. John,
31:52
why is not your article in the Wall
31:54
Street Journal the very
31:56
kind of rent seeking that
31:58
Senators Cruz and
32:00
Gramm are decrying? Are you kidding? Peter,
32:04
let's take another example.
32:07
So one of the greatest policy
32:10
decisions that this country has ever
32:12
made was when Vannevar
32:15
Bush, advisor to
32:18
at the time President Truman, convinced,
32:21
he stayed on through Eisenhower as I recall,
32:24
so it's bipartisan. Exactly. No, no,
32:26
it was not a partisan issue
32:29
at all, but convinced Truman
32:33
to set up the NSF for
32:37
funding curiosity-based
32:40
research, advanced research
32:43
at the universities, and
32:46
then not to say
32:49
that companies don't have any role, not to
32:52
say that government has no role, they both
32:54
have roles, but they're different
32:56
roles. And companies
33:00
are, tend to be better
33:02
at development, better at producing
33:04
products, and tapping into
33:06
things that can, within a
33:08
year or two or three, can
33:10
be a product that will be useful. Scientists
33:15
at universities don't
33:17
have that constraint. They don't have to worry
33:19
about when is this going to be commercial.
33:21
And that has, I
33:25
think, had such
33:28
an incalculable effect
33:31
on the prosperity of
33:33
this country, on the fact that
33:36
we are the leader in every
33:38
technology field. It's
33:40
not an accident that we're the leader in
33:42
every technology field. We weren't, we didn't use,
33:45
and, and does it affect your argument if
33:47
I add, it also enabled us,
33:49
or contributed to a victory in
33:52
the Cold War, the weapons systems
33:55
that came out of universities? All
33:57
right. Well, no, absolutely. And, you
33:59
know, in
34:01
other words, it ended up serving a defense demand.
34:03
You could argue from all kinds of points of
34:06
view that it was a good ROI
34:08
for taxpayers' money. So
34:10
we're not arguing for higher
34:13
salaries for faculty or anything of that
34:15
sort. But we
34:17
think, particularly in AI,
34:20
it's gotten to the
34:22
point where scientists at
34:24
universities no
34:26
longer play in the game
34:29
because of the cost of the
34:31
computing, the cost, the inaccessibility of
34:33
the data. That's why
34:35
you see all of these developments coming out of companies. That's
34:38
great. Those are great developments. But
34:42
we need to have also
34:44
people who are exploring
34:46
these technologies without looking
34:49
at the product, without being driven
34:52
by the profit motive. And
34:54
then eventually, hopefully, they will develop
34:57
discoveries, they will make discoveries, will
35:00
then be commercializable. Okay. I
35:02
noticed in your book, Fei-Fei, I was very struck
35:04
that you said, I think it was about a
35:06
decade ago, 2015, I
35:08
think it was, that you noticed
35:10
that you were beginning to lose colleagues to
35:12
the private sector. Yeah. Presumably,
35:15
because they just pay so phenomenally well around here
35:18
in Silicon Valley. But then there's also the point
35:20
that to get to make progress in AI,
35:23
you need an enormous amount of
35:25
computational power. And
35:27
assembling all those ones and
35:30
zeros is extremely expensive. So
35:33
ChatGPT, what is the parent company? OpenAI.
35:36
OpenAI got started with an
35:38
initial investment of a billion dollars. An
35:42
initial, friends and family capital of a billion
35:44
dollars is a lot of money even around
35:46
here. Okay. Yes.
35:50
All right. It
35:52
feels to me as though every one of these topics is worth
35:54
a day-long show. Actually,
35:57
I think they are. And by the way.
36:00
This has happened before, where the
36:03
science has become so expensive
36:06
that it could no longer ... that
36:09
university-level research and researchers could no longer
36:11
afford to do the science.
36:14
It happened in high-energy physics.
36:17
High-energy physics used to mean you had
36:19
a Van de Graaff generator in your
36:22
office, and that was your accelerator. You
36:24
could get it. You could do what
36:26
you needed to do. And
36:30
then it no longer was ... the
36:33
energy levels were higher and
36:35
higher. And what happened? Well,
36:38
the federal government stepped in and said,
36:40
we're going to help. We're going to
36:42
build an accelerator. Stanford
36:45
linear accelerator. Exactly. Sandia Labs, Lawrence
36:47
Livermore, all these are at least
36:49
in part federal established. CERN. CERN,
36:53
which is European. Well, Fermilab.
36:55
So the first accelerator was
36:58
SLAC, the Stanford Linear Accelerator Center,
37:00
then Fermilab, and
37:03
so on and so forth. Now, right. CERN
37:05
is late ... actually late in the
37:07
game, and it's a European
37:09
consortium. But the thing
37:12
is, we
37:14
could not continue the science
37:18
without the help of
37:20
the government. Well,
37:22
there is another ... and then in addition to
37:24
high energy physics, and then
37:27
bio, right? Especially
37:29
with genetic sequencing and high
37:31
throughput genomics, and biotech
37:34
is also changing. And now you
37:36
see a new wave of
37:40
biology labs that are actually heavily
37:42
funded by the combination of government
37:44
and philanthropy and all that, and
37:47
that stepped in to supplement
37:49
what the traditional
37:53
university model is. And so we're
37:55
now here with AI and computer
37:57
science. We
38:01
have to do another show on that one alone, I think. The
38:05
Singularity. Oh, good! This
38:07
is good. Reassuring. You're both
38:10
rolling your eyes. Wonderful. I
38:12
feel better about this already. Good. Ray
38:15
Kurzweil, you know exactly where this is going. Ray Kurzweil
38:17
writes a book in 2005. This
38:19
gets everybody's attention and still scares lots of
38:21
people to death, including me.
38:24
The book is called The Singularity is Near.
38:27
And Kurzweil predicts a singularity that will
38:30
involve, and I'm quoting him, the merger
38:33
of human technology with human
38:35
intelligence. He's not saying
38:38
the tech will mimic more and more closely
38:40
human intelligence. He is saying they will merge.
38:43
I set the date for the singularity
38:45
representing a profound and disruptive transformation in
38:47
human capability as 2045. Okay.
38:52
That's the first quotation. Here's the
38:54
second. It comes from the Stanford
38:56
course catalog's description of the philosophy
38:58
of artificial intelligence. A
39:01
freshman seminar that was taught
39:03
last quarter, as I recall,
39:05
by one John Etchemendy. Here's
39:09
from the description. Is it really
39:11
possible for an artificial
39:14
system to achieve genuine intelligence,
39:16
thoughts, consciousness, emotions? What
39:19
would that mean? John,
39:21
is it possible? What would it mean? I
39:27
think the answer is actually no. And
39:30
thank goodness. You kept me
39:33
waiting for a moment.
39:35
I think the fantasies that
39:38
Ray Kurzweil and others have
39:43
been spinning up, I guess
39:45
that's the way to put it, stem
39:48
from a lack of understanding of
39:52
how the human being really
39:54
works and don't
39:56
understand how crucial biology
39:59
is to the
40:01
way we work, the way
40:03
we are motivated, how we
40:05
get desires, how we get
40:07
goals, how we get how
40:09
we become
40:12
humans, become people. And
40:14
what AI has done so
40:16
far, AI is capturing what
40:19
you might think of as the
40:22
information processing piece
40:26
of what we do. So part of
40:28
what we do is information processing. So
40:31
it's got the right frontal cortex but hasn't got
40:33
the left frontal cortex yet. Yeah,
40:35
that's an oversimplification. But yeah, imagine that
40:37
on television. So
40:41
I actually think it is, first
40:43
of all, the date. 2045
40:46
is insane. That
40:52
will not happen. And secondly, it's not even clear
40:54
to me that we will ever get there.
40:57
I can't believe I'm saying this.
40:59
In his defense, I don't think
41:01
he's saying that 2045 is the
41:03
day that the machines become conscious
41:06
beings like humans. It's
41:10
more an inflection point of the
41:12
power of the technology that is
41:14
disrupting the society. He's
41:18
late. We're already there. Exactly.
41:21
That's what I'm saying. I think you're
41:23
being overly generous.
41:29
I think that what he means by the singularity
41:31
is the date at which we
41:33
create an artificial intelligence system
41:36
that can improve itself
41:39
and then get into a cycle,
41:41
a recursive cycle, where it
41:43
becomes a super intelligence.
41:46
And I deny that. He's
41:48
playing the 2001: A Space Odyssey game here. Different
41:53
question but related question. In some ways, this
41:55
is a more serious question, I think. Although
41:58
that's serious, too. Here's the
42:01
late Henry Kissinger again. Quote, we
42:03
live in a world which
42:05
has no philosophy.
42:08
There is no dominant philosophical
42:11
view. So
42:13
the technologists can run wild. They
42:16
can develop world-changing things and there's
42:18
nobody to say, we've got to integrate
42:20
this into something. All
42:23
right, I'm going to put it crudely again. But
42:26
in China a century
42:28
ago, we still had Confucian thought,
42:31
dominant among, at least among the educated
42:33
classes on my very thin understanding of
42:35
Chinese history. In
42:37
this country until the day before
42:40
yesterday, we still spoke without irony
42:42
of the Judeo-Christian tradition,
42:44
which involved certain concepts
42:46
about morality, what it meant
42:49
to be human. It
42:52
assumed a belief in God, but it turned out you
42:54
could actually get pretty far along, even
42:56
if you didn't believe in it. And
42:59
Kissinger is now saying it's all fallen
43:01
apart. There is no dominant
43:03
philosophy. This
43:05
is a serious problem. Is it not? There's
43:08
nothing to integrate AI into.
43:11
You take his point. You're the philosopher. You're
43:13
the philosopher. I think this is a great,
43:16
first of all, thank you for that quote. I
43:26
hadn't read that quote from Henry
43:29
Kissinger. This is
43:31
why we founded the Human-Centered
43:33
AI Institute. These are the fundamental
43:35
questions that our
43:37
generation needs to figure out. That's not
43:40
just a question, that's the question. It
43:42
was one of the fundamental questions. That's
43:44
also one of the fundamental questions that
43:46
illustrates why universities are
43:48
still relevant today. One
43:53
of the things that Henry Kissinger said
43:55
in that quote is that there is
43:57
no dominant philosophy.
44:00
no one dominant philosophy
44:02
like the Judeo-Christian tradition, which
44:04
used to be the dominant
44:06
tradition in the... This was
44:08
a different conversation in Paris in the 12th century, for
44:10
example. The university in Paris... In order
44:12
to have... In order
44:14
to take values into account
44:16
when you're creating an AI
44:18
system, you don't need a
44:20
dominant tradition.
44:22
I mean, there's... What
44:25
you need, for example, for
44:27
most ethical traditions, is the
44:29
golden rule. Okay,
44:32
so we can still get along with each
44:34
other, even when it comes
44:36
to deep, deep questions of values such as
44:38
this. We still have enough common ground. I
44:42
believe so. I have
44:45
yet another sigh of relief. Okay,
44:47
let's talk a little bit. We're talking a little
44:49
bit about a lot of things here, but so
44:52
it is. Let us speak of many things as
44:55
it is written in Alice in Wonderland, the Stanford
44:57
Institute. The Stanford
45:00
Institute for Human Centered Artificial
45:02
Intelligence, of which you are co-directors,
45:04
and I just have two questions and
45:07
respond as you'd like. Can you give
45:09
me some taste, some feel for what you're
45:11
doing now, and
45:15
in some ways more important, but more elusive, where
45:17
you'd like to be in just five years, say.
45:19
Everything in this field is moving. So if I
45:21
would... My impulse is to say 10 years because
45:23
it's a rounder number. It's too far
45:26
off in this field. Fei-Fei? I
45:29
think what really has happened in the
45:31
past five years by Stanford HAI, among
45:33
many things... I just want to make
45:35
sure everybody is following you. HAI, Stanford High,
45:37
is the way it's known on this campus.
45:39
Yes. All right, go ahead. Is
45:41
that we have put
45:43
a stake in the ground for
45:46
Stanford as well as for everybody
45:48
that this is an interdisciplinary study.
45:53
AI, artificial intelligence, is
45:55
a science of its own. It's a
45:57
powerful tool. What
46:00
happens is that you can welcome
46:02
so many disciplines to
46:04
cross-pollinate around the topic of
46:07
AI, or use the
46:09
tools of AI to make
46:11
other sciences happen, or to
46:14
explore other new ideas. And
46:16
that concept of making this
46:18
an interdisciplinary and multidisciplinary field
46:21
is what I think Stanford HAI
46:24
brought to Stanford, and also hopefully
46:26
to the world. And just
46:28
like you said, computer science is kind of
46:30
a new field. The
46:31
late John McCarthy only coined the term
46:35
in the late 50s. Now
46:39
it's moving so fast. Everybody feels it's
46:42
just a niche computer science field
46:45
that's just making its way into the
46:47
future. But we're saying,
46:49
no, look abroad. There's
46:52
so many disciplines that can be put
46:54
here. Who competes with the Stanford Institute
46:56
for Human-Centered Artificial Intelligence? Is there such an institute
46:58
at Harvard or Oxford or Beijing? I
47:01
just don't know what this is. In
47:03
the five years since we launched, there have been
47:05
a number of similar institutes
47:07
that have been created
47:10
at other universities. We don't see that as competition
47:12
in any way. If these arguments you've been making
47:14
are valid, then we need them. We need them.
47:16
We should welcome them there. We need them
47:19
as a movement. We need them. And
47:21
part of what I think we've succeeded
47:23
to a certain extent doing is
47:26
communicating this vision of
47:29
the importance of keeping the
47:31
human and human
47:34
values at the center when
47:36
we are developing this
47:38
technology, when we are
47:41
applying this technology. And
47:44
we want to communicate that to the
47:46
world. We want other centers
47:48
that adopt a similar standpoint.
47:52
And importantly, one
47:55
of the things that they didn't mention is, one
47:58
of the things we try to do is educate the world, and
48:00
educate, for example, legislators
48:05
so that they understand
48:07
what this technology is, what it
48:09
can do, what it can't
48:11
do. So you're traveling to
48:13
Washington or the very generous
48:16
trustees of this institution are bringing
48:18
congressional staff here, or both? Both.
48:20
Both are happening. So,
48:22
Fei-Fei, first of all, did you teach that
48:25
course in Stanford HAI or was
48:27
the course located in the philosophy department or cross-listed?
48:29
I'm just trying to get a feel for what's
48:31
actually taking place there now. Yeah,
48:33
I actually taught it in the
48:35
confines of the HAI building. Okay,
48:37
so it's an HAI? No,
48:40
it's a philosophy. It's listed as
48:42
a philosophy course but taught in the HAI.
48:44
He's the former provost. He's an
48:47
interdisciplinary, walking wonder. And
48:50
your work in AI-assisted
48:52
healthcare, is that taking
48:54
place in HAI or is it at
48:57
the medical school? Well, that's the beauty.
48:59
It's taking place in HAI, computer
49:01
science department, the medical school, even
49:04
has collaborators from the law school,
49:06
from the political science
49:08
department. So that's the beauty.
49:10
It's deeply interdisciplinary. If
49:13
I were the provost, I'd say this is starting to
49:15
sound like something that's about to run amok. Doesn't
49:17
that sound a little too interdisciplinary, John? Don't
49:21
we need to define things a little bit here? Let
49:23
me tell you, let me say something. So,
49:26
Steve Denning, who was the
49:29
chair of our board of trustees for
49:31
many years, and has been a long,
49:33
long time supporter of the
49:36
university in many, many ways. In
49:39
fact, we are the Denning co-directors
49:41
of Stanford HAI. Steve
49:46
saw five, six years
49:48
ago, he said, you know, AI
49:51
is going to impact
49:53
every department at
49:55
this university. And
49:58
we need to have an institute
50:00
that makes sure that that
50:03
happens the right way. That
50:05
that impact does
50:08
not run amok. Where
50:12
would you like to be in five years? What's
50:14
a course you'd like to be teaching in five years? What's
50:17
a special project? I
50:19
would like to teach a freshman
50:21
seminar called The Greatest Discoveries by
50:23
AI. Oh, alright.
50:27
Okay. A
50:30
last question. I
50:33
have one last question, but that does not mean that
50:35
each of you has to hold yourself to one last
50:38
answer, because it's a kind of open-ended question. I
50:42
have a theory, but
50:44
all I do is wander around this campus. The
50:46
two of you are deeply embedded here, and you ran the place
50:48
for 17 years, so you'll know more than
50:51
I will, including you may know that
50:53
my theory is wrong, but I'm going to trot
50:55
it out, modest though it may be even so.
51:00
Milton Friedman, the late Milton Friedman, who when
51:02
I first arrived here, was a colleague at
51:04
the Hoover Institution. In fact, by some miracle,
51:06
his office was on the same hallway as
51:08
mine, and I used to stop in on
51:10
him from time to time. He
51:13
told me that he went into economics because
51:16
he grew up during the Depression, and
51:19
the overriding question in
51:22
the country at that time was, how
51:24
do we satisfy our material needs? There
51:27
were millions of people without jobs. There
51:30
really were people who had trouble feeding
51:32
their families. Alright. I
51:35
think of my own generation, which
51:38
is more or less John's generation. You came
51:40
much later, Fei-Fei. Thank you. And
51:43
for us, I
51:45
don't know what kind of discussions you had in the dorm room, but
51:47
when I was in college, there were bull sessions
51:49
about the Cold War. Were the Russians
51:51
going... The Cold War was real
51:53
to our generation. That
51:56
was the overriding question. How
52:00
can we defend our way of life? How can we defend our
52:02
fundamental principles? All right. Here's
52:05
my theory. For
52:08
current students, they've
52:11
grown up in a period
52:13
of unimaginable prosperity. Material
52:16
needs are just not the problem.
52:19
They have also grown up during
52:22
a period of relative peace. The
52:24
Cold War ended, you could put different – the
52:27
Soviet Union declared itself defunct in 1991. Cold
52:31
War is over at that moment at the latest. The
52:35
overriding question for these kids today
52:39
is meaning. What is
52:41
it all for? Why
52:44
are we here? What does it
52:46
mean to be human? What's the
52:48
difference between us and
52:51
the machines? And if my
52:53
little theory is correct, then
52:56
by some miracle, this
52:59
technological marvel that you have
53:01
produced will lead
53:04
to a new flowering of the humanities. Do
53:07
you go for that, John? Do
53:12
I go for it? I would go for it if
53:15
it were going to happen. Do I put that in
53:17
a slightly sloppy way? I
53:21
think it would be wonderful. It's something to hope
53:24
for. So
53:26
far – now I'm going to be the
53:28
cynic – so
53:31
far what I see in students is more
53:33
and more focus – for
53:36
Stanford students – more
53:38
and more focus on
53:40
technology. Computer science is still the
53:42
biggest major at this university. And
53:47
we have tried at HAI. We
53:49
have actually started
53:51
a program called Embedded EthiCS,
53:54
where the CS at the end of
53:56
ethics is capitalized,
53:58
so it's confusing: CS, computer science. That
54:02
will catch the kids' attention. No,
54:04
we don't have to catch their attention. What
54:07
we do is virtually
54:10
all of the courses in
54:12
computer science, the introductory courses, have
54:16
ethics components built in. So
54:19
a problem set, a
54:21
problem set this week, and that will
54:23
have a whole bunch of very
54:26
difficult problems,
54:30
computer science problems, and then it will have
54:32
a very difficult ethical challenge.
54:35
It will say, here's the situation.
54:37
You are programming a computer,
54:40
programming an AI system, and
54:43
here's the dilemma. Now
54:46
discuss. What are you going to do? So
54:49
we're trying to bring—this is
54:52
what Fei-Fei wanted. We're trying to
54:54
bring— This is new. —ethics
54:56
within the last couple of years,
54:59
two, three years. We're
55:01
trying to bring the attention to
55:03
ethics into the computer science
55:05
curriculum. And
55:08
partly that's because they're not—students
55:12
tend to follow the path
55:14
of least resistance. Well, they also— Let me put
55:16
it again. I'm saying things crudely
55:18
again and again, but someone must say it. They
55:20
follow the money. So as
55:23
long as this valley that surrounds
55:25
us rewards brilliant young
55:27
kids from Stanford with CS
55:29
degrees as richly as it
55:31
does, and it is amazingly
55:33
richly, they'll go get CS
55:35
degrees, right? Well,
55:37
I do think it's a little
55:39
crude. I
55:44
think money is one
55:46
surrogate measure of also
55:50
what is advancing in our
55:52
time. You know, the technology
55:54
right now truly is
55:56
one of the biggest drivers of the
55:58
changes of our— of our
56:01
civilization. When you're talking about what does
56:03
this generation of students talk about, I
56:05
was just thinking that 400 years ago,
56:07
you know, when the scientific
56:10
revolution was happening, what is in
56:12
the dorms? Of course it's all
56:14
young men in Cambridge
56:16
or Oxford, but that must also be
56:19
a very exciting and interesting time. Of
56:21
course there was no Internet or social
56:23
media to propel the travel
56:25
of the knowledge, but imagine there
56:28
was, you know, the
56:31
blossoming of discovery and of
56:33
our understanding of the physical
56:35
world. Right now we're
56:37
in that kind of great era
56:40
of technological blossoming. It's
56:42
a digital revolution. So the
56:45
conversations in the dorm, I think, is
56:48
a blend of the meaning of who
56:50
we are as humans as well as
56:52
our relationship to these technology we're building.
56:55
And so
56:58
a properly taught
57:01
technology can
57:04
subsume or embed philosophy,
57:08
literature. Of course, it can inspire.
57:10
And also think about it,
57:12
what follows scientific revolution is
57:14
a great period of change
57:16
of political, socio-economical change. And
57:19
we're seeing that. All for the better. Right.
57:22
And I'm not saying it's necessarily
57:25
for the better, but we
57:27
are seeing, we haven't even
57:29
reached the peak of the digital revolution, but
57:32
we're already seeing the political,
57:34
socio-economic changes. So this is
57:36
again back to Stanford HAI
57:38
when we founded it five years ago. We
57:41
believe all this is happening
57:44
and this is an institute
57:46
where these kind of conversations,
57:48
ideas, debates should
57:50
be taking place. Education programs should
57:53
be happening. And that's part of
57:55
the reason we did this.
58:00
Let me tell you, as you pointed
58:02
out, I just finished teaching a course
58:04
called Philosophy of Artificial Intelligence. About which
58:06
I found out too late, I would
58:08
have asked permission to audit your course,
58:10
John. No, now you're too old. And
58:15
about half of the students were computer
58:17
science students, who planned to be
58:19
computer science majors. Another
58:22
quarter planned to be
58:24
symbolic systems majors, which
58:26
is a major that is
58:29
related to computer science. And then
58:32
there was a smattering of others. And
58:36
these were people, every one of
58:38
them, at the end of the
58:40
course, and I'm not saying this to brag,
58:42
every one of them said, this is the
58:44
best course we've ever taken. And
58:47
why did they say that? It
58:49
inspired, it made them think.
58:53
It gave them a framework for
58:55
thinking, a framework for trying
58:57
to address some of these problems, some
58:59
of the worries that you've brought out today.
59:01
And how do we
59:04
think about them, and how do we
59:06
not just become panicked because
59:10
of some science fiction movie that
59:12
we've seen, or because we
59:14
read Ray Kurzweil. Maybe
59:18
it's just as well I didn't take the course.
59:20
I'm sure you're going to give me a C-minus at best.
59:24
Grade inflation. So
59:27
it's clear that these
59:29
kids, the students, are
59:34
looking for the
59:38
opening to think about these things
59:41
and to understand how to
59:43
address ethical questions, how
59:46
to address hard philosophical
59:48
questions. And
59:52
that's what they got out of the course. And
59:55
that's a way of looking for meaning in
59:57
this time. Yes, it is. Dr.
1:00:00
Fei-Fei Li and Dr. John
1:00:03
Etchemendy, both of the Stanford
1:00:05
Institute for Human-Centered Artificial Intelligence.
1:00:08
Thank you. Thank you, Peter. Thank you, Peter. For
1:00:11
Uncommon Knowledge and the Hoover Institution and Fox
1:00:13
Nation, I'm Peter Robinson.