Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:01
So, you've got an idea for a
0:03
business, the store of your dreams. There's
0:05
just one thing to figure out. Everything.
0:07
That's why Shopify's all-in-one commerce platform makes
0:10
it easy to sell online, in-person, and
0:12
everywhere else. Sell on social media, source
0:14
products with an app to get that
0:16
first sale feeling. It's the only solution
0:18
that gives you everything you need to
0:20
sell everywhere you want. So when you're
0:23
ready to bring your idea to life,
0:25
power it up with Shopify. Sign up
0:27
for a $1 per month trial period
0:29
at shopify.com/profits23. The
0:33
Economist So we've
0:35
just arrived at an
0:38
industrial estate outside of
0:41
Manchester. We're
0:43
looking at a sort of low, grey industrial building with
0:47
some huge tanks outside. Looks
0:49
very unassuming from the outside, but
0:52
there's something pretty special going on inside.
0:54
0:57
Ainslie
0:59
Johnston is a data journalist and science
1:01
correspondent for The Economist. Hello. Hello, Ainslie.
1:03
Hi, good to meet you. Hi, it's
1:05
very nice to meet you. Hi,
1:08
I'm Steve. Welcome to the UK Biobank
1:10
Imaging Centre. She recently
1:12
went to visit a brain imaging lab in
1:14
the north of England. The
1:17
UK Biobank imaging study ends up
1:19
with each participant contributing about 9,000
1:21
images. Dawood
1:23
Dassu is the head of imaging
1:25
operations at UK Biobank. These
1:27
are things that tell you about the size,
1:30
volume, the structure of the brain, but
1:32
also tell you about brain function as
1:34
well. So which parts of the brain
1:37
are active during certain tasks. And
1:39
we also have something which gives us a
1:41
measure of flow of blood in
1:43
key parts of the brain as
1:45
well. So each participant contributes just
1:47
from the brain, around two and a
1:50
half thousand variables to the dataset
1:52
that we upload for researchers to
1:54
use. The UK Biobank
1:56
maintains a huge database of
1:58
biomedical data. It collects... everything
2:00
from genome sequences to information
2:02
on people's diets. The imaging
2:05
study that Dawood is talking about
2:07
here aims to scan everything from
2:09
the hearts to the bones and
2:11
abdomens of all the participants. These
2:14
scans will help scientists delve into
2:16
the intricacies of one of the
2:19
most complicated objects in the entire
2:21
universe: the human brain.
2:25
[clip partly inaudible] ...which, in layman's terms, lights up the parts of the brain that were involved in decision making. They compare that to what was happening earlier on, when the same magnetic fields were being applied.
2:58
We all know that human
3:00
brains are remarkable. Somehow, from
3:02
a tangle of billions of
3:04
brain cells and a soup of
3:06
chemical reactions emerges a vast
3:08
range of skills: language, memory,
3:10
vision, the ability to process
3:12
information, the ability to control
3:14
muscles and much, much more.
3:17
And the sum is much greater
3:19
than the parts, because human brains
3:22
are also the center of what
3:24
we call intelligence. Human
3:26
intelligence has driven the success of
3:29
our species, which perhaps makes it
3:31
odd that we still have so
3:33
much to learn about what human
3:36
intelligence, in fact any intelligence, actually
3:38
is. But understanding
3:40
human intelligence has to be the
3:43
starting point if you want to
3:45
understand the artificial kind too. That's
3:48
our goal in this special four
3:50
part series on the science that
3:53
built the AI revolution. I'm
3:57
Alok Jha and this is Babbage from The
3:59
Economist. In this show, we'll
4:01
look at the very earliest AI systems
4:03
and how they took inspiration from the
4:05
human brain. This is
4:07
the first of four episodes in
4:10
which we'll examine the scientific ideas
4:12
and innovations that have led to
4:14
the current moment in AI. We're
4:17
going to get behind the hype, buzzwords and
4:19
jargon and explore eight ideas that we
4:21
think you need to know if you
4:24
want to understand how the generative
4:26
AI of today came to be.
4:30
We'll explore what artificial neural networks
4:32
are. The nerve cell will go
4:34
up-up-up-up-up-up-up,
4:36
up-up-up-up-up. When we
4:39
come to think about neurally inspired
4:41
artificial systems, the fact that it's sending
4:43
a pulse or it isn't is the critical insight
4:45
that gives us all the information processing
4:47
power that we use Now from the
4:49
earliest attempts to model the human brain
4:52
in silicon, the systems we were building
4:54
were so dumb and so weak and
4:56
so difficult to train, to the technologies
4:58
that enabled those models to be scaled
5:00
up. ImageNet, that was the turning point
5:03
of AI's history: recognizing
5:05
how critical it is to
5:07
use big data. Well,
5:09
it was finally, around a
5:11
decade ago, that AI got
5:13
astonishingly good. All of a
5:15
sudden things are working and people pay
5:17
attention to what we do. We have
5:20
a number of examples where computer
5:22
vision systems can beat human experts at
5:24
their own game. And how
5:26
those systems just kept on
5:28
getting better. The change from,
5:30
say, GPT-2 to GPT-3
5:32
was huge. The change from GPT-3
5:34
to GPT-4 was huge. I
5:37
did not think the large language models would
5:39
work as well as they do. I just thought,
5:41
you can't just throw the whole internet at it
5:43
and be able to get next-word predictions
5:45
that seem like a human, you know. I
5:47
was dead wrong. If
5:52
you want to understand the origins
5:54
of artificial intelligence, it's best to
5:56
start with the second of those
5:58
words. So our
6:01
first question in this
6:03
series is this: what
6:05
is intelligence? To
6:15
figure out exactly how the human brain
6:17
works, let's pick up where we left
6:19
off with our correspondent, Ainslie Johnston. The
6:26
UK Biobank Center just outside of
6:28
Manchester scans participants' brains seven days
6:30
a week as they work towards
6:33
their goal of imaging a hundred thousand people.
6:37
Steve Garrett, the imaging program
6:39
manager, explained the process the
6:41
participants undergo. We
6:44
are in the imaging clinic. We've
6:46
got participants in here who are coming
6:48
for around a sort of five-hour
6:51
visit. Are they doing tests
6:53
and things on the computers? We
6:55
have a touchscreen questionnaire which
6:57
gives a really comprehensive analysis of
6:59
everything about health and lifestyle, and
7:01
they also do the cognition tests.
7:06
Oh sure, I'll see. That's
7:08
so nice. One of the health research
7:11
assistants at the clinic ran me through
7:13
the tests that the participants do: a
7:15
twenty-five minute cognitive section with
7:17
some games, puzzles and memory tests as
7:19
well. So have a read of the
7:21
yellow and then when you're ready press
7:23
that smiley face. And popping
7:27
into the game, we'll have three pairs, right?
7:30
Okay, so I can see six cards in front of
7:32
me that have been turned over, and I've got to
7:34
find the pairs. [inaudible as the
7:36
game is played]
7:51
Okay, so I'm being asked to add the
7:54
following numbers together: one, two,
7:56
three, four, five.
7:59
Okay. So that equals 15. If
8:04
Truda's mother's brother is Tim's
8:07
sister's father, what relation
8:09
is Truda to Tim? Truda's
8:13
mother's brother. That's
8:16
Truda's uncle. Tim's
8:18
sister's father. That's Tim's father.
8:22
Truda's uncle is Tim's father. Truda
8:27
must be his aunt, I think. Oh
8:30
god. At this point,
8:32
I
8:35
think I need a pen and paper. I
8:38
feel like we've probably got enough of this and I
8:41
think I'm probably embarrassing myself. Uh
8:44
oh. These
8:51
tests are about a lot more than just making
8:53
fun of journalists though. The
8:55
scores from each of the tests help
8:57
to paint a unique picture of participants'
9:00
cognitive abilities. This is
9:02
powerful data for researchers, particularly
9:04
in combination with the biomedical data that's
9:06
about to be collected. And
9:09
then they'll go and get changed. And
9:11
after they're changed, one of them will go
9:13
for their brain scan. At
9:18
the end of a corridor full of warning
9:20
signs for strong magnetic fields is the brain
9:22
MRI machine. These
9:24
machines look like giant donuts. The
9:27
participant lies down on a bed and
9:29
then their head and shoulders are moved inside the bore
9:31
of the scanner. We
9:34
entered the control room next door. From
9:37
here, a radiographer controls the scanner,
9:40
checks the quality of the brain images that are
9:42
being collected, and makes sure that
9:44
the participant is happy and comfortable. How's
9:47
the participant now? Angela
9:50
Emmons, one of the radiographers, took me through
9:52
the process. It's a half
9:54
hour scan of the brain. First
9:56
25 minutes you need to keep nice and still. Then
10:00
there's a task coming up. The task
10:02
is just to look at the brain
10:05
when it's actually working. We run
10:08
an earlier sequence when
10:10
they are at rest and then just run two
10:12
minutes of that when they're undertaking a game of
10:14
snap. We show them a series of
10:16
shapes and we show them a
10:18
series of faces. Runs about two and
10:21
a half minutes and then when
10:23
that comes to an end they've got about another
10:25
two minutes left in the scanner. While
10:28
the participant's in the scanner what can you see in
10:30
the control room? Lots of images come
10:32
up, the images come up in real time. We
10:35
check the resolution, make sure
10:37
we've got good images, participants settled
10:40
and then just follow that through the
10:42
sequences. This
10:46
gives you an idea of intelligence. That's
10:49
so interesting, like the ones we heard at the start of
10:51
the podcast. You can look at
10:53
how much of that variation amongst
10:55
participants is explained by genome data.
10:57
You can look at our imaging
10:59
data. You might
11:02
even be looking at history as well as
11:04
the lifestyle, job, diet and things like that.
11:06
You could look at all of that as
11:08
well and I'm sure somebody will figure out
11:10
a way of looking at all of that
11:12
together. Using
11:16
the Biobank data, scientists have discovered that
11:18
having a larger brain and
11:20
in particular a larger frontal cortex
11:23
is associated with higher intelligence. There
11:26
are also certain patterns in how different parts
11:29
of the brain communicate with each other that
11:31
can predict people's scores on cognitive tests.
11:35
There's still a lot of variability in intelligence
11:37
that scientists can't explain using these measures of
11:39
the brain though. But
11:42
access to enormous data sets like the
11:44
UK Biobank is allowing scientists
11:46
to pick apart how the
11:48
tangle of neurons inside our heads has
11:51
enabled us to develop vaccines, send
11:53
a man to the moon and even create
11:56
AI. Lots
12:07
of researchers from around the world
12:10
use data from the UK Biobank
12:12
and other sources to investigate brain
12:14
intelligence. But intelligence in human brains
12:16
is not something that's easy to
12:18
pinpoint. There isn't one bit of
12:21
the brain that's responsible for it,
12:23
for example. And the more
12:25
you get into it, the harder it
12:27
gets to define what intelligence even is.
12:30
So let's take a step back
12:32
and look at how the brain works
12:34
at a more basic level. To
12:37
do that, I spoke to Daniel
12:39
Glazer. He's a neuroscientist at the
12:41
Institute of Philosophy, part of the
12:43
University of London. He works at
12:45
the intersection of neuroscience and Ai.
12:49
We know a lot about how the brain
12:51
is structured, and we know a lot about how
12:53
it works in the sense of how the
12:55
molecular level works. I can tell you in
12:57
exquisite detail about the structure of the individual
12:59
neurons and at the level of the whole
13:01
brain, I can tell you what the front
13:04
does, what the back does, what makes
13:10
the difference at the macroscopic levels. So although
13:12
I know all of these levels of description
13:14
of the brain, I can't give you a
13:17
coherent story that tells you how the overall
13:19
behavior arises from this exquisite detail that
13:21
I do know about the molecules. Let's
13:23
go into a bit of exquisite detail
13:25
then. Just describe the anatomy for me and
13:27
how the anatomy functions. So brains are
13:30
collections of neurons, which are nerve cells,
13:32
and while nerve cells exist throughout the
13:34
body, there are pain detectors and all sorts
13:36
of things like that, in the brain
13:38
they're all clumped together in a big
13:41
wodge. And the principal property that almost
13:43
all nerve cells have is that they
13:45
use electricity to send signals over a
13:47
distance, and there's two things that derive
13:49
from that. So one is that these cells
13:51
are often elongated. So most cells in the body are
13:54
kind of roundy, clumpy, they have a
13:56
shape like that; nerve cells characteristically have
13:58
a long extended process which we tend to call
14:00
an axon and you really can think
14:02
about this extended process like a wire.
14:04
And like a wire, nerve cells send
14:07
information along this long process using electricity.
14:09
So nerve cells are signalling devices that
14:11
get, if you like, information from one
14:13
bit of the cell to the other
14:15
bit of the cell along a long
14:17
bit called the axon and they use
14:19
electricity to do that. Just
14:21
in terms of how that manifests in sensing
14:23
the world, just explain to me how a
14:25
network of these cells smells
14:27
something or learns something.
14:30
I think to understand how this works, you
14:32
can actually go back in evolution around about
14:34
70 million years. You could use chemicals to
14:36
send information: ooh, there's something nasty there, pull
14:38
back and you could retract your feelers. But
14:40
that only works at very short distances and
14:42
for animals and cells to get bigger, organisms
14:44
to get bigger, they needed to communicate information
14:46
about smells, about predators, about food over longer
14:48
distances. And so what evolution did, if we
14:50
can say it that way, about 70 million
14:52
years ago, is to use some of these
14:54
proteins that were being used for signalling within
14:56
cells and wire them up
14:58
to an electrical signal. And then at the
15:01
other end they turned them back into chemical
15:03
information which they then used to set off
15:05
other cells in the network. And that insight
15:07
interestingly, which was about signalling, is
15:10
paralleled in the evolution, in human terms,
15:12
of what we would call telegraphy. If
15:14
you want to send reliably a signal
15:16
of long distances you want to be
15:18
using some kind of code, for example
15:20
Morse code. And so the first transatlantic
15:22
cable used pulses, dit-dit-dit, da-da-da, which could
15:24
be reliably read out at the other
15:26
end. And it turns out that 70
15:28
million years ago evolution came up with
15:30
the same insight. So the critical thing
15:32
about nerve cells is that they
15:35
use electricity to signal. But that code is not
15:37
a kind of more less, more
15:39
less, more, you know, it's not
15:41
a continuously modulated signal, it's pulses.
15:44
And this transmission of information by pulses,
15:46
either fires or it doesn't, is a
15:48
critical thing you need to know about
15:50
nerve cells. When we come to think
15:52
about neurally inspired artificial systems, the fact
15:54
that it's thresholding, it's sending a pulse
15:57
or it isn't, it's doing a yes-no
15:59
firing pattern, is the critical insight that gives
16:01
us all the information processing power that we use
16:03
now. And it's kind of, in a very sort
16:05
of crude way, kind of a digital signal in
16:07
that respect? Well, it is a digital signal
16:09
in the sense that the information is a one
16:12
or zero: the cell either fires or
16:14
it doesn't. And the place to think about nerve cells
16:16
is not so much to start within the brain
16:18
but to think about flexing a muscle. To
16:20
send a signal from your spinal cord to your
16:22
muscle in your arm. If you want the muscle to
16:25
contract more... I'll have a go at sounding like
16:27
a neuron for a second. The nerve cell will go
16:29
up-up-up-up-up-up-up-up-up-up.
16:31
But if you want to contract a little bit:
16:33
up... up... up. Similarly, if you
16:35
have a pain receptor and something's a bit painful you'll
16:38
get: dun...
16:40
dun... dun. But if something's really, really painful,
16:42
the nerve cell signals that like:
16:44
up-up-up-up-up-up
16:46
up-up-up-up. And so the
16:49
rate coding, we would say, the rate
16:51
at which things fire can
16:53
be a more subtle code. It's a yes-no
16:55
signal that contains information over time
16:57
rather than an amplitude-modulated smooth signal
17:00
as you might have in the nuances
17:02
of your voice.
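A minimal sketch of this idea of rate coding, with made-up rates and durations rather than anything from the episode: every time step either carries an identical pulse (1) or nothing (0), and the intensity of the signal is read back by counting pulses over time, not from the height of any single pulse.

```python
import random

def spike_train(rate_hz, duration_s=1.0, dt=0.001, seed=0):
    """Simulate a neuron's output: at each time step it either
    fires a pulse (1) or it doesn't (0). Intensity is encoded in
    how often it fires, not in the size of any single pulse."""
    rng = random.Random(seed)
    steps = int(duration_s / dt)
    return [1 if rng.random() < rate_hz * dt else 0 for _ in range(steps)]

def decoded_rate(train, duration_s=1.0):
    """Read the signal back by counting pulses over time."""
    return sum(train) / duration_s

mild = spike_train(rate_hz=10)     # "a bit painful": occasional pulses
severe = spike_train(rate_hz=100)  # "really painful": rapid pulses
# Every pulse in both trains is an identical yes-no event;
# only the firing rate distinguishes gentle from intense.
print(decoded_rate(mild), decoded_rate(severe))
```

Both trains are streams of the same all-or-nothing pulses; the only difference between the gentle and the intense signal is the rate at which they arrive.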
17:04
So in your brain, the neurons are very close together.
17:06
They exist in networks which represent
17:08
all sorts of functionality, memory, etc.,
17:11
in your brain. How, out
17:13
of the brain cells, do the neurons
17:15
work together to learn something, whether it's
17:17
a language or what part of the
17:20
world looks like or whatever else? When neurons
17:22
are connected to each other, individually
17:24
or in networks, there is a strength of
17:26
the connection, so you don't get the
17:28
same bang for your buck from each
17:30
of the cells that connects into a
17:32
particular cell. So imagine you've got
17:34
a cell, it's got thousands of other
17:37
cells connecting into it, and it connects to thousands of
17:39
other cells. Each
17:41
of the pulses from those cells
17:43
does not give you the same input to
17:45
the cell that it targets. So we can
17:47
control the amount of input that you get
17:49
from a cell by the strength of what
17:51
we call a synapse, the wire that
17:53
comes in. And if you like, to take
17:55
an analogy from humankind you might ask all
17:57
of your mates for a
18:00
recommendation. But you're gonna pay more attention to
18:02
one of your friends who's good with food
18:04
or likes that kind of cuisine, knows the
18:06
city than another. So they're all saying pizza
18:08
Burger, we should go to the Indian place,
18:10
we should go to that Asian restaurant. Whatever.
18:12
You're listening and you might say, well, I
18:14
hear all of those inputs but I'm gonna
18:16
up-regulate one of them, down-regulate the other.
18:18
So that's the strength of connections. And learning
18:20
is: it turns out that the restaurant you
18:22
chose was a good one. From your experience
18:24
you went to the restaurant, it was great, you
18:26
thought the restaurant was amazing, and then
18:28
you say, who was it recommended that restaurant? Right.
18:31
You know what? Next time I'm
18:33
looking for a recommendation for a restaurant, I'm gonna
18:35
up-weight Alex's signal compared
18:37
to the other guys who didn't win
18:39
me over. So cells that
18:41
fire together wire together: when a neuron
18:43
fires it says, okay, I got
18:45
excited. Now I'm asking what was the
18:47
input that got me to the place
18:49
I am? And I'm going to subtly
18:52
up-regulate those inputs so that in
18:54
future the ones that got me to
18:56
this good place are more likely to
18:58
get me going again. That learning that
19:00
strengthening of connection, at a chemical level, is
19:02
happening. At a chemical level
19:04
there are neurotransmitters, and it seems, generally
19:07
speaking, the structure of the dendrites,
19:09
so there are things called spines,
19:11
basically allows each
19:13
neuron that fires to release more neurotransmitter
19:16
to that cell. So it changes the
19:18
neurochemistry and, to some small extent, the
19:20
neuroanatomy. It really just changes this
19:22
microstructure of the neurons so that you
19:25
get more input to a particular cell from
19:27
the cells that fired previously. Let's zoom
19:29
out. People always ask this question about
19:32
intelligence in human brains. Where
19:34
does that come from, and all of
19:36
this? But like, if you look it
19:38
up online or elsewhere... Alok,
19:41
as I would with any respected
19:43
interviewer, I looked it up on Wikipedia before I
19:45
came out this morning, and if you look up
19:48
intelligence, it says it's
19:50
that thing which humans are good at, right? That's
19:52
a bit facetious but there is a sort of
19:54
sense. So for example, when we look for intelligence
19:56
in animals, or indeed in plants, there's some nice stuff
19:59
about forests being intelligent, that they can
20:01
speak across forests, and they kind
20:03
of think things through, and they're generous, and they look
20:05
after each other, and they feel pain when their fellows
20:07
are chopped down. When we say that,
20:09
when we look for intelligence in animals, broadly speaking, we're
20:12
looking for things that they do that are like things
20:14
that we do, right? So I can
20:16
do better than this, but actually as a
20:18
starting point, intelligence is what we think like.
20:20
And so just break that down, what does
20:22
intelligence mean? Even if we can't define it
20:24
exactly, what are the kind of components of
20:26
what we think of as intelligence? So intelligence
20:28
is the ability to think things through, and
20:31
the evidence for that is that you can apply it
20:33
to different domains, you can abstract things to
20:35
look at something and see their structure, to
20:37
apply it to other things, to bring knowledge
20:40
of different domains to bear on certain things,
20:42
that requires kind of memory and breadth of
20:44
reference and understanding. It turns out that language
20:46
is quite a useful tool in helping one
20:48
to be intelligent, so it's difficult maybe to
20:50
imagine a human or a creature that doesn't
20:52
have any kind of symbolic abstract thought like
20:54
language and is still intelligent, it seems to
20:57
be very helpful to do that. Although,
20:59
when we start to look at other organisms
21:01
like octopuses, they exhibit behaviors which you might
21:03
think of intelligent, they solve problems, they learn
21:06
from experience, they think things through, they try
21:08
stuff and try things again differently from that,
21:10
and they probably don't have internal language of
21:12
thought. It's interesting, Alok, if you think of
21:14
any given thing, so for example, the ability
21:16
to project into the future, to think about
21:18
a future, you might think of that planning
21:21
as an intelligent thing. The problem is,
21:23
as soon as you write down a single thing
21:25
that's about intelligence, you can usually
21:27
find an animal that does that particular thing, right?
21:29
So if you want planning, go for corvids like crow-like
21:31
creatures. We call crows intelligent, people say all the time.
21:34
Quite so, and that's because they share a thing which
21:36
we think of as intelligent ourselves, which is the
21:38
ability to plan, the ability, so for example, when crows
21:40
hide stuff, if they're observed hiding a thing by another
21:42
crow or in sometimes a different species, they'll kind
21:44
of wander away, and then when they're sure the person
21:46
who saw them like the piece of food is gone,
21:49
they'll go back and move the food hiding place
21:51
to a place somewhere else. Now why would you do
21:53
that? It's because you kind of have thought through that
21:55
when your back is turned, if you don't come
21:57
back soon, the person who saw you hiding it
22:00
is gonna come and move it. So we used
22:02
to think that only humans could do that. The
22:04
problem is once you, as you're encouraging me to
22:06
do, Alok, once you define a single thing, which
22:08
is, yeah, do you know what, intelligence is that,
22:10
I can probably find you an animal that can
22:12
do something like that. What I can't find you
22:15
an animal that can do is all the things
22:17
that we count as intelligent, but that's a
22:19
bit circular again because we call them intelligent because
22:21
we do them. Yeah, so it is a bit
22:23
reductive and it's not at all comprehensive in the
22:25
way that you can define
22:27
intelligence. But as scientists,
22:30
you want to try and test hypotheses. You wanna
22:32
try and measure specific things
22:35
in this sort of slightly confusing world.
22:38
So in terms of intelligence in humans, what
22:40
are the ways that neuroscientists or others would
22:42
try and measure that or test it? So
22:44
we can certainly look at what's going on
22:46
in people's brains when they do things that
22:48
we would consider intelligence. And we can also
22:50
particularly do that in the bits of brains
22:52
of which we have more than animals that
22:54
are less intelligent than we are. So we
22:56
can learn by looking at the bits of
22:58
the brain which are different in us from
23:00
monkeys, and we can draw out
23:02
the circuits which enable us to do that kind
23:04
of complex thought. I do think
23:07
that intelligence is something that allows us
23:09
to manipulate objects. It's very rare
23:11
for somebody to be just intelligent without using some
23:13
kind of external system, even if they've internalized it.
23:15
So language would be an example of an external
23:17
system which you put in your head. But actually
23:19
smart people use tools well. While
23:21
we're talking a lot, we've got somebody very friendly
23:24
in the room who's operating some complex sound recording
23:26
equipment, and you're using a Mac to structure your
23:28
thinking and look at the questions. That's
23:30
intelligence. We use these prosthetics. And actually again,
23:32
when we come to think about large language
23:35
models and the contemporary developments in AI, one
23:37
of the things that intelligent people like us
23:39
do is to make good use of these
23:41
tools. Now we also fool ourselves that they
23:43
might be intelligent too, but nobody
23:45
thinks that their phone is intelligent really, but
23:48
they use it to enhance their own intelligence
23:50
though your smartphone can defeat your intelligence
23:52
by too much scrolling. But you can use
23:54
it to extend yourself by judicious use of
23:57
Wikipedia on the fly or storing information in
23:59
a helpful way. And this ability to
24:01
use tools is something that we observe
24:03
in the history of man, actually, when
24:05
these frontal lobes developed, as something that
24:07
is a marker of a time when
24:09
our intelligence probably really took off. It's
24:12
interesting with the phone example, actually, isn't
24:14
it? A mobile phone that's connected to
24:16
the internet, basically a small computer has
24:18
memory, it has some sorts of
24:20
reasoning capabilities too. These markers, as you say, of
24:22
intelligence, but it doesn't have all of the things.
24:24
It doesn't plan or abstract things in the way
24:26
that humans do. But I guess it's a different
24:28
type of intelligence in that respect, but we
24:30
would never call it intelligent. You're right. In
24:32
general, that's right. I think it's an interesting
24:35
question about ascribing intelligence is worth pondering for
24:37
a second. Fast forwarding to
24:39
LLMs, artificial neural networks like
24:41
large language models and machine learning, I
24:43
think our inevitable ability, which we can't
24:45
turn off, to make them
24:47
seem intelligent, allows us to use
24:50
these tools more effectively. It doesn't
24:52
mean they are intelligent, but treating
24:54
them like they're intelligent enables us
24:56
to engage with them in more
24:58
effective ways. When we come to
25:00
ask, as I'm sure you will, Alok,
25:02
whether these machines are smart or not,
25:04
we must always beware of this innate
25:06
capacity of humans to ascribe intelligence to
25:08
others and to machines. That will
25:10
mislead us when we try to make judgments
25:13
about the new machines that we've built. All
25:15
right. Well, we've talked about the difficulty of
25:17
defining human intelligence. We talked about the difficulty
25:19
of actually trying to understand
25:21
it at all the different levels, from
25:24
the whole-brain level to the
25:26
cellular level. Clearly, huge amounts still to
25:29
learn. I guess if we try
25:31
to understand where all of
25:33
this knowledge leads into how to do
25:35
the artificial bit of the artificial intelligence.
25:38
When we're talking about computer scientists who
25:40
were looking for ways of being inspired
25:42
by intelligence to make artificial versions of
25:44
it, was it a good idea to
25:46
try and build artificial intelligences on the
25:48
human brain? I suppose it's the only
25:50
way they had, right? When
25:52
computer scientists tried to make smarter machines, one of
25:54
the observations that they made is that maybe what's
25:57
important about the way that humans think is
25:59
the wet stuff,
26:01
is the neurons. And so we can ask
26:03
what are the properties of neurons that they lit
26:05
upon and how did they implement
26:07
them and actually they did go right back
26:09
to basics. So to understand a neural network
26:11
in the sense of computers, that's the way
26:14
that most machine learning algorithms work. You really
26:16
just start with a neuron. It's a device
26:18
which takes inputs from a bunch of other
26:20
neurons. Not all the
26:22
neurons affect it to the same extent. Those
26:24
are called weights. This is true of a
26:26
tiny little worm: each of the neurons that
26:28
comes on to another neuron excites it to a
26:30
different extent. And it works out
26:33
on the basis of those inputs whether it's passed
26:35
a threshold for excitement or not. And if it
26:37
does, it goes boom, and that ping, that spike,
26:39
goes to the next one. Taking that architecture
26:42
and layering on top of it a learning
26:45
rule which, as we said before, is things
26:47
that fire together wire together. So by
26:49
adjusting the weights between the neurons to
26:51
up-regulate things that tended to make
26:53
things fire in a good context. Those
26:56
two simple insights give you quite a
26:58
powerful computational learning
27:00
machine. Now when we
27:02
talk about these neural networks, they're actually
27:04
being implemented in digital architecture. So funnily
27:06
enough, you've got a good old-fashioned digital
27:08
computer like the kind that works in
27:10
your desktop PC or in your phone
27:13
But it's running a simulation of
27:15
these very simple neurons and again
27:17
if you think about the exquisite
27:19
microarchitecture of human neurons, it would
27:22
take you know years to describe
27:24
even a single human neuron. So
27:26
no, we abstract it into some
27:28
inputs, some weights, a firing pattern.
27:30
So this very simplified neuron
27:32
is at the basis of all
27:34
of the artificial neural networks that
27:37
underlie machine learning and current
27:39
AI.
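The simplified neuron described here — some inputs, some weights, a threshold, a yes-no firing decision — together with a crude "fire together, wire together" learning rule, can be sketched like this (the weights and numbers are illustrative inventions, not code from any real system):

```python
def fires(inputs, weights, threshold=1.0):
    """A simplified artificial neuron: sum the weighted inputs and
    answer yes or no depending on whether the threshold is passed."""
    return sum(x * w for x, w in zip(inputs, weights)) >= threshold

def wire_together(inputs, weights, lr=0.1):
    """A crude Hebbian rule: if the neuron fired, up-regulate the
    weights of the inputs that were active when it fired."""
    if fires(inputs, weights):
        return [w + lr * x for x, w in zip(inputs, weights)]
    return weights

# Present the same input pattern repeatedly: the weights of the
# active inputs strengthen, so the neuron fires ever more readily.
w = [0.6, 0.6, 0.2]
for _ in range(5):
    w = wire_together([1, 1, 0], w)
print(w)  # the first two weights have grown; the third is untouched
```

Stacking many such neurons in layers, and adjusting the weights with a learning rule, is the basic recipe behind the artificial neural networks discussed in the rest of the series.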
27:56
So next, we'll continue that thought and move
27:58
from human cells to silicon chips,
28:00
and look at the first attempts to
28:03
create artificial versions of the human brain
28:05
and one of the godfathers of modern
28:07
A I will tell us about the
28:09
first time his computer system showed some
28:11
of the skills that deglaze it has
28:13
been telling about. The. So Common
28:15
First though, just
28:18
a quick reminder that this is a
28:20
free episode of Babbage. To continue
28:22
listening to our special series on AI,
28:24
you'll need to sign up to Economist
28:26
Podcasts Plus, and now's the perfect time
28:28
to do so. We've got a sale on:
28:30
subscribe for less than two dollars fifty
28:32
a month. But hurry, it ends on
28:34
Sunday the seventeenth of March. And
28:36
as a subscriber, you won't only
28:39
have access to all of our
28:41
specialist weekly podcasts. You'll be able
28:43
to join us at Babbage's first
28:45
ever live event, following the conclusion
28:47
of this very series, that's going
28:49
to be held on Thursday, April
28:51
the fourth, where we're going to
28:53
answer as many of your questions
28:55
as we can on the science behind
28:57
artificial intelligence. Don't miss out. You
28:59
can submit your questions, check the
29:01
start time in your region, and
29:03
book your place by going to
29:05
economist.com/AI, that's AI, all one word. Link
29:08
is in the show notes. Today
29:27
on Babbage, we've heard about how
29:29
the brain works, and we're trying
29:31
to unpack how computer science has
29:33
set out to build intelligent systems. They
29:35
were inspired by what neuroscientists had
29:37
already found. But rather
29:39
than building an artificial version of
29:42
a physical nerve cell, computer scientists
29:44
wanted to build virtual ones. And
29:47
that leads us to the next step as
29:49
we build our understanding of the science behind
29:51
modern AI. Question
29:54
two: what was the
29:56
first artificial neural network? To
30:15
answer this question, we travelled across
30:17
the Atlantic Ocean to Boston in Massachusetts.
30:21
Main Street, which carries traffic after crossing
30:23
the Charles River from central Boston, is
30:25
awash with offices of some of the
30:27
world's biggest tech firms, Google,
30:29
Facebook and IBM. This
30:32
part of the city has been called the
30:34
most innovative square mile on the planet. Companies
30:38
are lured in because the
30:40
area is dominated by two
30:42
institutions, Harvard University and the
30:44
Massachusetts Institute of Technology, or
30:46
MIT. Few
30:48
places on the planet have played
30:50
a more central role in the
30:52
evolution of modern artificial intelligence. Our
30:55
quest to mathematically think about
30:58
intelligence and model our
31:00
brains goes back to 1943, where
31:06
Warren McCulloch and Walter Pitts introduced
31:09
the concept of neural networks. Daniela
31:12
Rus is the director of the
31:14
MIT Computer Science and Artificial Intelligence
31:16
Laboratory, also known as CSAIL.
31:19
And they published the first mathematical
31:21
model that at
31:24
that time was believed to capture
31:26
what is happening in our brain.
31:29
If the way that neurons work in the
31:31
brain can be explained by mathematics, then
31:34
the brain's network surely could be
31:36
replicated using computer code. Professors
31:40
McCulloch and Pitts thought that
31:42
machines with brain-like architecture could
31:44
have a lot of computational
31:46
power. The early
31:48
artificial neuron was a very
31:51
simple mathematical model. You
31:53
had a computational unit that
31:56
took as input data from other
31:58
sources, maybe other units. The
32:01
input was weighted
32:04
by parameters. And then
32:06
inside the artificial neuron, the computation
32:08
was very simple. It was a
32:10
thresholding computation: essentially, if the
32:13
sum total of what came
32:15
in was larger than a given
32:17
threshold, the neuron output
32:19
1, otherwise the neuron output
32:22
0. So
32:24
the computation was discrete and very
32:26
simple, essentially a step function. You're
32:29
either above or below a value.
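[Daniela Rus's description of that discrete thresholding computation can be written directly as code. The following is an illustrative sketch: the weights and thresholds are hand-picked to show what a single unit can do, not values from the 1943 paper.]

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 if the weighted sum of the
    inputs reaches the threshold, otherwise output 0 (a step function)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, a single unit computes
# simple logic functions over binary inputs:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```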
32:33
Neurons in the human brain also
32:35
operate using discrete functions, which Dan
32:37
Glaser mentioned earlier. They either
32:39
fire or they don't fire. A psychologist
32:43
at Cornell University called Frank Rosenblatt
32:45
went on to develop this model
32:48
to create an artificial neuron, a
32:51
mathematical function that he called a
32:53
perceptron. At first,
32:55
the perceptron seemed promising. After
32:57
learning some examples, perceptrons could do
33:00
some basic things, giving
33:02
a yes or no answer to
33:04
an input that hadn't been previously
33:06
analysed by the machine. Let's say
33:08
you've fed the model some data about
33:10
the strength and speed of athletes in
33:12
a sports team. Learning
33:14
from those two variables, the model could
33:16
answer whether or not a new athlete
33:18
would be likely to be accepted into
33:20
a team. As the
33:22
field matured, however, flaws
33:24
in the perceptron became clearer. Because
33:27
perceptrons only worked like a single
33:29
artificial neuron, they couldn't be
33:32
trained to recognise patterns that were more
33:34
complex. What about, for example, athletes who
33:36
were neither particularly fast nor strong but
33:39
had really good technique? In
33:42
1969, Marvin Minsky and
33:45
Seymour Papert co-authored Perceptrons,
33:48
which is a book that demonstrated
33:50
that mathematically, if all
33:52
you have is a single layer
33:54
neural network, then you could only
33:57
compute linear functions. And for a linear
34:00
function you can have a closed-form solution;
34:02
there's no need for machine learning. And actually
34:05
this work triggered the
34:07
first AI winter because
34:09
people lost faith in what would be
34:11
possible. It
34:14
became clear that if artificial neural
34:16
networks were to work, they'd have
34:18
to have more layers of perceptrons
34:21
to deal with the complexity of
34:23
the real world. During
34:26
the AI winter that Daniela mentioned,
34:28
funding dissipated and interest in
34:30
the very idea of creating
34:33
artificial neural networks dwindled. There
34:36
was very little progress until the
34:38
1980s. But
34:40
some researchers did persist within that
34:42
time and they found
34:45
other pathways to creating intelligent
34:47
machines. In 1950 Alan
34:49
Turing introduced a
34:52
very important benchmark for machine
34:55
intelligence in his paper
34:57
Computing Machinery and Intelligence. This
34:59
is the paper that introduced the Turing
35:02
test. The idea
35:04
that you know whether
35:06
a machine is intelligent or not
35:08
if you aim to converse with
35:11
it and if you cannot tell whether on
35:13
the other side you're talking to
35:15
a machine or to a person. So
35:17
there have been many efforts to
35:20
build intelligent machines that meet
35:22
the Turing test in particular
35:24
at MIT. One
35:27
of the earliest systems called
35:29
ELIZA was introduced, and
35:32
ELIZA enchanted people for
35:34
a long time. ELISA was one of
35:36
the first so-called chatterbots which took the
35:38
world by storm in 1966. Its creator
35:41
Joseph Weizenbaum
35:44
put the bot in the role
35:46
of a psychotherapist. A
35:48
user would type a message on
35:50
an electric typewriter and the
35:52
machine would reply. Men
35:57
are all alike. In what way? They're
36:01
always talking about something or other.
36:03
Can you think of a specific
36:06
example? Well, my boyfriend made me come
36:08
here. Your boyfriend made you come
36:10
here? He says I'm depressed
36:13
much of the time. I am
36:15
sorry to hear you are
36:17
depressed. It's true. I am
36:19
unhappy. The
36:22
early AI systems used what is
36:24
called rule-based
36:26
systems. So you need to solve
36:28
a problem, you identify some rules,
36:30
and then you write the program
36:32
that makes decisions according to the
36:34
set of rules. This is
36:36
sometimes called good old-fashioned AI.
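[As an illustration of that rule-based approach — a toy in the spirit of what's described here, not Weizenbaum's actual ELIZA script — a keyword-and-canned-response system can be tiny. All rules and phrasings below are invented for illustration.]

```python
# A toy rule-based responder: match a keyword, apply a canned rule.
RULES = [
    ("i am", "Why do you say you are{rest}?"),
    ("my", "Tell me more about your{rest}."),
    ("always", "Can you think of a specific example?"),
]

def respond(message):
    """Return a scripted reply by matching the first applicable rule."""
    text = message.lower().rstrip(".!?")
    for keyword, template in RULES:
        if keyword in text:
            rest = text.split(keyword, 1)[1]  # reflect the user's own words
            return template.format(rest=rest)
    return "Please go on."  # default when no rule matches

print(respond("I am unhappy."))  # -> Why do you say you are unhappy?
print(respond("They're always talking about something."))
```

The program has no understanding at all: it only reflects the user's own words back, which is exactly why the effect on users described next was so surprising.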
36:42
ELIZA didn't use an artificial
36:44
neural network, and it didn't learn
36:46
from its input. Instead, the language
36:49
model recognized keywords and reflected
36:51
them back in the form of
36:53
simple phrases or questions, supposedly
36:55
modeling the kind of conversation you'd expect
36:58
with a therapist. It was almost like
37:00
a mirror. ELIZA did not
37:02
pass the Turing test, which was in
37:04
fact the point. The researchers
37:06
behind the bot designed ELIZA to
37:09
show how superficial the state of
37:11
human-to-machine conversation really was, but
37:13
in reality it had the opposite
37:16
effect. People became engaged
37:18
in long, deep conversations with
37:20
the computer program. You
37:22
know that's really incredible. It's as if it
37:24
really understood what I was saying, but it
37:26
doesn't. Of course, it's just a bag of
37:29
tricks. It doesn't have the
37:31
faintest idea what I'm talking about. ELIZA
37:35
was not an intelligent machine, but
37:37
it made people stop and think
37:39
about what the world might be
37:41
like if artificial intelligence did come
37:43
along. It was perhaps
37:45
also the first time that humans showed
37:47
how willing we all are to believe
37:49
that computers could be intelligent if they
37:51
spoke to us in our own language. It
37:54
is another example of what Dan
37:56
Glaser described earlier as the innate
37:58
desire of humans to anthropomorphize everything
38:01
in the world around us. Of
38:04
course in the decades since
38:06
ELIZA, chatterbots became chatbots, and
38:09
that's not all. These days our
38:12
conversations with chatbots easily
38:15
pass the Turing test. But
38:19
how did the skills of chatbots that we
38:22
see today emerge from the primitive AI
38:24
of the 1960s? What was
38:27
it that made the theory of
38:29
artificial neural networks actually work in
38:31
practice? At
38:33
its core the answer lies in the insight
38:35
that artificial neurons had to be layered on
38:37
top of each other like neural
38:39
networks are in the human brain. And
38:42
so at the end of the 1960s researchers
38:45
came up with the idea of
38:47
the deep neural network. The
38:50
deep learning revolution that came several
38:52
decades later happened in no
38:54
small part thanks to three scientists who
38:56
would later become known as the godfathers
38:59
of AI. The systems we
39:01
were building were so dumb and
39:03
so weak and so difficult to train.
39:06
That's one of the so-called godfathers, Yoshua
39:08
Bengio. He is a computer scientist at
39:10
the University of Montreal and he was
39:12
a key figure in the development of
39:14
deep learning. What got me
39:16
really excited when I started reading some
39:18
of the early neural net papers from
39:21
the early 80s is
39:23
the idea that our
39:25
own intelligence with our
39:28
brain could be explained by a
39:30
few principles just like think
39:32
of how physics works. Could it be
39:34
possible that we would
39:37
do something similar for understanding
39:39
intelligence and of course take
39:41
advantage of those principles to design intelligent machines?
39:44
And in fact it goes also in the
39:46
other direction because there are experiments we can
39:48
run in computers that we can't run on
39:50
real brains and so the
39:52
work we've been doing in
39:55
AI is informing
39:57
also theories of how the brain
39:59
works. It's a two-way street. So that
40:01
synergy and that
40:03
idea that maybe there is an explanation
40:06
for intelligence that we can communicate as
40:08
a scientific theory is
40:10
really what got me into this field. Talk
40:13
to us about what the challenge was
40:15
in trying to model the human brain
40:17
in silicon. Well, we didn't try
40:19
to model the human brain in silicon because
40:21
that would have seemed too daunting
40:23
a task. Instead,
40:26
we looked at the simplest possible
40:30
models that come from
40:32
neuroscience and see
40:34
how we can tweak them. In
40:36
the early days when I was doing my PhD,
40:39
we were trying to use these
40:42
systems to classify simple patterns
40:44
like shapes of characters
40:47
or phonemes, using the
40:49
sound recording of me saying,
40:52
R, E, O. Can
40:54
a neural network, which is this very
40:57
simplified calculation inspired by neurons in
40:59
the brain, can a neural
41:02
network learn to distinguish between
41:04
those different categories of objects in the
41:06
input? I've been working on
41:08
this from the mid 80s to the
41:11
mid 2000s. What
41:13
were some of the first things you tried to do
41:15
with the neural networks to prove that they could be
41:18
useful? In
41:20
the 90s, I
41:22
worked on these pattern recognition tasks,
41:24
both speech and image
41:27
classification.
41:30
Industrial applications emerged. For example,
41:32
I worked on a project
41:34
to use neural nets for
41:37
classifying amounts on checks to
41:39
automate the process of making sure that
41:41
a check you deposited at the bank has
41:43
the right amount and
41:46
that was actually deployed in banks
41:48
in the 90s and
41:50
processed a large number of checks. All of
41:52
the approaches that had been tried before didn't
41:55
do very well because there is
41:57
so much variability between people. We
42:00
write in different ways. So
42:03
it was not trivial, and
42:05
it is something that had
42:07
a lot of economic value already
42:10
to address that challenge. Next
42:15
week, we'll look at exactly how
42:17
artificial neural networks allowed machines to
42:19
learn, and we'll also examine
42:21
the clever maths that allowed all of
42:23
this to happen. People
42:25
realize that if you could insert a
42:28
middle layer, which is sometimes called a
42:30
hidden layer, these systems
42:32
could actually compute many more functions.
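[The classic example of a function that needs that hidden layer is exclusive-or (XOR), which no single-layer network of threshold units can compute because its outputs are not linearly separable. Here is a sketch with hand-picked weights, chosen for illustration rather than learned:]

```python
def step(total, threshold):
    """Step function: 1 if the total reaches the threshold, else 0."""
    return 1 if total >= threshold else 0

def xor(a, b):
    """XOR via one hidden layer of threshold units: two hidden units
    compute OR and AND, and the output unit fires for OR-but-not-AND."""
    h1 = step(a + b, 1)      # OR: fires if either input is on
    h2 = step(a + b, 2)      # AND: fires only if both are on
    return step(h1 - h2, 1)  # OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```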
42:36
That's next time on Babbage. Thanks
42:42
to Daniel Glaser, Daniela Rus, Yoshua
42:44
Bengio, The Economist's Ainslie Johnston, and
42:46
all the people she spoke to
42:49
at the UK Biobank. And
42:51
thank you for listening. To follow the next
42:53
stage of our journey to understand modern
42:55
AI, subscribe to Economist Podcasts
42:58
Plus. Find out more by clicking the
43:00
link in the show notes. Babbage
43:02
is produced by Jason Haskin and Kannar
43:04
Patel, with mixing and sound design by
43:06
Nuka Rofast. The executive
43:08
producer is Hannah Mourinho. I'm
43:11
Alok Jha, and in London, this
43:13
is The Economist. Thank
43:22
you.