Killer Robots and the Future of War

Released Monday, 26th October 2020

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:15

Pushkin. You're

0:25

listening to Brave New Planet, a

0:28

podcast about amazing new technologies

0:30

that could dramatically improve our world. Or

0:33

if we don't make wise choices, could

0:35

leave us a lot worse off. Utopia

0:38

or dystopia. It's up to us.

0:47

On September twenty sixth, nineteen

0:49

eighty three, the world almost

0:52

came to an end. Just

0:54

three weeks earlier, the Soviet Union

0:57

had shot down Korean Airlines

0:59

Flight Double O seven, a passenger

1:02

plane with two hundred and sixty nine

1:04

people aboard. I'm coming before you tonight

1:06

about the Korean Airline massacre. President

1:09

Ronald Reagan addressed the nation

1:11

the attack by the Soviet Union against

1:14

two hundred and sixty nine innocent men,

1:16

women, and children aboard an unarmed

1:18

Korean passenger plane. This

1:20

crime against humanity must never be forgotten

1:23

here or throughout the world. Cold

1:25

War tensions escalated, with the two

1:28

nuclear powers on high alert. World

1:31

War three felt frighteningly

1:33

possible. Then, on

1:35

September twenty sixth, in a command

1:38

center outside of Moscow, an

1:40

alarm sounded. The Soviet

1:42

Union's early warning system reported the

1:45

launch of multiple intercontinental

1:47

ballistic missiles from bases in the

1:49

United States. Stanislav

1:52

Petrov, a forty four year

1:54

old member of the Soviet Air Defense

1:56

Forces, was the duty officer

1:59

that night. His role was

2:01

to alert Moscow that an attack

2:03

was under way, likely triggering

2:06

Soviet nuclear retaliation and

2:08

all out war. Petrov

2:12

spoke with BBC News in twenty

2:14

thirteen. The sirens sounded

2:17

very loudly, and I just sat there for

2:19

a few seconds, staring at the screen

2:21

with the word launch displayed

2:23

in bold red letters.

2:26

A minute later, the siren went off again.

2:29

The second missile was launched, then

2:31

the third, and the fourth, and the fifth.

2:34

The computers changed their alerts from

2:37

launch to missile strike. Petrov's

2:40

instructions were clear, report

2:42

the attack on the motherland, but something

2:45

didn't make sense. If

2:47

the US were attacking, why only

2:49

five missiles rather than an entire

2:52

fleet? And then I made

2:54

my decision. I would not trust

2:56

the computer. I picked up the

2:58

telephone handset, spoke to

3:00

my superiors and reported that

3:02

the alarm was false. But

3:05

I myself was not sure.

3:07

Until the very last moment. I

3:10

knew perfectly well that nobody

3:12

would be able to correct my mistake if

3:15

I had made one. Petrov,

3:20

of course, was right. The false alarm

3:22

was later found to be the result of a rare

3:24

and unanticipated coincidence: sunlight

3:27

glinting off high altitude clouds over

3:29

North Dakota at just the

3:31

right angle to fool the Soviet satellites.

3:34

Stanislav Petrov's story comes up

3:36

again and again in

3:38

discussions of how far we should go in

3:41

turning over important decisions, especially

3:44

life and death decisions, to artificial

3:47

intelligence. It's not an easy call.

3:50

Think about the split second decisions

3:52

in avoiding a highway collision.

3:55

Who will ultimately do better a

3:57

tired driver or a self driving

3:59

car? Nowhere

4:01

is the question more fraught than

4:04

on the battlefield. As technology

4:06

evolves, should weapons systems

4:08

be given the power to make life and death

4:10

decisions? Or do we need to

4:12

ensure there's always a human, a

4:15

Stanislav Petrov, in the loop.

4:18

Some people, including winners of the

4:20

Nobel Peace Prize, say that

4:22

weapons should never be allowed to make

4:25

their own decisions about who or

4:27

what to attack. They're

4:29

calling for a ban on what they

4:31

call killer robots. Others

4:35

think that idea is well meaning but

4:37

naive. Today's

4:40

big question lethal autonomous

4:43

weapons. Should they ever

4:46

be allowed? If so, when?

4:50

if not, can we stop them?

5:00

My name is Eric Lander. I'm a scientist who

5:02

works on ways to improve human health.

5:04

I helped lead the Human Genome Project, and

5:07

today I lead the Broad Institute of MIT and

5:09

Harvard. In the twenty first

5:11

century, powerful technologies

5:13

have been appearing at a breathtaking pace,

5:16

related to the Internet, artificial intelligence,

5:19

genetic engineering, and more. They

5:21

have amazing potential upsides,

5:24

but we can't ignore the risks that come

5:26

with them. The decisions aren't just

5:28

up to scientists or politicians,

5:31

whether we like it or not, we all

5:33

of us are the stewards of a brave

5:36

New Planet. This generation's

5:38

choices will shape the future as

5:40

never before. Coming

5:47

up on today's episode of Brave New

5:49

Planet fully autonomous

5:52

lethal weapons or killer

5:55

robots, we

5:57

hear from a fighter pilot about

5:59

why it might make sense to have machines in

6:01

charge of some major battlefield

6:04

decisions. I know people who

6:06

have killed civilians,

6:09

and in all cases

6:11

where people made mistakes, it was

6:13

just too much information. Things were happening

6:16

too fast. I speak with one of the world's

6:18

leading roboethicists. Robots

6:20

will make mistakes too, but hopefully, if

6:22

done correctly, they will make far far less mistakes

6:24

than human beings. We'll

6:27

hear about some of the possible consequences

6:29

of autonomous weapons. Algorithms

6:32

interacting at machine speed faster

6:35

than humans can respond might result

6:37

in accidents, and that's something like a

6:39

flash war. I'll speak with a

6:42

leader from Human Rights Watch. The campaign

6:44

to stop Killer Robots is seeking new

6:46

international law in the form of a new

6:49

treaty. And we'll

6:51

talk with former Secretary of Defense Ash

6:53

Carter. Because I'm the guy who has to go out

6:56

the next morning after some women and children

6:58

have been accidentally killed. And

7:00

suppose I go out there, Eric, and I say,

7:02

oh, I don't know how it happened. The machine

7:04

did it. I would be crucified. I should

7:07

be crucified. So

7:09

stay with us. Chapter

7:14

one, Stanley

7:17

the self driving Car. Not

7:21

long after the first general purpose computers

7:23

were invented in the nineteen forties, some

7:26

people began to dream about fully

7:28

autonomous robots, machines

7:31

that used their electronic brains to

7:33

navigate the world, make decisions,

7:35

and take actions. Not surprisingly,

7:38

some of those dreamers were in the US Department

7:40

of Defense, specifically the

7:43

Defense Advanced Research Projects

7:45

Agency or DARPA, the

7:47

visionary unit behind the creation of the

7:49

Internet. They saw a lot

7:51

of potential for automating battlefields,

7:54

but they knew it might take decades. In

7:57

the nineteen sixties, DARPA

7:59

funded the Stanford Research inst to

8:01

build Shaky the Robots,

8:04

a machine that used cameras to

8:06

move about a laboratory. In the nineteen

8:08

eighties, it supported universities

8:11

to create vehicles that could follow lanes

8:13

on a road. By the early

8:16

two thousands, DARPA decided

8:18

that computers had reached the point that

8:20

fully autonomous vehicles able

8:23

to navigate the real world might

8:25

finally be feasible. To

8:27

find out, DARPA decided

8:30

to launch a race. I talked

8:32

to someone who knew a lot about

8:34

it. My name is Sebastian Thrun.

8:37

I'm the smartest person on the planet and the best looking.

8:39

Just kidding. Sebastian Thrun

8:42

gained recognition when his autonomous car,

8:44

a modified Volkswagen with a computer

8:47

in the trunk and sensors on the roof,

8:50

was the first to win the DARPA Grand

8:52

Challenge. The Grand Challenge was this momentous

8:55

government sponsored robot race, this epic

8:57

race: can you build a robot that can

9:00

navigate one hundred and thirty punishing

9:02

miles through the Mojave Desert? And

9:05

the best robot made it like seven miles

9:07

and then literally went up in flames. Many,

9:10

many researchers had concluded it can't be done.

9:12

In fact, many of my colleagues told

9:14

me I'm going to waste my time and my name

9:17

if I engaged in this kind of super hard race.

9:19

And that made you more interested in doing it, of course, and

9:21

so you built Stanley. Yeah,

9:24

So my students built Stanley. It started as

9:26

a class. And Stanford students

9:28

are great. If you tell them go to the moon

9:31

in two months, they're going to go to the moon. So

9:33

then in two thousand and five, the actual

9:36

government sponsored race, how

9:38

did Stanley do? We came in first,

9:41

so we focused

9:43

insanely strongly on software and specifically

9:45

on machine learning, and that differentiated us

9:48

from pretty much every other team that focused

9:50

on hardware. But the way I look at this is there

9:52

were five teams that finished this grueling

9:54

race within one year, and it's the

9:56

community of the people that build all these

9:58

machines that really won. So

10:00

nobody made it a mile in the first

10:03

race, and five different

10:05

teams made it more than one hundred and thirty

10:07

miles through the desert, just a year later, Yeah,

10:09

that's kind of amazing to me.

10:12

That just showed how fast this technology

10:14

can possibly evolve. And what's

10:17

happened since then? I worked

10:19

at Google for a while and eventually

10:21

this guy, Larry Page came to

10:23

me and says, hey, Sebastian, I thought about this long and

10:26

hard. We should build a self driving car

10:28

that can drive on all streets in the world. And

10:30

with my entire authority, I said that

10:32

cannot be done. We had just driven

10:35

a desert race; there were never pedestrians

10:37

and bicycles and all the other people that we could kill

10:40

in the environment. And for me, just the sheer imagination

10:42

we would drive a self driving car to San Francisco

10:45

sounded always like a crime. So you

10:47

told Larry Page, one of the two

10:50

co founders of Google, that

10:52

the idea of building a self driving car that could

10:54

navigate anywhere was just not

10:57

feasible. Yeah. He later came back and said, hey Sebastian,

10:59

look, I trust you, you're the expert,

11:01

but I want to explain to Eric Schmidt, then the Google

11:03

CEO, and to my co founder, Sergey Brin, why

11:06

it can't be done. Can you give me the technical

11:09

reason? So I went home in agony,

11:12

thinking about what is

11:15

the technical reason why it can't be done?

11:17

And I got back the next day and he said, so,

11:20

what is it? And I said, I

11:22

can't think of any. And lo and

11:24

behold, eighteen months later, with roughly ten engineers,

11:27

we drove pretty much every street in California.

11:30

Today, autonomous technology is

11:33

changing the transportation industry. About

11:35

ten percent of cars sold in the US are

11:37

already capable of at least partly

11:40

guiding themselves down the highway.

11:42

In twenty eighteen, Google's

11:44

self driving car company Waymo

11:47

launched a self driving taxi service

11:49

in Phoenix, Arizona, initially

11:52

with human backup drivers behind every wheel,

11:55

but now sometimes even without.

11:57

I asked Sebastian why he thinks

11:59

this matters. We lose more

12:03

than a million people in traffic accidents

12:05

every year, almost exclusively

12:08

to us not pay attention. When

12:10

it was eighteen, my best friend died

12:13

in a traffic accident and it was

12:15

a split second poor decision

12:18

from his friend who was driving in who also died. To

12:21

me, this is just unacceptable. Beyond

12:23

safety, Sebastian sees

12:26

many other advantages for autonomy.

12:28

During a commute, you can do something else

12:31

that means you're probably willing to commute

12:34

further distances. You could sleep, or watch the movie, or

12:36

do email. And then eventually people

12:38

can use cars that today can't operate

12:41

them blind people, old

12:43

people, children, babies.

12:45

I mean, there's an entire spectrum of people that are currently

12:47

excluded. They would now be able to be mobile.

12:53

Chapter two, the Tomahawk.

12:56

So DARPA's efforts over the decades

12:58

helped give rise to the modern self

13:00

driving car industry, which promises

13:03

to make transportation safer, more

13:05

efficient, and more accessible. But

13:08

the agency's primary motivation was

13:10

to bring autonomy to a different challenge,

13:13

the battlefield. I traveled to Chapel

13:15

Hill, North Carolina, to meet with someone who

13:18

spends a lot of time thinking about the consequences

13:20

of autonomous technology. We

13:22

both serve on a civilian advisory

13:25

board for the Defense Department. My

13:27

name is Missy Cummings. I'm a

13:29

professor of electrical and computer engineering

13:32

at Duke University, and I

13:34

think one of the things that people find

13:36

most interesting about me is that

13:38

I was one of the US military's

13:40

first female fighter pilots in the Navy.

13:43

Did you always want to be a fighter

13:45

pilot? So when I was growing up,

13:47

I did not know that women could

13:50

be pilots, and indeed, when I was

13:52

growing up, women could not be pilots. And it

13:54

wasn't until the late

13:57

seventies that women actually became

13:59

pilots in the military.

14:02

So I went to college.

14:05

In nineteen eighty four, I was

14:07

at the Naval Academy, and then of course,

14:09

in nineteen eighty six Top Gun came out, and then

14:11

I mean, who doesn't want to be a pilot

14:13

After you see the movie Top Gun. Missy

14:16

is tremendously proud of the eleven years

14:18

she spent in the Navy, but she also

14:20

acknowledges the challenges of being part

14:22

of that first generation of women

14:24

fighter pilots. It's no secret

14:27

that the reason I left the military was

14:29

because of the hostile

14:31

attitude towards women. None

14:33

of the women in that first group stayed in to make

14:36

it a career. The guys were very angry

14:38

that we were there, and

14:40

I decided to leave when they started sabotaging

14:43

my flight gear. I just thought, this

14:45

is too much. If something really bad happened,

14:47

you know, I would die. When

14:49

Missy Cummings left the Navy, she decided

14:52

to pursue a PhD in Human Machine

14:55

interaction. In my last three years

14:57

flying F eighteens, there

14:59

were about thirty six people I knew

15:01

that died, about one person a month. They

15:04

were all training accidents. It just really

15:06

struck me how many people were dying

15:09

because the design of the airplane

15:11

just did not go with the human

15:13

tendencies. And so I

15:16

decided to go back to school to find out what

15:18

can be done about that. So I went

15:20

to finish my PhD at

15:22

the University of Virginia, and

15:24

then I spent the next ten years at MIT

15:27

learning my craft. The

15:29

person I am today is half because

15:31

of the Navy and half because of MIT. Today,

15:34

Missy is at Duke University,

15:36

where she runs the Humans and Autonomy

15:38

Lab, or HAL for short.

15:41

It's a nod to the sentient computer that

15:43

goes rogue in Stanley Kubrick's

15:46

film two thousand and one, A Space

15:48

Odyssey. This mission is too

15:50

important for me to allow you to jeopardize

15:53

it. I don't

15:55

know what you're talking about, HAL. I

15:58

know that you and Frank were planning to disconnect

16:00

me, and I'm afraid that's something

16:02

I cannot allow to happen. And

16:05

so I intentionally named my lab

16:08

HAL so that we

16:10

were there to stop that from happening. Right,

16:13

I had seen many friends die, not

16:15

because the robot became sentient, in

16:17

fact, because the designers of

16:19

the automation really had no

16:21

clue how people would or would not use this technology.

16:25

It is my life's mission statement

16:28

to develop human collaborative computer

16:31

systems that work with each other

16:33

to achieve something greater than either would alone.

16:35

The Humans and Autonomy Lab works on

16:37

the interactions between humans

16:40

and machines across many fields,

16:43

but given her background, Missy's

16:45

thought a lot about how technology

16:47

has changed the relationship between

16:50

humans and their weapons. There's

16:52

a long history of us distancing

16:55

ourselves from our actions. We

16:57

want to shoot somebody, we wanted

16:59

to shoot them with bows and arrows. We

17:01

wanted to drop bombs from five miles

17:03

over a target. We want cruise missiles

17:05

that can kill you from another country. Right, it

17:08

is human nature to back that

17:10

distance up. Missy

17:12

sees an inherent tension. On

17:15

one hand, technology distances

17:17

us from killing. On

17:19

the other hand, technology is letting

17:21

us design weapons that are more accurate

17:24

and less indiscriminate in their

17:26

killing. Missy wrote her PhD

17:29

thesis about the Tomahawk missile, an

17:31

early precursor of the autonomous

17:33

weapons systems being developed today.

17:36

The Tomahawk missile has these stored

17:38

maps in its brain, and

17:40

as it's skimming along the nap of the

17:43

Earth, it compares the pictures that it's

17:45

taking with the pictures in its database to

17:47

decide how to get to its target. This

17:49

Tomahawk was kind of a set it and forget

17:52

it kind of thing. Once you launched it,

17:54

it would follow its map to the right place

17:56

and there was nobody looking over its shoulders.
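The episode doesn't spell out the Tomahawk's actual guidance algorithms, but the map-matching idea Missy describes, comparing what the camera currently sees against a stored reference map and taking the best match as a position fix, can be sketched as a simple template search. Everything below (the array names, the sizes, the brute-force correlation) is an illustrative assumption, not the real system.

```python
import numpy as np

def best_match(reference_map: np.ndarray, live_image: np.ndarray):
    """Slide the live image over the stored map and return the offset
    (row, col) with the highest normalized correlation. Illustrative only;
    real terrain- and scene-matching navigation is far more involved."""
    h, w = live_image.shape
    live = (live_image - live_image.mean()) / (live_image.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(reference_map.shape[0] - h + 1):
        for c in range(reference_map.shape[1] - w + 1):
            patch = reference_map[r:r + h, c:c + w]
            patch = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((patch * live).mean())
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Hypothetical usage: a 200x200 stored map and a 32x32 "camera" snapshot.
rng = np.random.default_rng(0)
stored_map = rng.random((200, 200))
snapshot = stored_map[57:89, 121:153]      # pretend this is what the camera sees
print(best_match(stored_map, snapshot))    # expect position (57, 121), score near 1.0
```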

17:58

Well, so the

18:00

Tomahawk missile that we saw in the Gulf

18:03

War, that was a fire and forget missile

18:05

that a target would be programmed

18:08

into the missile and then it would be fired

18:10

and that's where it would go. Later, around

18:13

two thousand, two thousand and three,

18:16

then GPS technology was coming

18:18

online, and that's when we got the tactical Tomahawk,

18:21

which had the ability to be redirected

18:23

in flight. That success

18:25

with GPS and the Tomahawk opened the military's

18:28

eyes to the

18:30

ability to use them in drones. Today's

18:33

precision guided weapons are far more accurate

18:36

than the widespread aerial bombing that

18:38

occurred on all sides in World War Two,

18:41

where some cities were almost entirely

18:44

leveled, resulting in huge numbers

18:46

of civilian casualties. In

18:48

the Gulf War, Tomahawk missile

18:50

attacks came to be called surgical

18:53

strikes. I

18:56

know people who have killed

18:59

civilians, and I

19:01

know people who have killed friendlies. They

19:04

have dropped bombs on our own forces and killed

19:06

our own people. And

19:10

in all cases where people made

19:12

mistakes, it was just too

19:14

much information. Things were happening too

19:16

fast. You've seen some pictures

19:18

that you've got in a brief hours ago, and you're

19:20

supposed to know that what you're

19:22

seeing now through this grainy image thirty

19:25

five thousand feet over a target is the

19:27

same image that you're being asked to bomb.

19:29

The Tomahawk never missed its

19:31

target. It never made a

19:34

mistake unless it was programmed

19:36

as a mistake. And that's old

19:38

autonomy, and it's only gotten better

19:40

over time. Chapter

19:45

three, Kicking down Doors.

19:49

The Tomahawk was just a baby step

19:51

toward automation. With the ability

19:53

to read maps, it could correct its course,

19:56

but it couldn't make sophisticated decisions.

19:59

But what happens when you start adding modern

20:01

artificial intelligence? So

20:04

where do you see autonomous weapons

20:06

going? If you could kind of map out

20:08

where are we today and where do you think we'll be

20:10

ten twenty years from now. So,

20:13

in terms of autonomy and weapons, by

20:15

today's standards, the Tomahawk

20:17

missile is still one of the best ones that we

20:19

have, and it's also still one of the most advanced.

20:23

Certainly, there are research arms of the military

20:25

who are trying very hard to

20:29

come up with new forms of autonomy.

20:31

There was the Perdix that came out

20:33

of Lincoln Lab, and this was basically

20:36

a swarm of really tiny UAVs

20:39

that could coordinate together.

20:42

A UAV, an unmanned

20:45

aerial vehicle is military

20:47

speak for a drone. The Perdix,

20:50

the drones that Missy was referring to,

20:53

were commissioned by the Strategic Capabilities

20:55

Office of the US Department of Defense. These

20:58

tiny flying robots are able to communicate

21:00

with each other and make split second

21:02

decisions about how to move as a group.

21:05

Many researchers have been using bio inspired

21:07

methods. Bees, right? So

21:10

bees have local and global intelligence.

21:12

Like a group of bees, these drones

21:14

are called a swarm, a collective

21:17

intelligence on a shared mission.

21:19

A human can make the big picture decision

21:22

and the swarm of microdrones can then

21:24

collectively decide on the most efficient

21:26

way to carry out the order in the moment.
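The episode doesn't describe the Perdix control software, but the division of labor just described, a human giving the big-picture order while the swarm works out the details among itself, can be illustrated with a toy rule: each drone claims the nearest unclaimed waypoint. The class and the greedy rule below are hypothetical, a sketch rather than anything fielded.

```python
from dataclasses import dataclass
import math

@dataclass
class Drone:
    name: str
    x: float
    y: float

def divide_up(drones, waypoints):
    """Toy swarm logic: the human supplies the waypoints (the big-picture
    order); the drones greedily divide them up by distance (the in-the-moment
    detail). A real swarm would negotiate continuously and tolerate dropouts."""
    remaining = list(waypoints)
    plan = {}
    for d in drones:
        if not remaining:
            break
        nearest = min(remaining, key=lambda p: math.hypot(p[0] - d.x, p[1] - d.y))
        plan[d.name] = nearest
        remaining.remove(nearest)
    return plan

# Hypothetical usage: three drones split three human-chosen waypoints.
swarm = [Drone("a", 0, 0), Drone("b", 5, 0), Drone("c", 0, 5)]
print(divide_up(swarm, [(1, 1), (6, 1), (1, 6)]))
```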

21:30

I wanted to know why exactly

21:32

this technology is necessary,

21:34

so I went to speak to someone who

21:37

I was pretty sure would know. I'm Ash

21:39

Carter. Most people will probably

21:42

have heard my name as the Secretary

21:44

of Defense who preceded Jim Mattis.

21:48

You will know me in part from the fact

21:50

that we knew one another way back in Oxford

21:52

when we were both young scientists, and I guess

21:54

we can start there. I'm a physicist. When you were

21:57

doing your PhD in physics, I was doing

21:59

my PhD in mathematics at Oxford.

22:01

What was your thesis on? It was

22:03

on quantum chromodynamics.

22:05

That was the theory of quarks

22:08

and gluons. And how in the world

22:10

does somebody who's an expert

22:12

in quantum chromodynamics become

22:14

the Secretary of Defense. It's an

22:16

interesting story. The people

22:19

who were the seniors

22:22

in my field of physics, the

22:25

mentors, so to speak, were all

22:27

members of the Manhattan Project

22:30

generation. They had built

22:32

the bomb during World War Two, and

22:35

they were proud of

22:37

what they'd done because they

22:40

believed that it had ended the war with

22:42

fewer casualties than otherwise

22:44

there would have been in a full scale invasion

22:47

of Japan, and also that it had

22:49

kept the peace through the Cold War, so they were proud

22:51

of it. However, they knew there was a

22:53

dark side, and they conveyed

22:55

to me that it was my

22:58

responsibility as

23:00

a scientist to be involved

23:02

in these matters. And the

23:06

technology doesn't determine what the balance

23:08

of good and bad is. We human beings

23:10

do. That was the lesson, and

23:13

so that's what got me started, and

23:15

then my very first Pentagon job, which was

23:17

in nineteen eighty one, right

23:19

through until the last time I walked

23:22

out of the Pentagon as Secretary of Defense, which

23:24

was January of twenty seventeen.

23:26

Now, when you were secretary, there

23:29

was a Strategic Capabilities

23:31

Office that, it's been

23:34

publicly reported, was experimenting

23:36

with using drones to

23:39

make swarms of drones that

23:41

could do things, communicate

23:43

with each other, make formations. Why

23:45

would you want such things? So it's a good question. Here's

23:48

what you do with a drone like that. You

23:50

put a jammer on it, a little radio

23:53

beacon, and you fly it right

23:55

into the eye of

23:58

an enemy radar. So

24:00

all that radar sees is

24:03

the energy emitted by that

24:05

little drone, and it's essentially dazzled

24:07

or blinded. If there's one

24:10

big drone, that radar

24:12

is precious enough that the defender is

24:14

going to shoot that drone down. But

24:17

if you have so many out there, the

24:19

enemy can't afford to shoot them all down. And

24:23

since they are flying right

24:25

up to the radar, they don't have to

24:27

be very powerful. So there's

24:29

an application where lots

24:31

of little drones can have

24:34

the effect of nullifying enemy

24:37

radar. That's a pretty big deal for a

24:39

few little, little microdrones. To

24:41

learn more, I went to speak with Paul Scharre.

24:44

Paul's the director of the Technology and National

24:46

Security Program at the Center

24:49

for a New American Security. Before

24:51

that, he worked for Ash Carter at the Pentagon

24:53

studying autonomous weapons and

24:56

he recently authored a book called Army

24:59

of None, Autonomous Weapons

25:01

and the Future of War. Paul's

25:03

interest in autonomous weapons began when

25:05

he served in the Army. I enlisted

25:09

in the Army to become an Army ranger.

25:11

That was June of two thousand and one,

25:13

did a number of tours overseas in the

25:16

wars in Iraq and Afghanistan. So

25:18

I'll say one moment that stuck out for me

25:21

where I really sort of the light bulb

25:23

went on about the power of robotics in

25:25

warfare. I was in

25:27

Iraq in two thousand and seven. We

25:29

were on a patrol, driving along in

25:31

a Striker armored vehicle. Came

25:33

across an IED, an improvised explosive

25:35

device, a makeshift roadside bomb, and

25:38

so we called up bomb disposal folks. So

25:40

they show up and I'm expecting the

25:42

bomb tech to come out in that big bomb

25:45

suit that you might have seen in the movie The hurt Locker,

25:47

for example, and instead out

25:49

rolls out a little robot and

25:52

I kind of went, oh, that makes a lot

25:54

of sense. Have the robot diffused the bomb.

25:56

Well, it turns out there's a lot of things in war that are super

25:59

dangerous where it makes sense to have robots

26:01

out on the front lines, getting people

26:03

better standoff, a little bit more separation

26:06

from potential threats. The bomb defusing

26:08

robots are still remote controlled

26:11

by a technician, but Ash Carter

26:13

wants to take the idea of robots doing the

26:15

most dangerous work a step further

26:18

Somewhere in the future, and I'm certain it will

26:20

occur, I think there will

26:22

be robots who will be part

26:24

of infantry squads and that will

26:26

do some of the most dangerous jobs

26:29

in an infantry squad, like kicking down

26:32

the door of a building and being the first

26:34

one to run in and clear

26:36

the building of terrorists or whatever. That's

26:39

a job that doesn't sound like something

26:42

I would like to have a young American

26:44

man or woman doing if I could replace

26:46

them with a robot. Chapter

26:51

four Harpies,

26:55

Paul Scharre gave me an overview of the sophisticated

26:58

unmanned systems currently used by

27:00

militaries. So I think it's worth separating

27:02

out the value of robotics

27:05

versus autonomy, removing

27:07

a person from decision making. So

27:09

what's so special about autonomy.

27:12

The advantages there are really about

27:15

speed. Machines can make decisions

27:17

faster than humans. That's why automatic

27:19

braking in automobiles is valuable, but

27:21

it could have much faster reflexes

27:23

than a person might have. Paul separates the

27:25

technology into three baskets. First,

27:29

semi autonomous weapons. Semi

27:31

autonomous weapons that are widely

27:33

used around the globe today, where automation

27:35

is used to maybe help identify targets,

27:38

but humans are in the final decision

27:41

about which targets to attack. Second,

27:43

there are supervised autonomous

27:45

weapons. There are automatic

27:48

modes that can be activated on

27:50

air and missile defense systems

27:52

that allow these computers to defend

27:54

the ship or ground vehicle

27:57

or land base all on its own against

27:59

these incoming threats. But humans supervise

28:02

these systems in real time. They could,

28:04

at least in theory, intervene. Finally,

28:07

there are fully autonomous weapons. There

28:09

are a few isolated

28:11

examples of what you might consider

28:14

fully autonomous weapons where there's no human

28:16

oversight and they're used in an offensive capacity.

28:19

The clearest today that's an operation is the Israeli

28:21

Harpy drone that can

28:23

loiter over a wide area for

28:26

about two and a half hours at a time to search

28:28

for enemy radars, and then when it finds

28:30

one, it can attack it all on its

28:32

own without any further

28:35

human approval. Once it's launched,

28:37

that decision about which particular target to attack

28:39

that's delegated to the machine. It's

28:42

been sold to a handful of countries Turkey,

28:44

India, China, South Korea.

28:47

I asked Missy if she saw advantages to

28:49

having autonomy built into lethal weapons.

28:52

While she had reservations, she pointed

28:54

out that in some circumstances

28:56

it could prevent tragedies. A human

28:59

has something called the neuromuscular

29:01

lag in them. It's about a half second delay. So

29:04

you see something, you can execute

29:06

an action a half second later. So

29:10

let's say that that guided weapon fired

29:13

by a human is going into

29:15

a building, and then right before

29:17

it gets to the building, at a half second,

29:20

the door opens and a child walks out.

29:22

It's too late. That child is dead.
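To put that half second in perspective, here is a rough back-of-the-envelope figure. The speed is an assumption, not something the episode gives; a subsonic cruise missile travels on the order of 880 kilometers per hour.

```python
# Assumed figures, not from the episode: ~880 km/h cruise speed, 0.5 s human lag.
speed_m_per_s = 880 * 1000 / 3600        # roughly 244 meters per second
human_lag_s = 0.5
print(f"Distance covered during the lag: {speed_m_per_s * human_lag_s:.0f} m")  # ~122 m
```

In other words, by the time a human operator has even begun to react, the weapon has already closed more than a hundred meters, which is the gap an onboard perception system would be trying to beat.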

29:25

But a lethal autonomous weapon who

29:28

had a good enough perception

29:30

system could immediately

29:33

detect that and immediately guide itself

29:35

to a safe place to explode. That

29:38

is a possibility in the future. Chapter

29:43

five, Bounded Morality.

29:48

Some people think, and this point

29:50

is controversial, that robots

29:52

might turn out to be more humane

29:55

than humans. The history of

29:57

warfare has enough examples

29:59

of atrocities committed by soldiers

30:01

on all sides. For example,

30:03

in the middle of the Vietnam War in March

30:06

nineteen sixty eight, a company of American

30:08

soldiers attacked a village in South Vietnam,

30:11

killing an estimated five hundred and four

30:14

unarmed Vietnamese men, women,

30:16

and children, all noncombatants.

30:19

The horrific event became known as

30:21

the My Lai massacre. In

30:24

nineteen sixty nine, journalist

30:26

Mike Wallace of Sixty Minutes sat

30:29

down with Private Paul Meadlo,

30:31

one of the soldiers involved in the massacre.

30:33

Well, I might have killed about ten

30:35

or fifteen of them men, women,

30:38

and children and babies

30:41

and babies. You're

30:43

married, right, children

30:46

too? How can a father of two young

30:48

children shoot babies? I

30:52

don't know. It's just one of them things. Of

30:54

course, the vast majority of soldiers do

30:57

not behave this way. But humans

30:59

can be thoughtlessly violent. They

31:01

can act out of anger, out of fear, they

31:04

can seek revenge, they can murder

31:06

senselessly. Can robots

31:08

do better? After all, robots

31:11

don't get angry, They're not impulsive. I

31:13

spoke with someone who thinks that lethal

31:16

autonomous weapons could ultimately be

31:18

more humane. My name is Ronald Arkin.

31:21

I'm a regents professor at the Georgia

31:23

Institute of Technology in Atlanta, Georgia.

31:25

I have been a roboticist for close

31:28

to thirty five years. I've

31:30

been in robot ethics for maybe the past

31:32

fifteen. Ron wanted to make it clear

31:35

that he doesn't think these robots are perfect,

31:37

but they could be better than our current

31:40

option. I am absolutely not pro

31:42

lethal autonomous weapons systems because

31:44

I'm not pro lethal weapons of

31:47

any sort. I am against killing

31:49

in all of its manifold forms. But

31:51

the problem is that humanity

31:53

persists in entering into warfare. As

31:55

such, we must better protect the

31:57

innocent in the battlespace, far better than

32:00

we currently do. So Ron

32:02

thinks that lethal autonomous weapons could

32:04

prevent some of the unnecessary violence

32:06

that occurs in war. Human beings

32:09

don't do well in warfare in

32:11

general, and that's why there's

32:13

so much room for improvement. There's

32:16

unaimed fire, there's mistakes,

32:18

there's carelessness, and in the worst case,

32:20

there's the commission of atrocities, and

32:23

unfortunately, all those things lead

32:25

to the deaths of noncombatants. And

32:28

robots will make mistakes too. They probably

32:30

will make different kinds of mistakes, but hopefully,

32:32

if done correctly, they will make far far less

32:35

mistakes than human beings do in certain

32:37

narrow circumstances where human beings are

32:39

prone to those errors. So how would the robots

32:42

follow these international humanitarian

32:45

standards? The way in which

32:47

we explored initially is

32:49

looking at something referred to as bounded morality,

32:52

which means we look at very narrow situations.

32:55

You are not allowed to drop bombs

32:57

on schools, on hospitals, mosques,

33:00

or churches. So the point

33:02

is, if you know the geographic location

33:05

of those, you can demarcate those

33:07

on a map, use GPS, and

33:10

you can prevent someone from pulling a trigger.
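Ron's bounded-morality example, demarcating protected sites and blocking the trigger inside them, amounts to a geofence check. A minimal sketch follows; the coordinates and radii are hypothetical placeholders, and a real system would obviously need far more than a distance test.

```python
import math

# Hypothetical no-strike zones: (latitude, longitude, radius in meters).
NO_STRIKE_ZONES = [
    (34.5200, 69.1800, 500.0),   # e.g. a hospital
    (34.5310, 69.1650, 300.0),   # e.g. a school
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula (Earth radius ~6,371 km)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def engagement_permitted(lat, lon):
    """Return False if the proposed aim point falls inside any protected zone."""
    return all(distance_m(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in NO_STRIKE_ZONES)

print(engagement_permitted(34.5203, 69.1810))  # inside the first zone -> False
print(engagement_permitted(34.6000, 69.3000))  # well clear of both    -> True
```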

33:13

But keep in mind these systems are not only

33:15

going to decide when to engage it, but also

33:17

when not to engage a target. They

33:20

can be more conservative. I believe

33:22

the potential exists to reduce noncombatant

33:24

casualties and collateral damage in almost all

33:26

of its forms over what we currently have, so

33:33

autonomous weapons might operate more efficiently,

33:35

reduce risk to one's own troops, operate

33:38

faster than the enemy, decreased civilian

33:40

casualties, and perhaps avoid

33:43

atrocities. What

33:46

could possibly go wrong? Chapter

33:51

six? What could possibly go wrong?

33:55

Autonomous systems can do some pretty remarkable

33:58

things these days, but of

34:00

course, robots just do what their computer

34:02

code tells them to do. The

34:04

computer code is written by humans,

34:07

or, in the case of modern artificial intelligence,

34:10

automatically inferred from training

34:12

data. What happens

34:14

if a robot encounters a situation that

34:16

the human or the training data didn't

34:19

anticipate? Well, things

34:22

could go wrong in a hurry. One

34:24

of the concerns with autonomous

34:27

weapons is that they might malfunction

34:29

in a way that leads them to begin

34:32

erroneously engaging targets.

34:34

Robots run amock, and this is

34:37

particularly a risk for

34:40

weapons that could target on their own. Now,

34:42

this builds on a

34:45

flaw, a known malfunction of machine

34:47

guns today called a runaway gun.

34:50

A machine gun begins firing for one

34:52

reason or another, and because of the nature

34:54

of a machine gun where one

34:56

bullet's firing cycles the automation

34:59

and brings in the next bullet. Once it

35:01

starts firing, human doesn't have

35:03

to do it, and it will continue firing bullets. The

35:05

same sort of runaway behavior can

35:07

result from small errors in computer

35:10

code, and the problems

35:12

only multiply when autonomous

35:14

systems interact at high speed.

35:17

Paul Scharre points to Wall Street

35:19

as a harbinger of what can go

35:21

wrong, and we end up some places like where

35:23

we are in stock trading today, where

35:26

many of the decisions are highly

35:28

automated, and we get things like flash

35:30

crashes. What

35:33

the heck is going on down here? I

35:35

don't know. There is fear. This is capitulation.

35:38

Really. In May twenty

35:40

ten, computer algorithms drove

35:42

the Dow Jones down by nearly

35:45

one thousand points in thirteen

35:47

minutes, the steepest drop it

35:49

had ever seen in a day. The

35:51

concern is that a world where militaries

35:54

have these algorithms

35:57

interacting at machine speed, faster

36:00

than humans can respond, might

36:02

result in accidents. And that's something

36:04

like a flash war. By a flash

36:06

war, you mean this thing just cycling out of

36:08

control somehow? Right. But

36:10

the algorithms are merely following their programming,

36:13

and they escalate a conflict into

36:15

a new area of warfare, a new

36:18

level of violence, in a way that might

36:20

make it harder for humans

36:22

to then dial things back and

36:24

bring things back under control. The system

36:26

only knows what it's been programmed or

36:28

been trained to know. The human

36:30

can bring together all of these other pieces

36:33

of information about context, and

36:35

human can understand what's at stake. So there's

36:37

no Stanislav Petrov on the loop.

36:40

That's the fear, right, is

36:42

that if there's no Petrov there

36:45

to say no, what might the machines

36:47

do on their own? Chapter

36:51

seven, Slaughterbots.

36:56

The history of weapons technology includes

36:58

well intentioned efforts to reduce violence

37:01

and suffering that end up backfiring.

37:04

I tell in the book the story of the Gatling

37:06

Gun, which was invented by Richard

37:08

Gatling during the American Civil War, and

37:11

he was motivated to invent this

37:13

weapon, which was a forerunner of the machine gun,

37:16

as an effort to reduce soldiers

37:18

deaths in war. He saw all of these soldiers coming

37:20

back maimed and injured from the Civil

37:22

War, and he said, wouldn't it be great if we needed

37:24

fewer people to fight? So he invented

37:27

a machine that could allow four

37:29

people to deliver the same lethal effects

37:31

in the battlefield as a hundred. Now, the

37:34

effect of this wasn't actually to reduce the number

37:36

of people fighting, and when we got to World War

37:38

One, we saw massive

37:40

devastation and a whole generation

37:43

of young men in Europe killed because of this technology.

37:46

And so I think that's a good cautionary

37:48

tale as well, that sometimes

37:50

the way the technology evolves and how it's used

37:53

may not always be how we'd like it to

37:55

be used. And even if regular armies

37:58

can keep autonomous weapons within the confines

38:00

of international humanitarian law, what

38:03

about rogue actors? Remember

38:06

those autonomous swarms we discussed

38:08

with Ash Carter, those tiny drones

38:10

that work together to block enemy radar.

38:13

What happens if the technology spreads

38:15

beyond armies? What if a terrorist

38:17

adds a gun or an explosive and maybe

38:20

facial recognition technology to

38:22

those little flying bots. In

38:25

twenty seventeen, Berkeley professor

38:27

Stuart Russell and the Future of Life Institute

38:30

made a mock documentary called Slaughter

38:32

bots, as part of their campaign against

38:35

fully autonomous lethal drones. The

38:37

nation is still recovering from yesterday's

38:40

incident, which officials are describing as

38:42

some kind of automated attack which

38:44

killed eleven US senators at the Capitol

38:46

Building. They flew in from everywhere,

38:48

but attacked just one side of the aisle. It

38:51

was chaos. People were screaming. Unlike

38:54

nuclear weapons, which are difficult to build,

38:57

you know, it's not easy to obtain or work

38:59

with weapons grade uranium, the

39:01

technology to create and modify autonomous

39:03

drones is getting more and more accessible.

39:06

All of the technology you need from

39:08

the automation standpoint either exists

39:11

in the vehicle already or you can

39:13

download from GitHub. I

39:15

asked former Secretary of Defense Ash Carter,

39:18

if the US government is concerned about

39:20

this sort of attack, you're right to worry

39:22

about drones, Eric. It only

39:25

takes a depraved person who

39:27

can go to a store and buy a

39:30

drone to at least scare

39:32

people and quite possibly threaten people

39:35

hanging a gun off of it or

39:37

putting a bomb of some kind

39:39

on it, and then suddenly people don't feel safe

39:42

going to the super Bowl or landing

39:44

at the municipal

39:47

airport. And we can't have that. I mean

39:49

it, certainly. As your former secretary of Defense,

39:51

my job was to make sure that we didn't put up with

39:53

that kind of stuff. I'm supposed to protect

39:56

our people, and so how do

39:58

I protect people against drones? In

40:00

general? They can be shot down, but

40:03

they can put more drones up than I can

40:05

conceivably shoot at. Not

40:08

to mention, shooting at things in a

40:10

Super Bowl stadium is an inherently

40:12

dangerous solution to this problem. And

40:15

so there's a more subtle way of dealing

40:17

with drones. I will either

40:20

jam or take

40:22

over the radio link, and

40:24

then you just tell it to fly away and

40:27

go off into the countryside

40:29

somewhere and crash into a field. All

40:31

right, So help me out if I have

40:34

enough autonomy, couldn't

40:36

I have drones without radio links

40:38

that just get their assignment and go off

40:40

and do things. Yes, and then

40:43

your mind as a defender goes

40:45

to something else. Now that they've got

40:47

their idea of what they're looking for set

40:49

in their electronic mind. Let

40:52

me change what I look like, Let me

40:54

change what the stadium looks like to it,

40:56

let me change what the target looks like. And

40:59

for the Super Bowl, what do I do about that? Well,

41:02

once I know I'm being looked at, I

41:04

have the opponent in

41:07

a box. Few people

41:09

know how easy facial recognition

41:11

is to fool. Because

41:14

I can wear the right kind of goggles, or

41:17

stick ping pong balls in my cheeks.

41:20

There's always a stratagem.

41:24

Memo to self: next time I go

41:26

to Gillette Stadium for a Patriots game, bring

41:29

ping pong balls? Really?

41:36

Chapter eight, The Moral Buffer.

41:39

So we have to worry about whether lethal autonomous

41:41

weapons might run amok or fall into

41:43

the wrong hands. But there

41:46

may be an even deeper question. Could

41:49

fully autonomous lethal weapons change

41:51

the way we think about war? I

41:54

brought this up with Army of None author Paul

41:56

Scharre. So one of the concerns about autonomous

41:59

weapons is that it might lead to a breakdown

42:01

in human moral responsibility

42:04

for killing and war. If the

42:06

weapons themselves are choosing targets, the

42:08

people no longer feel like they're the ones doing

42:10

the killing. Now, on the plus

42:13

side of things, that might mean

42:15

less post traumatic stress in war.

42:17

These things have real burdens

42:20

that weigh on people, but some

42:22

argue that the burden of killing should

42:25

be a requirement of war. It's

42:27

worth also asking if nobody

42:29

slept uneasy at night, what

42:31

does that look like? Would there be less

42:34

restraint in war and more killing as

42:36

a result. Missy Cummings,

42:38

the former fighter pilot and current Duke

42:40

professor, wrote an influential paper

42:42

in two thousand and four about how increasing

42:44

the gap between a person and

42:46

their actions creates what

42:49

she called a moral buffer.

42:52

People ease the

42:55

psychological and emotional pain of

42:57

warfare by

43:00

basically superficially

43:03

layering in these other technologies to

43:05

kind of make them lose track of what they're doing.

43:07

And this is actually something that I do think it's a problem

43:10

for lethal autonomous weapons. If

43:13

we send a weapon and we'd tell it to

43:15

kill one person and it

43:17

kills the wrong person, then

43:19

it's very likely that people will

43:21

push off their sense of responsibility and accountability

43:24

onto the autonomous agent

43:26

because they say, well, it's not my fault,

43:28

it was the autonomous agent's fault. On

43:31

the other hand, Paul Scharre tells

43:33

a story about how when there's no

43:35

buffer, humans rely on an

43:37

implicit sense of morality that

43:39

might be hard to explain to a robot.

43:42

There was an incident early in the

43:44

war where I was part of an army ranger sniper

43:46

team up on the Afghanistan Pakistan

43:49

border and we were watching

43:51

for Taliban fighters infiltrating

43:53

across the border, and when dawn

43:55

came, we weren't nearly as concealed as we

43:57

had hoped to be, and very quickly

44:00

a farmer came out to relieve himself

44:02

in the fields and saw us, and we knew

44:04

that we were compromised. What I

44:07

did not expect was what they did next, which

44:09

was they sent a little girl to scout out our position. She

44:12

was maybe five or six, She was

44:14

not particularly sneaky. She stared

44:17

directly at us and we heard the

44:19

chirping of what we later realized was probably

44:21

a radio that she had on her, and she was reporting

44:23

back information about us, and

44:25

then she left. Not long after, some fighters

44:27

did come, and then the gun

44:30

fight that ensued brought out the whole valley, so we had

44:32

to leave. But later that day we were talking

44:34

about how we would treat a situation

44:36

like that. Something that just didn't come up in conversation

44:40

was the idea of shooting this little girl. Now,

44:42

what's interesting is that under the laws of war, that

44:45

would have been legal. The laws

44:47

of war don't set an age for combatants.

44:50

Your status as a combatant just based on your actions,

44:53

and by scouting for the enemy,

44:55

she was directly participating on hostilities.

44:58

If you had a robot that was programmed

45:00

to perfectly comply with the laws of war, it

45:03

would have shot this little girl. There

45:05

are sometimes very difficult decisions

45:07

that are forced on people in war, but I don't think

45:09

this was one of them. But I think it's

45:12

worth asking how would a robot know the difference between what's

45:14

legal and what's right, and

45:16

how would you even begin to program that into a machine.

45:22

Chapter nine, The Campaign

45:24

to stop killer robots. The

45:28

most fundamental moral objection

45:30

to fully autonomous lethal weapons

45:33

comes down to this, As

45:35

a matter of human dignity, only

45:38

a human should be able to make the decision

45:40

to kill another human. Some things

45:43

are just morally wrong, regardless

45:45

of the outcome, regardless of whether or not you know,

45:48

torturing one person saves a

45:50

thousand, torture is wrong.

45:53

Slavery is wrong. And

45:55

from this point of view, one might say, well,

45:58

look, it's wrong to let a machine

46:00

decide whom to kill. Humans have to

46:02

make that decision. Some people have been working

46:04

hard to turn this moral view into

46:07

binding international law. So

46:09

my name is Mary Wareham. I'm the advocacy

46:12

director of the Arms division of Human Rights

46:14

Watch. I also coordinate

46:16

this coalition of groups called

46:19

the Campaign to Stop Killer Robots,

46:21

and that's a coalition of one hundred

46:23

and twelve non governmental organizations

46:26

in about fifty six countries

46:28

that is working towards a single goal, which

46:31

is to create a prohibition

46:33

on fully autonomous weapons. The

46:36

campaign's argument is rooted in the

46:38

Geneva Conventions, a set

46:40

of treaties that establish humanitarian

46:43

standards for the conduct of war.

46:46

There's the principle of distinction, which

46:48

says that armed forces must recognize

46:51

civilians and may not target them.

46:53

And there's the principle of proportionality,

46:56

which says that incidental civilian deaths

46:59

can't be disproportionate to an

47:01

attack's direct military advantage.

47:04

The campaign says killer robots

47:06

fail these tests. First,

47:09

they can't distinguish between combatants and noncombatants

47:12

or tell when an enemy is surrendering. Second,

47:15

they say, deciding whether civilian

47:18

deaths are disproportionate inherently

47:20

requires human judgment. For

47:23

these reasons and others, the campaign says,

47:26

fully autonomous lethal weapons

47:28

should be banned. Getting

47:30

an international treaty to

47:32

ban fully autonomous lethal weapons

47:35

might seem like a total pipe dream, except

47:38

for one thing. Mary Wareham

47:40

and her colleagues already pulled

47:42

it off for another class of weapons,

47:45

land mines. The signing of this historic

47:47

treaty at the very end of the

47:50

century is this generation's

47:52

pledge to the future. The

47:54

International Campaign to Ban Landmines

47:57

and its founder, Jody Williams, received

47:59

the Nobel Peace Prize in nineteen ninety

48:01

seven for their work leading to the Ottawa

48:03

Convention, which banned the use, production,

48:06

sale, and stockpiling of anti

48:08

personnel mines. While

48:10

one hundred and sixty four nations joined

48:13

the treaty, some of the world's major

48:15

military powers never signed

48:17

it, including the United States, China,

48:20

and Russia. Still,

48:22

the treaty has worked and even

48:24

influenced the holdouts. So the United

48:26

States did not join, but it went

48:28

on to I think prioritize

48:30

clearance of anti personnel land mines and

48:33

remains the biggest donor to

48:35

clearing landmines and unexploded

48:37

ordnance around the world. And then

48:39

under the Obama administration, the

48:42

US committed not to use anti personnel

48:44

land mines anywhere in the world other

48:47

than to keep the option open for the

48:49

Korean peninsula. So slowly,

48:52

over time countries do I

48:54

think come in line. One major

48:56

difference between banning land mines and

48:59

banning fully autonomous lethal weapons

49:01

is, well, it's pretty

49:03

clear what a land mine is, but a fully

49:05

autonomous lethal weapon that's

49:08

not quite as obvious. Six

49:10

years of discussion at the United Nations

49:13

have yet to produce a crisp definition.

49:16

Trying to define autonomy is also

49:18

a very challenging task, and this is

49:20

why we focus on the need for

49:22

meaningful human control. So

49:25

what exactly is meaningful

49:27

human control? The ability for

49:29

the human operator and the weapon system to communicate

49:32

the ability for the human to intervene

49:35

in the detection, selection and engagement of targets

49:37

if necessary to cancel the operation.

49:40

Not surprisingly, international talks

49:42

about the proposed ban are complicated.

49:45

I will say that a majority of the countries

49:48

who have been talking about killer robots

49:50

have called for a legally binding

49:52

instrument, an international treaty. You've

49:54

got the countries who want to be helpful, like

49:57

France who was proposing working groups,

50:00

Germany who's proposed political

50:02

declarations on the importance of human

50:04

control. There's a lot of proposals,

50:07

I think from Australia about legal

50:09

reviews of weapons. Those

50:11

efforts are being rebuffed by a smaller

50:14

handful of what we call militarily powerful

50:16

countries who don't want to see new

50:19

international law. The United States

50:21

and Russia have probably been amongst the most problematic

50:24

on dismissing the calls for any

50:26

form of regulation. As with

50:28

the landmines, Mary Wareham sees a path

50:30

forward even if the major military powers

50:33

don't join at first. We cannot stop

50:35

every potential use. What we want

50:37

to do, though, is stigmatize it, so that

50:39

everybody understands that even

50:41

if you could do it, it's not right and

50:44

you shouldn't. Part of

50:46

the campaign strategy is to get other groups

50:48

on board, and they're making some progress.

50:51

I think a big move in our favor

50:53

came in November when the United Nations

50:56

Secretary General Antonio Guterres,

50:59

he made a speech in which he called for them

51:01

to be banned under international law. Machines

51:05

with the power and the

51:07

discretion to take human

51:10

lives are politically

51:12

unacceptable, are morally

51:14

repugnant, and should be banned

51:17

by international law. Artificial

51:23

intelligence researchers have also been

51:25

expressing concern. Since twenty

51:27

fifteen, more than forty five

51:30

hundred AI and robotics researchers

51:32

have signed an open letter calling

51:35

for a ban on offensive

51:37

autonomous weapons beyond meaningful

51:40

human control. The signers

51:42

included Elon Musk, Stephen

51:44

Hawking, and Demis Hassabis,

51:46

the CEO of Google's Deep Mind. An

51:49

excerpt from the letter quote, if

51:52

any major military power pushes

51:54

ahead with AI weapon development, a

51:57

global arms race is virtually

51:59

inevitable, and the endpoint

52:02

of this technological trajectory

52:04

is obvious. Autonomous weapons

52:07

will become the Kalashnikovs of tomorrow.

52:12

Chapter ten. To ban or

52:15

not to ban? Not

52:17

Everyone, however, favors the idea of an

52:19

international treaty banning all lethal

52:22

autonomous weapons. In fact,

52:24

everyone else I spoke to for this episode,

52:27

Ron Arkin, Missy Cummings, Paul Scharre,

52:30

and Ash Carter, opposes it. Interestingly,

52:33

though, each had a different reason and

52:35

a different alternative solution. Robo

52:38

ethicist Ron Arkin thinks

52:41

we'd be missing a chance to make wars safer.

52:44

Technology can, must, and should

52:46

be used to reduce noncombatant

52:48

casualties. And if it's not going to be

52:50

this, you tell me what

52:53

you are going to do to address that

52:55

horrible problem that exists in the world right

52:57

now, with all these innocence being slaughtered in the battlespace.

53:00

Something needs to be done, and

53:02

to me, this is one possible way.

53:04

Paul Scharre thinks a comprehensive

53:07

ban is just not practical. Instead,

53:10

he thinks we should focus on banning lethal autonomous

53:12

weapons that specifically target

53:15

people. That is, anti

53:17

personnel weapons. In fact,

53:19

the Landmine Treaty bans anti personnel

53:22

land mines, but not say, anti

53:24

tank land mines. One

53:27

of the challenging things about anti personnel weapons

53:29

is that you can't stop being a person

53:32

if you want to avoid being targeted. So if

53:34

you have a weapon that's targeting tanks, you can

53:37

come out of a tank and run away. I

53:39

mean, that's a good way to effectively

53:42

surrender and render yourself hors de combat. If

53:44

it's even targeting, say, handheld

53:46

weapons, you could set down your weapon and

53:48

run away from it. So do you think that'd be practical

53:51

to actually get either a

53:53

treaty or at least an understanding

53:56

that countries should forswear anti

53:58

personnel lethal

54:00

autonomous weapons? I think it's easier

54:03

for me to envision how you might get to actual

54:05

restraint. You need to make sure

54:07

that the weapon that countries are giving up is

54:09

not so valuable that they can't still defeat

54:12

those who might be willing to cheat.

54:14

And I think it's really an open question how valuable

54:17

autonomous weapons are. But my

54:19

suspicion is that they are not as valuable

54:22

or necessary in an anti personnel

54:24

context. Former fighter

54:26

pilot and Duke professor Missy Cummings

54:28

thinks it's just not feasible to ban

54:31

lethal autonomous weapons. Look,

54:34

you can't ban people

54:36

from developing computer code. It's

54:39

not a productive conversation to

54:41

start asking for bans on technologies

54:44

that are almost as common as the air

54:46

we breathe. Right? So we

54:49

are not in the world of banning nuclear

54:51

technologies. And because it's a different world, we

54:54

need to come up with new ideas. What

54:56

we really need is to

54:58

make sure that we certify these technologies

55:01

in advance. How do you actually

55:03

do the tests to certify that the weapon

55:06

does at least as well as a human? That's actually

55:08

a big problem because no one on the

55:10

planet, not the Department of Defense,

55:13

not Google, not Uber,

55:15

not any driverless car company understands

55:17

how to certify autonomous technologies.

55:20

So four driverless cars can come

55:22

to an intersection and

55:25

they will never prosecute that intersection

55:27

the same way. A sun angle

55:30

can change the way that these things think. We

55:32

need to come up with some out of the

55:34

box thinking about how to test these

55:36

systems to make sure that they're seeing the world,

55:39

and I'm doing that in air quotes, in

55:41

a way that we are expecting

55:43

them to see the world. And this is why

55:45

we need a national agenda to understand

55:48

how to do testing to

55:51

get to a place that we feel comfortable

55:53

with the results. Suppose you are successful

55:55

and you get the Pentagon and

55:58

the driverless car folks to actually

56:00

do real world testing, what

56:02

about the rest of the world? What's going to happen?

56:06

So one of the problems that we see in

56:09

all technology development is

56:11

that the rest of the world doesn't

56:13

agree with our standards.

56:17

It is going to be a problem going

56:19

forward, so we certainly

56:22

should not circumvent testing because

56:24

other countries are circumventing

56:26

testing. Finally, there's

56:29

former Secretary of Defense Ash Carter. Back

56:31

in twenty twelve, Ash was one

56:33

of the few people who were thinking about the

56:36

consequences of autonomous technology.

56:38

At the time, he was the third ranking official

56:40

in the Pentagon in charge of weapons

56:42

and technology. He decided to

56:44

draft a policy, which the Department

56:46

of Defense adopted. It

56:49

was issued as Directive three thousand

56:51

point zero nine, Autonomy

56:53

in Weapons Systems. So

56:55

I wrote this directive that

56:58

said, in essence, there will always

57:00

be a human being involved

57:03

in the decision making when it comes

57:05

to lethal force in the

57:07

military of the United States of America.

57:10

I'm not going to accept autonomous weapons

57:12

in a literal sense because

57:14

I'm the guy who has to go out the next morning after

57:17

some women and children have been accidentally killed

57:19

and explain it to a press conference

57:22

or a foreign government or a widow.

57:25

And suppose I go out there, Eric,

57:27

and I say, oh, I

57:29

don't know how it happened, the machine did it. Are

57:32

you going to allow your Secretary of Defense

57:35

to walk out and give that

57:38

kind of excuse? No way.

57:41

I would be crucified. I should be crucified

57:43

for giving a press conference like that, And I

57:45

didn't think any future Secretary of Defense should

57:48

ever be in that position, or

57:51

allow him or herself to be in that position. That's

57:53

why I wrote the directive. Because Ash

57:55

wrote the directive that currently prevents

57:57

US forces from deploying fully autonomous

58:00

lethal weapons, I was curious to know

58:02

what he thought about an international

58:04

ban. I think it's reasonable

58:07

to think about a national ban, and we have, and

58:09

we have one. Do I think it's reasonable

58:12

that I get everybody else to sign up to that?

58:14

I don't, because I think that

58:16

people will say they'll sign up and then not do

58:19

it. In general, I don't

58:21

like fakery in serious

58:25

matters, and that's too easy

58:27

to fake. That is, the fake

58:29

meaning to fake that they

58:31

have forsworn those weapons,

58:34

and then we find out that they

58:36

haven't, and so it

58:38

turns out they're doing it, and they're lying about

58:40

doing it or hiding that they're doing it. We've

58:42

run into that all the time. I

58:45

remember the Soviet Union said it signed

58:47

the Biological Weapons Convention. They

58:49

ran a very large biological

58:51

warfare program. They just said they didn't. All

58:54

right, but take the situation now. What

58:56

would be the harm of the

58:59

US signing up to such a thing, at

59:01

least building the moral opprobrium

59:04

around lethal autonomous weapons?

59:06

Because you're building something else at the same time,

59:08

which is an illusion of safety for

59:11

other people. You're conspiring

59:14

in a circumstance in which they are lied to

59:16

about their own safety, and I

59:20

feel very uncomfortable doing that. Paul

59:23

Scharre sums up the challenge as well.

59:25

Countries have widely divergent views

59:27

on things like a treaty, but there's

59:29

also been some early agreement that at

59:32

some level we need

59:34

humans involved in these kinds of decisions.

59:37

What's not clear is at what level. Is that

59:39

the level of people choosing every

59:41

single target, or people deciding

59:44

at a higher level what kinds of targets are

59:46

to be attacked? How far are we comfortable

59:48

removing humans from these decisions. If

59:51

we had all the technology in the world, what

59:54

decisions would we want humans to make, and

59:56

why? What decisions in the

59:58

world require uniquely human judgment, and

1:00:01

why is that? And I

1:00:03

think if we can answer that question, we'll

1:00:05

be in a much better place to grapple with

1:00:08

the challenge of autonomous weapons going forward.

1:00:17

Conclusion: choose your planet.

1:00:20

So there you have it: fully autonomous

1:00:23

lethal weapons. They might

1:00:25

keep our soldiers safer, minimize

1:00:27

casualties, and protect civilians,

1:00:31

but delegating more decision making

1:00:33

to machines might have big risks

1:00:36

in unanticipated situations. They

1:00:39

might make bad decisions that could spiral

1:00:41

out of control with no Stanislav

1:00:44

Petrov in the loop. They might

1:00:46

even lead to flash wars. The

1:00:49

technology might also fall into

1:00:51

the hands of dictators and terrorists,

1:00:54

and it might change us as well

1:00:56

by increasing the moral buffer between

1:00:59

us and our actions. But

1:01:01

as war gets faster and more complex,

1:01:04

will it really be practical to keep

1:01:06

humans involved in decisions? Is

1:01:09

it time to draw a line? Should

1:01:11

we press for an international treaty to

1:01:14

completely ban what some call

1:01:16

killer robots? What about a

1:01:18

limited ban or just a national

1:01:21

ban in the US? Or

1:01:23

would all this be naive? Would

1:01:25

nations ever believe each other's promises?

1:01:28

It's hard to know, but the right

1:01:31

time to decide about fully autonomous lethal

1:01:33

weapons is probably now, before

1:01:35

we've gone too far down the path. The

1:01:39

question is, what can you do?

1:01:42

A lot, it turns out. You

1:01:44

don't have to be an expert, and you don't

1:01:46

have to do it alone. When enough

1:01:48

people get engaged, we make wise

1:01:51

choices. Invite friends over

1:01:53

virtually for now, in person

1:01:55

when it's safe, for dinner and debate

1:01:58

about what we should do. Or

1:02:00

organize a conversation for a book

1:02:02

club or a faith group or a campus

1:02:04

event. Talk to people

1:02:06

with firsthand experience, those

1:02:08

who have served in the military or been

1:02:10

refugees from war. And don't

1:02:13

forget to email your elected representatives

1:02:15

to ask what they think. That's

1:02:17

how questions get on the national

1:02:20

radar. You can find lots

1:02:22

of resources and ideas at our website

1:02:24

Brave New Planet dot org. It's

1:02:27

time to choose our planet. The

1:02:30

future is up to us. I

1:02:40

don't want a truly autonomous car.

1:02:42

I don't want to come to the garage and it confesses,

1:02:45

I've fallen in love with the motorcycle and I

1:02:47

won't drive you today because I'm autonomous.

1:02:55

Brave New Planet is a co-production of the Broad

1:02:57

Institute of MIT and Harvard, Pushkin Industries,

1:02:59

and the Boston Globe, with support

1:03:02

from the Alfred P. Sloan Foundation. Our

1:03:04

show is produced by Rebecca Lee Douglas

1:03:06

with Mary Doo. Theme song

1:03:09

composed by Ned Porter. Mastering

1:03:11

and sound design by James Garver, fact

1:03:14

checking by Joseph Fridman and a stitt

1:03:16

An enchant. Special

1:03:19

thanks to Christine Heenan and Rachel Roberts

1:03:21

at Clarendon Communications, to

1:03:23

Lee McGuire, Kristen Zarelli and Justine

1:03:25

Levin-Allerhand at the Broad, to Mia Lobel

1:03:28

and Heather Fain at Pushkin, and

1:03:31

to Eli and Edythe Broad, who made

1:03:33

the Broad Institute possible. This

1:03:36

is Brave New Planet. I'm Eric

1:03:38

Lander.
