Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
Use Ctrl + F to search
0:00
If
0:00
you're struggling with your diet, just can't
0:03
sustain those healthy eating habits, you're not
0:05
alone. Might I suggest the Plant Power
0:07
Meal Planner? Our digital toolbox
0:09
of unlimited access to thousands of
0:11
customizable plant based recipes integrated
0:14
with grocery delivery. Expert support
0:16
seven days a week and so much more.
0:19
And to kick start our health intentions
0:21
this new year, we're offering you twenty
0:23
dollars off a one year membership with
0:26
the code power hour twenty throughout
0:28
the entire month of January.
0:31
To learn more and to sign up, go to meals
0:33
dot richroll dot com. Again,
0:35
that's promo code power twenty for
0:38
twenty dollars off at meals
0:40
dot richroll dot
0:41
com. Okay. Let's
0:43
do the show.
0:54
For me to live an ethical life, it's not enough
0:56
just to say, I'm gonna obey some
0:58
simple moral rules like don't steal,
1:01
don't cheat, don't hurt other people, you
1:03
have to think also about what can I
1:05
do positively given the advantages
1:08
that I have and the problems
1:10
that we have in the world? It's often
1:12
easier to see how you can relieve suffering
1:15
than how you can boost happiness. You
1:17
know, some people say, you should
1:20
be a negative utilitarian and only
1:22
focus on reducing pain and suffering.
1:24
I don't think that's right, at least theoretically,
1:27
it's not right because if you
1:29
could greatly increase the happiness of
1:31
a large number of people and do
1:33
that without causing any suffering
1:35
or maybe cause, you know, mild headaches to
1:37
a few people, clearly that would be the right thing
1:39
to do. So it's not that
1:41
the positive doesn't count at all in
1:43
the scales. It's just that given
1:45
the way the world is, the negative
1:48
pain and suffering is so much more
1:50
apparent and in a way,
1:52
so much easier to prevent in
1:55
the sense that we know what we could do that would
1:57
prevent it, though it may be hard to bring that about. But
1:59
sometimes in terms of making people happier,
2:01
we don't even really know how
2:03
to do that. The
2:14
rich roll podcast. Hey,
2:20
everybody. Welcome to the podcast. My
2:22
guest today is just an absolute living
2:24
legend. His name is Peter Singer
2:27
and he is perhaps the world's most
2:29
influential living philosopher.
2:31
A grandfather of both the
2:33
modern animal rights and effective
2:35
altruism movements, Peter is a
2:38
professor of bioethics at Princeton, and
2:41
Laureate professor at the University of
2:43
Melbourne. He's published several books
2:45
on our moral responsibility to
2:47
alleviate suffering, including
2:50
the highly influential book Animal Liberation
2:52
and a book called The Life You Can Save,
2:54
both of which are books we cover in this conversation.
2:57
I should say, as an aside and
3:00
as a gift to our listeners, Peter
3:02
has very generously offered
3:04
to provide everybody with a free
3:06
copy of his book, The Life You Can Save,
3:09
to anyone who wants one. To
3:11
get your copy, visit the life you can
3:13
save dot org slash rich roll
3:15
or click the link in the description below.
3:18
Free copies are available for
3:20
US residents only, but all listeners
3:22
regardless of location can download the
3:24
ebook or audiobook for free.
3:26
And the point that I'm really driving at
3:28
is that donations to Peter's
3:30
Save Lives Fund can also be made by
3:33
this link. And all donations there will
3:35
be matched dollar for dollar up to twenty
3:37
five thousand dollars. Thanks to a very
3:39
generous anonymous
3:41
donor. I loved meeting Peter.
3:43
I love talking to him. I really hope you enjoy
3:45
this conversation. It's all coming up really
3:47
quick. But first,
3:53
We're brought to you today by Athletic Greens.
3:56
Most people who listen to this podcast
3:59
already care about what they eat since
4:01
that is such a consistent
4:03
theme of this show. But even
4:06
the best among us fail to hit that
4:08
super pure whole food
4:10
bull's eye every single day. Myself
4:13
definitely included, which
4:15
is why it's wise to back
4:17
pocket a nutritional insurance policy.
4:19
For the last several years, my go
4:21
to for this for so many reasons
4:23
is, has been, and
4:25
will continue to be AG1
4:27
by Athletic Greens. First,
4:29
it's super convenient, replacing that
4:32
cabinet of supplements with just
4:34
one thing with everything you need
4:36
to meet your vitamin and mineral needs.
4:39
Just one scoop of AG1 in your
4:41
morning smoothie or simply easily
4:43
dissolved in water, tastes great.
4:45
It contains seventy five high quality
4:47
vitamins, minerals, whole
4:49
food sourced superfoods, probiotics,
4:51
and adaptogens, all of which keeps
4:53
my energy high and sustained.
4:55
It improves focus, it supports
4:58
immunity, and it helps balance my
5:00
gut health. Yes. It's vegan.
5:02
It's also gluten free. And for
5:04
all you pro athletes out there, it's NSF
5:06
certified for sport, which is huge.
5:09
Basically, a simple daily micro
5:11
habit with serious macro benefits.
5:13
So reclaim your health, and arm your
5:15
immune system with convenient daily
5:17
nutrition. To make it easy, Athletic
5:19
Greens is gonna give you a free one
5:21
year supply of immune supporting vitamin
5:23
D and five free travel packs
5:25
with your first purchase. All you gotta do
5:27
is visit athletic greens dot com
5:30
slash rich roll. That's athletic greens
5:32
dot com slash rich
5:34
roll to take ownership over your health
5:36
and pick up the ultimate daily nutritional
5:38
insurance. It's a new
5:40
year, people, which means I
5:42
don't have to tell you, you know what it means. It's
5:44
time for a new website because
5:46
that Blogspot thing or a Typepad
5:48
thing or whatever it is that you
5:50
created back in two thousand eight.
5:52
It's just not cutting it. You know who
5:54
I'm talking about. I see you. So
5:56
whether you're getting that side gig off the
5:58
ground, importing handmade rugs or restoring
6:01
antique motorcycles or whatever
6:03
it is, you need a website
6:05
facelift. And my friends at Squarespace
6:07
they've got you covered. Squarespace has
6:09
the tools you need to get your business off the
6:11
ground, including e commerce templates,
6:14
inventory management, a simple checkout
6:16
process and secure payment. So whatever
6:18
you sell, Squarespace has merchandising
6:20
features to make your products look their best
6:22
online. And you can do it all yourself
6:25
with ease. It's actually bonkers just
6:27
how simple all of it is to
6:29
use. There's no complex coding skills
6:31
or design degrees required. Squarespace
6:33
has all sorts of built in analytics tools
6:36
and a suite of integrated features. So
6:38
you can gain all these powerful insights
6:40
into who's visiting your site and
6:42
how they're interacting with your content,
6:44
including page views, traffic sources,
6:46
time on site, most read content, audience
6:49
geography, and tons more. Here's what
6:51
you're gonna do. You're gonna head on over to
6:53
squarespace dot com slash rich roll, and
6:55
you're gonna get a free trial. And when you're
6:57
ready to launch, use offer code
6:59
rich roll to save ten percent off your
7:01
first purchase of a website or
7:02
domain. That's squarespace dot com
7:05
slash rich roll. Offer code rich
7:07
roll.
7:11
Okay. Peter Singer. Peter's
7:14
work has had just a profound influence
7:16
on my life. So it was
7:18
an absolute honor to host this
7:20
discussion, a discussion about
7:23
the ethical obligations we
7:25
have to others, to
7:27
human and nonhuman lives alike
7:29
and how these ideas
7:31
that Peter thinks so deeply about
7:33
can shape our choices and actions
7:35
in the real world. So without
7:37
further ado, here's me and
7:39
Peter Singer. Well,
7:44
Peter, it's a real honor to have you here today.
7:46
As somebody who's admired your work for a very
7:48
long time. I'm thrilled with the prospect
7:50
of being able to talk to you and this
7:52
conversation will have been preceded by
7:54
me giving an introduction to
7:56
your work, your kind of formal bio, but I'm
7:58
curious how you articulate,
8:01
what it is that you
8:02
do, like, how do you explain your
8:04
kind of focus and curiosity
8:06
in the world. Right.
8:09
So I got interested in
8:11
philosophy as an undergraduate. But
8:13
I was always interested in the part
8:15
of philosophy that connects to real
8:17
life and that can make a difference to how
8:19
we live. So
8:22
some of the courses I did were discussing how
8:24
we know anything about the world, how do we know that
8:26
we're sitting at a table now, that I'm not
8:28
dreaming -- Mhmm. -- that there's not an
8:30
evil demon who's given me
8:32
illusions. Those are interesting
8:34
intellectual problems, but I
8:36
certainly wouldn't have wanted to spend my life
8:38
doing them. But once I realized that
8:40
ethics, the part of philosophy that
8:42
connects with life, really can make a
8:44
difference to how
8:46
you think about your life, your
8:49
values, and how
8:51
you act in the world makes a difference
8:53
to changing the world. And that seemed to me
8:55
to be something important and worthwhile
8:57
as well as intellectually interesting.
8:59
Yeah. I mean, what's interesting is that you have
9:02
fulfilled that promise in
9:04
an era and a time in which
9:06
there does seem to be a
9:08
disconnect between the
9:10
kind of academic pursuit of philosophy
9:12
and the true utility of it.
9:14
This came up in your conversation with Ryan
9:16
Holiday where he was saying, back in,
9:18
you know, ancient Greece and ancient Rome,
9:21
politics and philosophy
9:23
were very commingled pursuits.
9:26
Whereas now, they don't really
9:28
seem to meet, but I look at you as somebody
9:30
who's had a profound impact
9:32
on culture and
9:34
how we think about ethics and
9:36
morality in a very utilitarian and
9:38
real
9:38
way. That's true.
9:41
Although, I think I've been fortunate
9:43
in the period that I've been living and working
9:46
in philosophy, it has moved
9:48
back more like that Greek
9:50
ideal, if you like, that it
9:53
does connect with how we live. And there are many
9:55
of my students, for example, who are interested
9:57
in taking philosophy courses precisely
9:59
for that reason. They want to think
10:01
about these issues. And that's different from
10:03
when I was an undergraduate, when I was still
10:06
in this period of what was known as ordinary
10:08
language philosophy or linguistic
10:10
analysis. Where a lot of
10:12
philosophers said philosophy
10:14
doesn't teach you how to live. It
10:16
simply helps us to understand the meanings
10:18
of the moral terms. Mhmm. And
10:20
then the student movement
10:22
of the nineteen sixties started
10:24
to get things back on track. So
10:27
with the Vietnam War, students wanted
10:29
more relevant courses and
10:31
one of the things that
10:33
philosophy could do as well. There's
10:35
this ancient tradition of when is it
10:37
right to go to war, of just war
10:39
theory. And they started talking about
10:41
that. And then they started talking
10:43
about civil disobedience, when is one
10:45
justified in disobeying the law. And so I think
10:47
then philosophy got back
10:49
on track to those sorts of topics and
10:51
moved away from the idea that it's somehow
10:53
a neutral activity, telling you what it
10:56
means to say something is good or
10:57
bad. Mhmm. Yeah. I
10:59
feel that that era was
11:02
sort of supplanted by, you know, the
11:04
greed is good, sensibility
11:06
of the eighties and perhaps
11:08
the ennui
11:10
and the cynicism of Gen X, which
11:13
is my generation. But I
11:15
too look at this newer
11:18
generation, the population, the age
11:20
range of the students, I'm sure you
11:22
teach, who do seem very concerned
11:24
about ethics, morality,
11:26
and impact in terms of where they're investing
11:28
their academic curiosity and
11:30
their career choices like they really
11:32
wanna be on a track that
11:34
is going to have a net positive impact on
11:36
the world, which is very different from
11:38
the sensibility of my generation
11:40
when we were in college.
11:42
Yeah. I think there are always some, at least, you
11:44
know, I've been teaching at Princeton since
11:46
nineteen ninety nine. Mhmm. And I think there are
11:48
always some students who are interested
11:50
in how they could have an
11:52
impact on the world. But
11:54
I agree that it's come back more
11:57
strongly in the last few years, with
11:59
more students wanting to take courses
12:01
for that
12:01
reason. Which of course begs the
12:03
question of how do we think about morality,
12:06
positive impact, ethics, etcetera.
12:08
So when the question is posed
12:10
to you, you know, what does it
12:12
mean to live an ethical
12:14
life? How do you begin to
12:16
unpack that and respond to
12:18
it? In a meaningful way that
12:20
that can help direct somebody
12:22
who's, you know, wanting to know the
12:24
answer to
12:24
that. Yeah. So
12:26
I ask them to think about
12:28
the impact that they can have,
12:30
about the consequences of their
12:32
actions. What they can do to
12:34
make the world a better place than it would have
12:36
been if they hadn't lived in it.
12:38
And clearly, there are a lot of
12:40
opportunities for that. I mean, especially if you're
12:42
living in an affluent society like the United
12:44
States or any of the other
12:46
affluent countries. And you see that
12:49
a lot of people in extreme poverty in
12:51
other countries. You see that
12:53
we're damaging the climate of our
12:55
planet. You see that we're
12:57
inflicting vast amounts of suffering
12:59
on non human animals in factory
13:01
farms. There are all
13:03
sorts of choices that you have to make
13:05
about how you're gonna live, what
13:07
you're gonna do as a career choice, which
13:10
students are thinking about. But also, what
13:12
are you doing with your spare cash?
13:15
What do you eat, all of those -- Mhmm. --
13:17
things that we can now see as
13:19
ethical questions. So for me
13:21
to live an ethical life, it's not enough
13:23
just to say I'm gonna
13:25
obey some simple moral rules like
13:27
don't
13:27
steal, don't cheat, don't
13:29
hurt other people. You have to think
13:32
also about what can I do positively
13:34
given the advantages that I
13:36
have and the problems that we have in the
13:38
world? Mhmm. And your
13:40
particular lens for that is the
13:42
reduction of suffering. That seems to
13:44
be kind of like the lever through
13:46
which all of this calculus is
13:48
made. Yes, that's
13:50
right. It's primarily the reduction of
13:52
suffering. I do think that producing
13:55
happiness or pleasure is a
13:57
value as
13:57
well. But that's a
13:58
harder thing to get your hands around. Right?
14:01
Exactly. Yes. That's right. It's
14:03
often easier to see how you can
14:05
relieve suffering than
14:07
how you can boost happiness. And
14:09
so, you know, some people say
14:11
you should be a negative
14:13
utilitarian and only focus on
14:15
reducing pain and suffering. I
14:17
don't think that's right, at least
14:19
theoretically, it's not right because if
14:22
you could greatly increase the happiness of
14:24
a large number of people and
14:26
do that without causing any
14:28
suffering or maybe cause, you know, mild
14:30
headaches to a few people, clearly that would be
14:32
the right thing to do. Mhmm. So it's
14:34
not that it's not that the positive
14:36
doesn't count at all in the scales. It's
14:38
just that given the way the world
14:40
is, the negative, the pain, and
14:42
suffering is so much more
14:44
apparent. And in a way,
14:46
so much easier to prevent
14:48
in the sense that we know what we could do that would
14:50
prevent it, though it may be hard to bring that about.
14:52
But sometimes in terms of making people happier,
14:55
we don't even really know how to
14:57
do
14:57
that. Right. Right. And in the
14:59
context of the reduction of
15:01
suffering, this is kind of
15:03
the, you know, the
15:05
landscape from which you're thinking
15:07
on animal liberation and the like,
15:09
and I wanna get to that, but I kinda wanna put
15:11
that aside for now and
15:13
focus on something that's a little
15:15
bit more current, which is the,
15:17
you know, you being this sort of
15:19
godfather of the effective altruism
15:21
movement, a movement which is
15:23
very much in the news at the moment
15:25
as a result of Sam
15:28
Bankman-Fried and
15:30
FTX and all of that, which has kind of
15:32
put this idea about
15:34
how to effectively give to,
15:36
you know, have the greatest impact on
15:38
the reduction of suffering under the
15:40
microscope of people who
15:42
are now critical of it. And I'm interested, I
15:44
know you've written about this, in
15:46
parsing the behavior
15:48
of a human
15:50
being from the philosophical
15:53
underpinnings of this
15:55
movement that you helped pioneer.
15:58
Yeah. So
16:01
I think the effective altruism
16:03
movement in general is saying we
16:05
should try to make a
16:07
positive difference to the world as I've been
16:09
saying. And we should use reason
16:11
and evidence to find the best way of
16:13
doing that. And
16:15
one of the things that the movement
16:17
has talked about is making a
16:19
positive difference doesn't necessarily
16:21
mean becoming a doctor and working
16:23
in a low income country
16:25
or working for one of the
16:27
charities that are helping people in poverty,
16:29
it might mean actually trying
16:31
to earn a lot of money and
16:33
then using that to
16:35
support organizations that are doing good.
16:37
That can be a valuable
16:40
thing to do. And I think
16:42
Sam Bankman-Fried set out to do
16:44
that. I know that he had a conversation with
16:46
Will MacAskill early on, Will
16:49
being one of the founders of the effective
16:51
altruism movement. And Will
16:53
suggested that because he was mathematically
16:56
gifted, that might be an opportunity
16:58
for him. And I know others, I've had
17:00
friends and students who were in a similar
17:02
situation, who've done that and have given a
17:04
lot of money to effective
17:06
causes. So it certainly can be a good thing to
17:08
do. But Sam, I think, was
17:08
obviously uniquely
17:10
successful, accumulating a huge amount of wealth doing that,
17:15
and became a kind of a poster
17:17
child in that way for earning
17:20
to give. But he was
17:22
clearly also a huge
17:24
risk taker and somebody
17:26
who was prepared to
17:29
break standard rules of
17:31
how you do business and how
17:33
you look after other people's money that's
17:35
entrusted to you. And I think that's
17:38
what brought about his downfall.
17:40
The fact that he took risks, they didn't all
17:42
come off. He tried to patch it up
17:44
by shifting his customers'
17:46
trust funds, basically -- Mhmm. -- to
17:48
his Alameda Research investment
17:50
fund. And
17:53
clearly, he shouldn't have done that. And I don't
17:55
think anybody in the effective altruism
17:58
movement thought that the idea of earning
18:00
money to give to good causes would
18:02
lead to somebody so flagrantly --
18:04
This is alleged, I suppose, we should say.
18:07
But if the charges are
18:09
correct, I don't think anybody in the effective
18:11
altruism movement thought that anybody would
18:13
so flagrantly violate
18:15
those basic rules of
18:17
sound practice and ethical
18:19
practice. Right. Well, you know, his misdeeds
18:21
and malfeasance will be, you
18:23
know, adjudicated. But
18:26
from the outside looking in, it
18:28
doesn't look great. And, you know, just to kind
18:30
of back up for a minute, effective altruism
18:33
being this movement
18:36
whereby we try to sort of
18:38
reduce the amount of emotional
18:41
attachment we have to
18:43
philanthropic ends and look at it from
18:45
a purely objective point of view to
18:47
to understand the best use
18:50
of every dollar given to have
18:52
the maximum impact in terms
18:54
of the reduction of suffering. And
18:56
and those outlets often aren't the
18:59
sexy ones or the ones
19:01
that we feel emotionally attached
19:03
to because we have a relative who's suffering from a
19:05
certain disease. It happens to be things
19:07
like malaria nets and the like that
19:09
are cheap, easy solutions that end
19:11
up saving a lot of lives. And
19:13
in the case of Sam Bankman
19:15
Fried, I see, whose
19:18
motives are in question. Like,
19:20
it there is an argument that
19:22
perhaps he leveraged this movement
19:24
because it looked good from a from a
19:26
sort of PR perspective to
19:28
say that he was an effective
19:31
altruist. And I'm not so sure, like, how much money he actually
19:33
ended up giving. He gave money to lots
19:35
of different places. And so this
19:37
sort of critique of the movement is that
19:39
it sets in place unhealthy
19:43
incentives whereby the
19:45
end justifies the means. Right?
19:47
Like, no matter what end or
19:49
what means you pursue to
19:51
accumulate a certain amount of
19:52
wealth, it's okay because those resources will
19:54
be deployed in an altruistic manner.
19:57
Yeah. As for his
19:59
original motives, I'm prepared to believe
20:01
that he did set out on that career
20:03
in order to be able to give.
20:06
I think that's the
20:09
evidence early on. It
20:10
wasn't that, right from the start, he thought, oh, I'll pretend
20:12
to be an effective altruist because that'll make
20:14
me more successful personally. And
20:17
from figures that I've seen, he certainly gave
20:19
well over one hundred million dollars
20:21
to effective charities. Now that's not very
20:23
much when you're worth twenty billion,
20:25
twenty five billion dollars. That's
20:27
true. But I think he was
20:29
on track to do a lot more. He
20:32
also gave political
20:34
donations, and some of those were
20:36
directed towards making the world safer. For example,
20:38
he supported a candidate who was an expert
20:40
on pandemics because he believed that the
20:42
US is not doing enough for pandemic preparation, and I
20:45
think that's obviously true. So
20:47
I don't think that
20:50
it was always just a cover. But it
20:52
may be that he got carried away with his
20:54
success and didn't want to admit, for example,
20:56
that he'd taken a
20:58
big hit because of a bad investment from
21:01
Alameda and so tried to cover that
21:03
up, whereas if he'd admitted
21:03
that, maybe Alameda would have
21:05
gone bankrupt, but he would have
21:09
still been wealthy and wouldn't be facing
21:11
jail. Mhmm. So I
21:13
think that's probably what
21:15
went wrong. But in terms
21:17
of what you were asking about the
21:19
idea that the end justifies the
21:21
means, I think people often very
21:23
simplistically say, oh, well, you know, he thought
21:25
that the ends justify the means and they
21:26
don't. But if you stop and think about
21:29
it,
21:29
I think everybody thinks that sometimes
21:31
the end does justify the means.
21:33
And the classic example of that is, you know,
21:36
if you were hiding a Jewish
21:38
family in your cellar in Nazi
21:40
Germany and the Gestapo came to
21:42
your
21:42
door. You might think it's wrong to tell lies, that
21:44
telling lies to the state authorities is
21:46
clearly wrong. But
21:50
if you can save the family you're hiding by
21:52
telling a lie to the Gestapo, obviously,
21:54
you should do that. So
21:56
the question isn't do the
21:58
ends ever justify the means? The question is, when do
22:00
the ends justify the means?
22:03
When are the means
22:05
too bad? Or when is the risk
22:07
too great or the means not
22:09
sufficient? And you have to look at
22:12
those on a case by case
22:14
basis, right? So that
22:16
would play out in terms of a young
22:18
person pondering career choices. They
22:20
could either, you know, go to
22:22
the eighty thousand hours website and
22:25
look at certain types of
22:27
impact oriented careers
22:29
or they could become an
22:32
investment banker and try to accumulate
22:34
as much wealth as possible for the
22:36
purposes of deploying
22:38
that at a later time. And
22:40
and from your perspective, both
22:42
of those are meritorious
22:45
and worthy of consideration.
22:47
Yes. That's
22:48
right. And in fact, if they go to eighty thousand hours,
22:51
there's a lot of other things that they could
22:53
do as well. One
22:55
of the careers suggested is becoming
22:57
a research scientist working in areas that
22:59
will make a difference to people in
23:01
extreme poverty. Another is to go
23:03
into politics. Politics needs more
23:05
people who are really
23:07
serious about helping people
23:09
in poverty, doing something about climate
23:12
change. So there's a lot of different
23:14
options that people can have. And
23:17
in fact, the effective altruism movement did
23:19
make quite a thing about earning to give in the
23:21
early days. I think partly because that
23:23
was a novelty and it
23:25
was something that got media attention. And when
23:28
the movement was small, it was
23:30
important to get media attention for
23:32
new ideas. So Will
23:34
MacAskill in particular made
23:36
quite a feature of this. But
23:38
more recently, and
23:40
before the FTX
23:43
collapse, so not specifically related to
23:45
Sam Bankman-Fried, they
23:48
have reduced the emphasis that they put
23:50
on that partly because of the
23:52
idea that one of the
23:54
problems with new
23:56
organizations that have great ideas
23:58
about changing the world in the right
24:01
direction is that it's hard for them to get enough talented
24:03
people working for them. So smart
24:06
people like Sam might now
24:08
be more likely to be, you
24:10
know, it might be suggested that they go
24:12
into helping one of these startups to
24:14
really get organized and to scale up and
24:16
really make a big difference, rather than to earn to
24:18
give, just because of the sense
24:20
that it's not always lack of
24:22
financial
24:22
resources. It may be lack of talented
24:24
people that are slowing things down.
24:27
Mhmm.
24:27
Mhmm. Yeah. It's
24:30
interesting. In
24:32
thinking
24:32
about, you know, the pursuit
24:35
of an ethical life, and as somebody
24:37
who, you know, is a, you
24:39
know, moral philosopher, why
24:41
is this important? Is there a
24:43
morality that exists that
24:45
is a certain kind of, like,
24:47
is there a universality
24:49
to that truth? I mean, you're an atheist. Right? So
24:52
from whence does, you
24:54
know, this sense of right
24:56
and wrong, and
24:58
pursuing an ethical life,
25:00
from where does that derive?
25:02
Yes, I am an atheist. So, obviously,
25:05
I don't think it derives from God, from any
25:07
god given commands.
25:09
But for quite a while, I
25:11
didn't think there was an objective ethical
25:14
truth. That was part
25:16
of the era in which I was educated
25:18
in studying philosophy. A lot of philosophers didn't
25:20
think there was, and there has been
25:23
a shift among a number of philosophers, and
25:25
I'm one of them, towards
25:27
the idea that, no, there are some things that
25:29
we can really see as self
25:32
evidently good or, often more to the
25:34
point, self evidently bad. So
25:36
for example, when somebody
25:38
experiences agony, if a
25:40
child is going through
25:42
agony, whether it's an illness or
25:44
an injury or some
25:46
malevolent person deliberately hurting
25:48
them. That's just a bad thing.
25:51
And the
25:52
universe would be a better place if
25:55
that child were not experiencing agony.
25:58
So I think from the
26:00
self evidence of that
26:02
judgment and the self evidence of the feeling
26:04
we have ourselves when we experience severe
26:06
pain, that that's a
26:08
bad thing. We can generalize
26:11
that to other sentient beings.
26:13
Any being who can experience agony, it's better
26:16
if they don't. And any
26:18
being who can experience an
26:21
enjoyable,
26:21
happy, blissful, worthwhile kind
26:24
of life, a fulfilling life for
26:26
them, it's better if they can.
26:27
Mhmm. And how are you
26:30
making judgments, adjudicating better
26:32
and good? You
26:33
know what I mean? Like, if
26:36
this is not emanating
26:38
from some kind of spiritual connection,
26:40
you know, even in a non
26:42
dogmatic, non religious way, it's
26:45
curious to ponder, you
26:47
know, the origin point of why
26:49
the world is better if we do this versus
26:52
that. But I think we we can see that in our
26:54
own case. We we, you know, when
26:56
we experience agnew, we
26:58
just can't avoid seeing that as
27:00
a bad thing for us. And then
27:02
when we take a broader point
27:05
of view, the nineteenth
27:07
century utilitarian Henry Sidgwick spoke
27:09
about taking the point of view of the universe.
27:11
And he was an agnostic
27:13
really, rather than an atheist, but he
27:15
wasn't saying that the universe has a
27:17
point of view. He was just saying,
27:19
imagine that you're looking
27:22
on the universe as a whole and
27:24
all the sentient beings in it. Then
27:26
you can see that your own interests, your
27:28
own well-being is no
27:30
more important from that perspective than
27:32
that of any other being who can
27:34
have similar kinds of experiences
27:37
of pain or pleasure. And
27:39
so,
27:42
as rational beings, we should try
27:44
to reduce the pain and
27:46
agony that is experienced and
27:48
increase the pleasure and happiness because
27:50
that's what we want for ourselves and
27:52
we see that we are just one of
27:54
many similar beings who have
27:56
those
27:56
experiences. Mhmm.
27:57
So much of your work
27:59
is focused on the
28:01
responsibility of the individual, like, should I
28:03
give money to this versus that?
28:06
Should I not eat animals? Like,
28:08
all of these sorts of choices
28:11
that can guide us towards, you know, kind of a more
28:14
ethical way of living. But we live
28:16
in a culture in which incentives
28:19
and kind of momentum
28:21
are pushing us away
28:23
from the kind of
28:26
economy of making those choices. In other
28:28
words, like, those choices tend to kind of cut against
28:30
the grain of, you
28:32
know, what everything else is pushing
28:34
us
28:34
towards. And so I
28:37
can't help but think about incentive structures at
28:40
large and, your
28:42
work being
28:42
so focused on the individual, how
28:46
you contemplate, like, system
28:48
change, like governmental regimes
28:50
or, you know, economic,
28:53
tectonic plates that, you
28:53
know, set up situations where we're often making
28:55
the wrong choice versus
28:59
creating
28:59
a new system in which the choices that
29:01
you're advocating for become the
29:04
easier kind of more accessible and
29:06
more incentivized
29:07
choice. Right. Well,
29:11
I certainly want to see changes in the
29:13
systems and in the incentives that the systems
29:16
create. And one of the most obvious
29:18
cases here would be climate change
29:20
because individuals also
29:22
make choices, of course, about the
29:24
greenhouse gases that they emit, or,
29:26
again, what they eat makes an impact
29:29
on the greenhouse gases that they're responsible for
29:31
as does whether they drive a car
29:33
and if they do what sort of car to
29:35
drive. But it's really
29:37
important and shouldn't be that difficult for
29:40
governments to change the incentives there
29:42
by carbon taxes, for example,
29:45
on what produces emissions.
29:47
So that's an area
29:49
where going into politics can be
29:51
a really important career because you
29:53
can help to make governments make
29:55
those choices. And that's true of the other
29:57
things that I talk about at an individual level
29:59
as well. Governments do
30:01
give significant amounts to foreign aid.
30:04
They could give more. The United States
30:06
actually gives very little
30:08
as a percentage of its gross national
30:10
income compared to European countries
30:13
generally. And could give more and could
30:15
also give it more effectively. And
30:17
of course, some governments have
30:19
better laws and regulations regarding the treatment
30:21
of animals. Even within the United
30:23
States, California has better regulations
30:25
for farm animals than most
30:27
other states in the United States because
30:29
it has citizen initiated
30:31
referendum and it's passed propositions
30:34
to give animals a bit more
30:36
room than they have in
30:38
other states. There are definitely things
30:40
that you can do at the policy level and that it's
30:42
important to do at the policy level.
30:44
But some of these things are really difficult
30:46
to bring about change. And for example, trying
30:48
to increase the United States' foreign
30:51
aid has been a long struggle that so far
30:53
has been quite unsuccessful.
30:55
Even presidents who are sympathetic, like
30:58
President Obama, who at one stage talked
31:00
about raising U.S. foreign aid
31:02
to zero point five percent of gross national
31:04
product, which would still be
31:06
only about half of what the top nations
31:08
in the world give, have completely failed
31:10
to do that. So
31:12
if that's so difficult to achieve,
31:16
then there is something that we can do
31:18
individually and that can make a difference. So
31:20
let's do that. And
31:22
similarly in terms of what we eat,
31:24
it's also very hard to get laws
31:26
and regulations
31:28
in the United States to give animals more
31:30
space to move around. As I said, there
31:32
are exceptions with those states with citizens
31:34
initiated referendum because it does seem that
31:36
ordinary Americans when given
31:38
that choice will choose better conditions
31:41
for animals. But because the
31:43
agribusiness lobby is so powerful
31:45
at the federal level, it's been
31:47
impossible to get any laws passed
31:49
at the federal level to give animals room to
31:51
move. And that's a contrast with
31:53
Europe, where the entire European Union
31:55
has much better laws than
31:57
the United States has.
31:59
So again, you know, let's try to do what
32:02
we can at the individual level. If
32:04
enough people do that, we'll weaken the power of
32:06
the agribusiness lobby because they won't be selling
32:08
so much. And we'll
32:10
be in a better position to produce
32:12
that systemic
32:13
change. Mhmm. Well, in the context
32:15
of of animal rights, this has
32:17
been a movement built upon the
32:19
shoulders of the individual. Like, it really has
32:21
been a grassroots movement. And, you know,
32:23
I wanna get into how this all began
32:25
with you. You wrote animal liberation
32:27
in nineteen seventy five. I wanna hear
32:29
how that came into being,
32:32
but in looking back upon, you know, the many years
32:34
since that book came out, it must be
32:36
quite an awesome thing to see
32:39
how much progress has been made, how much
32:41
energy is in this
32:43
movement, while also recognizing
32:45
how little has changed and how much
32:47
work
32:47
remains. Right? Like, how are you thinking about the
32:50
current status quo? Yeah. You've
32:52
got that exactly right. There
32:55
has been significant change. I
32:57
mentioned those laws in the European
32:59
Union, sort of twenty seven countries
33:01
that have better laws. And when I published
33:03
animal liberation and in seventy five and
33:05
the United Kingdom, of course, which is no longer in
33:07
the European Union. So that's
33:10
a significant change for hundreds of millions
33:12
of animals. Their lives are definitely not
33:14
ideal, but they have lives that are somewhat
33:16
better than they were in the seventies.
33:19
But on the other hand, factory
33:21
farming still continues here in the
33:23
United States. A lot of it goes
33:25
on just as bad as it was
33:27
before, in some respects even worse because, for
33:29
example, the breeding of
33:31
chickens for meat has
33:34
increased the speed at which they put on weight to such
33:36
a point that their immature
33:38
legs can't really bear the
33:40
weight of their bodies. They're very young birds when they're
33:43
sent to market; they're about six weeks
33:45
old. And
33:47
they're in pain just from trying to carry their
33:49
body weight and sometimes their legs will
33:52
collapse under them and they'll just be unable
33:54
to move. Then because this is such
33:56
a mass production industry with
33:58
twenty thousand birds in a single
34:00
shed, they're probably gonna starve to
34:02
death or or dehydrate to death because
34:04
they can't walk to food and
34:06
water. And basically nobody cares about individual
34:08
chickens. Nobody will even see that
34:10
there's a down bird and pick it up and
34:12
humanely
34:14
kill it. So, you know, those things have actually got worse.
34:16
Mhmm. Plus, of course, in other countries in the
34:18
world, particularly in East
34:20
Asia where they
34:23
become more prosperous, which in itself would be a good thing. But that means they're
34:25
producing a lot more meat, more
34:27
demand for meat. And
34:30
factory farming has hugely increased
34:32
there. And
34:33
again, it's pretty
34:36
much unregulated. Mhmm.
34:37
Yeah. We we can celebrate
34:40
the growth of the vegan
34:42
movement in these kind of
34:44
urban pockets across the developed western world.
34:46
But that's myopic
34:48
in that when we cast our glance
34:51
internationally, we see the expansion of a middle
34:54
class or, you know, new wealth
34:56
sectors who are going to increase their
34:58
consumption of meat at a
35:00
rate that the planet really
35:02
can't sustain. Right? And we're
35:04
seeing the decimation of the rainforest. And
35:06
with China, you know, all of
35:08
these areas where
35:10
we're seeing an increase in meat
35:12
consumption at an unprecedented level like
35:14
this is a global problem from
35:16
not just a mass suffering
35:17
perspective, but from a climate change perspective
35:19
as well. Yeah. That's basically
35:21
true. It's interesting
35:24
that some countries have actually started
35:26
on a decline in meat consumption.
35:28
Germany is one example
35:30
and Sweden is another. So,
35:32
you know, there's some hope that as we become more educated
35:35
and more understanding about what meat does
35:37
not only to animals, but
35:40
to the climate and to the environment more generally.
35:42
We just had this meeting of environmentalists
35:44
concerned to protect species.
35:48
And again, there's been a lot of writing about how meat
35:50
consumption just can't continue to grow,
35:52
that it is destroying the rain forests and
35:54
causing extinctions.
35:56
So there's there's some hope that more people will realize
35:58
that, but it's it's
36:00
difficult. And, you know,
36:02
to me, you you mentioned
36:05
pockets of people being vegan. I mean,
36:07
I think being vegan is a great diet
36:09
and a healthy diet and the best diet for
36:11
the for the planet and for animals. But I
36:13
think we have to work towards reduction
36:16
of meat consumption in the
36:19
mainstream. Because it's going to
36:21
be a long time before we get a
36:23
a vegan mainstream in in most countries.
36:24
Yeah. I mean, there
36:26
does feel like quite a bit of momentum behind
36:28
that right now. It is mainstreaming in
36:32
that so many restaurants, you can at least get vegan options and people
36:34
don't balk and they're not confused when
36:36
you -- Mhmm. -- wanna veganize an
36:38
entrée at a restaurant or what have
36:42
you. But yes, there is so much work to be done and
36:44
your question really brings up this notion
36:46
of effective activism. Like,
36:49
how do you sort of convince
36:51
the most number of people to change their habits,
36:54
to have the greatest
36:56
impact. Right?
36:58
Is it
37:00
like throwing a bucket of blood on a
37:02
runway model? You know,
37:04
at a fashion show who's wearing
37:06
a mink coat? Or
37:08
is it having a, you know, realistic conversation
37:10
with policymakers about a
37:12
slight reduction in harm that could
37:14
actually impact millions of people
37:16
and benefit millions of
37:18
animals. Like, how do you think about
37:20
carrying the message from a
37:22
utilitarian
37:23
perspective to leverage the greatest
37:26
change? I think that
37:26
as far as trying to get people to change
37:29
their diet is concerned, probably
37:32
being cool
37:34
and reasonable is better than throwing buckets of blood at people. That's
37:36
true. But, you know, we don't
37:40
fully know, and I would like to see, and
37:40
this is part of what effective altruism wants
37:43
to do. Would be to have more studies about what is the effect
37:45
on people when there
37:48
are protests that are more in
37:50
your
37:50
face than others. There's some suggestion that
37:52
it puts people off, but I don't really
37:54
know that
37:54
we know. And for example, on issues like climate
37:57
change, which seems to me to
38:00
be a really urgent issue, I can fully understand
38:03
those ecoactivists who
38:05
threw soup over
38:08
Van Gogh's Sunflowers. And let
38:10
me say, they knew it was behind glass, so
38:12
they knew it wasn't gonna damage the
38:14
original painting. But, you know, that was
38:16
a gesture to say, you know, this is really
38:18
something urgent and we're still not doing what we need
38:21
to be doing about it and we have to
38:23
do better and we have to do
38:25
it soon. So I fully sympathize with that, but
38:27
I do want to know what actually is going to work and
38:30
what is going to get governments to
38:32
take the relatively
38:34
simple steps that
38:36
they need to take -- Mhmm. -- to
38:38
shift us away from greenhouse gas emitting products both
38:41
fossil fuels and meat
38:43
in particular. How have your views
38:46
evolved since writing this book
38:48
in nineteen seventy five on this
38:50
subject matter? Well,
38:53
perhaps I was a
38:55
little naive about how easy it might have
38:57
been to change these
39:00
deeply ingrained habits
39:02
and to combat major
39:05
industries. Because I did think
39:07
that the arguments seemed to me to
39:09
be so clear. I thought that if I could just state
39:12
them clearly and rationally,
39:14
readers would
39:16
decide that they were right. They would change what they were eating. They
39:18
would talk to their friends about why it was important
39:20
to change what they were
39:21
eating. That's what I hoped.
39:24
Amazing. That's how you that's how it happened for
39:26
you. Right? Like, why shouldn't it happen for
39:28
anybody who's reading your book?
39:29
Exactly. That's right. Yeah. I mean, So
39:31
I didn't think about this issue at all until
39:33
I was a graduate student at Oxford,
39:36
twenty four years old. And I hadn't
39:38
thought about it. Now this was nineteen seventy, so it wasn't really
39:40
discussed. You didn't meet vegetarians or
39:42
certainly not western vegetarians. You might have met
39:44
some Indian vegetarians, but you
39:46
didn't meet
39:48
people who were like you, who
39:50
were vegetarians. Until at Oxford I
39:53
happened to have lunch with
39:55
a Canadian graduate student called Richard
39:58
Keshen, who asked whether there was meat in the
40:00
spaghetti sauce that was being served. And when he was
40:02
told there was, he took a salad
40:04
plate instead. And I was surprised and asked him what his problem was with it. And
40:06
he told me that he didn't think it
40:08
was right to treat animals the way we treat
40:10
them in order to turn them into food.
40:13
And I said, don't they have good lives out in the fields? And
40:16
he said, no. Increasingly,
40:18
they're crowded inside
40:20
in big dark sheds. I knew nothing
40:22
about that. I made
40:24
it my business to find out. And then
40:26
I also because I was a philosophy student,
40:28
I decided to look at what philosophers
40:30
had said about this, you know, why is it okay
40:33
to treat animals in this way? Why do
40:35
the bounds of morality, as it seemed at the
40:37
time, just stop with our species? And
40:39
I decided both that he was right on the
40:41
facts and that there wasn't an ethical
40:44
justification for disregarding the
40:46
interests of non
40:48
human animals in the way we were doing
40:49
it. So it
40:49
seemed a pretty simple argument to me. And if I could be persuaded
40:51
by that and I could show the facts to other
40:54
people and look at the
40:56
ethical arguments, that would
40:58
convince other people. And it convinced some other
41:00
people. That's that's the good
41:02
news. The bad news is that we
41:04
are still living in
41:06
societies where the majority of people are not only eating
41:08
meat, but even buying factory farmed
41:10
products, not particularly looking
41:12
for more
41:14
organic or free range or certified humane animal products.
41:16
Yeah, I think
41:18
that with greater education
41:20
around this issue also comes
41:24
concerted efforts to confuse
41:26
consumers. Right? There's a lot of
41:28
greenwashing going on and there's a lot of
41:31
energy around, you know,
41:34
kind of the grass
41:36
fed free range animals that make
41:38
people feel better about their
41:40
animal consumption. Without fully
41:42
understanding the equation,
41:44
like this idea that we actually
41:46
need the animals to regenerate the
41:49
soil and you eating your animals from these
41:51
farms is actually part of the
41:53
climate solution, and these animals live great
41:55
lives. And certainly, that's a better situation
41:58
than the factory farmed animals, which
42:00
is the big gaping problem that needs
42:02
to be solved. But I
42:04
think it allows people to
42:06
kind of fall into
42:08
sort of an acceptance
42:10
or a delusion that
42:12
their habits aren't
42:14
really resulting in the harm that they're actually resulting
42:16
in. Yeah. I said that very inelegantly,
42:18
but I think you know what I'm getting at. I
42:20
know what you're getting at.
42:21
Yes. Yeah. And in fact,
42:24
it is a delusion, I think. And I'm
42:26
not sure maybe people are aware of it,
42:28
but if you ask
42:30
people if they eat meat and when they
42:32
say yes, you ask them do they mostly buy
42:34
organic or certified humane
42:36
or grass fed, something like that?
42:38
The percentage that answers yes is
42:42
just wildly more than the amount that is actually
42:44
produced -- Mhmm. -- by a
42:46
high multiple. So
42:48
either people believe that
42:50
they're buying these better products when they're
42:52
not, or they're just lying in
42:54
the answers that they're giving.
42:57
Because if you look for example at chicken meat
42:59
production, the example I heard
43:01
earlier, I think it's ninety-nine point
43:03
eight percent is factory farmed in
43:05
the U. S. It's a tiny, tiny
43:07
percentage, far less than one percent.
43:10
So if people say they're eating
43:12
humanely produced,
43:17
certified humane. Sorry.
43:19
If people say they're eating humanely
43:21
produced chicken, they're almost certainly
43:21
not. We're
43:26
brought to you today by Inside Tracker. No
43:28
two bodies are the same.
43:31
Well, general principles can
43:33
guide all of us on
43:35
some level, I think it's fair to say we're all n of one
43:37
experiments. There is no one size
43:39
fits all plan when
43:41
it comes to optimizing
43:44
your unique biology. We all have
43:46
our own unique needs, our own unique
43:48
challenges, which is why the
43:50
value proposition of Inside
43:52
Tracker is so unique and so compelling. Okay. But
43:54
what is it? What are you talking about? Well,
43:56
Inside Tracker is a digital
43:58
platform. It's also an app.
44:02
And it was designed by experts in aging from Harvard,
44:04
Tufts, and MIT. And what
44:06
it does is it takes a personalized
44:10
approach to healthspan and longevity based
44:12
on an analysis of your
44:14
submitted blood work, your submitted
44:16
DNA, your
44:18
lifestyle habits, and real
44:20
time feedback from your body. To
44:22
deliver through this highly
44:24
intuitive interface, ultra
44:26
personalized guidance with actionable
44:28
recommendations. Basically, a custom
44:30
wellness plan to help
44:32
you live healthier, longer. And
44:34
right now for a limited time, all you guys, my listeners,
44:37
can get twenty percent off the entire Inside Tracker
44:39
store when you
44:42
sign up. At insidetracker dot com slash
44:44
richroll. You'll get a personalized
44:46
health analysis and a
44:48
custom plan to improve your
44:50
health span, build
44:52
strength, speed recovery, and
44:54
optimize your health for the long haul. Grab
44:56
your discount at insidetracker
44:58
dot
44:59
com slash richroll. That's insidetracker
45:02
dot com slash
45:04
richroll. If
45:07
the reduction of suffering is the
45:10
rubric, there is an interesting
45:12
philosophical exploration to
45:14
be had when it comes to the
45:17
the kind of carnivore people who
45:19
call what they eat
46:21
like nose to tail. From
45:23
a suffering perspective, if somebody's going to take
45:25
one cow and they're gonna consume the
45:27
entirety of that,
45:30
is that a more
45:32
ethical choice than the
45:34
vegan who's eating plants
45:36
that are, you know, sort of, threshed
45:39
in a traditional way where lots of
45:41
rodents and insects are being
45:44
sacrificed as a result of the
45:46
harvesting of these many plants, or gophers
45:48
having to be killed, etcetera,
45:50
where in other words, like lots
45:52
of different animals
45:54
are sacrificed for the
45:56
production of these plant foods
45:58
versus the person who eats the cow who
46:00
says, well, this is just one
46:02
sentient being. Like, from a philosophical,
46:04
ethical perspective, like, how do you think about that
46:06
or parse the difference? Yes.
46:11
So there are a couple of things to be said about that.
46:14
One is that from a
46:16
climate point of view,
46:18
cows and beef is
46:20
really the worst of the animal
46:22
products in terms of the
46:24
quantity of greenhouse gases because they
46:26
produce methane and methane is an
46:28
extremely potent
46:29
greenhouse gas. And they've had to consume a
46:31
lot of resources to get to the
46:33
point before they're killed for
46:36
food. Right?
46:37
Right. Well, I mean, so if
46:39
they're in feedlots, or fattened the last few months in
46:41
feedlots on grain, then all of those
46:43
problems about the
46:45
rodents that get killed with the threshing are
46:48
going to be there because
46:50
they will have eaten far more
46:52
grain than a vegan would
46:54
eat. Because from feeding grain to
46:56
cattle, you get back
46:58
somewhere between five percent and ten percent of the
47:00
food value
47:02
of the grain that you're putting in. So if you're
47:04
eating the grains directly, you eat far fewer grains. But if
47:06
people are saying, well, I'm just
47:08
eating fully grass fed beef,
47:12
which, again, is quite a small proportion of U. S.
47:14
Produced beef. Then you're not killing the
47:16
rodents when you harvest the grain because
47:19
they're eating grass. But they actually
47:21
produce more greenhouse gases than in the feed lots. And
47:24
that's because the reason cattle are put in
47:26
feed lots is they fatten up
47:28
faster on
47:30
grain. So if they're on grass, they have to live longer
47:32
to reach the same weight, to produce
47:34
the same quantity of meat for people
47:38
to eat. And all the time they're living and digesting the grass, they're
47:40
producing the methane. So
47:42
in terms of the impact on the climate,
47:45
it's really bad. It may be better from an animal
47:48
welfare point of view, much better than
47:50
eating chicken, for example, both because they're
47:52
outside and they have better lives. And also because we're talking
47:54
about one animal with a lot
47:56
of meat, whereas with chickens, the
47:58
people who eat chicken are
48:01
eating a lot of chickens over
48:03
their
48:03
lifetime. But in terms of
48:06
greenhouse gases, it's
48:08
actually worse. Beyond that, there isn't enough land
48:10
to support the production of of
48:12
cattle in that manner anyway
48:14
to meet global demand
48:16
for
48:16
meat. Well, that's right. So it's not a scalable,
48:18
sustainable solution. No. And some of
48:20
it is that the demand for beef is causing rainforest to
48:23
be cleared, causing the Amazon to be cleared for grazing
48:25
land, for example, or even to grow
48:27
more soybeans in Brazil, which
48:30
also about seventy percent of the
48:32
soybean crop gets fed to cattle. And I
48:34
think something over twenty percent goes to
48:36
biofuels. And
48:38
people say, I don't eat tofu because of
48:41
what soybeans are doing. But actually it's about
48:44
seven percent of the whole soybean crop
48:46
is actually eaten directly by humans, either
48:48
as beans or as as tofu. And
48:50
the great majority is getting funneled through
48:52
cattle. And again, we lose most of the food
48:54
value of the soybeans when we do
48:58
that. Back to the earlier question
49:00
about how your ideas have evolved
49:02
since nineteen seventy five, are
49:04
there other things?
49:06
Like, you're
49:08
reprising this
49:09
book. Right? You're coming out
49:10
with a new edition of it. So I suspect
49:13
there are, you know, things that you wanna change
49:15
or I don't know how much you can talk
49:17
about that specifically, but
49:20
maybe generally, how your thinking has changed and evolved in in the
49:22
many interceding years?
49:23
Yeah. That's right. I'm producing,
49:25
you know, what's effectively
49:27
a new book, being called Animal Liberation
49:30
Now, which
49:32
maintains the key ethical ideas, completely
49:36
updates the relevant facts
49:38
and looks at progress
49:41
that we've made and that we've not made from a
49:43
more global perspective. So it has a lot of new things in
49:46
it that weren't in the
49:48
original edition. My
49:52
thinking has
49:54
developed in various
49:58
respects. I suppose some of the
50:00
things that I'm more
50:02
concerned with now are questions
50:04
about wild animals, about should
50:06
we be concerned about the
50:08
suffering of wild animals and
50:10
what might we do -- Mhmm. -- with
50:12
that. I'm also interested in
50:14
the development of
50:16
alternatives to meat. I see that as a positive
50:18
sign both plant based meats
50:20
and the development of
50:22
cellular meats.
50:24
A meat that is actually produced from
50:26
animal cells, but does not require any living
50:30
animal organism. And
50:32
therefore is, again, far lower on
50:34
greenhouse gas emissions, maybe has about three
50:36
percent of the greenhouse gas emissions
50:38
of meat
50:40
from animals. And doesn't involve the animal
50:42
suffering, of course, because there's no conscious animal
50:44
there. So if we could do
50:46
that and if
50:48
we could produce it at
50:50
an economically competitive price
50:52
with the meat that is being sold from
50:54
animals, that
50:56
might be another way of breaking this
51:00
deadlock of trying to get people to
51:02
move
51:03
away. Mhmm.
51:04
From eating animal products that are so bad for animals and for
51:06
the environment? Well, all indications are that we're
51:09
headed in that direction. It may
51:11
take a little bit more time
51:14
because this is an expensive
51:16
problem to solve. Right?
51:18
Figuring out how to culture
51:21
these cells and create these, you
51:23
know, quote unquote, meat products.
51:25
They're able to do it. They've established that
51:27
it can be done, but doing
51:29
it economically. So it's
51:32
on par with what it would cost to
51:34
go to McDonald's or what have
51:35
you. There's still a lot of work to be
51:38
done. Right? Yes. That's right. You can actually buy
51:40
cellular chicken in Singapore. It's approved
51:42
for sale there. But
51:44
it's yes. It's expensive. And
51:47
I think the problem is they need to scale up
51:50
and there's some questions about
51:52
building these huge bioreactors in
51:54
which the process occurs, can that
51:56
be done? Will
51:58
there be problems with things going
52:00
astray? We really don't
52:01
know, but there's quite a lot of capital
52:03
being invested in it. A lot of
52:05
capital. Yeah. But also the question
52:08
of just, you know,
52:10
consumer acclimation to it, there is
52:12
that sort of getting
52:14
over that icky factor of, like, what is this? And where does it
52:16
come from? And, you know,
52:18
having,
52:19
you know, consumers acclimate
52:22
to the idea of of this
52:24
new food. That's true.
52:27
Although consumers seem to think
52:29
that the meat that they're buying is
52:31
somehow natural, and that's obviously transformed tremendously in the
52:33
last fifty years. Yeah. You know,
52:35
the animals are bred differently as I was
52:37
saying. I mean,
52:40
the chickens can't really live to
52:42
maturity mostly because they're bred
52:44
to grow so fast and put on weight
52:48
so quickly that a lot of them will just collapse and
52:50
die if they were
52:52
kept to be older birds. In
52:54
fact, it's so bad
52:56
with the breeding birds, because the parents, of course, have to have the same genes as the
52:58
ones we eat. They have to be starved basically
53:00
because if you fed them as
53:02
much as they wanna eat, they
53:05
would not be able to survive to breed or they might not physically
53:07
be able to breed because they would be too
53:09
obese to actually do
53:12
that. So they tend to be fed every second day, which means
53:14
that they're desperately hungry all the time.
53:16
And then, of course, the antibiotics
53:18
are used because they're under
53:22
stress. So a lot of
53:24
antibiotics are losing their efficacy because we're feeding them routinely
53:26
to farm animals. So,
53:30
yeah, this is not a natural product either, but somehow people have been
53:32
persuaded to continue to eat it and
53:34
think of it as good. So
53:36
no doubt there will be some,
53:40
let's say, need to show consumers
53:42
that this cellular meat when it
53:44
happens is essentially still
53:46
meat and is actually a safer
53:49
and purer product than what they're getting from factory
53:51
farms. Have
53:51
you have you tried it? No. I've not had
53:53
the opportunity to try it yet. Yeah. I would
53:56
certainly do so if I find myself
53:58
in
53:58
Singapore. I will go and do that. Right. How do
54:00
you think about philanthropy
54:02
in the animal welfare space?
54:06
It seems like there's a lot of sort
54:08
of improvement to be had in terms of like how
54:10
to leverage the dollar for the most good.
54:12
When we look at the big problems
54:15
versus where people's kind of hearts and emotions are.
54:17
Well, absolutely. Yeah. In fact, I showed
54:19
my students a slide
54:22
which has two boxes. One
54:24
box shows where the
54:26
greatest amount of animal suffering is
54:29
and animals being killed. And it's
54:31
overwhelmingly farmed animals. Yeah. And so it's like a
54:33
big square or one color for farmed animals.
54:35
Mhmm. And then down
54:37
in the bottom corner. There's a tiny little
54:40
square that shows the
54:42
other things like laboratory animals. It's
54:44
pretty small too, although it's probably around one
54:47
hundred million animals in the United States each year. There's things
54:49
like furs. And then there's dogs and cats,
54:51
which is just a tiny mark you can hardly
54:53
see on my slide.
54:56
And then the adjacent box shows where
54:58
the dollars are going. And there, it's
55:00
the dogs and cats, the animal shelters,
55:03
that is the dominant thing. And
55:06
farmed animals are quite small and laboratory
55:08
animals are quite small. Wild animals
55:10
do rank larger there. So,
55:12
yeah, there's this complete disconnect between where the dollars go
55:15
and where they're needed. We're starting to get
55:17
a little more money going through
55:20
effective altruism, actually, mostly through foundations
55:24
like open philanthropy,
55:26
which is funded by
55:28
Dustin Moskovitz and Cari Tuna,
55:30
which is directing more money
55:32
to oppose
55:34
factory farming. But what's coming from the general
55:36
public is really not going to
55:38
where the big animal suffering problems
55:40
are. It's going to where people's
55:42
emotions are. And,
55:44
you know, it's not that effective altruism doesn't want people to have emotions.
55:46
It's just that they want people to
55:48
feel the emotions and then think, you
55:50
know, yes, I care about dogs
55:54
and I also care about animals in general. I don't want pigs or
55:56
cows or chickens to suffer. I don't want wild
55:58
animals to suffer. I don't want
56:00
even rats
56:02
and mice to suffer in laboratory experiments. And
56:04
so if I care about
56:06
animals, I should be thinking about giving to
56:08
where it will help the
56:10
big problems. And not to the relatively
56:12
small problems. That's what we need to
56:14
get people to think about.
56:15
Sure. But isn't
56:15
there a place for that
56:18
emotional impulse? Like, if you
56:20
think about, for example,
56:22
in the in the animal welfare space, like
56:24
a a lot of people donate towards these
56:26
shelters. Right? Like, they rescue
56:30
farmed animals and create a beautiful place for them to live out their
56:32
lives. And people
56:34
feel good about supporting those places for
56:36
obvious reasons.
56:38
But those places also serve as sort of museums for people
56:40
to visit, which gets, you
56:42
know, perhaps other people who
56:44
have no connection to this movement
56:47
or these ideas, this is their
56:49
inception point for even learning
56:51
about this, an emotional connection
56:53
to the reality of the problem
56:55
that might in turn motivate them
56:57
to give or get involved in the solution and maybe
57:00
that solution is
57:02
an effective solution
57:04
or maybe it's something else, but I
57:06
can't help but ask you, like, where is
57:08
the emotional piece? Like, there has to
57:12
be some importance and resonance for it on some
57:14
level?
57:14
Yeah, definitely. I think
57:15
that emotion is
57:18
important. And with the
57:20
animal sanctuaries that you mentioned,
57:22
I think they do
57:24
get people to see farmed animals as
57:26
individuals, and
57:28
that's important. They get them to see that some of them can actually grow old
57:30
even, which, of course, farmed animals
57:32
never do. It's the rescued ones that
57:34
might. And
57:36
those sanctuaries work, and
57:38
many of them do, and I think they all should,
57:41
as places of education
57:43
that get people to see animals
57:45
differently. And encourage them to do
57:47
more for farmed animals in
57:50
general. So I think
57:52
that's fine. And I
57:54
think in terms of global poverty too,
57:56
it's important that emotion plays a
57:58
role and it's important to tell the
58:00
stories of individuals, of
58:02
those children whose lives have
58:04
been saved by a treatment that was made
58:07
available through an organization that
58:09
had community health workers going around and
58:12
helping or, you know,
58:14
restoring people's sight is something where you can
58:16
really see the
58:18
emotion. The Life You Can Save, the organization
58:20
that I founded that recommends effective charities,
58:22
recommends a couple that do
58:26
restore sight in countries where otherwise people
58:28
with quite simple conditions like cataracts
58:30
would never be able to see
58:32
again. And you can see videos
58:35
online of how, you know,
58:37
when the bandages are removed after
58:39
an operation was performed and their sight is
58:41
recovered, and you see a woman
58:43
who sees her child for the first time, that
58:45
two-year-old child she's never seen before, and
58:47
that's a wonderful, heartwarming experience. And
58:49
I hope it will encourage people to
58:51
think, yes, this is really a
58:53
good thing to be
58:53
doing. This is such important
58:56
work to support. Right. Yeah. That's really
58:58
beautiful. The counter side of
59:00
that, like, as a
59:02
thought
59:02
experiment, as somebody whose primary driver
59:03
is a reduction of suffering. If
59:06
you think about the
59:08
eradication of of of
59:10
global poverty, if
59:12
you're raising the kind of life experience
59:14
and and and income of
59:17
people who have grown up in
59:20
poverty.
59:22
What happens if those
59:24
people then end up increasing
59:26
their meat consumption? And that
59:28
drives, you know, cattle
59:30
producers to clear more
59:32
rainforest, to produce that cattle.
59:34
Like, when you look at the macro
59:36
benefit versus harm calculus from a philosophical point of
59:38
view. Like, how do you make sense
59:41
of that? Yeah.
59:43
That's
59:46
a tough problem. I've grappled with trying to think about that and
59:48
trying to think about how my anti-poverty work
59:50
connects with my
59:53
concern for animals. But I suppose what I
59:55
say and you may think that this is a
59:57
rationalization is that if we're ever to solve this
59:59
problem, we're not gonna solve it by keeping
1:00:01
people in poverty. Because when people are
1:00:03
in poverty, they will do whatever they have to do -- Sure. --
1:00:05
to survive. And if that includes, for
1:00:08
example, killing wild animals in the
1:00:10
forest and
1:00:12
including even chimpanzees in some places and
1:00:16
perhaps leading to the
1:00:18
extinction of species in
1:00:20
the forest. They're gonna
1:00:22
do that. So I think
1:00:24
we have to try to get people
1:00:26
out of poverty and hope that when they have more
1:00:28
choices, when they are out
1:00:30
of poverty, they will eventually come to see
1:00:32
that eating more meat
1:00:34
is not the right thing to do.
1:00:36
Mhmm. And we will have alternatives
1:00:38
for them. That they can live
1:00:40
good and healthy lives without
1:00:42
eating more meat or perhaps without eating any
1:00:44
meat. And so that we'll
1:00:46
get to the point where I'm hoping we all
1:00:48
get to where we have
1:00:50
expanded our concern for
1:00:52
all animals, for all sentient
1:00:54
beings, and we're not just
1:00:56
thinking about human beings.
1:00:57
Mhmm. So, you know, as you say, you might
1:01:00
think that that's No. It's just
1:01:02
interesting to think about. Like, I'm not wed to
1:01:04
any answer. I
1:01:06
think grappling with that idea
1:01:08
demonstrates how difficult problem
1:01:10
solving is in the real world. If
1:01:13
your goal really is, like, how do we
1:01:15
best eradicate suffering? It's
1:01:18
complicated. It's nuanced and it's in the
1:01:20
gray. I think there's, you know, another
1:01:22
way of of exploring that is is the twist
1:01:24
on your famous thought experiment of the
1:01:26
girl in the
1:01:28
pond. Right?
1:01:30
Mhmm. So First of all, for
1:01:32
people who don't know, you
1:01:33
know, maybe explain what that
1:01:35
thought experiment is.
1:01:38
Sure. Okay. So in an article I wrote a long time
1:01:39
ago, I asked my readers to imagine that they're
1:01:42
walking past a pond,
1:01:44
let's
1:01:45
say, I don't know, a little pond in a
1:01:47
park, and let's say they know well that the pond is quite shallow. And as they
1:01:49
walk past it, they notice
1:01:51
that there's something struggling
1:01:53
in the water. And when
1:01:55
they look more closely, it turns out
1:01:58
it's a very small child, a child
1:02:00
too small to stand up even in this
1:02:02
shallow pond. So, you know,
1:02:04
the first thing you would think about
1:02:06
is, whose child is this? Who's looking after
1:02:08
this child? But when you look
1:02:10
around, you don't know why, but there's nobody else
1:02:12
there. You're the only adult
1:02:14
in sight. So your second thought, I hope,
1:02:16
is, gee, this child seems to be
1:02:18
drowning. I better jump into
1:02:20
the pond. And save the
1:02:21
child. But then maybe you have
1:02:22
a third and not so noble thought and that is
1:02:24
I'm wearing my best clothes today because
1:02:26
I was going somewhere special. And
1:02:29
they're gonna get ruined if I save
1:02:31
the child by jumping into the
1:02:33
pond. So what if I just
1:02:36
forget that I ever saw the child and go on
1:02:38
my way? Would that
1:02:40
be the wrong thing to do? And I hope that
1:02:42
all your listeners are now saying, of course, that would be the
1:02:44
wrong thing to do. How could
1:02:46
you compare the value of a child's life with ruining your
1:02:48
shoes or your
1:02:49
clothes? So
1:02:50
the point of the example is to say, yes,
1:02:53
that is the right reaction that you should have, and it would be
1:02:55
the wrong thing to do. But it's
1:02:57
not only in these unlikely circumstances
1:03:00
where you have to ruin your clothes
1:03:02
to save a child in a
1:03:03
pond, it's
1:03:04
happening to us all the
1:03:07
time that for the cost
1:03:09
of replacing those clothes, donated to
1:03:11
an effective charity, we could
1:03:13
save or certainly contribute towards
1:03:15
saving the life of a child in
1:03:17
a low income country, perhaps by
1:03:20
donating to the Against Malaria
1:03:22
Foundation, which will distribute bed
1:03:24
nets to protect children against
1:03:26
malaria, or perhaps
1:03:28
by distributing other medicines to prevent children dying
1:03:30
of diarrhea, which is another
1:03:32
significant cause of deaths in in low
1:03:34
income countries. And the point
1:03:36
being that the physical
1:03:38
location of the suffering
1:03:40
child should not have an
1:03:42
impact on the decision to give or
1:03:44
not give. That's right. I
1:03:46
think if you reflect on it and you ask
1:03:48
yourself, does the fact that the child is
1:03:50
physically close to me really
1:03:52
make a moral difference to how important
1:03:54
it is to help that child to
1:03:56
save that child's
1:03:56
life. I think most of us would say,
1:03:59
no, that's not a morally important
1:04:02
thing. Sure. Proximity being irrelevant. And
1:04:04
then there's all kinds of other
1:04:06
threads that can be pulled on this.
1:04:08
Does temporality matter?
1:04:10
Like, does the fact that this person is
1:04:12
living at the same time as us matter? Like, we
1:04:15
can we can predict that
1:04:17
in the future, there will be people
1:04:19
in this
1:04:19
circumstance. Right? And the fact that they don't
1:04:22
live yet, should that
1:04:24
be a factor in our
1:04:26
decision to think about how much of
1:04:28
our income we're gonna give over
1:04:30
to increase the well-being of the
1:04:31
world. I think that
1:04:34
if there are people who are going to be living in the future, and
1:04:36
they are going to be
1:04:38
either suffering or dying prematurely in
1:04:41
ways we could prevent. The
1:04:44
fact that it's in the future doesn't in
1:04:46
itself matter. If we're
1:04:48
uncertain as to whether we could do
1:04:50
anything to prevent their suffering, that, of course, makes
1:04:52
a difference. We have to discount the good of
1:04:54
what we're trying to achieve by
1:04:56
the odds against us actually managing
1:04:58
to achieve it. So yes, do act
1:05:00
where -- Mhmm. -- good consequences are more certain, but not
1:05:03
just the future. The Oxford
1:05:05
philosopher Derek Parfit had
1:05:07
an example about leaving
1:05:10
broken glass somewhere in the
1:05:12
forest. And let's say, it'll
1:05:14
take a while. Nobody's gonna tread on it in
1:05:16
coming years. But at some point, a
1:05:18
child maybe not yet born will
1:05:20
walk along that path and cut
1:05:22
their feet on it. Does that mean that it
1:05:24
doesn't matter, because they aren't born at
1:05:26
the time that you left the broken
1:05:28
glass? No, the fact that they're not yet born doesn't
1:05:30
really matter. The pain of the child is
1:05:32
the same. And it's just
1:05:34
the same. It's just
1:05:36
as significant, if you can
1:05:38
predict that it is very likely
1:05:40
to
1:05:40
happen. Mhmm. And that opens the
1:05:42
door to a whole discussion around long
1:05:44
termism, which is, you know, very related
1:05:47
to, and an extension of, your
1:05:49
work in many ways. Yes,
1:05:51
that's true. There's
1:05:53
there's one difference with the really long termist
1:05:56
predictions. If you're wanting to intervene,
1:05:58
not in a way that's gonna
1:06:00
make a difference to somebody
1:06:02
living in twenty, fifty or a hundred years. But in many
1:06:04
centuries or many millennia or even millions of
1:06:06
years, then firstly, there is a
1:06:08
quite different uncertainty factor that comes in,
1:06:10
in terms
1:06:12
of how do we really know that what we're doing now will
1:06:14
make a difference. But
1:06:16
there's also the fact that when
1:06:18
long-termists try to prevent
1:06:21
extinction. And then they
1:06:24
say, there
1:06:24
could be these vast numbers of human
1:06:26
beings living rich and fulfilling lives
1:06:29
as long as we don't do something that causes
1:06:31
our species to go extinct, let's say, this century or the
1:06:33
next couple of centuries, then
1:06:36
you do have to think
1:06:38
about, well, if we did something that meant that we went extinct,
1:06:40
these people wouldn't exist at all. Mhmm.
1:06:42
So it wouldn't be like a child
1:06:44
cutting their foot and getting
1:06:46
hurt. It
1:06:48
would be like there just would be
1:06:50
nobody alive on the planet. Maybe
1:06:52
there would be no sentient beings in
1:06:54
this part of the universe. And
1:06:56
some philosophers think that that's different.
1:07:00
That we don't have an obligation to
1:07:02
ensure that future
1:07:04
people exist. Rather
1:07:06
we have an obligation to ensure that if people
1:07:09
exist in the
1:07:09
future, we don't do anything that will
1:07:12
harm them. I wanna get
1:07:13
into life extension
1:07:16
and the anti aging stuff because
1:07:18
I feel like that's the next logical
1:07:20
step from what you just shared, but
1:07:22
to put a pin in that for now and circle
1:07:25
back to the girl
1:07:27
drowning in the pond, the original
1:07:29
question being, you know, coming out of this
1:07:31
idea of suffering reduction. If you
1:07:33
save that girl, which we all agree is the right thing to do if
1:07:35
you're passing by, it can
1:07:38
be presumed
1:07:40
that that individual
1:07:42
will go on to live some
1:07:44
number of years and we'll consume.
1:07:46
We'll consume many things,
1:07:48
including probably animal products, which
1:07:51
has its own downstream implications in terms
1:07:54
of harm and,
1:07:56
you know, resource allocation, etcetera.
1:07:59
So it's back to that. I think there was actually an article in
1:08:01
your journal about this.
1:08:04
Right? Like, you know, the
1:08:06
Journal of Controversial Ideas, like -- Okay.
1:08:08
-- let's float this
1:08:10
idea: like, if you're saving this
1:08:12
individual altruistically,
1:08:14
there's also harm that is incident
1:08:17
to
1:08:17
that
1:08:18
act. Right? That's right.
1:08:20
Yeah. It was an article written by somebody
1:08:22
with the name Michael
1:08:23
Plant, which is his real name. Oh,
1:08:26
it's his real name. Yeah. People can submit articles with
1:08:28
pseudonyms. That's right. Yeah. Yeah. And if anyone wants to read
1:08:30
it,
1:08:30
by the way, as
1:08:31
you mentioned, the Journal of Controversial Ideas.
1:08:34
It's open access. Google --
1:08:36
Yeah. -- Journal of Controversial Ideas, and you'll get to
1:08:38
it. Yeah. It's a thoughtful article, and it
1:08:44
does raise that problem
1:08:46
about the meat eaters whose lives we're saving and asks whether
1:08:49
we should be
1:08:52
doing that. I'm somewhat unsure. I
1:08:54
mean, I've I've actually talked to
1:08:56
to Michael Plant
1:08:59
about this. And he's
1:09:02
quite persuasive, but at
1:09:05
the moment, I'm going to say,
1:09:07
let's try and save those lives
1:09:09
and hope that we can persuade
1:09:11
people, move people towards a lifestyle
1:09:13
in which we're not causing so
1:09:16
much
1:09:17
suffering to animals. Right. I think in
1:09:19
order to really flesh that out, you
1:09:22
have to think about
1:09:24
species in this sense that,
1:09:26
you know, we create a rank hierarchy
1:09:28
amongst the animal kingdom
1:09:30
based upon their cognitive
1:09:32
abilities and their
1:09:35
level of sentience. Right? Which is
1:09:37
not necessarily correlated to their ability to suffer.
1:09:39
But to answer that question about
1:09:44
harm reduction, do you not
1:09:46
have to place a value, you know, a greater value on one life over another, right,
1:09:49
from a
1:09:52
species perspective? So
1:09:54
I wouldn't do that on a
1:09:56
species basis. That is I wouldn't say that
1:09:58
being a member of the species, homo
1:10:00
sapiens, automatically means that your life
1:10:03
is more valuable than a member of any other species. I would say
1:10:05
that beings
1:10:08
who have cognitive capacities
1:10:10
that enable them to think about their
1:10:11
lives and think about the lives of
1:10:13
others whom they love and
1:10:16
care for, in
1:10:18
ways that are different and
1:10:21
perhaps more profound and
1:10:23
more lasting than other
1:10:25
beings, that it's a greater tragedy when
1:10:27
they die prematurely than when those other beings die prematurely. Mhmm. So
1:10:30
I don't think of
1:10:34
the lives of all sentient beings as
1:10:36
being of equal value. I do think
1:10:38
of the suffering as being equally
1:10:41
important when we're talking about similar
1:10:43
kinds and similar quantities of
1:10:43
suffering, but not preservation
1:10:46
of their lives. Mhmm.
1:10:51
The other uncomfortable idea as a parent,
1:10:53
when I think about the pond and the girl, is
1:10:56
this idea
1:10:59
of, like, we all intuitively feel like we can
1:11:02
prefer the well-being of our
1:11:04
children, over other children, and
1:11:06
that is sort of accepted like
1:11:09
of course, I'm going to make sure
1:11:11
that I'm providing for my children even though they live
1:11:13
in, you know, much better circumstances than most children in
1:11:16
the world.
1:11:19
But from a harm reduction perspective, would it
1:11:21
not be
1:11:22
better for me to allocate
1:11:26
my resources more democratically so that
1:11:28
my kids are sort of not getting
1:11:30
any more than all these
1:11:33
other children who
1:11:34
need more? I think it would be better from a
1:11:37
purely impartial perspective if we could
1:11:39
do that. But, you
1:11:41
know, we are mammals who have evolved. We're
1:11:43
not
1:11:43
gonna do that. No. We're not gonna I agree. We're
1:11:45
not
1:11:45
gonna do that. You're a
1:11:46
parent? You didn't do that. Right? And you're
1:11:49
the godfather of all of this. Okay.
1:11:52
So what I wanna say about this is
1:11:54
that it it would be from this impartial
1:11:56
perspective better if I
1:11:59
were to do that. But I
1:12:01
don't think we should blame ourselves for not doing it
1:12:03
because I think we should recognize that that's something that
1:12:06
is basically imprinted in our
1:12:10
genes that we are going to care for our children more
1:12:12
than the children of strangers.
1:12:14
That's what our ancestors did
1:12:17
for millions of years, and that's why
1:12:19
we are here because if they hadn't, then they
1:12:21
wouldn't have survived, or their
1:12:23
children
1:12:25
wouldn't have survived. So
1:12:26
I think we have to be somewhat indulgent to ourselves in that, not as indulgent as many
1:12:28
people are. I don't
1:12:30
think we should be doing
1:12:34
everything imaginable for our
1:12:36
children. I think the automatic assumption that you leave
1:12:38
all your wealth to your children is not something
1:12:40
that is justified, especially if they are
1:12:42
already quite comfortably off. And we are living in a
1:12:45
world where there's so much extreme
1:12:47
poverty and so much need.
1:12:50
But we should try to do better. We should try to get
1:12:52
to more equitable distribution, and we
1:12:54
should try to encourage others to
1:12:56
do that. But as I say, you know,
1:12:59
we're not saints. We haven't evolved
1:13:01
to be saints, with very, very
1:13:03
rare exceptions, and we shouldn't
1:13:05
beat ourselves up because we're
1:13:08
not. Right. Coming
1:13:09
back for more,
1:13:12
but first,
1:13:15
I'm super proud to announce
1:13:17
the publication of Voicing Change Volume Two, the second in our
1:13:20
series of audible
1:13:23
coffee table book anthologies featuring wisdom,
1:13:26
essays, conversation excerpts,
1:13:28
and beautiful photography
1:13:31
inspired by six four of my favorite
1:13:33
podcast guests. All books are hand signed and we're shipping globally, so
1:13:36
to learn
1:13:39
more and pick up your copy, visit richroll dot com.
1:13:41
Supply is limited. So
1:13:43
act today.
1:13:45
Alright. Back to
1:13:47
the show. There's a
1:13:49
lot of science and money and
1:13:52
energy right now
1:13:55
going into the extension
1:13:59
of lifespan, like this antiaging
1:14:01
movement, that's afoot.
1:14:04
And there are plenty
1:14:06
of people hard at work
1:14:08
on solving the problem of
1:14:10
aging as if it is a disease with prospects of
1:14:15
really substantially extending lifespan to
1:14:17
a hundred and fifty years and maybe even beyond with certain scientific
1:14:19
breakthroughs on the horizon. And
1:14:24
like any technology that
1:14:26
the human race pioneers, there is, from my perspective, this sense of, like, inevitability.
1:14:29
Like, we're
1:14:32
not gonna stop or slow down
1:14:34
and think about the implications of this, we're hell bent on just achieving it for the
1:14:40
sake of achievement because it's a mountain yet
1:14:42
to be climbed. And I feel like there's an important
1:14:44
philosophical conversation
1:14:47
that we need to have about the implications of
1:14:49
what the world might be
1:14:52
like if
1:14:54
suddenly people could live to three hundred or beyond
1:14:56
from a wealth distribution
1:14:58
perspective, from a rights
1:15:00
perspective, and
1:15:03
from like a risk calculus perspective, like, what would it
1:15:05
mean if you could live three hundred years? What
1:15:07
is your imprint or your responsibility?
1:15:09
Like, your your carbon
1:15:11
footprint and your responsibility to the planet, to
1:15:14
future generations? How do you think about, you know, how many children you're gonna
1:15:16
have, if
1:15:18
you're gonna live
1:15:19
long, things like this. Is this anything that you've spent any time thinking about?
1:15:21
I have spent some time thinking about it.
1:15:23
I actually published
1:15:26
an article on lifespan extension back in the nineteen eighties
1:15:28
when we were not that close
1:15:30
to making
1:15:31
breakthroughs, but but people
1:15:33
did think even then that we might not
1:15:35
be, you know, far away.
1:15:36
And although as you say, I I agree
1:15:38
that it's gonna come at some point,
1:15:40
I'm not convinced it's
1:15:42
gonna come really soon. Sure.
1:15:45
Maybe harder than people think. But, yes, it
1:15:47
certainly raises some serious ethical issues. And there could be some good
1:15:49
sides to it. For example, you
1:15:51
talked about views about
1:15:55
risk. We might be less inclined
1:15:57
to take risks. If we have the
1:15:59
prospect of living for
1:16:01
hundreds of years, we might be less ready
1:16:03
to fight in war, for example. We might not see wars of
1:16:05
the kind we have now in Ukraine to
1:16:08
the same
1:16:10
extent because people think, you know, I wanna live a long time.
1:16:12
I don't wanna die in my twenties when
1:16:14
I could live another two hundred years. And
1:16:16
also when you think about
1:16:19
things like climate change, how would that affect
1:16:21
our views? Now we're saying, well, we need to do this for our children and grandchildren,
1:16:23
at least people of my generation are
1:16:27
saying that. If we were gonna be living three hundred years,
1:16:29
we would think, hey, we're going to be living
1:16:31
in this world with a
1:16:34
vastly different and less stable climate. So we better stop
1:16:36
what we're doing right now. So
1:16:38
that could be a good consequence.
1:16:40
But there's a real danger
1:16:43
there if you simply expand
1:16:45
lives for those who can afford it, and you
1:16:47
don't do anything to reduce population
1:16:52
growth. Of course, then the world will
1:16:54
become even more populated than it is now. And that's a serious
1:16:58
problem. So would that slow down? Would that stop? You'd have
1:17:01
to hope so because otherwise
1:17:03
we're definitely
1:17:05
going to be over capacity even more
1:17:07
than we already are. It's hard
1:17:09
to imagine
1:17:10
that if and when those breakthroughs occur
1:17:13
they won't be reserved for the wealthy. Like, it's not gonna be a democratic thing. Right? So
1:17:15
it's just gonna drive
1:17:20
a greater wedge in between
1:17:22
the haves and the
1:17:23
have-nots? Yes. That's certainly gonna be
1:17:26
what will happen initially. It
1:17:30
might be that the cost of
1:17:32
doing it comes down, and that
1:17:35
it will spread. But
1:17:39
initially, yeah, we're going to
1:17:41
get the wealthy people living
1:17:43
longer. It's just the same thing
1:17:45
with gene editing, I think we're gonna get
1:17:47
them being able to produce
1:17:50
children who have enhanced
1:17:53
capacities to earn well and to
1:17:55
be useful in various ways. And so
1:17:57
you will get wealthy people who are breeding
1:17:59
children, who are more
1:18:02
significantly different genetically from low
1:18:05
income people than they are now
1:18:07
and you'll actually get a sort
1:18:09
of genetically fixed caste society occurring. So
1:18:11
I think these are
1:18:14
these are serious problems
1:18:17
for technologies that are that are
1:18:19
in the pipeline. Right. The other primary technology being the pioneering
1:18:22
of new forms of
1:18:24
consciousness through
1:18:27
artificial intelligence. Right? There's a lot
1:18:29
of discussion around what
1:18:31
constitutes sentience, what
1:18:34
is consciousness, etcetera. And we're seeing in real time like these
1:18:36
breakthroughs with, you know, ChatGPT
1:18:38
and things like this where artificial
1:18:42
intelligence is mimicking behavior in a way that is
1:18:44
sort of helping us to
1:18:46
realize,
1:18:47
like, we're kind of
1:18:49
on the precipice of something new here? And
1:18:51
what does this mean for the future of
1:18:53
humanity? And how should we think about
1:18:55
the ethics surrounding these
1:18:58
developments? I think mimicking is the right word though
1:19:00
at present. We do have these -- Yeah.
1:19:02
-- chat things that that look as if
1:19:04
you're having a conversation with a person
1:19:07
who is conscious and
1:19:08
thinking. But when you understand how it's
1:19:10
actually working, I think you realize that that's not the case. But at what
1:19:12
point, like, if these
1:19:15
things become self learning, right,
1:19:18
the the time frame then becomes
1:19:20
very compressed in terms of
1:19:22
their evolution and development. And
1:19:25
at some point, when they
1:19:27
become indistinguishable from human behavior,
1:19:29
what is the tipping point
1:19:31
or the kind of
1:19:33
Rubicon where we can qualify it as sentient or conscious? Like,
1:19:35
for you, where what does that
1:19:39
line look like?
1:19:41
Like, what would have to I
1:19:44
think the difficulty is in working out when
1:19:46
one of these superintelligent artificial general intelligences actually becomes conscious, because
1:19:51
if in fact, it's very good at mimicking
1:19:54
our behavior. And if it's
1:19:56
also essentially
1:19:59
a black box, that is, we don't really
1:20:01
understand how it's doing what it's
1:20:03
doing. And there is
1:20:05
AI where we can't really say
1:20:07
why it's making the judgments that it's making, then it's
1:20:10
gonna be hard to know,
1:20:12
hard to
1:20:14
distinguish conscious processing from simply
1:20:17
very rapid mechanical processing
1:20:19
and learning. Mhmm. And
1:20:22
it will be – it will take an effort to
1:20:24
understand how it's working and why it's doing what
1:20:26
it is. But I think that that is
1:20:29
the clue we need to try to
1:20:31
understand what's going on. And if we're simply saying, well, we trained it on vast quantities of text and
1:20:33
it absorbed that
1:20:35
and then we train
1:20:38
it as to how to give the right answers
1:20:41
and it's just doing that,
1:20:43
then I think
1:20:45
it's clear that it's not a
1:20:47
conscious being. Right. But on some level already,
1:20:49
we're in a situation where we don't
1:20:51
quite know how it's coming up with
1:20:53
the right answer. Like, we know it's
1:20:55
self reinforcing on some level. Mhmm.
1:20:57
But already, for the computer scientists, like,
1:20:59
this sort of process
1:21:01
by which it's operating has already
1:21:04
begun to elude the
1:21:06
creators of the
1:21:07
technology. Yes. So that's sort
1:21:09
of frightening. It is
1:21:10
frightening. Right? It's frightening in a variety of ways. Yes. And at what point does
1:21:15
it become unethical to flick
1:21:18
the switch and turn it off so to speak because we have given birth
1:21:20
to a new form of life
1:21:22
and consciousness that deserves its own
1:21:27
you know, respect on some level even as
1:21:29
it's going about destroying
1:21:31
us.
1:21:31
Right. And if
1:21:32
we simply ask it and say,
1:21:34
you know, is it okay for me to flick the switch and turn you off?
1:21:37
Right. And probably, you
1:21:38
know, it'll take this as, oh, does
1:21:40
that mean you're killing me? And then, you
1:21:42
know, it knows what people say about
1:21:44
being
1:21:45
killed. So it comes out with the answer that a
1:21:47
person would surely give if you said I'm gonna kill you. Is that not
1:21:49
gonna be the dystopian world
1:21:51
toward which we're headed?
1:21:54
Peter. How how are we gonna make sense of
1:21:56
this? How are we gonna
1:21:58
survive this impending apocalypse?
1:22:00
So I'm not convinced that we're
1:22:05
that close to this particular apocalypse. Yes. Right. I
1:22:05
think we have lots of problems. I'd much
1:22:07
rather focus on climate change, extreme
1:22:09
poverty, getting rid
1:22:11
of factory farming. I think the robot apocalypse is
1:22:13
still some distance, yeah, ahead of us. And I don't know that we
1:22:16
yet have a
1:22:18
good enough handle on how
1:22:20
it's gonna
1:22:21
happen. So I I would rather wait and see. Yeah.
1:22:23
Well, I mean, I think it's good to be thinking about
1:22:25
these things. And I know you like,
1:22:27
there are other Oxford
1:22:30
philosophers who are on this -- Yeah. --
1:22:32
Nick Bostrom and Toby Ord. Right? Who've
1:22:34
written about this extensively. And
1:22:36
it's sexy and it's fun, you know,
1:22:39
to it feels very, you know, Terminator-world to, like,
1:22:41
think about these problems. And certainly,
1:22:43
at some point, perhaps these
1:22:45
are very real things that we need
1:22:47
to grapple with. But what's
1:22:51
interesting to me about it
1:22:56
is that the obsession with trying to understand the ethics around emergent, like, robotic
1:22:58
consciousness belies the
1:22:58
fact that currently, there are
1:23:00
billions of animals that we're
1:23:02
sacrificing constantly for our food
1:23:04
system, and we don't really
1:23:07
think about their, like, the the ethics of
1:23:09
their conscious awareness and suffering. Yeah.
1:23:11
Like, this big problem is
1:23:13
right underneath our feet. And we're worrying about this
1:23:15
problem that's coming down the line, and we should be.
1:23:17
There's value in that, of course. Yeah. But
1:23:19
we already have a very
1:23:21
real circumstance right here that we kind of walk around with
1:23:24
blinders on.
1:23:24
Yes. That's right. I actually
1:23:26
co-authored an article with
1:23:28
a Hong Kong researcher
1:23:31
called Tse Yip Fai. He
1:23:33
looked at a whole lot of courses on AI ethics and
1:23:35
a whole lot of AI ethics
1:23:38
statements. And lots of
1:23:40
them take very seriously
1:23:42
this still hypothetical question of what would be the moral status of conscious AI.
1:23:45
But pretty much
1:23:47
none of them actually
1:23:51
take seriously the present
1:23:53
impact that AI is having
1:23:55
on non-human sentient
1:23:57
beings, on
1:24:00
animals. They do, of course, deal with the impact of
1:24:02
AI on humans. But we show in the article that AI is
1:24:04
already having a major impact
1:24:06
on non human animals, for example,
1:24:09
in some countries it's being used to
1:24:11
run factory farms, not really in the United
1:24:13
States, but that's happening in China, that's happening in Europe,
1:24:15
to some extent. Just, like,
1:24:18
sort of automated factory farms where algorithms
1:24:20
are dictating
1:24:21
feeding schedules and things like that. I
1:24:23
don't know. Yes. That's
1:24:26
right. And there are sensors that are observing animal
1:24:28
behavior and adjusting what
1:24:30
is done to the animals based on
1:24:34
how they're behaving, possibly detecting diseases early, which
1:24:36
could be a good thing. But they're also going
1:24:38
to enable animals to be even more crowded
1:24:41
-- Mhmm. -- because
1:24:43
the AI will
1:24:45
actually be geared to where it is most profitable, what's the -- Sure. -- point at which It's a
1:24:47
very rudimentary matrix
1:24:52
where this living
1:24:54
being exists for the purpose of resource extraction. Right? Yes. A
1:25:00
battery. That's right. Exactly for
1:25:02
resource extraction and not treated as an end in itself, as a
1:25:07
sentient being with a moral status that is different from that of
1:25:10
a thing, of a product. Yeah. That's
1:25:12
that's wild. I
1:25:15
mean, do you when you cast your gaze
1:25:17
into the future, are you an optimistic person? Or, you know, how
1:25:19
do you think
1:25:21
all this is gonna
1:25:24
play out?
1:25:25
I've always been optimistic. I wrote a book back in the eighties called The
1:25:27
Expanding Circle in which I
1:25:30
talked about the way in
1:25:32
which throughout
1:25:34
human history, we have pushed the boundaries
1:25:36
of our moral sphere outwards,
1:25:39
from the tribe
1:25:41
to larger groups, to national
1:25:43
groups, to racial or ethnic groups. And finally,
1:25:45
in the twentieth century, to
1:25:47
recognizing with the universal declaration
1:25:49
of human rights that all
1:25:51
human beings have certain basic
1:25:53
rights. And I looked forward to pushing that beyond the boundaries of
1:25:56
our species to
1:25:58
non-human animals. And
1:26:01
there are some signs of
1:26:03
that happening. But over the last twenty years, there have
1:26:07
been backward steps as well, both
1:26:09
in terms of human relations, the fact that we have a
1:26:11
pretty naked war of aggression
1:26:14
going on right now with
1:26:16
Russia's invasion of
1:26:18
Ukraine and people dying and being killed is something that makes
1:26:21
it hard to be
1:26:23
optimistic about our future. But
1:26:27
also in terms of the treatment
1:26:29
of animals, we haven't continued to
1:26:31
push outwards in the way
1:26:34
that I'd hoped. And finally, climate
1:26:36
change is still a huge problem
1:26:38
that we have not done enough
1:26:42
about. And if we don't solve that, then things are going to go
1:26:44
backwards. And we will be in greater
1:26:46
need. And no doubt we'll have climate
1:26:49
wars because of huge numbers of refugees
1:26:51
wanting to leave places where they
1:26:53
can no longer live and grow
1:26:55
their own
1:26:56
food. So I'm more agnostic
1:26:58
now about whether I think the future
1:27:01
is gonna be
1:27:04
positive. Yeah. The
1:27:07
expanding circles concept was another thing
1:27:09
that you talked about with Ryan Holiday. He was
1:27:11
analogizing
1:27:11
it. I think it was Hierocles
1:27:13
who had written about
1:27:14
That's right. That's right. So I didn't even
1:27:16
really know that. Which is pretty cool,
1:27:19
how you Yeah. You
1:27:21
tapped into the, you know, greater consciousness to
1:27:23
explore that idea. But I think, you know, when
1:27:25
you think about, sort of,
1:27:27
the erosion of
1:27:30
your optimism, to me, it just feels like human
1:27:32
beings are not very well
1:27:34
wired for decision making around
1:27:37
long term consequences. Right? Like, we're acting in our self
1:27:40
interest. It's very difficult for
1:27:42
us to think about future
1:27:44
generations. And when
1:27:46
you see our inability
1:27:48
to take appropriate action with respect
1:27:50
to climate change, there's a feeling
1:27:55
of, like, somehow there's not enough political
1:27:57
capital, or we can't marshal our
1:27:59
incentive structures to, you
1:28:03
know, create better decision making around this. It's
1:28:05
easy to not be optimistic about how
1:28:07
we're gonna solve
1:28:09
this problem because there's so much
1:28:11
evidence of us not taking action where we should. Yeah. And
1:28:13
part of the problem
1:28:14
I think is that we do
1:28:17
not have strong
1:28:20
global organizations. And and
1:28:22
we really need that. I also wrote a book called One World. It was published
1:28:24
just after nine
1:28:27
eleven. Mhmm. And in
1:28:31
that, I was looking
1:28:33
towards the strengthening of global institutions,
1:28:35
because I argued that we
1:28:37
need them to deal with climate change. We just
1:28:39
have one atmosphere. You can't govern climate
1:28:42
change with sovereign nations
1:28:44
because greenhouse
1:28:46
gases that we emit across
1:28:49
the United States obviously
1:28:51
spread everywhere. Also, I
1:28:53
thought that we needed
1:28:55
a world trading organization that was more geared towards helping people
1:28:57
in extreme poverty. We lack that. I
1:29:00
wanted to have stronger
1:29:02
international legal systems so that
1:29:05
crimes against humanity would be
1:29:07
punished everywhere. And I wanted us to do more about global poverty. And if
1:29:10
you look at those
1:29:12
areas, we
1:29:15
certainly haven't got the strong
1:29:18
institutions to govern climate
1:29:20
change. The move
1:29:22
towards international law that seemed
1:29:25
reasonably promising then with the setting
1:29:27
up of the International Criminal Court has had very limited success and,
1:29:32
you know, if you look at
1:29:34
the war crimes being committed by Russia in Ukraine, it's hard to see how the people responsible will ever be brought
1:29:39
to justice there. The World Trade
1:29:41
Organization basically stalled around the time the book came out and hasn't been able to
1:29:44
make progress
1:29:48
towards better trading regimes for
1:29:50
countries that are low income and disadvantaged by present
1:29:56
systems. So perhaps we've
1:29:58
made some progress in terms of global poverty. That has been reduced over that twenty year period quite
1:30:00
dramatically. But that's
1:30:03
really the one bright
1:30:06
spot in this picture.
1:30:09
And that's why it's
1:30:11
hard to see
1:30:13
that positive
1:30:14
future. Global cooperation seems very
1:30:17
elusive. We've
1:30:20
definitely gone backwards
1:30:22
with the conflict between Russia and
1:30:24
the West now -- Mhmm. --
1:30:26
and China as well not
1:30:29
being part of the global trading
1:30:31
order. The hope was that if they realized
1:30:33
that they need to trade, and that trade
1:30:35
is helping them, helping their economy
1:30:38
and helping to lift hundreds of millions of
1:30:40
people out of poverty, that
1:30:42
then they would be a participant in this,
1:30:44
and we would have
1:30:47
a multipolar world.
1:30:49
Well, I suppose it is multipolar, but there's more
1:30:51
confrontation than there was twenty years ago.
1:30:53
Yes. And these sort of
1:30:55
global gatherings are
1:30:58
you know, often about political expediency. You know,
1:31:00
there's there's a lot of words being
1:31:03
said, but in terms
1:31:05
of like real world action with
1:31:08
intended positive effect, that doesn't
1:31:10
seem to occur with any
1:31:12
Certainly not in the
1:31:14
way that it needs to if we're to solve the problems. But that
1:31:16
stoic tradition of the
1:31:18
intersection of philosophy and
1:31:22
politics, you know, if there were, at least in national politics,
1:31:25
a seat, you know, in the White
1:31:27
House for, like, a philosopher
1:31:30
in chief. I'm sure lots of
1:31:32
people have called upon you for your input and
1:31:34
advice on various issues. But if that was actually like a cabinet position,
1:31:39
like, you're in a parallel universe and you're sitting there
1:31:41
in the situation room or what or in
1:31:43
the Oval Office, what have you? Like, what
1:31:45
is the guidance or the counsel
1:31:47
that you could give like
1:31:49
the president or our government to help us start to
1:31:51
make better decisions about these
1:31:56
problems?
1:31:57
I would say that the United States has to lead and it
1:31:59
has to be prepared to lead in ways that are clearly genuine and
1:32:04
bona fide, saying, look,
1:32:06
we will do these things. We will start doing them. We will do what is
1:32:11
our fair share on things like climate
1:32:13
change and extreme poverty. And that's doing a lot
1:32:16
more than
1:32:18
we're doing now on either of those issues. And we want
1:32:21
you to join in. And
1:32:23
let's let's be open
1:32:25
and transparent about what we're all
1:32:27
doing, so that we can see who's doing their fair
1:32:30
share. And I hope
1:32:33
if we make that gesture
1:32:35
you'll match it and do the same, and we'll start to
1:32:38
build trust and cooperation, because the
1:32:40
things
1:32:42
that need to be done can only be done if all the major
1:32:44
global players participate. Yeah.
1:32:46
And from an economic perspective,
1:32:51
so much of your work and your focus is on giving
1:32:53
and how to effectively give. But how
1:32:55
do you think about other
1:32:59
economic modalities like
1:33:01
the notion of conscious
1:33:03
capitalism or venture capital that
1:33:06
is kind of impact oriented,
1:33:08
like I'm thinking of, do you know Jacqueline
1:33:10
Novogratz and Acumen and the work that she's
1:33:12
doing to eradicate poverty and kind of,
1:33:14
you know, like, there are other ways beyond just the
1:33:16
traditional notion of of giving
1:33:18
to NGOs and and and
1:33:20
non
1:33:21
profits. Like, how do those
1:33:23
operate in your thinking? Absolutely.
1:33:25
I think we need to try them all and see what works. Social enterprises
1:33:28
that do produce a return,
1:33:30
that are for-profit organizations but
1:33:34
are concerned to have a social impact, are
1:33:36
things worth doing. I've actually made
1:33:38
a small investment in an
1:33:41
organization that is building low income housing
1:33:43
in Kenya on a profit basis.
1:33:46
But the people there,
1:33:48
some of whom I know, you
1:33:50
know, genuine people who have worked in aid,
1:33:52
see an opportunity here to fill the gap
1:33:54
between the slums that exist in places like Nairobi and
1:33:57
the housing that the
1:33:59
wealthy can afford. So
1:34:02
I did this because I wanna see whether it works. I wanna have an interest
1:34:04
in it and be able to
1:34:06
follow it. Mhmm. And I have
1:34:11
no objection, in fact, I'm all in
1:34:13
favor of people trying new
1:34:15
ideas. I think it's relatively
1:34:17
new, so we'll see what works and what
1:34:19
is going to spread and multiply. But I hope some of these
1:34:21
things will because they certainly have the potential to
1:34:23
do good.
1:34:26
Who else is leading the way here? Like, when you think of people
1:34:28
who are really doing the right thing,
1:34:30
making a real positive
1:34:31
change, and
1:34:34
doing it in innovative ways?
1:34:35
Well, I think some of the some of the foundations
1:34:37
that have been set up to
1:34:40
do good
1:34:42
things like I already mentioned Dustin Moskovitz
1:34:44
and Cari Tuna's Good
1:34:46
Ventures foundation, which set up
1:34:49
Open Philanthropy. And is supporting GiveWell
1:34:52
too. And those are both organizations that are
1:34:54
trying to assess what's the best thing
1:34:56
you can do. To have a
1:34:58
positive impact on the world. GiveWell, like The
1:35:01
Life You Can Save, is concerned with global poverty and with assessing
1:35:03
which are the most effective charities,
1:35:07
whereas Open Philanthropy is much broader and is
1:35:10
looking at a whole range of different areas.
1:35:12
And trying
1:35:15
to assess where you can make that impact. Mhmm. So I
1:35:17
think I think those are really important things
1:35:19
to do because we need to
1:35:21
have that knowledge and then we
1:35:23
can follow through. I think Bill Gates
1:35:25
has been a pioneer too, in setting up the Gates Foundation, Bill and Melinda Gates,
1:35:27
I should say, and with
1:35:30
support from Warren Buffett, they're
1:35:32
also doing
1:35:34
a lot of good things, saving a lot
1:35:37
of lives, improving the quality of many
1:35:39
lives. So I think they
1:35:42
deserve recognition and applause for having
1:35:45
made that contribution and also
1:35:47
incidentally for trying to persuade other
1:35:49
billionaires to do the same in
1:35:51
the Giving Pledge.
1:35:52
Yeah. Is there a
1:35:54
different standard for the billionaire class? So obviously, you
1:35:56
have
1:35:56
that much. You
1:35:57
ought to be giving a lot
1:35:59
more. Right? Yes. But
1:36:02
is there so for example, you
1:36:05
know, is it okay
1:36:07
for the billionaire to
1:36:09
be pursuing space travel when
1:36:12
those resources could go
1:36:14
towards eradicating poverty. Like,
1:36:16
how do you think
1:36:18
about the switching you know, like the
1:36:20
the the focus of that
1:36:22
resource allocation decision making
1:36:24
process.
1:36:25
If you're thinking about them, sort of, boosting themselves
1:36:28
into space Should they
1:36:30
be purchasing Twitter, or should they
1:36:32
be,
1:36:34
you know, I I wish
1:36:36
Elon Musk had stayed
1:36:38
with developing better batteries so that
1:36:40
we can all be driving
1:36:43
electric vehicles sooner. That seems to me to be his
1:36:46
major contribution so far.
1:36:49
I acknowledge that behind his idea of
1:36:52
colonizing Mars is this idea of reducing the risk
1:36:54
of extinction. Right? That if we had a self
1:36:58
sustaining human colony on Mars, and let's say there was a nuclear war on
1:37:00
this planet that wiped
1:37:02
everybody out here. Well, you
1:37:04
would still have our species and
1:37:06
maybe in a few hundred years
1:37:09
they could come back to
1:37:11
a less radioactive Earth and reestablish things
1:37:14
here or explore other planets elsewhere. Mhmm.
1:37:16
So it's
1:37:18
not that it's completely self
1:37:21
indulgent to try to develop
1:37:23
colonies on
1:37:23
Mars. But I do think
1:37:26
that there are
1:37:27
more urgent issues that we could deal with
1:37:27
here first. Right. Well, we can
1:37:30
leave it at that. Next subject.
1:37:32
And
1:37:35
let it be known that you do put your money where your
1:37:37
mouth is. You recently were the recipient of
1:37:39
this one million dollar
1:37:41
prize honoring you for your work in philosophy and the humanities.
1:37:44
And that prize
1:37:46
was quickly dispatched to The
1:37:50
Life You Can Save, your organization, and then
1:37:52
to, oh, I think fifty percent
1:37:54
got spread out to charities that
1:37:57
that organization has sort of
1:37:59
vetted and
1:37:59
supported. Yes. And then the other
1:38:02
half went to animal rights. Right. Yeah. Basically, anti-factory-farming.
1:38:07
Right. ProVeg, organizations, basically, those
1:38:10
working outside the Western countries to try to develop those ideas
1:38:15
there. Right. And so I'm interested in the kind of
1:38:17
actual emotional experience of of receiving
1:38:19
a million dollars. And I
1:38:21
mean, what
1:38:23
is that like? Does it hit your bank
1:38:25
account? And then you have to, like, send it back out? I mean, obviously, it's sort
1:38:27
of theoretical. Right? Because
1:38:30
you're not you're just okay,
1:38:32
it's gonna pass through you to these other things,
1:38:34
but it is kind of a rare experience to be like,
1:38:37
wow, they're giving me a million dollars. Like, was
1:38:39
there ever a moment where you're
1:38:42
like, do I need to
1:38:43
give all of it away?
1:38:46
Yeah. So I You have kids, you have grandkids -- Yeah. Yeah.
1:38:48
-- you know, but this is who you are. Right? So
1:38:50
I'm just it
1:38:51
is Walk me through
1:38:53
it. It is who I am. Yeah. And also, I've
1:38:55
been a Princeton professor for more than twenty years on
1:38:58
a comfortable salary. So --
1:39:00
Okay. --
1:39:02
I don't really feel that I need
1:39:04
it. And I don't even think that it
1:39:06
would have made a big difference to my
1:39:08
-- Mhmm. -- happiness, you know. I'm
1:39:10
not the kind of person who wants
1:39:12
to dine at three hundred dollar restaurants
1:39:15
and drink fine wines. I don't need
1:39:17
to. When I travel, I don't
1:39:19
wanna live in luxury resorts. Actually,
1:39:22
I occasionally get put up in these places by conferences and so on, and they just make me feel a bit uncomfortable.
1:39:28
Yeah. So I really have
1:39:30
enough for the kinds of things that I want. Mhmm. And there is a fulfillment and satisfaction
1:39:35
in saying, wow, I have the opportunity to
1:39:37
help all of these organizations to an extent that I didn't really have
1:39:40
before and to see what they're
1:39:42
doing with the money that I'm
1:39:44
giving. And to
1:39:46
know that it's helped a lot of
1:39:48
people. And I hope it has reduced animal suffering as
1:39:50
well. It's been part of that movement, helped
1:39:53
people who are very dedicated, working for these important causes. So I think I
1:39:55
probably got more fulfillment and satisfaction through giving
1:39:58
it away than I would have
1:40:00
got
1:40:02
trying to think how to spend it on myself.
1:40:04
Yeah. Sure. I mean, I think
1:40:06
that's a really important piece because
1:40:09
we've deluded ourselves into believing that
1:40:11
this, you know, wealth will be
1:40:13
the thing that makes us happy,
1:40:15
but all the evidence
1:40:18
suggests and establishes that beyond
1:40:20
a certain threshold point, it doesn't do that at
1:40:22
all. And in fact, it is in the giving
1:40:25
that we
1:40:27
are kind of
1:40:29
engendered with this sense of fulfillment, which is really what we're all kind of after. Right? So far from
1:40:32
being this,
1:40:36
you know, self-flagellating pursuit, it's actually self-
1:40:38
serving in that regard. That's right. Yeah. While you're also
1:40:41
alleviating suffering
1:40:41
and doing all this good in the
1:40:44
world. Yeah. So
1:40:47
Charlie Bressler, who's the person who with whom I
1:40:49
really cofounded the life you can
1:40:51
save, was, before he read the book
1:40:53
that I wrote, The Life You Can Save,
1:40:55
president of a men's clothing retail chain in the United
1:40:57
States. So he had earned quite a lot of
1:41:00
money. Mhmm.
1:41:02
But he says that by cofounding The Life
1:41:04
You Can Save, the first life that
1:41:06
he saved was his own because he
1:41:08
got so much more satisfaction
1:41:10
and fulfillment together with his wife,
1:41:12
from helping to establish the organization. And
1:41:14
and he became the CEO of it on
1:41:17
negative income because
1:41:20
he he didn't
1:41:22
take any salary, and he
1:41:24
actually donated to it. So he found
1:41:26
that really fulfilling. And and I agree.
1:41:32
If you pursue the
1:41:34
materialist dream, in inverted
1:41:38
commas, it doesn't fulfill
1:41:40
you. You give yourself a purpose and then
1:41:42
the purpose is to get more and more
1:41:45
money and for what? Whereas if you
1:41:47
use that for the purposes of saying,
1:41:50
I can do something to help
1:41:52
others. Mhmm. And that's
1:41:54
a really lasting and important value,
1:41:56
you're gonna benefit yourself as well as
1:41:59
others. Right. That's really
1:42:02
beautiful. And I think for people who are listening
1:42:04
to this, who are now curious about what
1:42:06
that might look like for their own
1:42:10
lives, they can go to your website for the life
1:42:12
you can save. And there, they can
1:42:14
sort of get a sense of
1:42:18
some of these kind of vetted charities that
1:42:20
are doing good in the world. Right?
1:42:22
Like, you've done the work to
1:42:24
say, we know these
1:42:27
ones are the best bang for your
1:42:29
buck in terms of suffering
1:42:30
reduction. That's right. You can go to the life you can save dot org, and you can look
1:42:33
at the charities that
1:42:35
we recommend and click on,
1:42:38
get more details on each one. Mhmm. You can also download the book, absolutely
1:42:40
free as an e- -- Right.
1:42:42
-- book, or as an
1:42:46
audiobook, and I'm delighted that the
1:42:48
audiobook different chapters were read
1:42:51
by different people. My friend Paul
1:42:53
Simon, the singer-songwriter, read
1:42:56
one. Kristen Bell.
1:42:57
Yes. She read one. Yeah.
1:42:59
Stephen Fry. Stephen Fry. That's
1:43:01
a
1:43:02
great voice. We have a series of voices and
1:43:04
different accents in English. We had Shabana
1:43:06
Azmi, who's an Indian actress, reading it
1:43:08
in her accent, and we have Winnie Auma,
1:43:11
who's African. So we have a lot of different voices, which
1:43:13
gives it a kind of global
1:43:15
sense because they're
1:43:17
all reading in English, but it's it's global
1:43:19
in that sense, and it is a book about a global problem. It
1:43:22
wasn't
1:43:23
always free, right? It's
1:43:25
now a re-release where you've kind of positioned it this way.
1:43:27
So in fact, yeah, it was initially
1:43:30
published by Random House. And at
1:43:32
some point,
1:43:34
Charlie said let's try and get the rights back so that we can make
1:43:36
it free. Mhmm. So we had long negotiations
1:43:38
with Random House. We had to
1:43:41
pay for it. And I wasn't sure that that
1:43:43
was the best investment of our funds given that we were trying to
1:43:45
raise funds to help save lives.
1:43:47
But Charlie persuaded me that in the
1:43:49
long run, it would save a lot
1:43:51
more lives, because we've now distributed far
1:43:53
more copies of the book than Random House would've if we'd left the rights with them. Mhmm. And a
1:43:56
lot of
1:43:59
people have read it and donated. And in
1:44:00
fact, someone said this book was free, but it's
1:44:02
actually the most expensive book I've
1:44:06
ever read. Because they donated
1:44:08
significantly. Yeah. So so, yeah,
1:44:10
it has it has paid off
1:44:12
getting the rights
1:44:13
back. Uh-huh. Okay. And the audiobook
1:44:15
you could just get on Spotify and listen
1:44:17
to it like you would listen to a
1:44:20
podcast, just in
1:44:22
chapters. You know -- That's right. -- different episodes, which is pretty
1:44:24
cool. So it's very easy to find. You don't have
1:44:26
to go to Audible or anything like
1:44:27
that. Yep. Yeah. That's right. And if
1:44:29
you prefer reading on paper, we are actually, especially for
1:44:31
listeners to your podcast, asking them to
1:44:34
donate, and we're having matching funds.
1:44:36
And
1:44:39
we'll even mail a paper
1:44:41
copy of the book to them if they
1:44:43
prefer that. Right. I
1:44:46
believe