Episode Transcript
0:03
Hello, Hello, Welcome to Smart Talks
0:05
with IBM, a podcast from Pushkin
0:07
Industries, iHeartRadio and IBM.
0:10
I'm Malcolm Gladwell. This season,
0:13
we're talking to new creators, the
0:15
developers, data scientists,
0:17
CTOs, and other visionaries who
0:19
are creatively applying technology
0:21
and business to drive change. Channeling
0:24
their knowledge and expertise, they're developing
0:26
more creative and effective solutions
0:29
no matter the industry. Our
0:31
guest today is Phaedra Boinodiris,
0:34
trust in AI practice leader
0:36
within IBM Consulting. Advocating
0:39
for artificial intelligence built and deployed
0:42
responsibly is no longer
0:44
just a compliance issue, but
0:46
a business imperative. Part
0:49
of Phaedra's job is to help companies
0:51
identify potential risks and pitfalls
0:54
way before any code is written. In
0:56
today's show, you'll hear how Phaedra's
0:58
team at IBM is approaching this challenge
1:01
holistically and creatively. Phaedra
1:04
spoke with doctor Laurie Santos, host
1:06
of the Pushkin podcast The Happiness
1:08
Lab. Laurie is a professor of
1:10
psychology at Yale University and an
1:13
expert on human cognition and the
1:15
cognitive biases that impede
1:17
better choices. Now,
1:20
let's get to the interview
1:28
Phaedra, I'm so excited that we get a chance to chat
1:31
today. You know, just to start off,
1:33
I'm wondering how did you get started
1:35
in this role at IBM? Like, what's the story
1:37
to how you got where you are today? Oh goodness.
1:39
My background is actually from
1:42
the world of video games for entertainment,
1:44
So AI has always been very
1:46
interesting to me, especially when you intersect
1:48
AI and play. But several
1:51
years ago I began to get very
1:54
frustrated by what
1:56
I was reading in the news with
1:58
respect to malintent
2:02
through the use of AI. And
2:04
the more that I learned and the
2:06
more that I studied about
2:09
this space of AI and ethics, the more I
2:11
recognized that even
2:14
organizations that have the very very
2:16
best of intentions could
2:19
inadvertently cause potential
2:22
harm. And so that's super cool.
2:24
I love that your interest in more responsible
2:26
AI came from the gaming world.
2:29
You have to talk a little bit about your history with gaming
2:31
and how that informed your interest in trustworthy
2:34
AI. Well, it wasn't as
2:36
much necessarily the
2:38
ethical components of AI
2:40
when I was working in games. It was more
2:43
things like, look at what
2:45
non-player characters can do. You
2:48
know, I mean if you've got an AI acting
2:50
as a character within the game, and how
2:52
is it that you can use AI in order
2:54
to make a game a more interesting experience.
2:58
Actually, I ended up joining IBM to
3:00
be our first global lead for something called
3:02
serious games, which is when you use video games
3:04
to do something other than just entertain.
3:07
And so the idea of integrating real data and real
3:09
processes within sophisticated games
3:11
powered by AI to solve complex
3:14
problems. It wasn't until,
3:16
as I mentioned, later, when
3:18
we all started to hear more
3:20
and more news about the
3:23
problems that could happen with respect
3:26
to rendering or putting out models that are inaccurate
3:29
or unfair. I know one of
3:31
your inspirations, from hearing other interviews that you've
3:33
done, is sci-fi. I'm also a sci-fi
3:35
nerd, and I know sci-fi has talked a
3:37
lot about, you know, the trustworthiness
3:39
issues that come up when we're dealing with AI and
3:42
so on, and so talk a little bit about
3:44
how you bring that to your work in developing
3:46
AI that's a little bit more ethical. A
3:48
lovely question. So my parents
3:51
were major technophiles. They
3:53
both were immigrants to the United States,
3:55
came here to study engineering and they met
3:59
in college. Growing up, my
4:01
sister and I we had Star
4:03
Trek playing every
4:06
night. My parents were
4:08
both big fans of Gene Roddenberry's
4:11
vision of how technology could
4:13
really be used to help
4:15
better humankind, and that was the
4:18
ethos that, of course, we grew up
4:20
in. The wonderful thing about
4:22
science fiction isn't that it
4:24
predicts cars, for example,
4:27
but that it predicts traffic jams, you
4:29
know. And I think there's just so
4:32
much we can learn from
4:34
science fiction, or in fact, like I said,
4:36
play as a mechanism to be able
4:38
to teach. Science fiction predicting
4:41
traffic jams. I love it. But
4:44
when we think about AI and science
4:46
fiction, we need to be careful. We
4:49
need to remember that AI is
4:51
not something that's going to enter our lives at
4:53
some point in a distant future. AI
4:56
is something that's all around us today.
4:59
If you have a virtual assistant in your
5:01
house, that's AI, your
5:03
phone app that predicts traffic? AI.
5:06
When a streaming service recommends a movie,
5:09
you've guessed it, AI. Phaedra
5:12
says AI may be behind the
5:14
scenes determining the interest rate
5:16
on your loan, or even whether
5:18
or not you're the right candidate for that job
5:20
you applied for. AI is
5:23
both ubiquitous and invisible,
5:26
which is why it is so crucial that
5:28
companies learn how to build trustworthy
5:30
AI. How do we do that?
5:33
When thinking about what does it take
5:35
to earn trust in something
5:37
like an AI, there
5:39
are fundamentally human centric
5:42
questions to be asked, right like,
5:44
what is the intent of this particular AI
5:46
model? How accurate is that model?
5:49
How fair is it? Is it explainable
5:52
if it makes a decision that could directly
5:54
affect my livelihood? Can
5:56
I inquire what data did you use
5:58
about me to make that decision?
6:01
Is it protecting my data? Is
6:03
it robust? Is it protected
6:05
against people who could trick
6:08
it to disadvantage me over others?
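Several of the questions Phaedra lists here can in fact be turned into measurable checks: accuracy, fairness of outcomes across groups, and robustness to small input perturbations. A minimal sketch on synthetic loan-style data using scikit-learn; the data, the group attribute, and the specific checks are illustrative assumptions, not IBM's actual tooling:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic loan-style data: two features plus a binary group attribute.
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)          # stand-in protected attribute
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # ground-truth decision

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# "How accurate is that model?"
acc = accuracy_score(y, pred)

# "How fair is it?" -- here, the gap in approval rates between groups.
rate_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())

# "Is it robust?" -- do tiny input perturbations flip many decisions?
noisy = X + 0.01 * rng.normal(size=X.shape)
flip_rate = (model.predict(noisy) != pred).mean()

print(f"accuracy={acc:.3f} rate_gap={rate_gap:.3f} flip_rate={flip_rate:.3f}")
```

Real audits would use richer metrics and dedicated libraries, but the shape is the same: each trust question becomes a check you can run before deployment.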
6:10
I mean, there's so many questions
6:12
to be asked. Earning
6:14
trust in something like AI is fundamentally
6:17
not a technological challenge,
6:20
but a socio-technological challenge.
6:22
It can't just be solved
6:25
with a tool alone. What
6:28
are the kinds of risks that companies have to think
6:30
through as they're developing these technologies
6:30
to make sure they're as trustworthy as possible? Well,
6:35
you know, they may be putting a lot of money
6:37
into investing in AI.
6:39
that gets stuck in proof of concept,
6:42
plans get stuck in pilot. We've
6:44
done some research where we have found about eighty
6:46
percent of investments in AI
6:48
get stuck, and sometimes
6:51
it's because the investment isn't tied
6:53
directly to a business strategy, or more
6:55
often than not, people simply don't trust
6:58
the results of the AI model. As
7:01
a company that is of course thinking about this so
7:03
deeply, what do businesses need to consider
7:05
when they're trying to figure out how
7:07
to solve this big puzzle of AI ethics.
7:10
It has to be approached holistically, so
7:12
you've got to be thinking about, for example,
7:15
what culture is required
7:17
within your organization in order to
7:19
really be able to responsibly create AI,
7:22
what processes are in place to make
7:24
sure that you're being compliant and that your
7:27
practitioners know what to do,
7:30
and then of course AI engineering
7:32
frameworks and tooling that can
7:34
assist you on this journey. There
7:36
is so much fundamentally to do.
7:39
We found that actually who
7:41
was leading responsible
7:44
AI and trust initiatives
7:46
within their organization has switched
7:48
in the last three years. It used
7:50
to be technical leaders, for
7:52
example, chief data officer or
7:54
someone who is a PhD in
7:56
machine learning and now it's
7:58
switched so that eighty percent
8:01
of those leaders are now non technical
8:03
business leaders maybe you know, chief
8:06
compliance officer, chief diversity and
8:08
inclusivity officer, chief legal officer.
8:11
So we're seeing a shift, and I believe
8:13
firmly it's a recognition
8:16
from organizations that are
8:18
seeing that in order to really
8:20
pull this off well, there has to be
8:22
an investment and a focus in
8:25
culture, in people
8:27
and getting people to understand why
8:29
they should care about this space. And
8:33
so I see two challenges with doing
8:35
that right. One is, you know a lot of these
8:37
technology companies are really built to be
8:40
tech companies, not necessarily you know,
8:42
social tech companies or having this sort
8:44
of training in ethics and beyond. Another
8:47
issue seems to be that you're really proposing
8:49
a switch that's truly holistic, right,
8:51
that's like rethinking the way the company
8:54
thinks about its bottom line. And so as
8:56
you think about working through these kinds of challenges
8:59
at IBM, how have you tackled this, like, how
9:01
have you brought new talent in? How have you thought
9:03
really carefully about this big holistic switch
9:05
that needs to come to make AI more trustworthy.
9:08
Data is an artifact of the human
9:10
experience and if you start with
9:12
that as your definition and then
9:14
think about well, data
9:17
is curated by data scientists. All data
9:19
is biased, and so if
9:22
you're not recognizing bias
9:25
with eyes fully open, then
9:27
ultimately you're calcifying systemic
9:30
bias into systems like AI.
9:33
So some of the things that we've done at IBM,
9:36
again recognizing this important need
9:38
for culture, is a big, big,
9:40
big focus on diversity, not
9:43
only looking at teams of data scientists
9:45
and saying how many women are on this team,
9:47
how many minorities are on this team, but
9:50
also insisting
9:52
on recognizing that we
9:55
need to bring in people with different world views
9:57
too, for example, what's your
9:59
definition of fairness? Is
10:01
your definition equality? Or is it equity?
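The equality-versus-equity distinction maps onto two standard formal fairness definitions: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates among the qualified). A toy illustration with invented hiring numbers, showing that the very same decisions can satisfy one definition while failing the other:

```python
import numpy as np

# Invented outcomes: group 0 happens to have more qualified applicants.
qualified = np.array([1, 1, 1, 1, 0,   1, 0, 0, 0, 0])
hired     = np.array([1, 1, 1, 1, 0,   1, 0, 0, 0, 0])
group     = np.array([0, 0, 0, 0, 0,   1, 1, 1, 1, 1])

def selection_rate(g):
    # "Equality" reading: what fraction of group g is hired at all?
    return hired[group == g].mean()

def true_positive_rate(g):
    # "Equity" reading: among the qualified in group g, how many are hired?
    mask = (group == g) & (qualified == 1)
    return hired[mask].mean()

# Equal opportunity holds (every qualified person in both groups is hired),
# yet demographic parity fails, because qualification rates differ.
print(selection_rate(0), selection_rate(1))          # 0.8 vs 0.2
print(true_positive_rate(0), true_positive_rate(1))  # 1.0 vs 1.0
```

Which definition counts as "fair" is a values question, not a math question, which is exactly why different worldviews are needed at the table.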
10:04
Also bringing people with a
10:06
wider variety of skill sets and roles,
10:09
including our social scientists, anthropologists,
10:12
sociologists, psychologists
10:15
like yourself, right, behavioral
10:17
scientists, designers. I mean we have
10:20
one of the leading AI
10:22
design practices in the
10:24
world. I mean the effort, the investments
10:27
we've been making in design thinking
10:29
as a mechanism to
10:31
create frameworks for systemic empathy
10:34
well before any code is written, so
10:37
people can think through how
10:39
would you design in order to
10:41
mitigate for any potential harm given
10:44
not only the values of your organization,
10:46
but what are the rights of individuals?
10:49
Asking oneself these kinds of questions
10:51
reinforces the idea
10:54
that ethics doesn't come at the end, like
10:57
it's some kind of quality assurance,
10:59
like check I passed the audit, I'm
11:01
good to go, you know. But instead,
11:03
really, you know, as soon as you're thinking about
11:05
using an AI for a particular
11:08
use case, thinking about, you know, what
11:11
is the intent of this model, what's the relationship
11:13
we ultimately want to have with AI?
11:16
And again, these are non technology
11:19
questions. This is where social scientists come in.
11:22
Having a social scientist on your team
11:25
helping think through these kinds of questions
11:27
is critical. Let's
11:30
pause here for a second, because this is
11:32
a really profound idea. Building
11:34
responsible AI does not mean
11:37
that you create a system then check in
11:39
at the end and say is this okay?
11:41
Is this ethical? If you don't
11:44
ask those questions until the end of the process,
11:47
you've already failed. You have
11:49
to think about ethics from the jump
11:52
from the makeup of the team to the data
11:54
you're using to train the model, to the most
11:56
basic question of all, is this even
11:58
the right use case for artificial
12:00
intelligence. The big lesson
12:02
from IBM is this responsible
12:06
AI is something you build at
12:08
every step of the process. So
12:11
this season of Smart Talks is all focused on
12:14
creativity and business. My guess
12:16
is that thinking about trustworthy AI involves
12:18
a lot of creativity. But talk to me about
12:20
some of the spots where you see this work as being most
12:23
creative. Oh goodness,
12:25
I would say incorporating design,
12:28
design thinking in particular, as
12:30
well as straight-up design, in
12:33
order to craft AI responsibly.
12:36
You've used this word design thinking, and so
12:38
I'm wondering exactly what you mean here. How do you
12:40
define this idea of design thinking. Design
12:43
thinking is a practice that we established
12:46
here at IBM many years ago. In
12:48
essence, what it is, it's a
12:50
way of working with groups
12:52
of people to co
12:55
create a vision for
12:58
something, for a product or service
13:00
or an outcome. And
13:03
typically it starts with things
13:05
like, for example, empathy maps, like if
13:07
you're thinking about an end user, thinking
13:10
through what is this person thinking,
13:13
seeing, hearing, feeling, like what are they
13:15
experiencing in order
13:17
to ultimately craft an experience
13:19
for them that is targeted specifically
13:22
for them. So we use it in
13:24
a really wide variety of
13:26
different ways with respect to
13:28
trustworthy AI, even rendering
13:31
an AI model explainable
13:33
to a subject. And I'll give you an example.
13:36
So we've got this wonderful program
13:38
within IBM called our Academy of Technology,
13:41
and we take on initiatives that steer
13:44
the company in innovative new directions.
13:47
So we had an initiative where
13:49
it was titled What the Titanic
13:51
Taught Us about Explainable AI. And
13:55
the project was imagining
13:58
if there was an AI model that could
14:01
predict the likelihood
14:03
of a passenger getting a life raft
14:05
on the Titanic. And we broke
14:07
up into two workstreams. One was
14:10
the workstream full of the data scientists
14:12
who were using all the different explainers
14:14
to come up with the predictions and they would crank out
14:16
the numbers. And the other team
14:19
here's where the social scientists lived
14:21
and the designers were, right, where
14:23
we were thinking through how do
14:25
we empower people? Well,
14:27
how do we explain this
14:30
algorithm and this
14:33
predictor and the accuracy behind this
14:35
prediction in such a way as to
14:37
ultimately empower an end user. They
14:39
could decide I'm not getting on
14:41
that boat, or I
14:43
want to get a second opinion
14:46
please, or I want to contest
14:49
the outputs of this model because
14:51
I upgraded to first class
14:54
just yesterday, see what I'm saying,
14:56
And that takes a lot of creativity
14:59
how you design an experience
15:01
for someone in order to ultimately
15:04
empower them. So design
15:06
design, design is critically,
15:09
critically important. And that's why I mentioned,
15:11
you know, we've got to open up the aperture
15:13
with respect to who we invite to the table
15:15
in these kinds of conversations. Taking
15:17
the time to really understand other people's
15:20
perspectives is so important
15:22
when you're doing anything creative, and
15:25
it is fundamental to the way the
15:27
new creators work. The
15:29
core question you should always be asking
15:31
is where will the user be meeting
15:34
this product? As Phaedra
15:36
said, what will they be thinking, seeing,
15:38
hearing, feeling. If you can
15:40
answer those questions the way IBM
15:43
does in its design thinking practice,
15:45
you will be in great shape to create
15:48
almost anything. Really, let's
15:50
hear how it works in practice.
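The Titanic exercise Phaedra described pairs a prediction with an explanation a passenger could inspect and contest. A minimal sketch of that second half, using a logistic model's per-feature contributions to the log-odds as a simple additive explainer; the features, data, and explainer choice here are invented stand-ins, not the Academy of Technology's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented passenger features: [first_class, is_child, fare_paid (scaled)].
names = ["first_class", "is_child", "fare_paid"]
X = np.array([
    [1, 0, 0.9], [1, 1, 0.8], [0, 1, 0.2],
    [0, 0, 0.1], [1, 0, 0.7], [0, 0, 0.3],
])
y = np.array([1, 1, 1, 0, 1, 0])  # 1 = predicted to get a life-raft seat

model = LogisticRegression().fit(X, y)

def explain(passenger):
    """Per-feature contribution to the log-odds: a readable, contestable
    account of why the model leans the way it does."""
    return dict(zip(names, (model.coef_[0] * passenger).round(2)))

# A passenger who just upgraded to first class can see exactly which
# feature changed the prediction, and by how much.
before = np.array([0.0, 0.0, 0.5])
after = np.array([1.0, 0.0, 0.9])
print(explain(before))
print(explain(after))
print(model.predict(np.stack([before, after])))
```

A production version would swap in a dedicated explainer, but the design goal is the same: an output the subject can act on, second-guess, or contest.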
15:53
And so we've been mostly talking kind of at the meta
15:55
level about you know, how to think about AI ethics
15:58
generally, but of course the way this probably
16:00
occurs in the trenches is a client approaches
16:00
IBM and they want help with a specific
16:05
problem in AI and so I'm wondering
16:07
from a client based perspective, where do you
16:09
start having some of these tough conversations.
16:12
It has varied, To tell you
16:14
the truth, we had one
16:16
client that approached us to
16:19
expand the use of an AI model
16:22
to infer skill sets
16:24
of their employees, but not just to infer
16:26
their technical skills but also
16:29
their soft foundational skills,
16:31
meaning, let me use an AI to determine
16:33
what kind of communicator you might be,
16:35
Laurie, right. Others might
16:38
come to us with, Okay,
16:41
we recognize we need help setting up an
16:43
AI ethics board. Is this something you can
16:45
assist us with? Or we
16:48
have these values, we need to
16:50
establish AI ethics principles
16:52
and processes to help
16:55
us ensure that we're compliant given regulations
16:57
coming down the pike. Or we've
17:00
had clients come to us saying, please train our people
17:02
how to assess for unexpected
17:05
patterns in an AI model,
17:07
but then also how to
17:10
holistically mitigate to
17:12
prevent any potential harm. And
17:15
those have been phenomenal engagements.
17:19
They're huge learning moments. And
17:21
so it seems like the additional value
17:24
that IBM is bringing through this process
17:26
isn't necessarily just providing an AI algorithm
17:28
or consulting on said AI algorithm. It seems
17:30
like the real value added is
17:33
explaining how this design thinking works.
17:35
You're almost like this therapist
17:37
or like a really good bartender who talks
17:39
companies through some of their problems
17:42
to try to figure out where they're going astray before
17:44
they start implementing these things. Can
17:47
I put Chief Bartender Officer
17:49
on? I like the metaphor,
17:52
I'll tell you some of our most
17:55
valuable people on the team for that
17:57
engagement. We had an industrial
17:59
organizational psychologist, we
18:01
had an anthropologist. That's
18:04
why I'm saying it's important that we
18:06
bring in the social scientists because you're
18:08
exactly right. It's more
18:10
than just scrutinizing the
18:13
algorithm in its state.
18:15
You have to be thinking about how is it
18:17
being used holistically? And
18:20
So if I was a business that was trying to think about
18:22
how a company like IBM could come
18:24
in and help out with more trustworthy AI,
18:27
what would this process really look like. Well,
18:30
what we're finding more often than not is
18:32
that there'll be smaller teams
18:34
within broader organizations that
18:37
either have the responsibility of
18:39
compliance and see the writing
18:41
on the wall, or they've been
18:43
the ones investing in AI and
18:46
are trying to figure out how to get
18:48
the rest of the organization on
18:50
board with respect to things like setting
18:53
up an ethics board or establishing principles
18:56
or things like that. So some
18:58
things that we've done to help companies
19:01
do this is we kick off engagements
19:04
with what we called our AI for
19:06
leaders workshops. On the
19:08
one hand, it's teaching why
19:11
you should care, but on the other
19:13
hand, it's meant to get people so excited
19:15
across the organization that they want to raise their hand
19:17
and say, I want to represent this part,
19:20
like, for example, I want to be part of the ethics
19:22
board as it is being stood up. The
19:24
hard part's not the tech. The hard
19:26
part is human behavior. And I know I'm preaching to
19:28
the choir given your background, it's
19:31
so nice as a psychologist to hear this. I'm
19:33
like snapping my fingers, like preach exactly.
19:35
The hard part is human behavior. So
19:38
it's been like drinking
19:40
from a fire hose. I mean in terms
19:42
of the kinds of things that we've
19:45
all been learning, and there's still so much
19:47
to learn. It really
19:50
bugs me that those who are
19:52
lucky enough to be able to take
19:54
classes in things like data ethics or
19:56
AI ethics self-categorize
19:58
as coders, machine learning scientists, or data
20:00
scientists. If we're living in a world where
20:03
AI is fundamentally
20:05
being used to make decisions that could directly
20:07
affect our livelihoods, we need to
20:09
know more. We need to have
20:11
more literacy, and also
20:15
make sure that there is a consistent
20:17
message of accessibility such
20:20
that we are saying you don't
20:22
just have to be interested in coding,
20:24
like, if you're interested in social justice or
20:26
psychology or anthropology, there's
20:29
a seat at the table for you here
20:31
because we desperately need you. We
20:33
desperately need that kind of skill set.
20:36
Just getting people to think about
20:39
how do you design something given
20:42
an empathy lens to protect
20:44
people? I mean, that, I think, is such a crucial
20:47
skill to learn. You know, one
20:49
thing I love about your approach is that when you're talking
20:51
to clients, you're almost doing what I'm doing as
20:53
a professor, where you're kind of instructing
20:55
students, getting them to think in different ways.
20:58
But I know from my field that I wind up learning
21:00
as much from students as I think sometimes they
21:02
learned from me. And so I'm wondering
21:05
what you've learned in the process of helping
21:07
so many businesses approach AI a
21:09
little bit more ethically, Like have there been insights
21:11
that you've gotten through your interaction with clients
21:13
and the challenges they've been facing. I'm
21:16
learning with every
21:18
single interaction. For
21:20
example, in my
21:23
mind, given the experiences
21:26
that IBM has had with respect
21:28
to setting up our principles,
21:31
our pillars, our AI ethics
21:33
board, there's a process
21:35
to follow, right if you're thinking about it like
21:37
a book, these are the chapters in order to optimize
21:41
the approach, let's say. But sometimes
21:44
we work with clients that say I'm going to install
21:46
this tool and I want to jump to chapter
21:48
seven, and it's like,
21:50
oh, okay, you know, how do we
21:52
help navigate clients that want
21:55
to skip over steps
21:57
that we think are important. Another
22:00
one is again the social scientists
22:02
and bringing them in to really push
22:05
hard on, what is the right context
22:05
for this data? Tell me the origin story.
22:10
Again like really pushing
22:12
us to think hard and with
22:15
their perspective. You
22:17
know, just constant, constant
22:19
learning, which is why one of the things
22:21
we did at IBM is we've established
22:24
something called our Center of Excellence, where
22:26
we said, you know what, IBMers, we
22:28
don't care what your background is, We don't care who
22:30
you are. If you're interested in this space,
22:33
you can become a member. The
22:35
Center of Excellence is a way in which
22:37
we have not only projects
22:40
people can join in order to get real life
22:42
experience, but then also share back.
22:44
Here's what we learned. We did this with
22:46
this particular one, and here was our epiphany,
22:49
because if we're not sharing back and we're
22:51
not constantly educating,
22:54
then we're missing the opportunity
22:56
to establish the right culture. Establishing
23:01
the right culture to share what
23:03
we're learning is so important,
23:06
and so I wanted to end by going back to where
23:08
we started: you with your technophile family
23:11
watching the Star Trek, I think if we were to fast
23:13
forward a couple of decades, we probably couldn't
23:15
have imagined that we'd be in the place with AI
23:18
generally where we are now, and especially
23:20
as we think through more trustworthy AI. And
23:22
so you know, with such change
23:24
happening right now, with the fact that it's
23:27
a fire hose that's gonna just get even
23:29
more powerful over time, what do you think
23:31
is next in this world of thinking through more trustworthy
23:33
AI. I would say next
23:36
is far more education,
23:38
far more understanding, and we're starting
23:41
to see that shift. Far more CEOs are
23:44
saying, yeah, ethics has to be core
23:46
to our business. But there's a shift. Barely
23:49
half of the CEOs in twenty eighteen were
23:52
saying that AI ethics was
23:55
key or important to their business, and
23:57
now you're seeing the great majority. So
24:01
education, education, education, And
24:03
again I would underscore making it far
24:05
more accessible to far more people,
24:07
which means it's not just our
24:10
classes in higher ed institutions,
24:13
it's our conferences, it's anytime
24:16
we write white papers, anytime
24:18
we publish articles, anytime we do
24:20
podcasts like this. Right, the
24:22
way we talk about this space
24:25
has to be far more accessible and
24:27
open and inviting to
24:29
people with different roles, different skill
24:31
sets, different worldviews, because
24:33
otherwise again we're just codifying our
24:35
own bias. Well, Phaedra, I want
24:37
to express my gratitude today for making
24:40
AI a little bit more accessible to everyone.
24:42
This has been such a delightful conversation. Thank
24:44
you so much for joining me for it. The pleasure
24:47
was mine. Laurie, thank you for being the consummate host.
24:56
I want to close by going back to that moment when
24:58
Laurie suggested that Phaedra was actually
25:00
IBM's Chief Bartender Officer,
25:03
not just because that's the best C suite title
25:06
ever, but because it gets at what
25:08
I think is the biggest, most important
25:10
idea in today's episode. Phaedra
25:13
boiled it down into a single line when
25:15
she said, the hard part is not the
25:17
tech, The hard part is human
25:20
behavior. Why is
25:22
building AI so complicated? Because
25:24
people are complicated. IBM
25:27
believes that building trust into AI from
25:29
the start can lead to better outcomes,
25:32
and that to build trustworthy AI, you
25:35
don't just need to think like a computer scientist.
25:37
You need to think like a psychologist,
25:40
like an anthropologist. You need
25:43
to understand people.
25:48
Smart Talks with IBM is produced by Molly
25:50
Sosha, Alexandra Garatin, Royston
25:52
Preserve and Edith Russolo with
25:55
Jacob Goldstein. We're edited by
25:57
Jan Guerra. Our engineers are Jason
26:00
Umbrel, Sarah Bruger and
26:02
Ben Tolliday. Theme song by
26:04
Gramoscope. Special thanks
26:07
to Carlie Migliore, Andy Kelly,
26:09
Kathy Callaghan and the eight Bar
26:11
and IBM teams, as well
26:13
as the Pushkin Marketing team.
26:16
Smart Talks with IBM is a production of Pushkin
26:18
Industries and iHeartMedia. To
26:21
find more Pushkin podcasts, Listen
26:23
on the iHeartRadio app, Apple
26:25
Podcasts, or wherever you
26:28
listen to podcasts. I'm
26:30
Malcolm Gladwell. This is a paid
26:32
advertisement from IBM.