Episode Transcript
0:03
Hello, Hello, Welcome to Smart Talks
0:05
with IBM, a podcast from Pushkin
0:07
Industries, iHeartRadio, and
0:10
IBM. I'm Malcolm Gladwell.
0:13
This season, we're continuing our conversation with
0:15
new creators, visionaries
0:17
who are creatively applying technology in
0:19
business to drive change, but with a
0:21
focus on the transformative
0:24
power of artificial intelligence
0:26
and what it means to leverage AI as
0:28
a game changing multiplier for
0:31
your business. Our guest today
0:33
is Christina Montgomery, IBM's
0:36
Chief Privacy and Trust Officer.
0:38
She's also chair of IBM's AI Ethics
0:41
Board. In addition to overseeing
0:43
IBM's privacy policy, a core
0:45
part of Christina's job involves
0:48
AI governance, making
0:50
sure the way AI is used complies
0:53
with the international legal regulations
0:55
customized for each industry.
0:58
In today's episode, Christina will explain
1:01
why businesses need foundational
1:03
principles when it comes to using technology,
1:07
why AI regulation should focus
1:09
on specific use cases over
1:11
the technology itself, and share
1:14
a little bit about her landmark congressional
1:17
testimony. Last May, Christina
1:19
spoke with Dr. Laurie Santos, host
1:22
of the Pushkin podcast The Happiness
1:24
Lab, a cognitive scientist
1:27
and psychology professor at Yale University.
1:29
Laurie is an expert on human
1:32
happiness and cognition. Okay,
1:35
let's get to the interview. So
1:39
Christina, I'm so excited to talk to you today. So
1:42
let's start by talking a little bit about your role
1:44
at IBM. What does a Chief Privacy
1:46
and Trust Officer actually do? It's
1:48
a really dynamic profession and
1:50
it's not a new profession, but the
1:52
role has really changed. I mean, my role
1:55
today is broader than just helping
1:58
to ensure compliance with data protection
2:00
laws globally. I'm also responsible
2:02
for AI governance. I co-chair our
2:05
AI Ethics Board here at IBM, and
2:07
for data clearance and data governance
2:09
as well for the company. So I
2:12
have both a compliance aspect to my role,
2:14
really important on a global basis, but also
2:17
help the business to competitively
2:19
differentiate because really
2:21
trust is a strategic advantage for IBM
2:24
and a competitive differentiator as a company
2:26
that's been responsibly managing
2:28
the most sensitive data for our clients for
2:30
more than a century now and helping
2:33
to usher new technologies into the world with trust
2:35
and transparency. And so that's also
2:37
a key aspect of my role. And
2:40
so you joined us here on Smart Talks back
2:42
in twenty twenty one, and you chatted with us about
2:44
IBM's approach of building trust and transparency
2:46
with AI, and that was only two
2:49
years ago. But it almost feels like an eternity
2:51
has happened in the field of AI since then,
2:53
and so I'm curious how much has changed
2:55
since you were here last time. Are the
2:58
things you told us before still
3:00
true? How are things changing? You're
3:02
absolutely right, it feels like the world
3:05
has changed really in the last two
3:07
years. But the same fundamental
3:09
principles and the same overall governance
3:12
apply to IBM's program
3:15
for data protection and responsible
3:17
AI that we talked about two years
3:20
ago, and not much has changed there from
3:22
our perspective. And the good thing
3:24
is we've put these practices and
3:26
this governance approach into place,
3:29
and we have an established way
3:31
of looking at these emerging technologies. As
3:33
the technology evolves, the tech
3:35
is more powerful, for sure, foundation models
3:38
are vastly larger and more capable,
3:41
and are creating in some respects new
3:43
issues. But that just makes it all the
3:45
more urgent to do what we've been doing and
3:47
to put trust and transparency into place
3:49
across the business to be accountable
3:52
to those principles, and
3:54
so our conversation today is really centered around this
3:56
need for new AI regulation and
3:58
part of that regulation is the mitigation
4:01
of bias. And this is something I think about
4:03
a ton as a psychologist, right, you know, I
4:05
know, like my students and everyone who's
4:07
interacting with AI is assuming
4:09
that the kind of knowledge that they're getting
4:11
from this kind of learning is accurate,
4:13
right, But of course AI is only as good as the knowledge
4:16
that's going in. And so talk to me a
4:18
little bit about like why bias
4:20
occurs in AI and the level of the problem
4:22
that we're really dealing with. Yeah,
4:24
Well, obviously AI is based on data,
4:27
right? It's trained with data,
4:30
and that data could be biased
4:32
in and of itself, and that's where issues
4:35
could come up. They come up in the data, they could
4:37
also come up in the output of the models
4:39
themselves. So it's really
4:41
important that you build bias
4:44
consideration and bias testing into
4:46
your product development cycle. And
4:48
so what we've been thinking about here at
4:50
IBM and doing. Some of our research
4:52
teams delivered some of the very first
4:55
toolkits to help detect bias years
4:57
ago now, right, and deployed them to open source,
5:00
and we have put into place for
5:02
our developers here at IBM an
5:04
Ethics by Design playbook that's
5:07
sort of a step by step approach which
5:09
also addresses very fully
5:12
bias considerations, and
5:14
we provide not only like
5:17
here's a point when you should test for it and
5:20
you consider it in the data, you have to measure
5:22
it both at the data and the model level or the
5:24
outcome level, and we provide
5:26
guidance with respect to what tools
5:28
can best be used to accomplish that. So
5:31
it's a really important issue. It's one
5:33
you can't just talk about. You have to provide
5:35
essentially the technology and the capabilities
5:38
and the guidance to enable people to test
5:40
for it. Recently, you had this wonderful
5:42
opportunity to head to Congress to talk about
5:44
AI, and in your testimony before
5:46
Congress you mentioned that it's often said
5:49
that innovation moves too fast for government
5:51
to keep up. And this is something that I also
5:53
worry about as a psychologist, right? Are our policy
5:55
makers really understanding the issues that they're
5:57
dealing with, and so I'm curious how you're approaching
5:59
the challenge of adapting AI policies
6:02
to keep up with the sort of rapid pace of all
6:04
the advancements we're seeing in the AI technology
6:06
itself. I think it's really
6:09
critically important that you have foundational
6:11
principles that apply to not
6:14
only how you use technology,
6:16
but whether you're going to use it in the first place, and where
6:19
you're going to use and apply it across your company.
6:21
And then your program from a governance perspective,
6:24
has to be agile. It has to be able
6:26
to address emerging capabilities,
6:29
new training methods, etc. And
6:32
part of that involves helping
6:34
to educate and instill and empower
6:37
a trustworthy culture at a company
6:39
so you can spot those issues so you can
6:41
ask the right questions at the right time. We
6:44
talked about, during
6:46
the Senate hearing, and IBM's been talking
6:48
for years about regulating
6:51
the use, not the technology itself,
6:53
because if you try to regulate technology,
6:56
you're very quickly going to find out regulation
6:59
will absolutely never keep up
7:01
with that. And so in your testimony to Congress,
7:03
you also talked about this idea of a precision
7:05
regulation approach for AI. Tell
7:08
me more about this. What is a precision regulation
7:10
approach and why could that be so important?
7:12
It's funny because I was able to share with
7:14
Congress our precision
7:16
regulation point of view in
7:18
twenty twenty three, but that precision
7:21
regulation point of view was published by IBM
7:23
in twenty twenty. So
7:26
we have not changed our position
7:29
that you should apply the tightest
7:31
controls, the strictest regulatory requirements
7:34
to the technology where the end use
7:37
and risk of societal harm is the greatest.
7:39
So that's essentially what it is. There's
7:42
lots of AI technology that's used
7:44
today that doesn't touch people, that's
7:46
very low risk in nature. And
7:48
even when you think about AI
7:51
that delivers a movie recommendation versus
7:53
AI that is used to diagnose
7:56
cancer, right, there's very different
7:58
implications associated with those two uses
8:00
of the technology. And
8:03
so essentially what precision regulation
8:05
is is apply different rules to different risks,
8:07
right, more stringent regulation to the use
8:10
cases with the greatest risk. And
8:12
then also we build that
8:14
out, calling for things like transparency.
8:17
You see it today with content, right,
8:19
misinformation and the like. We
8:22
believe that consumers should always
8:24
know when they're interacting with an AI system,
8:26
So be transparent, don't hide your AI. Clearly
8:30
define the risks. So as a
8:32
country, we need to have some clear guidance
8:34
right, and globally as well, in
8:36
terms of which uses
8:39
of AI are higher risk, where we'll apply
8:41
higher and stricter regulation and
8:44
have sort of a common understanding of what
8:46
those high risk uses are and
8:49
then demonstrate the impact in the cases
8:51
of those higher risk uses.
8:53
So companies who are
8:55
using AI in spaces where they can impact
8:58
people's legal rights, for example,
9:01
should have to conduct an impact
9:03
assessment that demonstrates that
9:05
the technology isn't biased. So we've
9:07
been pretty clear about applying
9:10
the most stringent regulation to the highest
9:12
risk uses of AI. And
9:15
so so far we've been talking about your congressional
9:17
testimony in terms of, you know, the specific
9:19
content that you talked about, But I'm just curious on
9:21
a personal level, you know, what was that
9:24
like? Right now, it feels like, at a policy
9:26
level, like there's a kind of fever pitch going
9:28
on with AI right now. You know what did that
9:30
feel like to kind of really have the opportunity
9:32
to talk to policy makers and sort of influence
9:34
what they're thinking about AI technologies like
9:37
in the coming century, perhaps. It
9:39
was really an honor to be able to do
9:41
that and to be one
9:43
of the first set of invitees to
9:45
the first hearing, and what I
9:48
learned from it essentially is you know,
9:50
really two things. The first is really
9:52
the value of authenticity. So
9:54
both as an individual and
9:56
as a company, I was
9:59
able to talk about what I do.
10:01
You know, I didn't need a lot of advance prep,
10:03
right. I talked about what my
10:05
job is, what IBM
10:07
has been putting in place for years now. So
10:11
this isn't about creating something. This
10:13
was just about showing up and being authentic.
10:15
And we were invited for a reason. We were invited
10:18
because we were one of the earliest
10:20
companies in the AI technology
10:22
space. We're the oldest
10:24
technology company and we
10:27
are trusted and that's an
10:29
honor. And then the second thing I came
10:31
away with was really how important this issue is
10:33
to society. I don't think I appreciated it
10:36
as much until following
10:38
that experience. I had
10:41
outreach from colleagues I hadn't worked with
10:43
for years. I had an outreach from family
10:45
members who heard me on the radio. You
10:48
know, my mother and my mother in law,
10:50
and my nieces and nephews and friends
10:52
of my kids were all like, oh, I get it, I
10:54
get what you do now. Wow, that's pretty cool,
10:57
you know. So that was really probably
10:59
the best, the most impactful takeaway
11:01
that I had. The mass adoption of
11:03
generative AI happening at breakneck
11:06
speed has spurred societies
11:08
and governments around the world to
11:10
get serious about regulating
11:13
AI. For businesses,
11:15
compliance is complex enough already, but
11:17
throw an ever-evolving technology like AI
11:20
into the mix, and compliance itself
11:22
becomes an exercise in adaptability.
11:26
As regulators seek greater accountability
11:29
in how AI is used, businesses
11:31
need help creating governance processes
11:34
comprehensive enough to comply with
11:36
the law, but agile enough to
11:39
keep up with the rapid rate of change in
11:41
AI development. Regulatory
11:43
scrutiny isn't the only consideration
11:46
either. Responsible AI governance,
11:49
a business's ability to prove its AI
11:51
models are transparent and explainable
11:55
is also key to building trust with customers,
11:58
regardless of industry. In
12:00
the next part of their conversation, Laurie
12:03
asked Christina what businesses should
12:05
consider when approaching AI
12:07
governance. Let's listen. So
12:10
there's a particular role that businesses are playing
12:12
in AI governance, Like, why is it so critical
12:15
for businesses to be part of this? So
12:17
I think it's really critically important
12:19
that businesses understand
12:22
the impacts that technology can have both
12:24
in making them better businesses, but also
12:27
the impacts that those technologies can have on
12:30
the consumers that they
12:32
are supporting. You know, businesses
12:34
need to be deploying AI
12:37
technology that is in alignment
12:39
with the goals that they set for it, and that can
12:41
be trusted. I think for us and for our
12:43
clients, a lot of this comes back
12:46
to trust in tech. If
12:48
you deploy something that doesn't work, that
12:50
hallucinates, that discriminates,
12:54
that isn't transparent, where decisions
12:56
can't be explained, then you are
12:58
going to very rapidly erode
13:00
the trust, at best, of your
13:03
clients and at worst for yourself.
13:05
You're going to create legal and regulatory issues
13:07
for yourself as well. So trusted technology
13:10
is really important, and I think there's
13:12
a lot of pressure on businesses today to move very
13:14
rapidly and adopt technology. But if you
13:16
do it without having a program of governance
13:19
in place, you're really risking eroding
13:21
that trust. So this is really where I think a
13:23
strong AI governance comes in. Talk
13:26
about from your perspective, how this really
13:28
contributes to maintaining the trust
13:30
that customers and stakeholders have in these
13:32
technologies. Yeah, absolutely,
13:34
I mean you need to have a governance program
13:36
because you need to understand that
13:39
the technology, particularly in the AI space,
13:41
that you are deploying, is
13:44
explainable. You need to understand
13:46
why it's making the decisions and
13:48
recommendations that it's making, and you need to be able
13:51
to explain that to your consumers. I mean, you
13:53
can't do that if you don't know where your data is
13:55
coming from, what data you're using to train those
13:57
models if you don't have a program
13:59
that manages the alignment
14:01
of your AI models over time to make
14:04
sure, as AI learns
14:06
and evolves over use, which
14:08
is in large part what
14:11
makes it so beneficial, that
14:13
it stays in alignment with the objectives
14:15
that you set for the technology over
14:17
time. So you can't
14:19
do that without a robust governance
14:22
process in place. So we work with
14:24
clients to share our own
14:26
story here at IBM in terms of how
14:28
we put that in place, but also in
14:30
our consulting practice to
14:33
help clients work with these
14:35
new generative capabilities and foundation
14:38
models and the like in order to put
14:40
them to work for their business in a way that's going
14:42
to be impactful to that business but
14:44
at the same time be trusted. And so
14:46
now I wanted to turn a little bit towards watsonx
14:49
governance, and so IBM recently announced
14:51
their AI platform watsonx, which
14:53
will include a governance component. Could
14:56
you tell us a little more about watsonx dot
14:58
governance? Yeah, I mean before I do that,
15:00
I'll just back up and talk about the full platform
15:04
and then lean into watsonx because I
15:06
think it's important to understand the delivery
15:09
of a full suite of capabilities
15:12
to get data, to train
15:15
models, and then to govern them over their
15:17
life cycle. All of
15:19
these things are really important
15:22
from the onset. You need to make
15:24
sure that you have, you know, for our
15:26
watsonx dot AI for example,
15:30
that's the studio to train new foundation
15:32
models and generative
15:34
AI and machine learning capabilities.
15:37
And we are populating
15:40
that studio with some IBM
15:42
trained foundation models which we're
15:45
curating and tailoring
15:47
more specifically for enterprises. So
15:49
that's really important. It comes back to the point I made
15:51
earlier about business trust and
15:54
the need you know, to have
15:56
enterprise ready technologies
15:59
in the AI space. And then the
16:01
watsonx dot data is a
16:03
fit for purpose data store or a data
16:06
lake. And then watsonx dot
16:08
governance, so that's a particular
16:10
component of the platform that
16:13
my team and the AI Ethics Board
16:16
have really worked closely with the product
16:18
team on developing, and we're using
16:20
it internally here in the Chief Privacy
16:22
Office as well to help us
16:25
govern our own uses of AI
16:27
technology and our compliance
16:30
program here, and it essentially
16:33
helps to notify you
16:35
if a model becomes biased or gets
16:38
out of alignment as you're using it over
16:40
time. So companies are going to need
16:42
these capabilities. I mean they need them today
16:44
to deliver technologies with
16:47
trust. They'll need them tomorrow
16:49
to comply with regulation, which is on
16:51
the horizon, and I think compliance becomes
16:53
even more complex when you consider international
16:56
data protection laws and regulations. Honestly,
16:59
I don't know how any company's legal
17:01
teams are keeping up with this these days. But my
17:03
question for you is really how can businesses
17:06
develop a strategy to maintain
17:08
compliance and to deal with it in this ever changing
17:10
landscape. It's increasingly more challenging.
17:12
In fact, I saw a statistic just
17:14
this morning that the regulatory
17:17
obligations on companies have increased something
17:19
like seven hundred times in
17:22
the last twenty years, So it really
17:24
is a huge focus
17:26
area for companies. You have to have a
17:28
process in place in order
17:30
to do that, and it's not easy, particularly
17:33
for a company like IBM that
17:35
has a presence in over one hundred
17:37
and seventy countries around the world. There
17:40
are more than one hundred and fifty comprehensive
17:43
privacy regulations, there
17:45
are regulations of non personal
17:47
data, there are AI regulations emerging,
17:50
so you really need an operational approach
17:53
to it in order
17:55
to stay compliant. But one of the things we do is
17:57
we set up a baseline, and a lot of companies
17:59
do that as well. So we define a privacy
18:02
baseline, we define an AI baseline,
18:05
and we ensure then,
18:07
as a result, that there are very few deviations,
18:10
because everything is incorporated in that baseline.
18:12
So that's one of the ways we do it. Other companies,
18:14
I think are similarly situated in
18:17
terms of doing that. But
18:20
again, it is a real challenge
18:22
for global companies. It's one of the reasons why
18:25
we advocate for as much alignment as
18:27
possible in the international
18:30
realm as well as nationally
18:32
here in the US, as much alignment
18:35
as possible to make compliance
18:38
easier, and not
18:40
just because companies want an easy way
18:42
to comply, but the harder
18:44
it is, the less likely there will
18:46
be compliance. And it's
18:49
not the objective of anybody
18:51
governments, companies, or consumers,
18:57
to set legal obligations
18:57
that companies simply can't meet. So
18:59
what advice would you give to other companies who are
19:01
looking to rethink or strengthen their approach
19:03
to AI governance? I think you need to start with,
19:05
as we did, foundational principles,
19:08
and you need to start making decisions
19:10
about what technology you're going to deploy
19:13
and what technology you're not, What are you going to use it for, and
19:15
what aren't you going to use it for? And then when you do use
19:17
it, align to those principles.
19:20
That's really important. Formalize a program,
19:22
have someone within the organization,
19:25
whether it's the chief privacy officer, whether
19:28
it's some other role, a chief
19:30
AI ethics officer, but have
19:32
an accountable individual and accountable
19:35
organization. Do a maturity
19:37
assessment, figure out where you are and where you need
19:39
to be, and really start
19:42
putting it into place today. Don't
19:45
wait for regulation to apply
19:47
directly to your business, because it'll be too
19:49
late. So Smart
19:51
Talks features new creators, these visionaries
19:54
like yourself, who are creatively applying technology
19:56
in business to drive change. I'm curious
19:59
if you see yourself as creative. You
20:01
know, I definitely do. I
20:03
mean, you need to be creative
20:06
when you're working in an industry
20:08
that evolves so very quickly. So
20:11
you know, I started with IBM
20:14
when we were primarily a hardware company,
20:16
right, and we've changed our business so
20:19
significantly over the years, and the issues
20:21
that are raised with respect to each new
20:24
technology, whether it be cloud,
20:26
whether it be AI now where
20:29
we're seeing a ton of issues, or you look at emergent
20:31
issues in the space of things
20:33
like neurotechnologies and quantum computers.
20:36
You have to
20:38
be strategic and
20:40
you have to be creative in thinking about
20:43
how you can adapt, agilely and
20:46
quickly, a company
20:48
to an environment that is changing
20:50
so quickly and with
20:52
this transformation happening at such a rapid
20:55
pace. Do you think creativity plays a role
20:57
in how you think about and implement, specifically,
20:59
a trustworthy AI strategy?
21:03
Yeah, I absolutely
21:05
think it does, because again, it comes back
21:07
to these capabilities, and there are ways.
21:10
I guess how you define creativity
21:12
could be different, right, But
21:14
I'm thinking of creativity in the sense of
21:17
sort of agility and strategic vision
21:19
and creative problem solving. I
21:21
think that's really important
21:23
in the world that we're in right now, being able
21:26
to creatively problem solve
21:28
with new issues that are arising
21:31
sort of every day. And
21:33
so, how do you see the role of chief privacy officer
21:36
evolving in the future as AI technology
21:38
continues to advance? Like, what steps
21:40
should CPOs take to stay ahead of all these changes
21:43
that are coming their way? So
21:45
the role is evolving in
21:47
most companies, I would say pretty
21:50
rapidly. Many companies
21:52
are looking to chief privacy officers who
21:54
already understand the data
21:57
that's being used in the organization and have programs
21:59
to ensure compliance with laws
22:02
that require you to manage that data in
22:04
accordance with data protection laws and the like. It's
22:07
a natural place and position for
22:11
AI responsibility. And
22:13
so I think what's happening to a lot of chief
22:15
privacy officers is they're being asked to
22:17
take on this AI governance responsibility
22:20
for companies and if not take
22:22
it on, at least play a very
22:24
key role working with other parts
22:26
of the business in AI governance.
22:29
So that really is changing. And if chief
22:32
privacy officers are in companies
22:34
who maybe haven't started thinking about AI
22:36
yet, they should. So I would
22:39
encourage them to look at
22:41
different resources that are available already
22:43
in the AI governance space. For
22:45
example, the International Association of
22:47
Privacy Professionals, which is the
22:50
seventy five thousand member professional
22:53
body for the profession
22:55
of chief Privacy officers, just recently
22:57
launched
23:00
an AI Governance certification program.
23:03
I sit on their advisory board. But that's
23:05
just emblematic of the fact that the field
23:08
is changing so rapidly. And
23:11
so, you know, speaking of rapid change, when
23:13
you were back here on Smart Talks in twenty
23:15
twenty one, you said that the future of AI
23:17
will be more transparent and more trustworthy.
23:20
You know, what do you see the next five to ten years
23:22
holding? You know, when you're back on Smart Talks
23:24
in, you know, twenty twenty six, or
23:26
twenty thirty, you know what are we going to be talking about
23:28
when it comes to AI technology and governance.
23:31
So I try to be an optimist, right? And
23:33
I said that two years ago, and
23:36
I think we're seeing it now come
23:38
to fruition, and there will be
23:41
requirements, whether they're
23:43
coming from the US, whether they're coming from Europe,
23:45
whether they're just coming from voluntary adoption
23:48
by clients of things like the
23:50
NIST AI Risk Management Framework, really
23:52
important voluntary frameworks. You're
23:54
going to have to adopt transparent and explainable
23:57
practices in your uses of AI. So
24:00
I do see that happening. And in the next five to ten
24:02
years, boy, I think we'll see more
24:04
research into trust
24:07
techniques, because
24:09
we don't really know, for
24:11
example, how to watermark. We
24:14
were calling for things like watermarking. There'll
24:16
be more research into how to do that. I
24:19
think you'll see regulation
24:23
that's specifically going to require those types of things.
24:26
So I think again, I think the regulation is
24:28
going to drive research. It's going to drive
24:30
research into these areas that will
24:32
help ensure that we can deliver
24:35
new capabilities, generative capabilities
24:37
and the like with trust and explainability.
24:40
Thank you so much, Christina, for joining me on Smart
24:42
Talks to talk about AI and governance. Well,
24:45
thank you very much for having me. To
24:48
unlock the transformative growth possible
24:50
with artificial intelligence, businesses
24:53
need to know what they wish to
24:55
grow into first. Like
24:57
Christina said, the best way forward in
24:59
the AI future is for businesses
25:02
to figure out their own foundational principles
25:05
around using the technology. Drawing
25:07
on those principles, they can apply AI
25:09
in a way that's ethically consistent
25:12
with their mission and complies with the legal
25:14
frameworks built to hold the technology
25:17
accountable. As AI
25:19
adoption grows more and more widespread,
25:21
so too will the expectation from
25:23
consumers and regulators that
25:26
businesses use it responsibly. Investing
25:29
in dependable AI governance is a
25:31
way for businesses to lay the foundations
25:34
for technology that their customers
25:36
can trust while rising to
25:38
the challenge of increasing regulatory
25:41
complexity. Though the emergence
25:43
of AI does complicate an already
25:46
tough compliance landscape, businesses
25:48
now face a creative opportunity
25:51
to set a precedent for what accountability
25:54
in AI looks like and rethink
25:56
what it means to deploy trustworthy
25:59
artificial intelligence. I'm
26:01
Malcolm Gladwell. This is a paid
26:04
advertisement from IBM.
26:07
Smart Talks with IBM will be taking a short hiatus,
26:09
but look for new episodes in the
26:11
coming weeks. Smart Talks
26:14
with IBM is produced by Matt Romano, David
26:17
Jha, Nisha Venkat, and
26:19
Royston Besserve, with Jacob Goldstein.
26:21
We're edited by Lydia Jean Kott.
26:24
Our engineer is Jason Gambrel. Theme
26:26
song by Gramoscope. Special
26:29
thanks to Carli Migliori, Andy Kelly,
26:31
Kathy Callahan, and the eight
26:33
Bar and IBM teams, as
26:35
well as the Pushkin Marketing team.
26:38
Smart Talks with IBM is a production
26:40
of Pushkin Industries and Ruby Studio
26:43
at iHeartMedia. To find
26:45
more Pushkin podcasts, listen on
26:48
the iHeartRadio app, Apple Podcasts,
26:50
or wherever you listen to
26:53
podcasts.