Episode Transcript
0:02
Concrete production? Livestock?
0:05
The Socratic method? Somehow
0:07
we talk about all three. Find
0:09
out how these connect with AI in today's
0:11
episode. I'm
0:13
David Hardoon from Aboitiz Data Innovation
0:15
and you're listening to Me, Myself, and
0:18
AI.
0:20
Welcome to Me, Myself and AI, a podcast
0:22
on artificial intelligence and business. Each
0:25
episode we introduce you to someone innovating
0:27
with AI. I'm Sam Ransbotham,
0:30
professor of analytics at Boston College.
0:33
I'm also the AI and business strategy
0:35
guest editor at MIT Sloan Management
0:37
Review. And I'm Shervin Khodabandeh,
0:40
senior partner with BCG and
0:42
one of the leaders of our AI business.
0:44
Together, MIT SMR and
0:46
BCG have been researching and publishing
0:49
on AI since 2017,
0:50
interviewing
0:52
hundreds of practitioners and surveying
0:54
thousands of companies on what it takes
0:56
to build and to deploy and scale
0:58
AI capabilities and really transform
1:01
the way organizations operate.
1:04
Welcome. Today, Shervin and I are excited to
1:06
be joined by David Hardoon, who holds
1:09
several senior positions at the Aboitiz
1:11
Group. David, thanks for joining us.
1:13
Thank you very much, Sam, Shervin. Can
1:16
you first tell us a bit about the Aboitiz Group, where
1:19
you work? The Aboitiz Group is a hundred-
1:21
plus-year-old conglomerate that originated
1:24
in Spain, in Catalonia, and relocated to the
1:26
Philippines. It started in the hemp
1:28
business, but now quite diversified, with the
1:30
main business being power generation and distribution
1:33
across the Philippines, financial services,
1:36
cement construction, utilities,
1:38
real estate, airports, food,
1:41
agriculture. They're now going
1:43
through a transformation and becoming a, I
1:45
love this term by the way, a tech conglomerate.
1:48
What is Aboitiz Data Innovation?
1:51
About seven years ago, give
1:53
or take, the bank started with the whole digitalization
1:56
of the banking services.
1:57
And what that resulted in, as you would imagine, was a tremendous amount
2:02
of data. The more you engage your consumers
2:05
digitally, the more you have digital services,
2:07
well, surprise, surprise, the more data you have.
2:10
And the question came as well, how
2:12
are we really using it? Are we using
2:14
it? What's the best way to put it to good
2:16
use? And that question kind of went
2:19
also beyond just the bank into
2:21
the rest of the business, because you can imagine power
2:23
has a lot of data. Agriculture, airports, et
2:25
cetera, have a lot of data. We were born
2:28
with a very kind of on-point mandate: operationalizing data,
2:32
operationalizing AI. Really, how
2:34
do we put it to good use?
2:36
What are some of these uses?
2:39
I mean, there's the usual financial side that we all know: personalization, financial crime. And don't get me wrong, that stuff always gets me excited. I
2:48
spent a few good years in the financial regulator
2:51
here in Singapore. But let me give you
2:53
an oddity,
2:55
cement. An industry that you wouldn't really
2:57
associate with data or AI. We
3:00
sat down with the CEO at the time and we said, look,
3:02
even in the world of cement, you have a lot of data.
3:05
How can this work? So let me give you a little
3:08
tidbit of how the world of cement works. And this is something that
3:10
was new to me. Cement is actually like baking. I don't
3:12
know if you bake, but it's like baking. It's basically you
3:14
have mixtures. You have these kinds
3:16
of formulas and you end up with cement,
3:19
which will have different types of properties.
3:21
And these properties are what's absolutely critical
3:23
depending on what you're planning to build, whether it's
3:25
a mall, a high-rise, a low-rise, a residential building,
3:29
et cetera, and so forth.
3:30
Having said that, as with baking,
3:33
you kind of need to do a bit of trial and error. You need
3:35
to try out these different mixtures to make sure you produce the right one. That results
3:40
in
3:40
operational overhead. It results in wastage.
3:43
I mean, and as with baking, you stick this stuff into kilns, literally furnaces, to bake it.
3:49
Using data, using the information that's coming
3:51
from all the devices, the IoT,
3:53
using AI,
3:55
being able to actually tell the bakers, or
3:57
in this case, the chemical engineers,
3:59
what the outputs of this mixture are going to be before they even start,
4:04
while at the same time maintaining that
4:06
quality control that is absolutely crucial.
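A minimal sketch of what such a property-prediction model might look like, assuming a gradient-boosted regressor over mixture and kiln-sensor data. The feature names, data file, and model choice here are hypothetical illustrations, not the system described in the episode:

```python
# Minimal sketch (hypothetical, not the deployed system): predict a cement
# property such as compressive strength from the mixture formula and kiln
# readings, so engineers can evaluate a formula before committing a batch.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Historical batches: mixture ratios plus IoT sensor readings from the kiln.
df = pd.read_csv("historical_batches.csv")  # placeholder data file
features = ["limestone_pct", "clay_pct", "gypsum_pct", "kiln_temp_c", "grind_fineness"]
target = "compressive_strength_mpa"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42
)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

# Hold-out error indicates how much to trust a prediction before a trial batch.
print("MAE (MPa):", mean_absolute_error(y_test, model.predict(X_test)))

# Score a proposed mixture before firing the kiln, cutting trial-and-error waste.
proposed = pd.DataFrame([{"limestone_pct": 78, "clay_pct": 15, "gypsum_pct": 5,
                          "kiln_temp_c": 1450, "grind_fineness": 350}])
print("Predicted strength (MPa):", model.predict(proposed)[0])
```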
4:09
Now, by the way, this is not just hypothetical;
4:11
this is already operational for the last year in
4:14
all the plants, about six plants in the Philippines,
4:16
and results in operational efficiency,
4:19
results in a reduction in wastage, and resulted
4:21
in what I like to call quantifiable
4:23
ESG: a 35-kiloton reduction
4:26
of CO2 emissions. So that's a
4:28
nice
4:29
unusual example I like to give in terms
4:32
of
4:32
how data is used.
4:34
Well, I could tell you, Sam and I are going
4:36
to love that. We're both chemical engineers.
4:39
Oh, well, there you go. When you said baking,
4:41
I did my PhD in catalyst
4:44
synthesis. So I spent a lot of my
4:46
time baking
4:47
various aluminum
4:50
silicates to create catalysts, and
4:52
you're completely right. You try all these
4:54
things, some work, some don't work. And
4:56
had there been the ability for
4:58
me to know ahead of time, I
5:02
probably would have gotten my PhD in a tenth
5:04
of the time. But seriously,
5:07
this is quite interesting. Now,
5:10
if you go from personalization
5:13
and cyber and fraud, and
5:15
you also have this example in baking
5:18
cement, then we must
5:20
believe that there is such a wide portfolio
5:23
of things that you're considering. So
5:25
tell us more about
5:26
what makes it into that portfolio,
5:29
because there is no end to what you could
5:31
do.
5:31
What are the kinds of things you get excited
5:34
about?
5:35
You're absolutely right. Being fortunate to work in a
5:37
conglomerate, you kind of wake up every day and discover something
5:39
new. So there are kind of two dimensions
5:41
to it. On the one hand,
5:43
and I'm going to go back to this term operationalization
5:46
and operationalizing data and AI. It's
5:49
stuff that has to make sense to the business. So
5:51
revenue, operational efficiency, risk management.
5:54
And then we have to look at the things around the corner.
5:56
We have to experiment. But those may
5:59
not be things that get immediately deployed.
6:01
Like effectively in the agriculture business, we have animals: we have pigs, swine, and poultry. And
6:07
as
6:08
part of that process, you want to make sure that
6:10
the animals have the best possible
6:12
care provided to them. On
6:14
the experiment side, we said, okay, how can we use technology
6:17
that's already available,
6:19
but may not have been applied in exactly this particular context,
6:22
not in Southeast Asia.
6:23
So we're using voice recognition
6:26
and image recognition for pigs to
6:29
help identify stress and
6:32
detect illnesses. So that could be automatic
6:34
alerts to the caregivers.
6:36
What's the ground truth on that? That would be
6:38
interesting. That's a great question. Like, what's
6:40
the training data?
6:42
So this is the amazing stuff. It's a very expressive animal. When you actually go there with the people who take care of them, they can literally point out and say, this animal is distressed, and you're constantly recording.
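A minimal sketch of how those caretaker labels over recorded clips might train a first-pass distress classifier, assuming MFCC audio features and a random-forest model. The file paths, labels, and feature choices are hypothetical placeholders, not the production system:

```python
# Minimal sketch (hypothetical, not the production system): classify pig
# vocalizations as "distressed" vs. "normal" from short audio clips, using
# ground-truth labels supplied by the caretakers who point out a stressed animal.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as mean MFCCs, a compact standard audio representation."""
    audio, sr = librosa.load(path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Each entry pairs a recorded clip with the caretaker's on-site label.
labeled_clips = [
    ("clips/pen3_0001.wav", "distressed"),
    ("clips/pen3_0002.wav", "normal"),
    # ... many more caretaker-labeled clips ...
]

X = np.stack([clip_features(path) for path, _ in labeled_clips])
y = np.array([label for _, label in labeled_clips])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# A new clip streaming in from the pen can then trigger an automatic alert.
new_clip = clip_features("clips/pen3_1042.wav")
if clf.predict(new_clip.reshape(1, -1))[0] == "distressed":
    print("Alert caregiver: possible distress in pen 3")
```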
6:54
We're kind of asking, okay, is this really something that's relevant? Does it make sense? Can we have that conversation with the baker, you know, the chemical engineer? Can we have a conversation with the animal keeper, the veterinarian, and so forth, or the pole engineer when we're dealing with electricity cables?
7:10
It's extremely important. And that's
7:12
one of the things that I realized throughout my
7:14
career of doing data is where things
7:16
failed, where you suddenly had this divergence
7:19
of exploring scientific research.
7:21
And I came from the world of science, you know, an ex-academic, without really
7:26
seeing that connectivity. And if we go all the way back,
7:28
even when radar was invented, I mean, the reason
7:31
things fall apart is in those very,
7:33
very small gaps of, well, it's not quite
7:35
there, or it's not quite usable. So that's the first cut.
7:38
Then the second level is seeing,
7:40
well, is this something that's
7:43
as much as possible, truly going
7:45
to make a difference to either our internal users,
7:47
because that's extremely important. And for many of the
7:50
businesses, which are within the group, which are actually B2B,
7:52
again, power, essentially, we provide power
7:54
in wholesale. So it's our internal users
7:56
in terms of,
7:58
let's say, predictive asset maintenance.
7:59
That's critically important.
8:02
That is really fantastic. I mean what you've
8:04
said is
8:05
inspiring on so many levels.
8:08
One is let
8:09
your imagination be the limit, right? Because
8:13
if the question is, can something be done better,
8:15
more effectively, can you see around the corner and
8:18
there is data,
8:19
then yes. That's one thing that's
8:21
inherent in all these examples that you gave. You started
8:23
with what most would consider quite
8:26
advanced and interesting things and we have
8:28
guests who talk about those all the time. Personalization,
8:31
fraud, cyber, all of those are very important.
8:34
And then you went to cement and then you went to pigs. And
8:36
then you talked about human and AI,
8:39
which is
8:39
quite critical too. I
8:42
just find that very, very energizing.
8:44
Hi everyone. Since you're
8:46
listening to this podcast to understand AI
8:48
in business through conversations with people
8:50
on the front lines of figuring it all
8:52
out, I wanted to tell you about another podcast I
8:55
think you'll like for those reasons. It too
8:57
focuses on conversations directly with experts
8:59
but edited to ensure very high insights per
9:02
minute for all our listeners. It's hosted
9:04
by the former showrunner of the a16z Podcast,
9:07
produced by venture capital firm Andreessen Horowitz
9:09
and editor-in-chief Sonal Chokshi. That's me.
9:12
The show is called web3 with a16z crypto, but it's really about the
9:16
future of the internet, future of the firm, future
9:19
of business. So whether you're a business
9:21
leader preparing for that future now or just
9:23
seeking to understand emerging tech
9:25
trends, this show is for you. I'd
9:28
recommend you start with episode 18, a deep
9:30
dive with Bob Iger, CEO of Disney. You
9:33
can find this and other episodes on
9:35
network effects and moats, community, marketing, etc.
9:38
by following web3 with a16z
9:41
in your podcast app.
9:44
Well, it's the nexus between human and AI.
9:47
There are two critical things that I
9:50
believe have to go hand in hand, have to.
9:52
While this may change in the future to some degree
9:54
or to some extent, I mean, who knows what's
9:57
going to happen around the corner. Things change so rapidly.
9:59
But the first one, and I'll be the first one to admit
10:02
this, I truly came to this appreciation
10:04
when I worked in the regulator, surprise, surprise,
10:07
is this criticality of
10:10
combining governance and innovation. And
10:13
I used to get asked this question repeatedly
10:15
of, oh, but don't you think governance inhibits
10:17
innovation? It stifles us. And
10:20
I came to the view that I'm vehemently
10:22
against that perspective. I would argue that not only
10:25
does it not stifle it,
10:26
it would result in more
10:29
and even better innovation. It's essentially
10:31
about just simply having common
10:33
sense. I was privileged to be part of the process of coming up with the FEAT principles: fairness, ethics, accountability, and transparency, back at the
10:40
Monetary Authority of Singapore. And I remember when
10:42
it came out, and we deliberately kept it very simple.
10:45
And I showed it to our governor, our managing director, and
10:47
he was just like, David, isn't this just common sense?
10:49
And I just smiled and said, well, even common sense is not always that common; it has to be written down.
10:55
But it's critical. That's number
10:57
one. And number two, what you were mentioning is that, yes,
11:00
while AI and data can do this seemingly miraculous stuff,
11:06
it's critical that this combination
11:08
with us humans and how we
11:10
use it
11:11
is baked in at the very beginning. And
11:14
even now, we're like, obviously, everyone's talking
11:16
about ChatGPT. But
11:18
remember, all the data that it's
11:21
trained on is from
11:23
us to a certain extent. You can't
11:25
take humans out of the loop because
11:28
after a while, they will lose what
11:30
makes them human. Well, but we have
11:32
examples of that. I mean, that's okay in some places.
11:35
I mean, neither of you know how to navigate
11:37
by the stars, I'm guessing, unless, Shervin, you've
11:39
got some tricks up your sleeve that I haven't learned
11:42
yet. I mean,
11:43
most people don't drive a manual
11:45
transmission. That seems to be a skill
11:47
that's, well, okay, maybe one
11:50
or two of us do here. But the point is,
11:52
I guess, we don't have to retain all possible
11:54
skills. We just have to be, I think, savvy
11:57
about which ones we hang on to.
11:59
Exactly what you said. It's
12:01
some, not all, but sometimes you
12:03
find that you see this trend of like, oh, look
12:06
what it can do. Like everything gets automated. And
12:08
I remember, like if I go to my early days as a consultant,
12:10
you know, I used to be a consultant doing AI and
12:13
you would find a lot of times, you know, potential
12:16
clients and people you speak to there, even
12:18
if they didn't say it explicitly, what they were trying to achieve
12:20
was like, oh, just do everything automatically
12:22
with AI. And you need to have
12:24
this almost natural inclination of saying, okay,
12:27
if it's contextual, if it makes sense, like
12:29
you said, I,
12:30
you know, maybe I want to pick up star navigation
12:33
because I'm interested in it. I want to learn about astrology
12:35
or astrophysics and whatnot, great, but
12:37
you see it now becomes a niche topic that some
12:40
people pick up. The general public doesn't
12:42
need to know how to do it.
12:44
But we need to be able to identify that
12:47
decision point rather than just go like, you know, everything
12:49
now AI galore kind of situation.
12:52
Well, I mean, what you're saying is, there's
12:54
value in the ongoing dialogue,
12:57
and there's value in ongoing challenge.
13:00
And every time there is a dialogue, I mean,
13:02
even back in Socrates'
13:04
time, right, the dialogue is where it elevates
13:07
the conversation. And you're rightly pointing out that
13:09
the moment you say AI is the be-all and
13:12
end all is the moment that you
13:14
are under delivering
13:16
on AI and then you're for sure under delivering
13:19
on the human potential. Well, you're
13:21
losing a potential answer. Let me give you two
13:23
examples.
13:24
In the financial sector, we have UnionBank of the Philippines, amongst others. While AI governance or regulation is not yet, well, I foresee it in time, a requirement, let's say, in the Philippines,
13:37
we've instituted a working group, which is an interesting combination of people: your risk officer, legal, compliance, and then you have marketing, customer engagement and experience.
13:48
What happens is, while you still have
13:51
the traditional process of model validation, etc,
13:53
from a statistical mathematical
13:56
data point of view,
13:57
models are presented in this working group for us to have a debate.
14:02
Because a model may pass all the statistical
14:05
tests, but if this model goes wrong, you
14:07
know, even that 10% or 5%,
14:09
there is a significant reputational risk at play
14:12
or there's a potential impact to the consumers.
14:16
And that debate is important because, A, if you
14:18
just looked at it from that statistical, even
14:20
a potentially automated process, you would miss it.
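A minimal sketch of that kind of gate, where passing statistical validation is necessary but not sufficient and higher-impact models are always routed to the working group for debate. The fields, thresholds, and decision rule below are hypothetical illustrations, not UnionBank's actual process:

```python
# Minimal sketch (hypothetical): a model release gate where statistical checks
# alone never clear a customer-facing or reputation-sensitive model.
from dataclasses import dataclass

@dataclass
class ModelReport:
    name: str
    auc: float                   # statistical validation metric
    population_stability: float  # drift check
    customer_facing: bool        # does it directly affect consumers?
    reputational_impact: str     # "low", "medium", "high"

def release_decision(report: ModelReport) -> str:
    passes_stats = report.auc >= 0.75 and report.population_stability <= 0.1
    if not passes_stats:
        return "reject: fails statistical validation"
    # Even a statistically sound model goes to the working group when the
    # 5-10% of wrong calls could hit consumers or the institution's reputation.
    if report.customer_facing or report.reputational_impact != "low":
        return "escalate: present to governance working group"
    return "approve: standard deployment"

print(release_decision(ModelReport("credit_offer_ranker", 0.82, 0.04, True, "medium")))
```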
14:23
Now the resolution,
14:25
interestingly enough, and I honestly
14:28
tell you, like maybe eight out of 10 times so far,
14:30
isn't data, isn't AI, the resolution
14:32
a lot of times is process, which
14:35
is people. And
14:37
that
14:38
makes us actually wiser in understanding, okay,
14:40
how do we use it and how do we engage with it and when
14:43
do we allow, Sam, to your point, that automation
14:45
and when we go, no, I retain the veto
14:48
to overrule to a certain extent. So that's one example.
14:50
The other one is if I go back to my cement.
14:53
And in fact, we did this very deliberately at the very beginning
14:55
because we didn't want our colleagues
14:57
and chemical engineers to think like, oh, great, so why
15:00
do you need me? You can just automate the whole thing. No,
15:03
the whole point was we absolutely need them
15:05
because there may be new type of mixtures
15:07
that we haven't considered. You will still need to have that experimentation.
15:10
The whole goal is providing information.
15:14
But what it has resulted in is efficiency. So if I swing again to another one, when ChatGPT came out, I got asked straight away by a few boards, what does this mean? And
15:23
my instinctive reaction, you
15:25
know, rather than going into this whole lengthy explanation or elaboration, I just responded by saying it
15:30
means that every one of us can have the productivity
15:32
of 10 people.
15:34
So this is what this stuff means. And that's what
15:36
that nexus, the dialogue, the integration,
15:38
the augmentation means is that we
15:40
now have the ability to be far
15:42
more productive, whatever productive means in
15:45
that context.
15:46
Some people may say, I just want to work
15:48
two hours, but as if I worked the whole day, it
15:51
may differ. But that's what it means because now we're able
15:53
to take all this data. I'm
15:57
sure some of you remember back in 2000, and you had this meme online of getting information
16:02
off the internet is like drinking from a fire hose. It's
16:05
still true. We're inundated with information, you know, with data, but it's like distilling
16:09
it down to something that's relevant to me, usable,
16:11
that I can do something with it and get
16:14
that gain, essentially.
16:16
I think one thing that's coming out of this conversation,
16:18
I think, Shervin, you used the word Socratic, and David, you used the word dialogue. What's nice about
16:24
this is it's dropped this hubris that
16:26
I feel like I
16:28
see in a lot of machine
16:30
learning.
16:31
Machine learning seems to be about humans
16:33
teaching machines. So it's this sort of
16:35
we know all, we make the machines
16:37
emulate us. And if they do, they pass
16:40
the Turing test and yes,
16:42
everything is golden. No, but then
16:44
you get pushback and you say, oh no, machines
16:47
can teach us things we've never known before.
16:50
Well, that just has switched the direction. It
16:52
still has that same directional
16:54
hubris. But the things that you're both talking
16:56
about are much more Socratic and dialogue-based. You think about what that group can form together. And Shervin, I've got some results
17:05
from last year's research that said
17:07
about 60 percent of the people are thinking about
17:10
AI as a coworker. And that
17:13
strikes me as that sort of a relationship,
17:16
because between the two, yes, you find
17:19
some new compound that maybe someone wouldn't have
17:21
tried. I don't know what the chemical engineering
17:24
equivalent of the Fosbury flop is.
17:26
Do you remember the Fosbury flop where he learned
17:28
a different way of jumping over the high bar
17:31
and then suddenly everyone else
17:32
adopted that technique? That sort
17:35
of idea seems like it could come out of
17:37
this approach. It's actually really
17:39
interesting you bring that up. And I
17:41
mean, I'd love to say like, oh, yeah, we
17:43
had this all intended in the very beginning. But I'll be very
17:45
honest and like, I think it's more of a
17:47
nice consequence that wasn't fully
17:50
intended at that point in time. But I want to
17:52
go back to the FEAT principles.
17:54
One of the principles resulted
17:56
in a lot of discourse. And
17:58
I mean a lot.
19:59
background. How did you end up where you are?
20:02
If I roll back all the way to the beginning, and I kind of say this with a big smile myself, how did I end up where I am? Detention.
20:09
That's how I ended up here. I must
20:11
have been what, 14, 15, 16 years old. And
20:15
I got sent to the library
20:17
because of detention. And if you're
20:19
in a library, nothing better to do. I picked
20:21
up a book on Prolog.
20:23
And don't ask me why. From all the
20:25
books I could have picked up, I picked up one about
20:27
Prolog. And this is really before knowing anything
20:30
about the whole world of, well, I guess
20:32
in that case, it was the expert-based systems. And
20:34
I started reading and I just couldn't put it down.
20:36
And that kind of triggered this exploration
20:39
of
20:40
how can we better capture knowledge? How can we
20:42
better learn? And that obviously resulted in kind of learning
20:45
a bit more about neural networks, AI.
20:48
In fact, I was one of the first two students
20:50
who took the degree of computer science with
20:52
artificial intelligence. It was literally
20:55
brand new from that perspective. My
20:57
PhD thesis was about semantic models, so literally the representation and encapsulation of knowledge and information, effectively. It was on learning musical patterns, music,
21:05
or generating music from brain patterns. And
21:08
the whole idea about that is essentially providing expert-based
21:11
systems knowledge, if you think about it in that way, for people, let's
21:13
say, who can't sit in front of a piano and play,
21:15
but are fully capable cognitively. So that's
21:18
kind of what brought me here. I know it's a very weird kind of journey,
21:20
but yeah, I need to thank my, uh, literacy teacher.
21:23
Thank you for sending me to detention.
21:26
Okay. So we've got a segment where
21:28
we're going to ask you some quick questions. What
21:30
are you proudest of in terms of artificial intelligence?
21:33
What have you all done that you're proudest of? Where
21:35
to begin? One I'm most
21:37
proud of is the
21:40
way we've been able to graduate.
21:42
And I literally mean that from the
21:44
academic world to the industrial world.
21:47
What worries you about AI? You've mentioned some worries
21:49
today, but what worries you?
21:51
What worries me is I don't think we're
21:53
fully appreciating what we're creating.
21:56
I think we need to face head-on the realization of what we're creating and what we're seeding, for possibilities,
22:00
for good and for bad.
22:02
What's your favorite activity that does not involve technology?
22:06
SUP, stand-up paddling. Being
22:09
on the water and just paddling away,
22:12
it's extremely soothing. It's actually a phenomenal
22:14
exercise for those who haven't tried.
22:16
I've tried and I've missed the stand
22:18
up part. I'm okay with the paddling, but the standing up seems to give me trouble. What's
22:23
the first career you wanted while you were
22:25
sitting in detention? What did you want to be when you grew up?
22:28
I wanted to be an astrophysicist.
22:31
What's your greatest wish for AI in the future?
22:33
What are you hoping we can gain from this?
22:36
I don't know, self-actualization.
22:38
I hope we learn more about ourselves.
22:41
It's already giving us capabilities. I mean, for example,
22:44
look, I'm dyslexic. I mean, thank heavens
22:46
for auto spell checkers.
22:48
Well, thank you for taking the time. I think that there's a lot
22:51
that you've mentioned. I think we can go back even to examples of
22:55
food a hundred years ago. We
22:57
had terrible food cleanliness, and
22:59
now we have a supply chain we can trust.
23:03
Perhaps we can build that same sort of supply chain
23:05
with data. Thank you for taking the time to talk with
23:07
us today. It's been a pleasure.
23:09
Thank you, Sam, Shervin. Yeah, thank you. And
23:11
maybe if I may just add on that note, I think that's really the critical
23:13
thing. It's
23:14
AI trust. It's about trust. Thank
23:17
you very much. Thanks for
23:19
listening. Next time, Shervin and I talk
23:21
with Naba Banerjee, head of trust product and
23:23
operations at Airbnb, about how the travel
23:25
platform uses AI and machine learning
23:28
to make travel experiences safer.
23:30
Thanks for listening to Me, Myself, and AI. We
23:33
believe like you, that the conversation about
23:35
AI implementation doesn't start and stop with
23:37
this podcast. That's why we've created a
23:39
group on LinkedIn specifically for listeners
23:41
like you. It's called AI for Leaders.
23:44
And if you join us, you can chat with show creators
23:46
and hosts, ask your own questions, share
23:49
your insights, and gain access to valuable
23:51
resources about AI implementation from MIT
23:54
SMR and BCG. You can access
23:56
it by visiting mitsmr.com
23:59
forward slash. We'll
24:02
put that link in the show notes and we hope to see you
24:04
there.