Operational Safety With AI: Chevron’s Ellen Nielsen

Released Tuesday, 5th December 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:02

Digital twins, gen AI for engineering

0:05

on today's episode. Find out how one

0:07

petrochemical company upskills its

0:09

workforce to benefit from new tech

0:12

like generative AI.

0:15

I'm Ellen Nielsen from Chevron.

0:17

And you're listening to Me,

0:19

Myself, and AI. Welcome

0:21

to Me, Myself, and AI, a

0:23

podcast on artificial intelligence in business.

0:25

Each episode, we introduce you to

0:27

someone innovating with AI. I'm

0:29

Sam Ransbotham, professor of analytics at

0:32

Boston College. I'm also the AI

0:34

and business strategy guest editor at

0:36

MIT Sloan Management Review. And I'm

0:38

Shervin Khodabandeh,

0:40

a senior partner with BCG

0:42

and one of the leaders

0:44

of our AI business. Together, MIT

0:46

SMR and BCG

0:48

have been researching and

0:51

publishing on AI since

0:53

2017, interviewing hundreds of practitioners and

0:55

surveying thousands of companies on what

0:58

it takes to build and to

1:00

deploy and scale AI capabilities and

1:02

really transform the way organizations operate.

1:06

Hi, everyone! Today Sam and I are speaking

1:09

with Ellen Nielsen, chief data officer at

1:11

Chevron. Ellen, thanks for taking the time

1:13

to talk to us! Welcome to the

1:15

show! Thank you for having

1:17

me and really excited to have a

1:19

very cool conversation today. Let's

1:22

get started! I would imagine

1:24

most of our listeners in fact, all

1:26

of them have heard about Chevron. But

1:29

what they may not know is the extent

1:31

to which AI is prevalent across all of

1:33

Chevron's value chain. So maybe tell us a

1:35

little about your role and how Ai is

1:37

being used at Chevron. Yeah, talking

1:39

about my role, it started three

1:41

years ago I was the first

1:43

data officer within Chevron. That doesn't

1:46

mean that we haven't dealt with data for

1:48

a long time, but the need to

1:50

put more focus on the data was

1:53

starting to emerge, and with that

1:55

I was tasked with evangelizing data-driven

1:57

decisions, and that of course includes

2:00

any kind of data science analytics along

2:03

the way. And that was

2:05

very, very interesting to see it growing over

2:07

time. We use AI in

2:09

many places, some areas

2:11

where we use robots, for example,

2:14

in tank inspection today. You

2:16

can imagine that was very cumbersome, having

2:18

the human involved. Now we do this

2:20

with robots. And we

2:22

take basically the human beings out of

2:24

these confined spaces. And that's a

2:27

combination of computer vision,

2:30

taking images, comparing

2:32

the images, and making predictions on

2:35

what's the status of this tank and this

2:37

equipment. Is it rusting? Does it need

2:39

maintenance? Do we need to tackle

2:41

it in a very predictive way? So that's

2:43

operating in a much more reliable and safe

2:45

way in the future. The

2:47

other example is when

2:49

we talk about sensors in

2:52

compressors or any kind of equipment. In

2:54

the past, we were, of

2:57

course, installing them. But the prices

2:59

dropped dramatically for those sensors and the data

3:01

collection. And I just

3:03

saw recently, actually, it was a

3:05

citizen development application, which has been

3:07

created because these sensors have to

3:09

be installed. And when you install

3:11

them, you basically take a QR

3:13

code. And with one click, you

3:16

can add the geospatial location to

3:18

the sensor. And then you can

3:20

see all these sensors you have installed in

3:22

your facility in a map. So

3:25

you see actually really actively happening

3:27

what's going on and where are

3:30

the things actually working and which

3:32

sensors have been inventoried there. So

3:34

we have a combination here of

3:36

computer vision, of using

3:39

citizen development, and then, of course,

3:41

using the sensor in a machine

3:43

learning AI-based way to

3:45

come to predictions and how they work. So

3:48

one of the things I know that

3:50

you do quite well is digital twin.

3:53

Maybe you can comment a little bit about that

3:56

example. Digital twin is

3:58

one of many examples where we use

4:00

that. What triggers us to

4:02

do a digital twin? One is

4:05

you can imagine that we have people out

4:07

in the field so we want to make

4:09

their life easier and safer. That

4:11

means the more data and the more

4:14

information we can gather about our field

4:16

assets and how to operate them will

4:19

serve the purpose for being more safe,

4:21

more reliable in the operations and that

4:23

was one trigger. The second

4:25

trigger is that you collect a lot

4:27

of information based on

4:30

let's say internet of

4:32

things, IIoT devices, sensing

4:35

and that feeds into another pool of

4:38

information where you can drive even predictive

4:40

decisions in these assets. So with

4:42

the digital twin we want to basically serve both.

4:45

We want to be safer, reliable but

4:48

also more predictive on what we do that

4:50

speaks to efficiency and do the right thing

4:52

at the right time. Can

4:54

you give us a specific example of a

4:56

place you're using a digital twin? How does

4:58

that help with safety? How does that help with efficiency? If

5:02

you take a digital twin and, let's

5:04

say, you digital-twin basically a facility,

5:06

a refinery. So in a

5:08

refinery you can imagine there are lots

5:10

of pipes, there are lots of equipment,

5:13

there are compressors, there are

5:15

generators, there are things

5:17

very mechanically working and

5:19

people have to maintain those to get

5:21

the product out. So when you see

5:23

the value chain from product coming in or

5:25

materials coming in and product comes out everything

5:27

between this goes for this

5:29

refinery and if you have everything

5:32

digitally twinned, you can plan better, you

5:34

can operate better, you know when

5:36

things are coming in you can predict better

5:38

on how to get a better output and

5:41

that's basically how we do it in

5:43

refineries or facilities where we operate is

5:46

really looking at the flow of information

5:48

and the data driven decisions. So we

5:50

were always driving decisions with information, you

5:52

know in the past information was more

5:54

in the heads of the

5:56

people who were very experienced and sometimes

5:58

augmented, of course. I

6:42

think that data

8:00

at hand in a digital way, this is

8:02

quite cumbersome. I cannot imagine how the people did

8:04

it in the past. They maybe were printing out

8:06

things and laying it on top of it and

8:09

coming with assumptions based on their

8:11

experience. And of course they gained

8:13

a lot of experience. Now we

8:16

do this with machine learning algorithms.

8:18

We understand how the rock composition

8:20

is. We even created

8:22

actually a Rockipedia to know what

8:25

are the different rock conditions and compositions

8:27

so that we can tap into this

8:30

data every day when we need it. Yeah,

8:32

and I think it's a bigger theme

8:34

that with the advent of these technologies,

8:37

the sky's the limit. And so the

8:39

question is, how else can

8:41

you apply it? And what else can you do with

8:43

it? And I think this brings me

8:45

to a question around the mission

8:47

and the purpose. Because there's obviously a

8:49

ton of data. There is obviously a

8:51

lot of tools. And the

8:53

use cases are driven by the mission and what

8:55

are some of the things we want to do

8:57

with that? Yeah, I would

9:00

link it actually in Chevron back

9:02

to our strategy. We do higher

9:04

returns and lower carbon safely. And

9:07

this is our guiding principle. So everything

9:09

we do should of course benefit the

9:12

success of the company, the

9:14

impact of the company, but also

9:16

doing it in a low carbon environment.

9:18

We know the world looks different in

9:20

a few decades. We look

9:22

after methane. We look after

9:25

greenhouse emissions. We look

9:27

after our carbon footprint overall. So

9:29

this is something we always

9:31

tackle. And data and AI

9:33

play their role, but also play a

9:35

role in how we operate and how

9:38

we operate safely. Safety is

9:40

a big component of Chevron's value

9:42

system. And when you think

9:44

about the future and think about AI and robots

9:47

and digital twin and all of that, there

9:49

is technology out there where we can

9:51

help our people to do their work

9:54

safer and much more

9:56

reliable and in better ways and in new

9:58

ways in the future. What's

10:00

interesting to me about Chevron

10:03

or a company that's predominantly

10:05

an engineering and

10:07

science company is when

10:10

AI is being put

10:12

in production to

10:14

augment some of

10:16

the decisions and some of the

10:18

insights that workers and engineers and

10:20

scientists are making. But

10:23

as an engineer, as an operator of

10:26

these plants, I may not

10:28

quite agree with it. I don't

10:30

know whether this resonates. How

10:32

do you get scientists and

10:34

engineers comfortable using these

10:36

tools? I

10:38

think it's actually helping because engineers

10:41

have a very logical mindset and

10:43

they know the science and we

10:45

have a lot of science people

10:47

in the company. So when you

10:49

talk about data science and the things behind

10:51

it, we have many people very interested in

10:54

learning data science. And

10:57

we also would say

10:59

we have started to provide

11:02

education. So I think where

11:04

do I start? So you

11:06

start with learning. I

11:08

don't understand this. That's a typical engineering mindset.

11:11

I don't understand it. I want to understand

11:13

it. I'm looking for what does it tell

11:15

me? How can it influence my solution?

11:17

We have a digital scholarship program

11:20

for a while now. And actually we do

11:22

this with MIT where we have cohorts going

11:24

for a year and they're not coming out

11:26

of one department. They're really

11:29

coming out of the whole company going

11:31

through a design engineering master

11:33

in one year, which is really a tough thing

11:35

to do. But they're coming back

11:37

and understanding the new technology, understanding

11:40

the things how we can use it

11:42

differently. And they are the first

11:44

going back into their normal environment

11:46

and influence and basically have

11:48

other people participating from their knowledge and to

11:50

venture out different things maybe they have not

11:53

tried before. So this is one

11:55

thing to influence culture. The

11:57

second thing in the data science space,

11:59

we started to work with Rice

12:01

University. We have a six- to seven-month

12:04

program also going across the company that's

12:06

not only for IT people to

12:08

learn what data science means and they bring

12:10

it back to their environment. So they are

12:12

not leaving their role completely, they go into

12:14

six months, seven months and then they return

12:17

back in the best way possible

12:19

to influence the company. Hey, what is possible?

12:21

The last piece is maybe the broadest

12:24

way because we call it citizen development.

12:27

We believe that many, many people in

12:29

the company get things in their

12:31

hand now with the evolution of AI

12:34

and we just saw gen AI is now

12:36

in the hands of everybody who

12:38

wants it. And with this

12:40

kind of citizen development overall, we want

12:42

to bring the technology which

12:45

is becoming much easier to many people

12:47

so that they can use it. And

12:49

of course, they need data for this.

12:51

And that's why we provide the data

12:53

in these systems to be more

12:55

self-sufficient. So I would say that's a

12:57

three-pronged kind of approach to influence

12:59

the culture, leadership and we have really

13:02

nice use cases in AI, citizen

13:04

development, we are also publicly talking

13:06

about it with certain use cases

13:08

we do. I think that's the

13:10

culture piece. It takes a while,

13:13

you know, to get into every artery of

13:15

the company, but I feel there's really excitement

13:17

in the company right now to go down

13:19

that road. What I like

13:21

about what you're saying is that actually

13:24

doubling down on the predominantly engineering

13:26

and scientific culture of the company

13:28

and making this a cross

13:31

disciplinary collaboration between science and

13:33

engineering and AI versus any

13:36

of these replacing each other.

13:38

It's an end, not an

13:40

or. Is there a specific

13:42

example you have where you where someone has

13:44

gone to one of these seven month programs

13:47

or the digital scholar program and

13:49

brought back something that's made some change, made a

13:51

difference? Yeah, definitely. So we

13:53

have many because we are I think two or

13:55

three years into this. And of

13:57

course, they bring it back and solve several

13:59

issues. We even have this sometimes with

14:01

internships. After two, three

14:04

weeks they recognized they could solve a

14:06

planning issue where they were chewing on

14:08

for some time, and it was

14:10

pretty complex but with the new, let's

14:12

say, views and data and artificial

14:15

intelligence, the outcomes were really

14:17

stunning. We

14:19

have actually somebody also

14:21

influencing really the planning of our

14:23

field, field development, creating

14:26

a low-code environment and really

14:28

just breaks in and really

14:31

changes the way we work. In

14:34

terms of making the company more

14:37

productive, more efficient, ensuring

14:40

it's safe, ensuring that it

14:42

does good for people and communities

14:44

and environment and species in

14:47

all different forms. What has

14:49

been challenging? What's hard? I

14:51

would say there are definitely some challenging parts.

14:54

This is an early

14:56

stage technology, especially in the

14:58

gen AI space. Things

15:00

are moving very fast. What is challenging,

15:02

whatever you do today might be different

15:05

in three months. The

15:07

challenging part is you cannot work in the

15:09

same way you worked maybe in the past.

15:11

You have to maybe pivot

15:13

faster. It's not that you

15:15

build a solution. I think a company told me they

15:18

built a solution and six

15:20

months later if they would build it now

15:22

again they would do it totally different. You

15:25

have to watch when you, I call

15:27

it maybe put the eggs in a

15:29

basket, you have to think about

15:31

what's the right timing for what kind of

15:33

use case and figuring this out because you

15:35

don't want to lock yourself in when

15:38

the technology is still in that kind of

15:40

an evolution stage. This is

15:42

something what we watch. The second

15:44

thing is not everything

15:46

in terms of security

15:50

or handling data in the right

15:52

way is solved yet in gen AI.

15:56

The technology is not ready. There

15:58

are no solutions yet. And

16:00

you can build a kind of a sandbox

16:03

or a kind of fenced environment,

16:06

but you have to fence it by yourself.

16:08

And I think the hyperscalers like Microsoft and

16:10

so on, I think they're working on also

16:14

adapting those use cases in

16:16

their normal, let's say, landscape

16:18

where you can have

16:20

an authorization process, where you have

16:22

an access process, how you're administering

16:25

and governing this the right way.

16:27

So this is, I would say, still missing.

16:30

I'm very hopeful that this will be

16:32

closed very fast, but today you have

16:35

to pull different technologies if it's a vector database

16:37

to talk a little bit of tech language here.

16:41

It's not yet ready to be used

16:43

on a really wide scale very

16:45

safely. And you have to imagine

16:47

if you have a corporation, there

16:50

are rights in terms of what information

16:53

can be shared, what should be not

16:55

shared, and so on. And that's something

16:57

that we think is a challenge. The

16:59

third challenge I want to mention is

17:02

the policymakers. So we follow

17:04

this very closely with Responsible

17:06

AI. We are a member of the Responsible

17:09

AI Institute and

17:11

watching very carefully what's happening there, what

17:13

kind of policy are coming around the

17:15

corner, how do we

17:17

incorporate that responsibly into our

17:19

operations, into our

17:22

productization of AI models. And

17:25

that's, of course, evolution. It's not something

17:27

you can buy and run

17:29

it. And yeah, we'll see how

17:31

companies are filling these gaps. Ellen,

17:34

can you comment on Generative AI and

17:36

if and how it's being used or

17:38

planned to be used? Yeah, yeah, absolutely.

17:40

We've been following Generative AI already for

17:42

two years or so, maybe a little

17:44

longer. We were not

17:47

totally surprised by the development. Maybe you

17:49

can say, okay, when was ChatGPT

17:52

coming? That was maybe a surprise for everybody

17:54

that it was coming so fast. But

17:56

we were watching this and already did some

17:59

use cases kind of innovative

18:01

sandbox environment to see what will that

18:03

be. And when it came

18:05

out, we said, okay, this is new

18:07

technology. We want to understand it. We

18:10

give it into the hands of the

18:12

people and use it and

18:14

then understand the telemetry of what do we

18:16

use it for and how does it resonate.

18:18

And in May-June, we

18:20

decided to put a more

18:22

dedicated team on those activities.

18:25

And, yeah, we have hundreds

18:27

of use cases now in

18:29

the pipeline, which we down-select

18:31

to the most prominent ones and

18:33

approach them. But technology-wise, we

18:35

are really, I would say, very much

18:37

on top of what's going on and

18:40

have really super smart people working on

18:42

it. I can tell you my own

18:44

use case. I use it for

18:47

writing things down. You can

18:49

talk about maybe writing your

18:51

performance agreement with your

18:53

supervisor or with your team.

18:56

You check on

18:58

presentations or documentations you

19:00

have to do to really optimize

19:02

the writing. I know that

19:05

my team is using it because we

19:07

are thinking in product development and product

19:10

management and portfolio management. So

19:12

in the past, they took much

19:14

longer to write down their thinking

19:16

and I talked with one of my

19:18

team members and she said, you know, in the past it

19:20

took me maybe one or two weeks. Now it

19:22

takes me one hour to get this done. So

19:24

there are lots of efficiency in using,

19:27

let's say, GPT in

19:29

this space. When we look

19:31

into other examples, you can imagine

19:33

we have knowledge databases. We have

19:36

knowledge around system engineering

19:38

and other information we have available within

19:40

the company on a very broad scale.

19:42

And in the past if

19:44

you wanted to know how this generator works,

19:47

you had to basically type and

19:49

search criteria and then finally you found

19:52

the document and you had to read the

19:54

document. Oh, this document was not enough. You

19:56

need another document. Okay, you find the second

19:58

document. Then you complete basically your

20:00

answer and then you go back

20:02

basically execute on it. We have created

20:04

a chat system where you can collaborate

20:07

with this kind of information and figuring

20:09

this out much faster. So

20:11

these are maybe two, maybe more one

20:13

on a daily thing and one maybe

20:16

more related to kind of how

20:18

we work in a systems approach. If

20:21

I combine some of your ideas, I see

20:23

some difficulties. So earlier on you were talking about

20:25

citizen developers and the idea of putting a lot

20:27

of these tools in the hands of people. And

20:30

then later you're talking about problems

20:33

of security and policy that are not

20:35

part of the infrastructure yet. Historically,

20:38

security always follows features. We care

20:40

about features first and then we

20:43

care about security. So we have

20:45

the combination of a widespread proliferation

20:47

of tools amongst citizen developers and

20:51

low infrastructural guardrails or

20:53

policies and then concern

20:56

about inability to fast follow.

20:59

Those seem like they could smash together

21:01

and create a lot of tension.

21:03

How do you navigate that? Yeah,

21:06

I would say maybe we have to

21:08

talk about AI in general and then

21:10

generative AI. So when I talked about the

21:12

policy makers, this was more in the

21:14

generative AI perspective. When you

21:16

think about citizen development, we have

21:19

models or algorithms in

21:21

the box, which we have proven and secured.

21:24

They have followed a review process. We

21:26

checked on them in terms of responsible

21:28

AI. So they're ready to use for

21:31

any citizen developer who wants to use that.

21:34

So they are secured and safe and they're

21:36

actually in our safe environment. So

21:38

you can already start there and make it

21:40

safe. But the new technology which is coming

21:42

on the Gen AI with these large language

21:44

models and the data behind

21:46

it, where the large language models

21:48

learn from, that's maybe not

21:51

ready yet to put into a citizen

21:53

development perspective. So to make this very

21:55

clear, when I talk about citizen development,

21:57

everything that is secured, kind of

21:59

the telemetry is there, the space is

22:01

there, we have ensured that we do the right

22:03

thing, this is made available

22:06

for everyone in the company and

22:08

the other things which are maybe not

22:10

secure yet, we are not putting

22:12

that into the system, we are

22:14

waiting. So we simply cannot afford

22:16

to have unsecured things into our

22:18

citizen development program. Yeah,

22:20

that brings out a nice sort

22:23

of differentiation between the idea that

22:25

citizen data scientists can't

22:28

just be enabled; there's

22:30

a curation process that goes on and

22:32

it sounds like you're pretty active in

22:34

that curation process and deciding what tools

22:36

go to citizen developers and

22:39

which tools you're still

22:41

investigating, and you're protecting that. Makes

22:43

sense. Yeah, that's

22:45

exactly right. Chevron is

22:47

obviously a giant petrochemical company out

22:50

there worldwide, everyone knows it and

22:52

you're the chief data officer. How did you get

22:54

there? Tell us a little bit about your history

22:56

of how did you get to this role. Yeah,

22:59

I'm happy to be in this role,

23:01

it's a super exciting area I'm always

23:03

passionate about. When you

23:05

follow the start of my career,

23:07

I'm from Germany, I did a

23:10

system engineering degree and then ventured

23:12

out into digital data later

23:14

on to procurement and supply chain and

23:16

I think the big red thread throughout

23:18

my whole career is the data part

23:21

but of course in different ways. So one can

23:24

say when I ventured out into

23:26

supply chain, you deal with a lot of

23:28

money the company spends with third

23:30

parties, how do you organize that and

23:32

there's a lot of data

23:34

and thinking and strategic thinking

23:37

about how you do that and I

23:39

would say I'm a learner, I'm a

23:41

humble learner, I like to embrace

23:44

new things and very diverse perspectives

23:46

for the best of the company and

23:49

it's just by coincidence maybe that I got

23:51

into this role because when I joined Chevron

23:53

five years ago, I started

23:56

in the procurement space because I have a

23:58

procurement and data and digital background, I would

24:00

call it. We tackled

24:02

data right away because the data was

24:04

not sufficient to drive these decisions

24:06

and maybe the first two years proved

24:09

me right in terms of that's possible. I'm

24:11

also a big believer that data and AI

24:13

will be all around

24:15

us. So this is an

24:18

exciting space to be in and to

24:20

learn and to see what's

24:22

coming next there. So I'm just happy

24:24

to be there. Actually a former executive

24:26

said, when I said to him (and

24:28

not at Chevron) that I'm so lucky for all

24:31

the opportunities I had in my career, and he said,

24:33

Ellen, you are not lucky. So he

24:35

sent me a book home. You basically

24:37

condition your path, you know, so you're

24:39

open to things even when

24:41

you think it's not on your direct

24:44

trajectory but it's really enhancing your skills

24:46

and how you connect the dots. So

24:49

I like connecting the dots and that's why I'm enjoying this

24:51

role. That's a great story.

24:54

Okay, so these are a series of rapid fire

24:56

questions we ask. Just tell us the first thing

24:58

that comes to your mind. It's

25:00

kind of a speed dating question maybe. What

25:04

do you see is the biggest opportunity for AI

25:06

right now? Healthcare.

25:10

What is the biggest misconception about AI?

25:15

Replacing human beings. What

25:18

was the first career you wanted? What did you want to be

25:20

when you grew up? I didn't want

25:22

to sit at a desk. I

25:24

failed. AI

25:27

is being used like in our daily lives a lot.

25:29

When is there too much AI? I

25:32

would say too much AI would mean

25:34

if it guides me in

25:36

the wrong direction and influences me

25:41

in a way which is not based on the real

25:43

facts. I already

25:45

have too much AI in my car

25:47

because I cannot open the garage because

25:50

it recognizes where I am and

25:52

which thing it has to open

25:54

and if it doesn't work, I can't get in. I

25:57

enjoy this. We have a pretty smart home

25:59

here, with all the

26:01

kind of voice recognition, electronics,

26:04

garage door opener, sprinklers, starters and

26:06

whatsoever. But I would say it

26:09

helps to be more efficient and

26:11

if the network is down, that's

26:13

really hard now. That's right.

26:15

That's right. So last question, what

26:18

is the one thing you wish AI could do

26:20

right now that it can't? Cure

26:24

cancer. Very good. It

26:26

seems like there's a headline every week that this

26:28

new AI thing is going to solve cancer and

26:30

then you look back and none

26:33

of these seem to pan out. I'm not saying we

26:35

should quit trying but it's always the example and it

26:38

seems like it never really gets there. Well,

26:40

it's a little bit of a stochastic process too,

26:42

right? I mean, if you have enough trials

26:45

at it, right? I mean,

26:47

we could for sure try a lot

26:49

more things because of AI and our

26:52

ability to experiment.

26:55

My answer may be slightly different. So I

26:57

think the other thing would be what

26:59

AI maybe cannot do, which would be

27:01

great, really help us with the climate

27:03

transition, the climate questions we have on

27:05

this planet. I think it helps

27:07

here and there. But that would be

27:10

like fantastic if it can help more. Yeah.

27:13

At the same time though, I don't think we can abdicate

27:15

and just hope the machine solves or some

27:18

of the problems that we have created either. I

27:20

think it's going to take both

27:22

of us working together on that. It's

27:24

okay, that's part of the hope. Is there

27:26

anything you're excited about in artificial intelligence? What's the next thing

27:28

coming that you're most excited about right now? Hmm,

27:33

good question. I

27:35

think we want to improve our

27:37

lives. And I think where I

27:39

live right now, we are very privileged. We

27:42

already have AI access in

27:44

many ways. We just talked about it in our

27:46

smart homes and the cars and etc. But that

27:48

doesn't count for everybody in the world. It

27:51

would be great if those

27:53

advances and those benefits would

27:56

be broader available. You

27:58

didn't ask me Sam. I totally agree.

28:01

I mean, I think that if you

28:03

think about just like in education, right,

28:05

and the impact that it can have

28:07

on underprivileged communities and nations that, you

28:09

know, they don't need to have a

28:11

school set up anymore. You

28:13

could do so much and help so

28:16

many people just, you know,

28:18

learn and develop and build skills

28:20

that normally would rely on infrastructure

28:23

and physical people and teachers

28:25

and all that. You

28:27

think I'd be threatened by that, but I'm not

28:29

a bit. I mean, I think that's our biggest

28:31

opportunity. We have so many people that, I mean,

28:33

we just cannot get them all through education programs.

28:36

And the education programs we have are not

28:38

particularly optimized or fast. And if we

28:41

could solve that problem and get better, better

28:43

resources out of our brains, then there'll be

28:45

a huge win. Hey, Sam,

28:47

can I ask you a question? I know I turned this

28:49

now around, but if you think

28:51

that the shelf life of knowledge is

28:54

decreasing, right, there were some recent articles

28:56

about it that maybe what you learned

28:58

today is maybe good for five years

29:00

and then kind of obsolete.

29:03

So how do you think this will

29:05

evolve in the education system? That's

29:08

huge because I think about that. I mean, I

29:10

teach a class in machine learning and AI, and

29:12

I am acutely aware that unless they're

29:14

graduating the semester that I teach them, everything

29:17

that I'm, you know, these specifics that

29:19

we're teaching them are likely to be

29:22

quite ephemeral. I mean, we've seen how

29:24

rapidly this evolves. I think that pushes

29:26

us to step back and

29:28

be higher level. If we slip into teaching

29:30

a tool, teaching how to click file, how

29:32

to click new, how to click open, how

29:34

to click save, those are very

29:37

low level skills. And

29:39

when we think about what kinds of things we should

29:41

be teaching, I mean, my university is a liberal arts

29:43

university. And I think that's a

29:45

big deal because if we

29:47

think about teaching technical

29:49

skills within a world of

29:52

liberal arts, I think that's

29:54

a big deal. We had the sexiest job

29:56

of the 21st century being data science.

30:00

The next one is not clear to me that data

30:02

science is involved. And it's not that data science isn't

30:04

important, it's just rapidly becoming commoditized.

30:06

And so then we have things like

30:09

philosophy, which become more important, and

30:11

ethics, which as the

30:13

cost of the data science drops, these

30:15

things become more important. Linguistics.

30:17

Linguistics, yeah. There you go.

30:20

Or large language models, right? Yeah.

30:24

Wonderful. Ellen, thank you so much. This has

30:26

been so insightful, and we thank you for

30:28

making the time. Yeah, thank you.

30:30

Thanks for tuning in. On our

30:33

next episode, Shervin and I venture

30:35

into the use of AI in

30:37

outer space with Vandy Verma, Chief

30:40

Engineer of Perseverance Robotic Operations and

30:42

Deputy Manager at NASA's Jet Propulsion

30:44

Laboratory. Please join us. Thanks

30:48

for listening to me, myself, and AI. We

30:51

believe, like you, that the conversation about

30:53

AI implementation doesn't start and stop with

30:55

this podcast. That's why we've

30:57

created a group on LinkedIn specifically for

30:59

listeners like you. It's called AI for

31:01

Leaders. And if you join us, you

31:03

can chat with show creators and hosts,

31:05

ask your own questions, share your insights,

31:08

and gain access to valuable resources about

31:10

AI implementation from MIT SMR and BCG.

31:13

You can access

31:15

it by visiting

31:17

mitsmr.com/AI for Leaders. We'll put that link in

31:19

the show notes, and we hope to see you there.
