Episode Transcript
0:00
[Music]
0:02
hello and welcome to technically
0:04
speaking where scientists and Engineers
0:05
come together to chat about a common
0:07
interest to share knowledge and satisfy
0:08
some curiosity I'm Laura and I'm joined
0:11
by Sarah and Antonia to talk about
0:13
artificial intelligence and how it could
0:14
be used in healthcare to help with
0:16
Diagnostics or what else it might do for
0:18
medicine so Antonia I think the
0:21
inspiration for this episode started off
0:23
with you so tell us about it so I was
0:25
reading an article and the headline said
0:28
ChatGPT-4 can pass the US GP medical exam
0:32
and I thought hmm interesting we've
0:36
heard all sorts of things coming out of
0:38
chat GPT
0:40
and I thought about one of my friends
0:42
has recently passed her GP exam in the
0:45
UK maybe it'd be a good time to talk
0:47
about it fair enough and that's you
0:49
Sarah so you're a medical professional
0:51
so um do you have any thoughts about
0:53
this just a caveat I actually haven't
0:54
passed my GP exam yet I've passed my GP
0:57
entrance exam so that's the exam to get into
1:00
GP training so I'm both a medical
1:02
professional and a surgical educator my
1:05
initial response to AI or the concept of
1:08
AI passing these exams is it it
1:11
threatens my job
1:13
um so that's my immediate gut reaction
1:16
and I also know that it's going to be
1:19
heavily invested in because
1:22
the NHS will need to reduce Workforce
1:25
Workforce costs over time so I think
1:28
that it's going definitely going to be
1:29
an area of interest in the future all
1:32
right and I guess one big question there
1:34
is is it better to have an AI doctor or
1:38
GP or surgeon even or is it better to
1:42
have a human doing that job and what's
1:43
the difference
1:45
so I guess we should start as we always
1:47
do in this show with defining what AI is
1:50
now I know it's something that needs to
1:52
be trained by feeding it data I don't
1:55
really have much experience that I've
1:56
never used GPT I've heard a lot about it
1:58
but I don't know how any of these things
1:59
actually work so if you guys got an
2:00
experience of AI or have a better
2:02
definition I don't have a definition as
2:05
much as just that AI stands for artificial
2:08
intelligence and I think it it gets
2:11
mixed with some of the terminology like
2:13
machine learning and neural networks and
2:16
I think those are two of the key tools
2:18
that are making up the AI that we see
2:21
around at the moment such as in chat GPT
2:24
or Bing's search engine is now also AI
2:28
powered and my experience is we've got a
2:31
bunch of data and fed it into the
2:34
machine as it were and told it to either
2:37
identify patterns or replicate something
2:42
and try to do the next iteration that
2:44
follows based on that information that's
2:47
been given and so it's learning and
2:50
sometimes it will develop a new way to
2:54
pick up those patterns and so it becomes
2:57
its own teacher and that we're not
2:59
telling it exactly what pattern it
3:01
should be reading the neural network I
3:03
think helps it combine more things so
3:07
it's not just a set data set that's my
3:10
understanding as someone who isn't in
3:12
computer science
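As a rough, editorial illustration of the idea just described (feed the machine labelled data, let it find the patterns, then ask it about new cases), here is a minimal supervised-learning sketch. The data is synthetic and the library choice (scikit-learn), features, and numbers are assumptions for illustration only, not anything the speakers used.

```python
# Minimal sketch of "feed it data, let it find patterns": supervised learning.
# The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each "case" is a few numeric measurements; the label is what we want the model to learn.
X = rng.normal(size=(500, 4))                  # 500 cases, 4 features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden pattern the model has to discover

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                    # "training": show it labelled examples

print("accuracy on unseen cases:", model.score(X_test, y_test))
print("prediction for a new case:", model.predict([[1.2, -0.3, 0.0, 0.4]]))
```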
3:13
I think I mean I'm also not in computer
3:16
science but I think that the term
3:17
artificial intelligence is a bit of a
3:19
misnomer because it's only as
3:21
intelligent as the information that we
3:24
feed it so it is based on the data that
3:27
we give it it can't create its own new
3:31
data and it only learns as much as we
3:35
reinforce that learning it it can't
3:38
reinforce its own learning if that makes
3:40
sense or if it does it creates its own
3:43
feedback loop and in which case it has
3:46
its own biases that will get Amplified
3:49
as it goes through more and more
3:50
iterations in a popular culture example
3:53
I want to say it was Twitter that tried
3:56
an AI on people's tweets and then it
3:59
found it got more aggressive more racist
4:02
more bigoted so they had to shut it
4:05
down within a very short time frame
4:07
because they realized that what people
4:08
put on the internet should not be
4:10
repeated and Amplified this is something
4:13
that's I think is important to consider
4:15
when we talk about bias in healthcare
4:17
and bias and AI because any data that we
4:22
put into it any bias that we put into it
4:25
will be perpetuated and there's already
4:28
something that I'm very passionate about
4:29
is equity in healthcare and any
4:31
inequities that currently exist in
4:33
healthcare is only going to be
4:35
perpetuated by this AI it can't go
4:38
against any of the societal prejudices
4:41
that we have one of the things that's
4:43
important to me is critical thinking so
4:46
the ability to take in information and
4:48
weigh it against other information and
4:50
say well does that match what I already
4:52
know about a particular thing or am I
4:55
spotting some sort of pattern or
4:56
something doesn't fit into that pattern
4:57
that doesn't make sense that I should be
4:59
questioning I I don't know if AI has
5:01
that same capability to think critically
5:03
about something and decide whether
5:04
something is true or correct or
5:08
appropriate or not I don't know is the
5:11
honest answer I think it's able to say
5:14
this doesn't fit the patterns I've been
5:15
fed but able to truly assimilate
5:20
knowledge
5:21
and create new theories I don't think so
5:26
I think it'd be interesting because um
5:28
like you know Asimov's Laws of Robotics we
5:32
put some certain rules in to ensure you
5:35
know the Safety and Security of our
5:37
future as we developed robotics at the
5:39
time I'm not going to repeat them
5:41
because I don't remember them off the
5:42
top of my head
5:44
but I wonder if we do put those barriers
5:47
in
5:48
can we put those barriers in in a
5:51
non-biased way but also isn't too
5:54
restrictive because then there's that
5:55
idea that you've put in a limitation but
5:58
then does that create other consequences
6:00
that we didn't foresee when we release
6:03
it out into world and actually it has a
6:06
complete blind spot because we said
6:07
you're not allowed to think about X so
6:10
then
6:12
never thinks about X I guess it depends
6:15
what you're using it for I guess pop
6:16
culture example is I, Robot this robot was
6:19
sort of almost like a human being and
6:21
seemed to be doing certain things and
6:23
seem really independent but a more
6:25
practical example of what artificial
6:26
intelligence can be used for is I think
6:29
I saw um something a while ago now I've
6:31
mentioned it in a previous episode about
6:32
training AI to be able to tell when a
6:35
tumor is cancer or not based on an image
6:38
of it and it was actually better at
6:40
doing it than the specialist who had
6:42
been using these images to try and
6:44
diagnose cancer for decades maybe and it
6:47
seemed like the AI had been fed
6:48
sufficient data that it could spot I
6:51
guess really subtle color differences in the
6:52
images that sometimes a person might
6:54
miss because they're tired or they've
6:56
not eaten or they're just having a
6:57
really bad day or they're distracted by
6:59
something AI doesn't forget and it
7:01
doesn't get distracted necessarily so I
7:03
can I can see how in that instance it's
7:05
a lot more useful than just doing some
7:06
big picture thing out in the real world doing
7:08
everything yeah so for focused
7:11
tasks absolutely
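The tumour-image example described above is essentially supervised image classification. As a hedged, illustrative sketch only (the model, image size, batch, and labels below are invented, and real diagnostic systems are far larger and carefully validated), a tiny PyTorch classifier trained on labelled images might look like this:

```python
# Illustrative only: a tiny image classifier of the kind described (benign vs malignant),
# trained on labelled images. All data here is fake and stands in for real labelled slides.
import torch
import torch.nn as nn

class TinyTumorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: benign / malignant

    def forward(self, x):                  # x: batch of 64x64 RGB image tensors
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyTumorNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a fake batch, standing in for thousands of labelled images.
images = torch.randn(8, 3, 64, 64)         # 8 fake 64x64 RGB images
labels = torch.randint(0, 2, (8,))         # fake benign/malignant labels
loss = loss_fn(model(images), labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
print("training loss on this batch:", loss.item())
```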
7:13
and that specifically I was talking to
7:16
some histopathologists
7:18
um some colleagues of mine, and just
7:21
to explain the term
7:22
um basically they're the people when you
7:24
take a tumor out you send it off to the
7:27
lab they're the lab people that then
7:28
slice it up and look at it under a
7:30
microscope and tell you what is in that
7:32
biopsy
7:34
um so I was speaking to some of them and
7:36
they were saying that actually the way
7:37
that they study for their exams is
7:39
purely by looking at hundreds and
7:42
hundreds of slides hundreds and hundreds
7:43
of cases and that really interested me
7:45
because actually that's exactly what we
7:47
do for AI is that we just feed it all of
7:49
this data and eventually it comes to
7:51
recognize those patterns the other thing
7:53
that I was talking to them about because
7:54
I raised the idea of oh well that's
7:57
exactly what AI does so how is your job
7:59
any different
8:00
um and they were talking about actually
8:02
it's something that they are worried
8:04
about in the the world of histopathology
8:06
that they now need to look at career
8:09
tracks that are AI proof because there
8:12
is AI in some hospitals in America and
8:16
they're about to bring it into a
8:18
hospital in Nottingham as well to start
8:21
looking at the more simple cancer slides
8:23
and to free up histopathologists to do
8:25
something else
8:27
it's not a particularly attractive like
8:28
career or part of medicine
8:32
um which is possibly why they're so
8:34
heavily investing in AI but also they're
8:36
heavily investing in AI because it's so
8:38
easily replicated by AI
8:42
yeah and I imagine there are a lot more
8:43
Niche things that those professionals
8:45
could be doing not just looking at
8:47
images and saying yes cancer or no
8:49
cancer they can be talking to people but
8:51
actually
8:53
so these specific doctors don't have any
8:56
patient contact right what do they do
8:59
genuinely they look at slides and
9:01
they tell you what is on the slides that
9:02
is it that is their entire job okay so
9:04
their job is effectively being taken by
9:06
the artificial intelligence yeah which
9:08
is why they've been talking about oh
9:09
well you know if you go into things like
9:11
prostate core biopsies which are much
9:13
more
9:14
specialist and require a human eye as
9:17
opposed to AI, AI isn't as good with
9:19
those
9:20
um that's like the route that they're
9:22
going because they're they're already
9:23
having to think about AI proof
9:25
sub-specialities all right wow you get
9:28
this idea that all medical professionals
9:30
talk to patients all the time but I can
9:33
see that's not necessarily true there
9:34
are a lot of people in the lab just
9:35
doing analysis supporting the people
9:37
that are I guess on the front line it's
9:39
one way of looking at it
9:41
yeah the patient client-facing part
9:44
of a person those doctors did they go
9:46
through the same training that you would
9:48
have gone through so the same amount of
9:50
like time and sweat to get to that point
9:53
so we all have the same degree and then
9:55
after two years of like basic training
9:59
um and rotating through hospitals you
10:01
then choose what specialty you want to
10:02
do so at that point they they will have
10:04
had a different life compared to me in
10:06
that sense that means loads more trainee
10:09
doctors could be free to go to other
10:12
disciplines it's just the challenge of
10:14
people who are already trained in that
10:17
field will have to find another field
10:19
which they can apply their skills to
10:21
yeah and
10:23
um in a Utopia uh Ai and machines taking
10:27
our work would mean that we have more
10:30
free time and the ability to enjoy our
10:32
lives but of course we understand that
10:34
that's not how Society works so it is a
10:37
threat to people's livelihoods and that
10:38
I think is going to be one of the core
10:40
reasons why if you you speak to a doctor
10:42
their initial response will be no don't
10:44
like it this is my job only I could ever
10:47
do this and AI of course is always going
10:49
to be inferior but actually there will
10:51
be some ways in which it is superior
10:53
I I can see again that um being able to
10:56
diagnose something that frees up time to
10:59
do something else would be particularly
11:01
useful have you got any specific
11:01
examples of how it could be superior one
11:04
of the main ways that I I think it will
11:06
be very useful is when you have for
11:10
example healthcare workers who are
11:12
working in isolation in remote
11:13
areas so you're the only doctor for a
11:16
community
11:18
um it's useful to be able to bounce
11:19
ideas off of something that we do quite
11:20
a lot in hospitals is we speak to
11:22
consultant colleagues about I've got
11:25
this really interesting case and I'm not
11:26
really sure what to do I think it might
11:28
be this I think it might be that where
11:30
should I go with this and you can kind
11:32
of have that conversation with an expert
11:34
opinion
11:36
um so I think it would definitely
11:39
you know benefit people who are working
11:41
in isolation
11:43
um it might also improve access to
11:45
health care so for example if you've got
11:47
an impoverished Community then AI is
11:50
going to be cheaper than hiring a doctor
11:53
so it could also improve access to
11:55
Healthcare in those sorts of ways
12:03
statistician compared to an adequately
12:07
trained doctor
12:09
but you have to train those doctors so
12:12
it does then mean that you don't have to
12:14
train people in that skill anymore
12:17
um
12:18
so you can cut out quite a lot of money
12:21
and effort there if you wanted to I
12:23
would argue that becoming overly reliant
12:26
on AI could be quite dangerous though if
12:29
for I mean stuff that happens all the
12:31
time in the NHS like our servers go down
12:33
and then we suddenly have no access
12:35
genuinely we suddenly have no access to
12:37
things like drug charts to things like
12:40
um patients observations so like their
12:42
blood pressure and their heart rate so
12:44
suddenly we have no access to any of
12:46
this sort of stuff and we have to go
12:47
back to being on paper so if we become
12:50
overly reliant on AI we have to have the
12:52
infrastructure there to support it all
12:54
right you touched on a lot of points
12:55
there one that I wanted to pick up on
12:57
was the idea that yeah it'd be great to
13:01
to have a sort of colleague inverted
13:04
commas
13:05
um as an AI to to discuss ideas but
13:09
we've also seen like people talk about
13:11
AI hallucinating and just getting basic
13:14
facts wrong that's what people have
13:16
called it in the industry of just it
13:18
just literally
13:20
says so you could look up on a spec
13:23
sheet sorry specification sheet and it says
13:25
this camera phone has this many megapixels
13:28
and instead the AI just picks up a
13:31
random number and just says now it's got
13:34
like 50 billion
13:36
I have heard this for teaching purposes
13:38
essentially a lot of lecturers were
13:39
saying that students are using AI to do
13:43
the coursework for them and the
13:45
AI had been um set up in a way that it
13:48
was deliberately putting errors in there
13:49
so the lecturers could spot it so I
13:51
wonder if that's what the Hallucination
13:52
is about I I'm gonna let you into a
13:55
little trade secret here the majority of
13:58
my medical degree was learned on
14:00
Wikipedia
14:01
one of the one of the skills of being a
14:04
doctor is not necessarily being able to
14:07
have the entirety of medical knowledge
14:11
in your mind but being able to to like
14:15
to know where to go to look it up and
14:18
you have to retain that skill even when
14:20
you're using AI if it outputs something
14:22
that you go well that's a bit weird you
14:25
need to have other options to be able to
14:28
to know that it's you know giving you
14:30
the correct answer I would agree with
14:32
that actually because um I've been a
14:34
scientist working in labs and
14:35
supervising students for years now we
14:37
would all say the same thing that it's
14:39
not that you've got an entire textbook
14:40
in your head you definitely have a lot
14:42
of knowledge in there and that knowledge
14:43
helps you infer things from other bits
14:46
of knowledge that you may be not quite
14:47
as familiar with so same sort of thing
14:49
yeah absolutely there was something you
14:51
said fairly near the start about
14:53
patient-facing roles essentially so I
14:56
have a story from many many years ago I
14:58
went to my GP my general practitioner
15:00
saying I've got a sore throat and I've
15:02
had sore throat for a while but I've not
15:03
been able to do anything about it
15:05
because of my situation my situation has
15:07
now changed so I can get to you during
15:09
normal working hours take a look at it
15:11
tell me what's going wrong you can
15:13
definitely see something is wrong back
15:14
there and they literally laughed at me
15:16
an AI wouldn't have done that no it
15:18
would have asked me questions yeah and
15:21
Antonia you actually had a paper that
15:24
you sent me today saying that AI has
15:26
been proven to have better bedside
15:27
manner
15:29
um than doctors and I can entirely see
15:33
that uh as you said we all are human we
15:36
all have human error and empathy is
15:39
proven to be a skill communication has
15:41
proven to be a skill it can be learned
15:43
but people have to put the effort in to
15:45
learn it so yes on the one hand AI would
15:49
never make a mistake such as outputting
15:51
hahaha as you're giving them your symptoms
15:54
unless you trained it to unless you
15:57
trained it to
15:58
unless it's taught on all of our
16:01
imperfect data but
16:03
I also think as I said it may be an
16:07
equal or superior diagnostician but it
16:09
will not necessarily be an equal
16:11
or Superior clinician and the reason why
16:13
I say that is because a lot of my job is
16:15
delivering bad news especially diagnoses
16:18
of cancer and for that you need a very
16:21
specific amount of tact and the ability
16:24
to respond to the patient in front of
16:25
you to the person in front of you and
16:27
yes probably eventually you could get to
16:29
the point where if they look down
16:30
they're sad or if they make that very
16:32
specific twitch of their facial muscles
16:34
they want a hug eventually the AI will
16:36
get to that point but will
16:39
will it ever be able to replace
16:42
the warmth and the emotion that comes
16:46
from a real person connecting with you
16:48
and holding your hand in that moment I
16:50
don't think so no I think that human
16:53
connection is what would make a big
16:54
difference in a situation like that and
16:57
I will say I've been treated by some
16:58
healthcare professionals who have been
17:00
absolutely amazing you don't have to
17:02
defend them it's absolutely
17:04
I'm just thinking of some nurse
17:06
practitioners that just knew
17:07
exactly what to do and sorted things out
17:09
for me and just understood what I'm
17:11
going through without me having to say
17:13
anything to them other than my arm is
17:15
swollen and numb help yeah yeah and I I
17:19
would argue that it's their life
17:20
experience that means that they're able
17:22
to provide you with the exact support
17:24
that you need and it's their experience
17:25
of dealing with people who are similar
17:27
to you and there's no reason why we
17:28
can't input that data into an AI but do
17:31
we want to yeah
17:34
[laughs] is that what we expect you know
17:37
there's also that kind of managing
17:38
expectations that you know someone's
17:41
always wanting to go I want a second
17:43
opinion I want the person with the most
17:46
credentials or do we then start putting
17:48
AI with the best success rate in front
17:51
of them and say well they've got a
17:53
99.9999% correct rate true which you will
17:58
never be able to say of a human
17:59
absolutely I don't know thinking about
18:01
the film AI
18:03
the one with Jude Law and the other
18:06
child actor that I can't remember the
18:08
name of um
18:11
Haley Haley Osment Haley Joel Osment
18:13
that was it anyway
18:15
AI that film because they have robotic
18:18
sex workers they had lots of companion
18:20
robots
18:21
who were able to emulate and then they
18:24
had AI robot children that were able to
18:27
emulate those sorts of human connections
18:30
and human emotions
18:32
um
18:33
so I think the fact that we are creating
18:38
films and literature about it
18:41
probably means that actually ultimately
18:44
we will end up wanting it
18:46
um if you could and it's it's something
18:48
I don't know that I've done with my
18:50
friends of like if you could build your
18:52
perfect man what would they be like
18:55
I mean if you could build your perfect
18:57
doctor what would they be like and if
18:59
you could why wouldn't you because they
19:01
would be perfect
19:03
yeah what I don't really know what I'd
19:05
want a doctor to look like I kind of
19:06
only go to them when I'm told to really
19:09
like um
19:10
oh well I've got a an implant that needs
19:13
to be replaced every three years and I
19:14
get letters saying you have to go for
19:15
this test
19:16
it's not often I get ill or think I've
19:20
got something that says I've got
19:22
symptoms I should go to visit my GP or
19:24
sit in an emergency room so I don't know
19:26
I've not had a lot of experience on that
19:28
front to be able to say what I'd want
19:31
them to look like well like most of what
19:32
I see is what I see on TV that's
19:33
probably not very representative Gray's
19:35
Anatomy yeah Antonia yeah I've I have
19:39
been to see docs about all sorts and
19:42
also just when I had a niggle you know
19:44
when I wasn't really sure what what it
19:47
is but it had some sort of measure
19:50
measurable impact on my life like you
19:53
know can't sleep as well because of said
19:56
sad thing
19:58
um and they do take a while to to get to
20:01
the bottom of it you know there's a bit
20:03
of trial and error that because
20:06
you know it it's not always obvious what
20:10
it is and I think I think part of it was
20:12
during covid as well where we didn't
20:15
have face-to-face interaction and so
20:18
there was a little bit of like well we
20:20
had um Ask My GP so you could send
20:22
photos which felt really weird like here
20:25
here's this weird patch of my elbow can
20:27
you please see what it is I know like I
20:30
can't really see anything it doesn't
20:32
look that bad and I'm trying to describe
20:34
it with my vague non-medical
20:37
terms because I feel like you know
20:39
doctors have that like measure of is it
20:42
a stabbing pain is it a throbbing pain
20:45
is it a sharp pain and I'm like I don't
20:48
know I've never been stabbed yeah
20:55
this is my own experience of pain
20:58
so yeah it's almost like you need a
21:00
doctor to be able to interpret your
21:04
human foibles because like if an AI is
21:07
only very good at understanding very
21:09
precise instruction
21:11
and we are imprecise with our uh
21:15
descriptions then they would have to
21:18
figure it out you know but I guess the
21:21
argument is that over time with enough
21:23
data put in they would learn that you
21:26
know the majority of people when they
21:27
say this they mean this and if it's
21:29
unclear then they ask clarifying
21:30
questions I can see that getting really
21:32
frustrating though it's but like on that
21:34
the point Antonia made about pain if
21:36
you're sat there saying well it's it's
21:37
kind of stabbing but I don't know it also
21:39
throbbed as well and it's like which is
21:40
it is it stabbing or throbbing pick one
21:42
but yeah and I keep I feel like I'm
21:44
defending AI a lot but then you would
21:46
train it to say okay that's it let's
21:49
move on from that question because
21:50
that's exactly what a doctor would do is
21:52
they would just say you're getting
21:53
frustrated by this let's move on let's
21:55
just let's talk about something else
21:56
it's an interesting point though can you
21:58
train AI to be I was going to say more
22:01
human than a doctor with no empathy
22:05
actually that's what I want to say
22:06
though I mean I I remember reading
22:09
somewhere but it might be apocryphal so
22:11
please forgive me
22:13
um but surgeons have a higher rate of
22:16
psychopaths than any other profession
22:20
um because they like the stabbing
22:23
we've got a theme now
22:27
um potentially I mean we teach set
22:30
phrases to medical students on how to
22:33
respond to uncomfortable
22:36
outpourings from patients uncomfortable
22:39
for the patients and uncomfortable for
22:41
them so you say things like that must be
22:43
really hard or I can see why that would
22:46
be difficult for you or yes I I get why
22:49
that's frustrating like you you teach
22:51
them very set phrases but if you don't
22:53
apply them correctly with the correct
22:56
like vocal tone and with the correct
22:58
facial expression it does come out like
23:01
AI is just giving you yes I can see why
23:03
that's hard oh that did sound a bit
23:04
patronizing
23:08
talking about training them and pouring
23:12
information into them you mean the AI
23:14
not the junior doctors yeah no not the
23:16
junior doctors no
23:18
um training AI and pouring
23:20
data into AI
23:22
how much data is enough how much data is
23:26
too much
23:27
how much of our private lives should we
23:32
be putting into these machines how
23:36
secure is it
23:38
um who has access to it who has the
23:41
right to read it
23:43
um I mean and at what point
23:46
are we allowed to just not know stuff
23:49
about ourselves like if you for example
23:52
if we put people's DNA and family
23:54
history and medical history and
23:57
everything else in that
23:58
at what point it might come up saying oh
24:01
you're at risk of x y and z
24:03
do they have to know that do I have to
24:05
know that oh what if it knows it and
24:07
tries to steer around it because you
24:09
don't want to know yeah yeah
24:12
that's a very cyclical argument yeah I
24:15
guess a good example of what you're
24:16
saying is there's a TV program it's on
24:18
Netflix I think called The Bold type
24:20
it's about three youngish women making
24:22
their way in publishing in New York
24:24
um and one of them says um I think my
24:27
mother might have had breast cancer and
24:29
she ended up getting herself checked to
24:30
see if she had the gene and she did and
24:33
she got a bit paranoid about it it was
24:34
doing all these checks and whatever else
24:35
and eventually decided to get a
24:37
mastectomy so she didn't have to worry
24:38
about it and she thought that would
24:40
solve the problem but she obviously got
24:42
implants to replace what had been
24:43
removed and then she started to feel
24:45
like it wasn't her own body and she was
24:46
uncomfortable and there was nothing she
24:48
could do about it and I think if I were
24:51
in that situation I wouldn't want to
24:52
know I'd rather take my chances because
24:55
just because you have a gene doesn't
24:57
mean you will get cancer right there are
24:59
other factors at play but what if it
25:01
becomes so good at predicting that it
25:03
just knew because it's had so much
25:06
information that the chance of it being
25:09
wrong was so little if it could say with
25:12
like 90% certainty that given your
25:15
environment your habits your genetics
25:18
and all this other information that's
25:20
not directly relevant to me well I have
25:22
to sit there and think well is it if I
25:24
am going to get cancer and I can avoid
25:26
it maybe I should maybe I wouldn't want
25:27
to I don't know maybe I'd be happy with
25:29
waiting for that point and then having it
25:32
removed when the time came I don't know
25:33
I suppose it's on the one hand you have
25:37
a right to know and make an informed
25:40
Choice with your own body on the other
25:43
hand it's the right to be ignorant
25:45
because ignorance is bliss
25:47
and if you know that you're likely to
25:51
die of a heart attack at 75
25:54
that's so far away but it's always going
25:56
to be hanging over your head if you know
25:58
now know that
26:01
um compared to if you were just
26:02
blissfully up until the age of 75 living
26:04
your life and then you know one day you
26:06
just don't wake up on a lighter note I'm
26:09
sorry I was going to continue down that
26:11
morbid vein
26:13
this kind of thought process was applied
26:16
in the Black Mirror episode hang the DJ
26:19
which was where a couple
26:22
had an app and they got a number and it
26:26
turned out that number represented how
26:28
long the app expected them to be
26:30
together
26:31
so if you knew the number right from
26:34
when you met how would you play out that
26:37
relationship would it change the way you
26:39
treated that relationship absolutely
26:41
absolutely if if an app told me oh
26:44
you'll be together five years and then
26:46
break up I'd be like no I'm just not
26:48
gonna bother
26:50
it's been five years in a relationship
26:52
that's gonna end see I'd go the other
26:54
way and I'd want to know why it was
26:55
saying that and try and beat that number
26:56
I'd say you're not telling me it's gonna
26:58
last like this it's going to be
26:58
different it has to be different I
27:00
refuse to believe that's true
27:04
but and maybe maybe on the other side of
27:08
things is that a sign that there are
27:10
some people like me who will put their
27:13
entire faith in what the computer tells
27:16
them when it could be wrong
27:18
and are we you know are there people out
27:22
there who will say oh I'm gonna die age
27:24
50 of MS no I'm gonna kill myself at 40.
27:27
so I don't have to live that last 10
27:30
years and what if it's wrong you know
27:32
how how much Faith are we going to end
27:34
up putting in Ai and that is that is a
27:36
danger that we shouldn't believe that
27:41
computers are perfect AI is perfect
27:43
because ultimately it is based on human
27:47
fallibility so I guess you'd want some
27:49
way of also training it to see when it's
27:52
been fed something based on something
27:54
that's a fallacy I really don't know how
27:56
you do that though I I mean I guess you
27:58
could always just you'd do it with the
27:59
caveat of like the same way that people
28:01
oh that's just my opinion though of you
28:04
know you have a caveat of this is AI and
28:06
we cannot promise that it's perfect or
28:08
whatever and in the same way that you
28:10
know whenever I'm counseling patients I
28:12
always say based on the information that
28:14
I have this is what I think is going on
28:15
therefore this is what I would recommend
28:17
and there will be risks with everything
28:19
and it might not turn out like this you
28:21
might end up having this problem you
28:22
know
28:23
doing thorough counseling on the risks
28:26
and benefits of AI opinions and are they
28:28
better than human opinions or are they
28:31
still just opinions I guess from my side
28:35
I always see my opinion as just that
28:37
just my opinion this is my professional
28:39
opinion but do patients ever see the
28:42
advice the doctors give them as an
28:43
opinion do they see it as gospel that is
28:46
true I guess it depends on your
28:48
experience with them I think some some
28:50
people just straight up don't listen if
28:52
it's not what they wanted to hear right
28:54
like you know you've been told you
28:58
should cut this out of your diet because
29:00
it's increasing your risk and people
29:02
don't like people know doctors are right
29:05
but ultimately they don't follow it
29:07
ultimately they want to eat that
29:08
lamb dish even though it's going to make
29:10
their gout worse yes yes
29:13
this friend who spent a long time trying
29:17
to get this swollen ankle diagnosed and
29:21
it makes me wonder if they had AI well
29:24
the doctor had AI would it have spotted
29:26
it faster because it was like sooner
29:29
than the typical time that you would get
29:32
this condition and also he didn't like
29:35
going to the doctor because he couldn't
29:37
walk so you just go ah it's a bit
29:41
awkward now and then you know you know
29:43
when you you go to a doctor and you say
29:45
oh I need to see a doctor about this ah
29:48
well the next appointment's like two
29:49
weeks away and by the time that two-week
29:52
appointment comes up you're like it's
29:53
gone but it comes back and so you'd book
29:55
another appointment and then it's gone
29:57
again and so they never actually get to
29:59
observe it yeah
30:01
um that's a really interesting point and
30:03
this friend of ours who's been
30:05
diagnosed with gout
30:08
he is very young to have gotten gout he
30:12
is not the typical sort of person to get
30:15
gout either so usually you associate a
30:17
gout with older white men generally who
30:21
drink a lot eat a lot of meat like think
30:24
like old-timey ships captains that's
30:27
generally Henry VII or Henry VIII yeah
30:30
exactly and that's generally who gets
30:32
gout and he doesn't fit any of those
30:34
stereotypes so would a
30:38
computer that's purely basing it on
30:41
pattern recognition ever come to that
30:43
diagnosis
30:45
um would it come to it quicker based on
30:48
some unknown variables that we that we
30:51
haven't inputted yet so the
30:52
histopathologist I was talking to they
30:55
were saying that actually the AI now
30:57
uses criteria that they that they don't
31:00
know it's using for it it's hard to
31:03
describe but basically they inputted the
31:05
criteria that they've teached that
31:06
they've taught trainees and it's also
31:09
using some extra criteria that it can't
31:12
explain that it's using so would
31:15
eventually AI get to the point where it
31:18
would come to these conclusions quicker
31:20
because it has picked up extra patterns
31:24
that we don't see as clinicians but then
31:26
could that ever be the absolute truth
31:28
because you couldn't explain it well I'm
31:30
just wondering if this goes back to what
31:31
Sarah was saying about you wouldn't just
31:33
rely on the AI alone for various reasons
31:36
so if you can get it to explain it to
31:39
you so it can teach you what it's
31:41
looking out for yeah because I guess
31:43
that's something with people like well
31:44
that looks like cancer to me but I can't
31:46
explain why that's cancer I just know
31:47
that it is trust me this is my opinion
31:50
yeah
31:51
trust me I'm a doctor
31:54
I'm imagining this 200 300 years in the
31:58
future is what I'm imagining where you
32:00
have ai practitioners who are operating
32:02
independently and at that point I do
32:06
wonder whether it would come to these
32:08
sorts of realizations sooner or whether
32:10
I think definitely in the short term it
32:12
probably would never have considered it
32:14
as a differential because it doesn't fit
32:17
the standard patterns I guess there are
32:20
always going to be outliers in the data
32:22
because our understanding of the human
32:23
body and well the entire world isn't
32:25
perfect yet anyway there are still lots
32:27
of things that we don't know and there
32:29
are still lots of extra patterns to spot
32:31
and I guess that's also where some of
32:33
the biases come in healthcare already
32:34
like for example um women's problems are
32:37
often overlooked people of ethnic
32:39
minorities in a white country have
32:41
particular likely outcomes that white
32:44
people don't get um and that again it
32:46
goes back to what you're saying about
32:47
training it to already have bias because
32:50
the data set is limited yeah
32:52
absolutely to end on a slightly lighter
32:56
and more futuristic note you know I've heard
32:58
little bits about advances in surgery
33:00
and like doing things that you would
33:02
never have thought possible before like
33:03
using little robotic arms to do things
33:05
and
33:06
microscopic images so you can see things
33:09
with Incredible detail is that something
33:11
that you combine with AI so you've
33:13
basically got a robot doing surgery on
33:14
people with no human intervention so I
33:17
spoke to a couple of my colleagues so I
33:19
spoke to one person who is the head of
33:21
Robotics at Leicester
33:23
um University Hospitals of Leicester
33:25
um and he was saying that the main issue
33:28
for humans is there's no haptic feedback
33:31
during robotic surgery because you're
33:34
essentially it's it's like piloting
33:39
um a remote control car
33:41
there's no like you can't feel when the
33:44
car goes over a bump or you know Turns
33:46
Upside Down you just have to see it and
33:49
then base it on what you're seeing this
33:51
is why I don't like computer games no
33:53
feedback at that time I obviously need
33:55
the haptic feedback and not just the
33:56
visual yeah whereas with laparoscopic at
33:59
least you get some haptic feedback
34:01
because you're directly touching stuff
34:02
so sorry laparoscopic is keyhole
34:04
surgery so using a camera and looking
34:07
inside using very small holes but you're
34:09
still touching things directly whereas
34:11
robotic it's a computer attached to that
34:13
equipment and then you're you know I
34:16
don't know a couple of meters away
34:18
controlling that robot with like a
34:20
little like joystick and two two
34:23
joysticks genuinely two joysticks
34:25
that move in three directions that you
34:27
can control all of the different
34:29
um instruments with so you've got no
34:32
idea if you're touching something that's
34:33
quite squishy yeah and it's all based on
34:36
your prior knowledge of what
34:39
uh what anatomical structures are where
34:42
which you could easily program into AI
34:45
or you could map it you know in in this
34:48
futuristic world you don't even need to
34:50
give it an estimation you could just
34:51
scan them true very true yeah you could
34:54
do an MRI and then you could program
34:55
that into a robot and away it goes
34:58
that's very true actually in which case
35:01
if you were to do that that would
35:03
actually overcome quite a lot of the
35:04
stuff quite a lot of the arguments
35:06
against
35:07
um Ai and robotic surgery because the
35:10
main problem is having the confidence to
35:14
cut something when you're unsure what it
35:15
is so when you lift something up for
35:18
example and you go well I know it can't
35:20
be this I know it can't be that and this
35:22
other really important structure I know
35:24
it can't be any of that so it can't it's
35:26
nothing important so I can just cut it
35:27
and that is
35:29
a bit of confidence
35:32
the idea of non-important things in your
35:34
body it's a bit
35:36
but yeah like the appendix yeah but the
35:40
appendix wouldn't be there but yeah
35:41
let's just cut it out fans don't need it
35:44
sorry anyway
35:47
so it's having that confidence to know
35:49
that there aren't any vital structures
35:51
that you're going to go through basically
35:53
and would an AI ever be able to
35:57
independently make those sorts of
35:59
assumptions and I suppose eventually you
36:02
would get the technology to the point
36:04
where you could do an MRI scan
36:06
but you would have to have a surgeon
36:09
label every single structure as cut here
36:13
don't cut here important not important
36:16
because I mean it's even been it's been
36:18
I mean anecdotally found that nurse
36:21
practitioners nurse surgeons those
36:24
operators don't have the confidence
36:27
to cut without having a surgeon there
36:29
saying yes to take it
36:31
and having
36:33
a person there to take that risk for
36:35
them I feel like AI probably wouldn't do
36:38
risk very well yeah I can see why you
36:41
know people have that kind of measure of
36:44
how much risk they're willing to take
36:45
whereas an AI we kind of say like you
36:48
know if it was a programmer they might
36:49
say okay let's say
36:51
99.99999
36:53
absolute but then do we get in a in a
36:57
world where these robotic surgeons go
36:59
into the operating theater and then go
37:01
yeah that's a little that's a little bit
37:03
too borderline for me yeah exactly
37:06
everyone get back out we're coming out
37:09
of this yeah and I mean risk in surgery is
37:12
is I mean basically the whole premise of
37:14
surgery is risk management so what what
37:17
is the risk if we don't do the operation
37:18
what is the risk if we do what is the
37:20
risk if we do this specific operation
37:22
versus a different type versus if we
37:24
just do this or we just do that
37:26
um
37:27
so I I think risk management is going to
37:31
be a big area that AI will struggle with
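One way to picture the "programmer picks 99.99999" idea raised above: a hypothetical sketch in which an automated system only proceeds when its estimated confidence clears a hard-coded threshold and otherwise defers to a human. Everything here (names, numbers, the notion of a single safety probability) is invented for illustration; it is not how any real surgical robot works.

```python
# Hypothetical sketch of the hard-coded risk threshold discussed above:
# act only when estimated confidence clears a fixed bar, otherwise defer to a human.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9999999   # the "99.99999%" a programmer might pick

@dataclass
class Assessment:
    structure: str
    p_safe_to_cut: float           # model's estimated probability that cutting is safe

def decide(assessment: Assessment) -> str:
    if assessment.p_safe_to_cut >= CONFIDENCE_THRESHOLD:
        return f"proceed: cut {assessment.structure}"
    return f"defer to human: {assessment.structure} is too borderline ({assessment.p_safe_to_cut:.7f})"

print(decide(Assessment("adhesion band", 0.99999995)))   # clears the bar, proceeds
print(decide(Assessment("unidentified vessel", 0.98)))   # borderline, backs out
```

The difficulty the speakers point at is that real surgical risk is not a single number to threshold against, which is exactly why a rule this crude would struggle on its own.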
37:34
um
37:35
but then there's other stuff that's
37:37
taking out uh surgery as a required
37:41
specialty which obviously we won't go
37:43
into here uh future episodes maybe
37:45
especially the the idea about risk which
37:47
is the very first episode talking about
37:49
what risk means to us we did it over two
37:51
years ago now yeah and we didn't go into
37:52
that much depth though wouldn't AI be
37:54
able to judge risk
37:55
um
37:57
to sum up my my opinion on this whole AI
38:00
thing
38:01
um AI in healthcare AI is built and
38:05
trained on imperfect data from imperfect
38:09
humans
38:10
therefore expecting Perfection from AI
38:13
is impossible we shouldn't expect
38:16
Perfection from Ai and there is no such
38:19
thing as
38:21
perfection in the human world I also
38:24
don't expect Perfection from your
38:25
doctors
38:26
well that just reminds me of the Chaos
38:28
Theory episode where we said you know
38:30
with enough variables and enough
38:32
computing power we could almost put
38:36
together a theory to predict something like this
38:39
but will we ever get to that point where
38:41
we have enough computing power to do
38:42
that good question what do you recommend
38:46
for people trying to train AI to be a
38:50
better doctor because you know there are
38:52
pros and cons to it we've talked about a
38:54
lot of cons but there seem to be pros
38:56
if we could work it out you know we're
38:58
very early stage in it so what do you
39:00
think should be taken into consideration
39:01
today for the future what would be the
39:07
ideal clinician and then from that build
39:12
the AI into that so you don't just think
39:15
about a diagnostician because we're
39:17
basically there already with AI you also
39:19
have to think about personalized
39:21
management plans you also have to think
39:23
about communication with patients you
39:25
also have to think about
39:26
the ai's own resilience and ability to
39:29
deal with the workload you have to think
39:32
about the way that it interacts with
39:34
other healthcare workers and the system
39:36
in general that is probably what I'd
39:38
recommend is think about what your ideal
39:39
is and build the AI towards that in a
39:42
holistic way so like any good project
39:45
management yeah exactly great Okay so
39:49
we've covered a lot of different things
39:51
about Ai and how it could impact
39:54
Healthcare today or how it already
39:56
impacts Healthcare thank you Sarah for
39:59
sharing your experiences you're welcome
40:00
I hope we continue to have great
40:03
conversations like this in other
40:04
episodes
40:06
the views expressed in this podcast
40:07
belong entirely to the person that said
40:09
them they do not represent any industry
40:11
or organization if you enjoyed listening
40:13
to these views it would really help us
40:14
out if you could rate us leave a review
40:15
and tell a friend this podcast was
40:17
sponsored by no one but if you're
40:18
interested in funding us to continue to
40:20
have Frank discussions about science and
40:21
engineering please get in touch
40:23
[Music]