Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
0:03
Welcome to Preparing for AI with
0:05
Matt Cartwright and Jimmy Rhodes , the
0:08
podcast which investigates the effect of AI
0:10
on jobs , one industry at a time . We
0:13
dig deep into barriers to change , the coming
0:15
backlash and ideas for solutions
0:17
and actions that individuals and groups can
0:19
take . We're making it our mission to
0:21
help you prepare for the human social
0:23
impacts of AI. Touch my tears with your lips, touch my world with your fingertips. Welcome
0:30
to Preparing for AI with me, Matt Cartwright
0:32
, and me, Jimmy Rhodes, and
0:35
welcome back after a couple of weeks
0:37
away . We
0:43
are back with the podcast and this is going to be an industry-focused episode . We're
0:45
going to be looking at the comms industry with Daniel Lyons later , but
0:48
because we've been away for a while and because we've
0:50
had, well, there's been so much going on, as usual, we will do a catch-up, but we're going to treat it more as an introduction, a welcome
0:58
back and the things that have really been interesting
1:01
us over the last few weeks
1:03
. So, Jimmy, do you want to start off?
1:10
And then I've got a few things that I wanted to bring to our
1:12
listeners' attention as well.

Sure, yeah, so I think the biggest news
1:14
is Apple's WWDC conference
1:17
. Apple
1:20
finally got into the AI game. Until now they haven't really talked
1:22
about AI much . They've been very quiet
1:25
on the subject . They haven't been developing
1:27
their own models or discussing AI
1:29
, but , as always with Apple
1:31
, they've decided that they're now
1:33
going to own AI and it's kind
1:35
of their idea and it's this , it's this new
1:37
thing that they've come up with . So
1:40
I think they're calling it Apple Intelligence, or that's what it's been dubbed online. So what they're actually doing is integrating ChatGPT into Siri. So Siri is going to be back with
1:53
a vengeance . I think it was pretty
1:55
useless previously, but
1:58
the idea is now throughout the iPhone
2:00
, you're going to have ChatGPT
2:02
.
2:04
Sorry, Jimmy. As you just said 'Siri'
2:06
there , every
2:11
single Apple device in our studio has started twitching away like crazy , so it's good to see that , at
2:13
the moment at least , Siri is still operating as it did for
2:16
the last however many years .
2:17
Yeah, so, absolutely. Apple are going to bring ChatGPT into Apple devices
2:26
. That's kind of the gist of it . I
2:31
think there's been a bit of shock around it because Apple have always
2:33
been really pro-privacy and have
2:35
actually got pretty good security on their devices
2:38
and all this kind of thing . And now what
2:40
it sounds like they're going to be doing is sending all your data to ChatGPT's servers to
2:46
do inference , so that Siri gets
2:48
improved and you get a much better
2:51
experience on the device
2:53
, which is a bit of a weird one because , as
2:55
I say , they've been pretty quiet on it . Everyone thought they were going
2:57
to develop their own models , but it seems like
2:59
they're going this ChatGPT route instead. So, in a positive light, what they're promising is that you're going to have a
3:09
kind of seamless experience across all
3:11
of your devices , all of your iOS devices , with
3:13
AI features incorporated across all
3:16
of your apps. And
3:18
, as I say , it should just be a massive improvement
3:20
over what you've had previously , with
3:23
Siri being, I guess, on
3:25
the back burner for quite a while.
3:28
Well, are there any plans for
3:30
kind of hardware ?
3:32
So, you know, I think in the last episode we were talking about the Microsoft Surface laptops that will contain the new kind of chips. You know, they'll have the GPU, they'll have, obviously, a normal processor, and then they'll have this kind of neural unit. I mean, is there
3:47
any talk yet about devices and whether they will have any, you know, particular change to the chip infrastructure, or at the moment are we just looking at this as a kind of software addition
3:59
?
4:00
As far as I understand it, it's a software addition. I think in the future we are going to see more hardware-type AI, as you mentioned, with the Surface devices
4:11
we spoke about recently . I
4:13
think one other thing this has done
4:15
is it has finally finished off the Rabbit device, the sort of dedicated hardware device. I mean, it was already pretty much dead in the water. It was stillborn, wasn't it?
4:25
It always was, yeah.
4:28
But what they always said was you know , you can
4:30
do all this with a phone and eventually
4:33
Apple or Google will just introduce this into
4:35
phones. And 'eventually' turned
4:37
out to be like two or three months later , and
4:39
the Rabbit was crap anyway.
4:41
I still hope , without going on about it , you
4:43
know that at some point that
4:46
some of the ai tools do allow us to move
4:48
away from screens a bit more . And
4:50
you know you can have a screen but
4:52
not necessarily have to use the screen all the time
4:54
. And I think, you know, if you've got an Apple Watch, that's one of the things I can see being really useful, in terms of you
5:01
can just talk to it and you've got something right next to your
5:03
face. Because the one good thing about the Rabbit and the AI Pin was that idea of, you
5:10
know, steering people away from
5:12
screen time , not just in terms of it making
5:14
it a more natural interaction , but actually just in terms
5:16
of you know , the health of your eyes and
5:18
not looking at rectangles every day.
5:27
Yeah , I totally agree , and now that we've got the Apple ad section of the podcast out of the
5:29
way , other mobile devices are available .
5:32
So the first thing I wanted to talk about was a
5:35
piece of research on impressions of AI
5:37
that the Reuters Institute and the University
5:39
of Oxford put out , probably
5:42
about a month or so ago now . This
5:44
was based on surveys of the public in six countries, and it was on what
5:49
they think of the application of AI in news
5:51
, so specifically in news and journalism , but then
5:53
also across work and life . The
5:56
countries they looked at were Argentina, Denmark, France, Japan, the UK and the US. So
6:01
you know, those countries are not all the same, obviously, and I would say this is not a reflection of the whole world, let's put it that way, but it's
6:13
still interesting and I would imagine , for people who
6:15
are listening to this podcast , it probably reflects
6:17
, you know, the kind of countries that
6:19
you're listening from . So
6:22
ChatGPT was the best known generative
6:24
AI product , unsurprisingly
6:26
, but there was only 1% of
6:29
people in Japan who were
6:31
using either ChatGPT or
6:33
any generative AI tools daily
6:35
, and in France and the UK that was 2%
6:37
. It was 7% in the US. A total of around 30%, and this was a kind of average across the population of the six countries, had not heard of any AI tools at all. 56%
6:50
of 18 to 24 year olds have
6:52
used ChatGPT at least once, but
6:54
that's only 16% when you get to
6:56
age 55 and over . There
7:00
was optimism around AI's
7:02
impact on science , healthcare , daily
7:04
routine and , surprisingly to me , media
7:07
and entertainment . I'm not
7:09
sure I necessarily agree on media
7:11
. I guess entertainment makes more sense and it
7:13
was quite significant . So 17% more
7:15
optimists than pessimists in that area . But
7:17
then cost of living, and I'm not sure why cost of living, maybe that's just a reflection of where people's priorities are in general, and job security and news were the top areas
7:26
of concern . In
7:28
Argentina, only 41% of people had heard of ChatGPT. And Google Gemini, this was, I thought, really interesting: 15% of people in the UK had heard of it, France was only 13%, the USA 24%. Microsoft
7:45
Copilot was about the same . Claude was
7:48
between 2% in Germany and 5%
7:50
in the US , which kind
7:52
of surprised and disappointed me, because I'm
7:57
massively a fan of Anthropic and the way that they kind of operate as a company, and
8:06
you know, as OpenAI becomes 'ClosedAI' and becomes more and more
8:08
of a commercial outfit that seems to care nothing for security, nor for anything other than making money and being the first to AGI
8:14
. Anthropic are the only ones who
8:16
really seem to have a you know , a genuine
8:19
desire to make something that benefits humanity
8:21
. So a shout out to everybody who
8:23
isn't using AI tools yet to use Anthropic's tools, because they are
8:27
by far the best company out there at the
8:29
moment . And the UK had
8:32
the lowest score of only 2%
8:34
of people using AI to try and get the latest
8:36
news . In the US it
8:39
was 10% of people and
8:41
, like I say , this was specifically looking at
8:43
news and journalism, so that's why it had these kinds of specific questions, but, yeah, this was really interesting to me. And there was another piece, from last year, saying that 46% of people in the US at that time had not heard of ChatGPT. I think, for people like Jimmy and myself, when we are kind of, you know, submerged in this stuff
9:05
every day , we think this is right at the top of
9:07
people's agenda and everybody is thinking about
9:09
and knows about AI . But what this actually
9:11
shows , if you're listening to the podcast and you
9:13
are thinking about AI , is , you
9:15
know , you're already in a fairly
9:18
small group of people and you're already probably ahead
9:20
of most people. So, you know, whether people are putting their head in the sand because they are scared and, you know, worried about what happens next, so they just don't want to think about it, or whether people's lives have, you know, just taken over and there's enough things to worry about, this is not
9:35
at the top of the agenda. But I would bet that, with the advances that are going to happen, if we looked at this in a year's time, a lot more people will be thinking about
9:46
and worrying about
9:48
and acting on and you know , getting involved
9:50
with AI . I'm pretty sure that's the case .
9:53
And the funny thing about that for me is I know it was focused
9:55
on news , but the interesting thing is how
9:58
many people say they
10:00
aren't aware of these AI tools. But now, I mean, as I said in the update, Apple are now integrating AI throughout the iPhone. Google also announced that they're
10:15
bringing more generative AI experiences
10:18
into Google search recently , and
10:20
Microsoft Bing already uses AI
10:22
, so do people actually need to be aware that they're using AI tools? Because I
10:29
think in a lot of cases , people probably are already using
10:31
them , possibly daily , and they just don't even
10:33
know it , because these things are starting
10:35
to become integrated into all of the software
10:38
that we use . And that's kind of the way that
10:40
I see it going is that , yeah
10:43
, there's going to be a niche who know all about AI
10:45
and know about ChatGPT, but at
10:47
some point soon , everyone's going to be using it all the
10:49
time because it's getting built into things
10:51
that we use .
10:53
Yeah , and it already is , isn't it ? I mean customer
10:55
service , for example . And
10:58
one of the things
11:00
I noticed a lot is you know the calls that you get . Now
11:02
, where
11:09
you used to get a sales call , you can tell . Now a lot of those calls are an AI sales call
11:11
. Um , you know , there are those kinds of changes that are happening
11:13
and we're not . We don't necessarily even need to think
11:15
about it , do we ? It kind of doesn't matter , because you're
11:18
either going to answer that call or not , regardless
11:20
of whether it's an AI. So I think, you know, every one of those calls I would turn off, cancel the call, regardless of whether it's an AI or a
11:29
person . But
11:32
, yeah , it is becoming ubiquitous in many ways . I think the thing
11:34
that I would be more concerned about and you know this
11:36
is maybe because of where I
11:39
come at this from, as a kind
11:41
of problem for civilization
11:43
is that being aware of, you know, AI tools is maybe not as important as being aware of AI
11:52
and the changes that it will
11:54
make to our world. You know, it's
11:56
not about the chatbot , it's about the
11:58
potential in two , three , four , five , ten
12:00
, fifteen years . And that's where
12:03
it scares me a little bit
12:05
to think that people are not aware of this at all . And
12:07
I had a conversation with with
12:09
one of my my tutors on the AI governance
12:12
course that I did the other day and we were talking about
12:14
I said why , why is it not an election issue
12:16
? You know , in the UK , for example , why
12:19
is it not an election issue ? Because if you look at the
12:21
kind of poor sentiment towards
12:24
AI that you see in a lot of 'developed' countries, and I put developed in, you know, inverted commas, there's a lot of negativity, and so it would seem to be
12:35
an easy win . It's a kind of low-hanging
12:37
fruit for a political party to say hey , we're
12:39
, you know, we're going to sort this out, we're going to protect your jobs. And his point to me, which I think he's bang on with, is, you know, there's just not the bandwidth for it in this election, because
12:49
the most pressing things facing people are, you know, the cost of living, the economy, and, unfortunately, people think, immigration. The issues that people think are important to them in the short term are what are in people's minds at the moment. But
13:06
I do hope that when the dust settles in a few
13:09
months' time from the various elections
13:11
, that there's then more space to start
13:13
looking at this , and I think that will happen . I
13:15
do genuinely think you
13:18
can sort of feel that the
13:20
kind of cogs are turning a little bit and there is a
13:22
lot more going on and a lot more understanding that
13:24
we cannot just allow you
13:26
know, three or four companies in Silicon Valley to just go on, in a black box, completely ungoverned, doing whatever they want, to develop something that
13:35
has , you know , potential threats to
13:37
the whole of society .
13:40
Yeah, we can't. I think over
13:43
time there'll be more
13:45
and more realisation that we can't just blunder into
13:47
this , and some of that's happened already . We've talked about
13:49
it on previous episodes. There have been international conferences on
13:54
AI, there's been a lot of talk around how we handle that, in China in particular, which we talked about a few weeks ago. But
14:07
absolutely, I think, with elections the focus is obviously
14:12
going to be right now on some of the bigger
14:14
topics , particularly in the UK , but
14:17
anywhere in the world right now , we've just had a period
14:19
of massive inflation and there's been lots
14:21
of societal problems, you know. So AI is right down
14:25
the list at the moment , but I think it is going to become
14:28
more and more significant .
14:31
Another thing that I wanted to have a
14:33
chat about and another thing that's been
14:36
, you know, out in, I say media, I mean sort of AI-specific media and social media, is this question
14:42
around whether there is enough data and
14:44
whether we're running out of data for large language
14:47
models ? And I think , as an extension of that conversation
14:49
and something that you know me and you have talked
14:52
about almost till the cows come home recently, is this
14:56
question around whether the current
14:58
large language model architecture, so the kind of neural networks that are currently being used, whether that's enough for us to get to, you know, AGI, advanced AI, whatever you want to call it, or
15:10
is everything being overhyped at the moment
15:12
? So you know where do you stand on this
15:14
data point , whether we have run
15:16
out of data or whether we're going to run out of data .
15:20
It's really difficult . So we clearly have
15:22
run out of data . I mean , we actually ran out of data
15:24
a long time ago , so , for the benefit of
15:26
everyone listening , basically
15:28
, these models have been trained on all
15:31
of the information that's available to all of humanity
15:33
, like everything they can get their hands on. There have been restrictions put on the APIs that Twitter and forums like Reddit use, and
15:48
that's a bit of a backlash to
15:50
the fact that AI models were just trained
15:53
on all their data and it was all freely available
15:55
previously . So these
15:57
models, like ChatGPT 3 and 4, have all been trained on everything that's available already. So they have literally
16:04
run out of data . In that sense , the
16:06
question is whether you believe OpenAI when they say that they can. So what they're saying now is that, effectively, they can generate data using AI and, using that generated data, they can continue to train their AIs and they continue to improve. Now
16:25
that remains to be seen, because I guess what you have to do is wait for the next models to come out and see whether they do actually keep improving and get better, which they are, but is that going to slow down? Is it going to plateau? Has it already plateaued
16:40
? I honestly don't know.
16:49
And again, as you said, OpenAI is much more 'ClosedAI' now, and so you can't really take what they say at face value; don't listen to what they say, watch what they do. And I think that really
17:05
applies here: although OpenAI say, oh, there's no problem, you can see the amounts of money that players are paying to buy data from, you know, newspapers and magazines that have large amounts of kind of high-quality data. Another
17:34
point is, you know, people wondered at the time, why did Elon Musk buy Twitter? Well, because of its data. You know, there's a huge
17:41
amount of data in there. Now, the data in Twitter scares the hell out of me. I mean, you know, if we're talking about crap in, crap out, you put that stuff in, my God. But it's data. Whether that gets us to advanced AI, AGI, I'm not sure, because it just doesn't kind of make sense. You
18:10
know, as much as it seems to be, I don't want to use the word sentient, but kind of intelligent, it's really parroting back stuff
18:17
that it's been trained on . I
18:19
sort of worry more about the idea , you
18:22
know , the kind of dead internet theory
18:24
. So dead internet theory talks about how, I think, potentially more than 50% of the internet now is just nonsense, because it's, you know, troll
18:33
farms, it's AI making
18:35
it up and therefore the information that's out
18:37
there on the internet is not accurate
18:39
. There's so much crap out there that basically
18:42
you're putting crap into it, it's going to output
18:44
crap and so , regardless of whether
18:46
there's more data or not , the existing
18:48
data is not good. So, you know, it's a bigger question than just the amount of data or whether there's more data; it's the quality of the data that was previously used, and in turn that kind of, you know, feeds a never-ending loop if you've got AIs training themselves on that existing data.
19:05
I think you're right
19:07
. I mean , we don't know because we're not privy to
19:09
what's going on within
19:11
those organizations and we don't know enough because
19:14
no one knows enough about the way large language models
19:16
work . But it definitely seems like
19:18
something that is highly possible. And, more and more, I think at the moment, you talked about OpenAI, I mean, they're so far from the original
19:25
purpose , they're so focused
19:27
now on being the first to create AGI
19:29
and you know , investment
19:31
and money , that it's quite easy to believe
19:33
that there is a lot of hype
19:35
just to generate investment . I do think
19:37
we're probably at the top of a hype cycle . I
19:39
don't think that necessarily means that you
19:42
know there's going to be an AI winter for the next
19:44
10 years , but I do wonder whether things have been
19:46
a little bit oversold. And, you know, AGI by 2025, AGI by September, some of that seems to be now 2027, 2028.
19:56
It seems to be kind of rolling back a little bit
19:58
Yeah, and no one even agrees on the definition of AGI, so we
20:02
were chatting about it earlier , I think .
20:04
I think, what was the term you said they're now using? 'Advanced AI', which is not defined, but which avoids the need to, you know, find an AGI definition.
20:15
Yeah, because this is what everyone's been struggling with, right? So,
20:18
does AGI mean conscious machines
20:21
that have their own free will
20:23
and self-determination, or does it mean something that can perform almost any task to the same level as a human
20:32
and doesn't need supervision ? I
20:34
would lean towards the latter myself, because I think we don't even really understand any of the former, like what consciousness is and all this kind of stuff, which we're probably not going to get into now, maybe for a future episode. But I'd sort of lean towards the latter of those definitions. I
20:56
feel like that is the kind of aim and the
20:58
target and the goal for companies
21:00
like ChatGPT, is having a machine
21:02
where you can just let it loose and it will automate a vast array of tasks, and hence the podcast and the sort of talk about how
21:16
that's going to threaten jobs . But
21:18
I don't even think that we're that close
21:21
to reaching that definition
21:23
. And the reason I feel like
21:25
that is because, however
21:29
smart a large language model appears to be
21:31
and however many questions it can answer and however
21:33
many puzzles it can solve
21:35
and however many things it can do even better than the average human, it
21:40
still seems to require
21:42
a level of supervision which
21:45
a human wouldn't require. Like, I wouldn't trust it to go and just get on with something. And I've tried some
21:52
of the agentic-type models as well, where you can actually use an agent to go off and write code and, you know, talk to another AI to get testing done on the code, and then there's another AI which is supervising them and all this kind of stuff, and it doesn't really work yet. Devin is an example of that. So there was Devin, and then there's OpenDevin and various models, but
22:12
they don't really work . They end up costing
22:14
you a fortune because they go around in circles, and they don't
22:18
know when they've completed the task . There's all sorts of
22:20
real kind of complications
22:22
with it which seem to be very human
22:25
problems where a human would just be like
22:27
okay , you know , I need to point
22:29
you in a different direction now. Stop what you're doing, let's have a review. Whatever it is
22:34
, we're not there yet, and
22:36
maybe we'll get there , but I feel like
22:38
, I feel like that is a sort of elusive, moving milestone. Yeah, yeah
22:47
.
22:47
So the last thing , and
22:49
this is , I guess , quite important . So , you know
22:51
, governance , alignment , the sort of general
22:53
security is what's been
22:56
kind of occupying my mind
22:58
and this
23:00
is , I guess , a sort of soft launch announcement
23:02
. But we're going to be relaunching the podcast. Going forward, we're still going to have an element where we focus on jobs
23:13
, but we're going to sort of branch out a little bit because we think there
23:15
is an urgent need now, particularly post-elections in many Western countries this year, and we've added France to that list in the last week or so, to inform people
23:27
and actually to help try and
23:29
achieve our original purpose , which was giving
23:31
people actions that they can take
23:33
to try and mitigate the human impacts
23:35
of AI. So I think it's not an exaggeration; we've said on the dystopia episode, you know, if
23:42
nothing changed , we're on a pretty
23:44
fast path to destruction of you
23:46
know humanity , whether that's destruction
23:48
of the kind of social system or it's destruction
23:51
of the planet. You know, I'm not saying for a second that nothing will be done, so we're not saying that that is necessarily the end point. But, you know, that's where we're headed
23:59
without those measures and whether
24:01
those measures are taken quickly enough to
24:04
address the kind of more existential threats
24:06
is, you know, properly a kind of defining moment
24:10
for humanity . So we
24:12
want to keep it light-hearted. We want to keep
24:14
it funny where we can . We want to keep interviewing
24:17
people , but we want to branch out a little bit
24:19
more than jobs . So we'll continue to focus
24:21
on industries , but we will also look
24:23
at the alignment problem , the
24:25
security and safety around AI
24:27
and governance . So hopefully , when
24:30
we relaunch that , we will be able to get some really
24:32
interesting guests on the show
24:34
, and we'll be doing that from the
24:36
next episode onwards . So let's
24:38
move on to our main
24:40
episode . So , as I said , we have a guest
24:43
on , so we're going to change
24:45
into our dressing gowns and
24:47
then we're going to get into the other studio in
24:49
the back and we will be back with
24:51
you in two minutes time . So
25:03
welcome back . Jimmy and I are in our dressing
25:05
gowns now. That's a sight you don't want to
25:07
see , so that's why we keep the videos off YouTube
25:09
and keep this to a podcast. So welcome to the podcast, Dan Lyons. Dan is a strategic comms advisor who's
25:16
worked across a variety of roles
25:18
. He started out as a journalist , he's worked in government
25:20
and private sector, and his last role was as managing director of a global
25:24
strategic consultancy .
25:25
So, Dan, welcome to Preparing for AI.
25:30
Thank you, great to be here. I'm a fan
25:32
of the podcast , so it's lovely to actually
25:34
be here and chatting to you guys .
25:36
Well , that's why we wanted you on , because we , you know , obviously
25:39
we have 2 million listeners , but to have one of
25:41
them who's such an expert in a field on
25:43
the podcast is a pleasure for us as
25:45
well . So I guess let's start off , let's
25:47
have a look at your kind of own
25:50
experiences. So let's look at, I guess, the last six
25:54
months , so from the beginning of the year
25:56
, what have you seen
25:58
in the industry in terms of both
26:01
the adoption of AI tools
26:03
but also the kind of attitude. I guess I'm interested in the attitude of, you know, people at the top, but also, you know, people working in the industry and how they are reacting
26:14
to those tools and how they're reacting to you know potential
26:17
for job losses , or you
26:19
know changes or insecurity
26:21
around their roles?

The first thing to say is that AI has actually sort of been creeping in, in terms of AI-based tools, to the industry for quite a few years, I
26:30
think , starting with mainly
26:33
kind of executional tasks
26:35
, particularly around sort of data analysis
26:37
, media monitoring , the use of
26:40
AI tools to sort of gather in large amounts
26:42
of you know media articles , to analyze
26:45
trends , to sort of say , for example , how negative
26:48
an article is , how positive it is , and
26:51
derive performance-related
26:53
data from that . Also
26:56
, in a place like China where I'm based , the
26:59
use of AI for translation , which
27:01
has really accelerated certain areas of the
27:03
industry , and abilities to produce
27:08
content and to analyse content . I
27:11
think adoption is still
27:13
low , though I
27:18
think the introduction of ChatGPT has been an inflection
27:20
point , but really the usage across the industry
27:23
is still relatively low . I think there's
27:25
been some studies last year
27:27
by the Chartered Institute of Public Relations . I
27:29
think only 40%
27:31
of tasks that are performed by PR professionals
27:33
are now assisted by AI
27:35
tools and I think that's up from about 12%
27:37
the previous year . So you know there's
27:39
still a lot that's going on within the industry that doesn't
27:42
rely on AI and
27:44
within that I would say that sort of most of
27:46
the usage is , as I said , low level rather
27:48
than strategic . So you know , monitoring
27:50
, data analysis , information analysis and
27:52
executional tasks . What tends
27:55
to sort of remain untouched is more strategic
27:57
work , so that's sort of crisis management , uh
28:00
, you know , risk mapping , risk forecasting
28:02
and , obviously within a business like
28:04
pr , relationship management
28:06
. So that's both with your
28:08
stakeholders , with your clients , with
28:10
with the media , with journalists . I
28:12
think that's very much still sort of a , you know
28:15
, a human-led task rather than anything
28:17
that relies on um
28:19
, on ai , um . I
28:22
think there's two sort of issues that are impacting how
28:24
it's being sort of adopted . I
28:26
think the first one is a skills gap . I
28:28
think , you know , within
28:30
sort of , you
28:32
know , amongst my peers and within sort of
28:34
companies that I've worked at , I think there's
28:37
very few people who you could
28:39
say were experts in sort of the use of AI tools
28:41
, and usage has tended to be sort of
28:43
fairly organic and has evolved over time
28:45
. And I think there's
28:48
a particular issue around ethics . You
28:51
know ethics of AI in in PR
28:53
. I think PR
28:55
professionals , communication professionals , are a little bit
28:57
nervous about using them , mainly
29:00
because you know how accurate are they
29:02
? You know I'm , you know I'm relying on sort
29:04
of information I'm getting from these tools and
29:07
I don't want to pass on any inaccurate information to either
29:10
within my company or to clients . You
29:12
know it doesn't get the tone and the style right all
29:14
the time . I mean , I personally use
29:16
things like ChatGPT for , you
29:18
know , the first draft just to get something
29:20
down on paper , just to sort of spark an
29:22
idea . But often , you know , I'll
29:24
completely change sort of what's produced . I
29:27
rarely , if ever , you know , if
29:29
at all , use anything that is produced without
29:32
editing and
29:34
I think , particularly on the agency
29:36
side , there's
29:39
an issue around the ethics of it . So billing
29:42
clients for work that has been created
29:44
using AI , the optics of that , particularly
29:50
if you're charging quite a lot as an agency , are
29:52
tricky . You know it feels a little bit like cheating
29:54
. So
29:56
those are the two issues that I think are sort of
29:59
having an impact on the adoption . But
30:01
you know the launch of ChatGPT
30:04
has definitely been an inflection
30:06
point within the
30:08
last sort of six to nine months . You
30:12
know there's a definite increase in sort of the use of large language models
30:15
and so that 40% figure that
30:17
I mentioned earlier could definitely be a lot higher . Um
30:19
, and you
30:22
know sort of the types of things that are being
30:24
used . Uh , you know low , low level
30:26
content creation , you know , social
30:28
media , press releases , uh
30:30
, again sort of media analysis and translation
30:33
. So people are trying to get up to speed fairly
30:35
quickly .
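[Editor's aside: the media-monitoring use Daniel describes, scoring how positive or negative coverage is and rolling that up into performance data, can be sketched in a few lines of Python. This is a toy, lexicon-based illustration invented for this write-up; the word lists, function names and scoring scheme are assumptions, not any tool mentioned in the episode.]

```python
# Toy lexicon-based sentiment scorer for media monitoring.
# The word lists and scoring scheme are illustrative only.

NEGATIVE = {"scandal", "lawsuit", "decline", "crisis", "recall", "fraud"}
POSITIVE = {"growth", "award", "record", "partnership", "innovation", "profit"}

def article_sentiment(text: str) -> float:
    """Return a score in [-1, 1]; negative means unfavourable coverage."""
    words = [w.strip(".,!?;:\"'()").lower() for w in text.split()]
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def monitor(articles: list[str]) -> dict:
    """Aggregate a batch of articles into simple performance metrics."""
    scores = [article_sentiment(a) for a in articles]
    flagged = [a for a, s in zip(articles, scores) if s < 0]
    return {
        "average_score": sum(scores) / len(scores),
        "negative_articles": len(flagged),
    }

if __name__ == "__main__":
    batch = [
        "Record profit and a new partnership drive growth this quarter.",
        "The recall has become a crisis, with a lawsuit now filed.",
    ]
    print(monitor(batch))
```

A production monitoring stack would use trained sentiment models and entity matching rather than keyword lists, but the shape is the same: score each article, then aggregate into a dashboard metric.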
30:36
It feels like . So I mean , first
30:38
thing I was going to ask is like , do you think you said
30:40
you use it already ? So why
30:42
do you use it ? Does it save you
30:44
time if you don't actually use
30:46
most of what it writes , but you
30:48
just get the ball rolling with it ?
30:59
Yeah , so for me it saves time . You know , if you sort of
31:01
put in , you know , a suitably detailed prompt , you know you could save up to an hour in creating
31:03
sort of the first draft or something . If you
31:06
use ChatGPT
31:08
or a similar tool , uh , you know , so
31:12
it's very efficient . But for me , actually
31:14
, it's also , you know , even if you've been
31:16
in pr or you know sort of the communication
31:18
industry for a long time , you , you
31:20
know , you might not always get the inspiration you need , sort
31:23
of , uh , right off the bat , and you
31:25
know , sometimes I like to see something
31:28
on paper , even if it's not , if it's something that I will
31:30
not use , just as a kind of a springboard
31:32
or , you know , a launch pad , instead of doing something
31:34
else . So so it's , it's not
31:36
just sort of the efficiency aspect
31:38
for me , it's also that it sparks
31:41
my own thoughts on on something and
31:44
, as I said , you know it rarely gets , you
31:46
know , it rarely gets it right first time , but it's , you know
31:48
a good start , for you know if you , if
31:50
you need a combination of wording
31:52
that sort of might make a good social media post or
31:54
or sort of you know
31:56
key messaging or or press releases
31:58
, or you know kind of a keynote speech
32:00
or quote . Um , you know , there's always something
32:03
you can use that then sort of uh
32:05
, that prompts you to sort of edit
32:07
in your own style . I know exactly what you
32:09
mean , so I've kind of used it in the same way .
32:11
It feels like a little cheat to get around
32:13
writer's block or something like that , where , like
32:15
, staring at a blank piece of paper can
32:17
be quite daunting , where if
32:20
you just pop a prompt into ChatGPT , it gives you
32:22
something , even if it's just a scaffolding where
32:24
you have to then rewrite almost everything .
32:26
Yeah , and I think , I mean , going back to the point about
32:28
why , sort of why take up in the industry
32:30
as a whole might be low . Like you know , I
32:33
, I sort of am a regular user of
32:35
, uh , ChatGPT
32:38
and the large language tools that sort of
32:40
um that are available . And in fact
32:42
some
32:45
companies are actually , uh , you
32:47
know , creating their own versions of ChatGPT
32:49
, sort of , you know ,
32:51
tailored for that business and for
32:53
the sort of , you know , the work that
32:55
is done by , sort of , uh , the
32:58
teams within the business . Uh , my own , I
33:00
recognize my own sort of skills , uh , you
33:02
know my skills gap and the need to
33:05
uh , the need to sort of build it
33:07
up . So I mean , there's probably a lot more I could be
33:09
doing and there's probably a lot more that could be , could
33:11
be done within uh . You know the , the
33:13
industry and the companies that I've I've
33:15
worked for but are not done because people just
33:17
simply don't uh you know , they're not knowledgeable
33:19
enough and you know , as you
33:21
say , you've said on sort of um previous podcasts
33:24
and I'm sure you've sort of covered it
33:26
today that you know things
33:28
are changing so fast and and keeping
33:30
on top of the you know , proliferation of
33:33
tools that are out there , you know , I think sort of
33:35
again within the
33:37
studies that have been done by by sort of
33:39
the governing bodies , you know there's probably up
33:41
to about 10,000 AI-assisted
33:43
tools that , you know , could potentially
33:45
be used within sort of the PR industry alone . So being
33:48
able to keep on top of that is incredibly hard . Um . So
33:51
you know , people are people like me are getting up to speed as
33:54
quickly as they can , but there's , you know there's still
33:56
a way to go .
33:57
I think sort of at your level and
33:59
we maybe face the same point
34:01
in a lot of episodes . I'm thinking of the law episode
34:03
as a good example of this actually , where
34:06
it feels like
34:10
there may be a big difference in terms of the
34:12
immediate threat depending on
34:14
where you work . And sometimes we hear
34:16
that you know AI is different from other
34:18
revolutions because it's coming for , you know
34:20
, more senior jobs first , but actually
34:23
if you look at the area , so you know , just looking
34:25
at some of the areas in the notes , I've got social
34:27
media management . You know the AI
34:29
driven tools for scheduling , managing
34:35
your social media content and stuff already . Data analysis and reporting AI can
34:37
already process and analyse your data sets and give you feedback on that
34:39
. Then there's media monitoring and analysis
34:42
. I mean , media monitoring for me , you
34:44
know , that's gone . I think that
34:46
if anyone's still paying for people to you
34:49
know do media monitoring , then I think you're you're
34:51
throwing your money away and content
34:53
creation and writing . You know it feels like and
34:55
, having done the interview a few weeks
34:57
ago , it feels like
34:59
one of the issues with content
35:02
might actually be it's not
35:04
that the generated content is
35:06
as good as content created
35:08
by people . But it's just that , whether
35:11
it's the editors or the , you know , the senior
35:13
management , or the consumers themselves
35:15
, people are just willing to accept poorer quality
35:17
content . So you know , with all those kinds
35:19
of different areas , that we think
35:22
they are being already
35:24
affected or they are going to be massively affected
35:26
by AI . Your role
35:28
may not be directly affected , but
35:31
what are you seeing in terms of the
35:33
people you work with ? I mean , are they
35:36
seeing the sort
35:39
of writing on the wall and they're thinking my
35:41
job's gone , or are they
35:44
not really thinking that way ? I mean , what
35:46
is the sentiment in the industry ? Is there a lot
35:48
of fear ? Is there a lot of excitement
35:50
? I know it's difficult to
35:52
speak for everyone across an entire industry
35:54
, but you know your experiences of people . Are
35:57
they optimistic , pessimistic
35:59
? You know what are their feelings about the
36:01
kind of AI revolution .
36:03
Yeah , I mean , I think , in general , the
36:06
feeling is that this is something that needs
36:08
to be taken seriously , not only from
36:10
the perspective of how it will impact the
36:12
, the pr industry or the communications
36:14
industry , but also how it will affect our
36:17
sort of the environment which
36:19
we kind of operate in . So that's the wider media environment
36:22
, the kind of the corporate world . You
36:25
know . How do
36:27
we as an industry tailor our offering
36:29
and the services we provide to
36:32
basically take account
36:35
of not only
36:37
opportunities that AI provides , but actually the risks
36:39
as well , and these are risks
36:41
from a media point of view how
36:43
to , for example
36:45
, help companies protect themselves
36:47
against misinformation or , you
36:49
know , other organizations against misinformation
36:52
? How to , you know , help
36:54
a company through a deep fake crisis , for example
36:56
, that you know may sort of have a big impact on
36:58
their , their business ?
37:00
um , so there's a new world
37:02
of work for you yeah
37:04
, exactly .
37:05
So you know , I think sort of a new kind of sub-industry
37:07
here , right ? Yeah . So I mean , within the kind of the strategic
37:09
consultancy environment
37:11
there's definitely a sense of we
37:14
need to get up to speed , but there's definitely an opportunity
37:16
to be able to advise people on
37:18
how to sort of handle this brave new world
37:21
. I guess the sense or the
37:23
general sort of feeling is that yes , there's a recognition
37:25
that you know , as adoption accelerates , low-level
37:28
, entry-level tasks will be displaced , but
37:31
actually you know this is an opportunity for you know
37:33
profession wide strategic shift in
37:35
focus and actually you know any threat
37:37
to jobs would be because people
37:39
need upskilling , not because the jobs will necessarily
37:42
disappear entirely . So
37:44
I think I think the sort of the kind
37:46
of the general consensus view is that yes
37:48
, PR is being infused with AI , but
37:51
wholesale job replacement is
37:53
not happening yet . So
37:55
I've heard people say it's like the introduction of Excel
37:58
and the impact that had
38:00
on the accountancy profession . People
38:02
were fearing that the creation of spreadsheets
38:04
and Excel as
38:08
a software would obviate
38:11
the need for paid professionals who
38:13
would do your accounting . But obviously
38:15
accountancy still thrives . You
38:19
know , I personally am not
38:21
convinced that that's sort of a good analogy and
38:23
I'm not 100% convinced that eventually
38:25
the proliferation
38:28
of AI tools and the development of AI
38:30
won't eventually touch more strategic
38:32
areas such as crisis
38:35
management , c-suite advisory , or
38:37
that certain roles , for example , won't become
38:39
superfluous once companies , both agencies
38:42
and in-house , realize it's just cheaper to use AI
38:44
. So
38:48
if you're an agency and you can
38:50
see over time that actually you don't need so many
38:52
junior associates or
38:54
junior team members you know doing
38:56
the work for you , because actually a lot of that can be done by
38:58
you know fewer people using
39:01
tools , then you know the
39:03
economic logic of it is that
39:06
sort of those roles would
39:08
disappear . So
39:10
I've seen this sort of the idea that people
39:12
become trained as prompt architects
39:15
and I think you know that's a part of why
39:17
people you know would
39:19
traditionally join us .
39:22
It's also bollocks . The
39:25
idea of prompt engineers and prompt architects
39:27
is bullshit . It's
39:31
an industry that might exist for a year and
39:33
then it's gone . I mean , sorry to interrupt
39:35
, but I've done a course , um ,
39:37
from Vanderbilt University , an online course on
39:39
prompt engineering , and it's fun and it's kind of useful
39:42
. But I did a course a
39:45
couple of months ago and the course was
39:47
obviously written in late 2023
39:49
. And when I did the course , it was
39:51
already out of date to
39:53
me because a lot of the things it
39:55
was teaching you to engineer , you no longer need to
39:57
engineer and I think for the same reasons , if you're
39:59
learning to engineer things now , you
40:01
know those things will be . You
40:03
won't have to . The whole point of the advancement
40:05
of the models is that you'll be able to speak
40:08
in a natural way and it will understand and be able to
40:10
. You know prompt , or you can even just tell it now
40:12
give me the prompt to do this and it will give you the prompt
40:14
. Then you give it back the prompt and then it does it . So , yeah
40:17
, I , I think the idea of any
40:19
roles you know , or not necessarily
40:21
roles . I mean , there might be roles but the fact that you can go
40:23
and become and have a career as a prompt architect
40:25
or prompt engineer is for the
40:27
birds . Yeah , I totally
40:30
agree .
40:30
I mean , I thought that right from the start
40:32
with things like prompt engineering . Even
40:35
between , I think I'll use Midjourney as an
40:37
example , between Midjourney I think
40:39
it was two and three , the need
40:42
to do any kind of lengthy
40:45
prompt engineering to get the
40:47
model to output images just went
40:50
away . It
40:52
went from
40:52
"you have to be really specific about how
40:55
you prompt it" to "you can just tell it
40:57
you want a picture of whatever" , like a
40:59
bird on a mountain or something .
41:01
Sorry , just to say that I think learning
41:03
to prompt is useful and I think
41:05
studying these courses is useful for helping
41:08
you to be able to prompt better . And , you
41:10
know , I taught , uh , ChatGPT
41:12
a language I wanted to use so
41:14
that I could prompt it quicker to
41:16
actually give me information
41:18
for this show . So I could give it three
41:21
asterisks , followed by a word and a number
41:23
, and three asterisks , and it would then give
41:25
me information within a timeframe
41:27
on a certain thing . And it's quite fun and it allows
41:29
you to do things . But the idea that that would
41:31
be something that is useful enough
41:34
to be a career or a job , I think is , yeah
41:36
, I think it's
41:38
a non-starter .
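[Editor's aside: Matt's asterisk shorthand is essentially a tiny prompt template. As a hedged sketch, here is what expanding such a convention locally might look like in Python; the exact grammar and the expanded wording are assumptions, since only the "***word number***" shape is described in the episode.]

```python
import re

# Hypothetical expander for a personal prompt shorthand of the form
# "***topic number***", read here as "developments on <topic> over the
# last <number> months". Both the pattern and the template are invented.
SHORTHAND = re.compile(r"\*\*\*\s*(\w+)\s+(\d+)\s*\*\*\*")

def expand(shorthand: str) -> str:
    """Turn a shorthand string into a full natural-language prompt."""
    m = SHORTHAND.fullmatch(shorthand.strip())
    if not m:
        raise ValueError(f"not in shorthand form: {shorthand!r}")
    topic, months = m.group(1), int(m.group(2))
    return (f"Summarise the main developments around {topic} "
            f"over the last {months} months, with sources.")

if __name__ == "__main__":
    # Prints the expanded natural-language prompt for a sample shorthand.
    print(expand("***robotics 6***"))
```

The point Matt makes still holds: as models get better at inferring intent, conventions like this become conveniences rather than a career.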
41:39
Sorry , Dan . Uh , so
41:43
I think sort of the , the acceptance
41:45
and adoption within pr will will have
41:48
an impact . Um , as
41:50
I said before , you know there is a
41:52
hesitancy at the moment and a nervousness
41:54
, but I think within the next sort of six
41:56
months , you know , 12 months , I
41:58
think that will slowly sort of ebb away and
42:01
then actually the sort of the industry itself will sort
42:03
of start to see the impact within the sort of the structures
42:05
of companies and within the sort of the the industry itself
42:07
on , uh , you know , from from ai
42:09
. I think the second sort of slightly sort of
42:11
linked aspect of that is the
42:14
reaction of in-house teams
42:16
, but also you know clients who are , you
42:18
know , in-house teams . You know at the moment there's
42:20
probably not a lot of awareness of how
42:23
AI is being used , sort of both
42:25
within , you know , within companies
42:27
, but by sort of external parties .
42:33
Possibly , even if there was full awareness , clients would still feel they need
42:35
that . You know , the access to the sort of the top-level advisors , you know , the people with , you
42:37
know huge experience in sort of
42:39
certain fields , you know , and in
42:41
certain with certain capabilities , for example
42:44
, around crisis and or sort of political
42:46
advisory . You know that I
42:48
, I can possibly see that changing
42:50
if , if , for example , 12
42:53
months down the line , a few years down the line , someone produces
42:55
something called the boardroom AI
42:57
advisor , which is perhaps built
42:59
into companies' business
43:02
continuity plans and , you
43:04
know , provides that PR function . It's sort of
43:06
built in . In terms of risk forecasting
43:08
, you know you can assess reputational
43:10
risk scenarios , provide
43:12
playbooks , because I I mean , we're all using playbooks
43:15
, so nothing is sort of entirely original and
43:17
you know , and then even execute these plans , you know
43:19
, linked to media databases
43:21
, linked to sort of you know , social media
43:23
channels , could basically sort of assess
43:26
kind of the ongoing sentiment
43:28
around a particular issue and prompt
43:31
you know prompt responses
43:33
that go public . I mean , you know that's way
43:35
off , but you know that could be another game changer . Ultimately
43:46
, it comes down to a point that I think
43:48
probably is key to adoption within
43:50
the industry and , you know , beyond
43:52
PR , beyond communications
43:55
, and , you know , for the overall adoption of AI
43:57
and that's , you know , trust , and we can talk about
43:59
that a bit more if you want . Yeah , I just want
44:01
just before we do , because I think trust is a
44:04
a massive one
44:06
, not just in this industry , but I
44:08
just want to go back .
44:09
We talked about crisis management , so I think crisis management
44:11
actually is a really good example of one where it
44:13
seems initially , you know , I
44:16
had an example from the
44:20
Institute for PR . It was from a link from
44:20
them about AI tools that are being used to monitor
44:22
sort of real-time data and
44:25
detect potential crisis
44:27
, allowing companies to respond swiftly
44:29
and effectively and kind of mitigate damage
44:31
. But it's about detecting them . It's not
44:33
about you know , giving you the advice on
44:35
how to deal with it . And I think a great example
44:38
is you know you work in China . There's a very specific
44:40
environment in China where things which maybe
44:42
somewhere else would not be an issue the way
44:44
that you , you know , refer to the mainland
44:47
, a certain island , an administrative
44:49
region , you know can create
44:51
huge crisis and
44:53
therefore having that requisite
44:56
knowledge and understanding the nuance
44:58
and the political situation is
45:00
really , really important . But I think we
45:02
always talk about how you know the
45:05
kind of advancement of things and I think part
45:08
of the problem I can see is that if I
45:10
was in your organization and I was
45:12
adopting AI , I would build
45:14
an offline LLM
45:18
system within your organization that
45:20
only had your data in there and
45:22
was picking up all your data and basically , every
45:24
time you handle a crisis , you're telling it about
45:26
the nuances of the situation in China and
45:29
you're training it to basically do your
45:31
job , and every time you do
45:33
a good job you're training it , you know
45:35
, one more step to you losing your job , and
45:37
that was the example we gave a couple of weeks ago , that the
45:39
guy who you know lost his job and was like , hey
45:42
, it was my data that was used to train the AI
45:44
that's now replaced me . I think when
45:46
you say a long way off , I don't know , I
45:48
don't think it's a long way off , I think it's maybe a couple of years
45:50
off , but you can easily see a
45:54
model being created that allows
45:56
you know within an organization
45:58
for it to pick up the nuance , to be able to do
46:01
most of that crisis management . And then again
46:04
, you don't remove people completely . You
46:06
might still be there as the kind of last step in the chain
46:09
, because we need someone to blame when it all goes wrong
46:11
, but you're certainly able to take out
46:13
a lot of people in
46:15
the chain and we're able to , you know
46:18
, massively reduce the teams that are working
46:20
on it .
46:20
So it was not so much a question .
46:22
It was more a kind of observation no
46:25
, no , I think you're right .
46:26
I think , yes , it sort of , and
46:29
again it probably comes down to the trust issue that we're talking
46:31
about . But you know the , the sort
46:33
of the , I
46:35
guess the framework for risk is
46:37
programmable . You know , and
46:40
in terms of the red lines
46:42
you're talking about , when it comes to sort of operating
46:44
in China , you know they're already well known , they're already sort
46:46
of well publicized and published
46:48
. So there's no reason why if
46:50
you typed in , you know what
46:53
are the three red lines that companies need to bear in
46:55
mind , when you
46:57
know , when operating in China from a reputational
46:59
risk point of view , then that probably already exists
47:01
there , so
47:04
you can set the parameters of your risk , you can
47:06
monitor that risk , and
47:08
then it's
47:11
easily sort of programmable . You know
47:13
how would you respond to that and perhaps , you
47:15
know , perhaps a tool would come up with based
47:18
on all sort of the inputs , you know , three possible courses
47:20
of action and maybe there's a person at
47:22
the other end who makes that decision , but eventually
47:24
maybe it's just another ai tool that makes that
47:26
decision and then passes the , you
47:29
know , bypasses the sort of the
47:31
need for human interaction at all . I mean , that's
47:33
theoretical and , given
47:36
sort of risk issues , you know , people
47:38
probably wouldn't want to hand that over entirely
47:40
, sort of , you know ,
47:43
without any human input . But you could
47:45
, as you say , you could do away with a lot of people in the chain . You're
47:47
giving me , uh , you're giving me loads of great business
47:50
ideas here , Dan . Yeah
47:53
, I went in on that . And so
47:55
, yeah , I think sort of a lot of
47:57
this , I think , within the PR industry , but , you know , possibly
47:59
beyond that , comes down to trust , because
48:01
currently nobody really trusts AI to get
48:03
things right , um , and
48:05
there's too much sort of uncertainty . So the
48:07
logic is that you'll always need a sort of a human guiding
48:10
hand and , as I said , you know
48:12
the wider environment , you
48:14
know , with AI comes AI
48:16
risks , um , deepfakes , misinformation
48:19
, you know that's potentially new avenues of
48:21
business , uh . So
48:23
you know , maybe the trust will never materialize and
48:25
you'll always need this sort of this . These human custodians
48:28
and I know that sort of the professional
48:30
bodies , you know , actually see a role for the PR
48:32
industry for , you know , advising on governance
48:35
issues , about the ethical use of AI , you
48:37
know regulatory issues , um
48:39
, you know , almost sort of setting themselves up
48:41
as the reputational authority , uh
48:43
, around AI . You know that is the
48:45
kind of the million dollar question . I guess that sort
48:47
of trust is the inhibitor
48:50
. If you remove that inhibitor , then you know what's possible
48:52
. And actually , I mean , before we get to
48:54
that , I do think sort of there's probably a silver lining
48:56
. And actually , jimmy , what you were saying
48:58
in terms of the business ideas , I think actually
49:00
sort of in the next few years you might see actually
49:02
a new wave of entrepreneurialism , uh
49:05
, you know , within the industry . Currently
49:08
, you know you've got lots of big companies . There's
49:10
economies of scale . You know they sort of need
49:12
the expertise teams . But if you
49:14
know , if AI tools are doing
49:16
a lot of the specialist work , then actually you
49:18
could see sort of you know one man bands
49:21
, smaller PR companies that
49:23
actually can do the
49:25
work of a huge agency because actually you
49:27
know they can offer a full suite of creative
49:29
and advisory services .
49:31
Yeah , I was gonna say I mean , I said it half
49:33
jokingly , but exactly that , like I think there's two
49:35
. I think there's two different things . When we talk about
49:37
jobs on the podcast , there's
49:40
two different ways that jobs might be affected
49:42
. One is like adoption within existing
49:44
industries , like within , for example
49:47
, within existing pr companies , things like that
49:49
, um . The other is exactly
49:51
that . It's the kind of the , the
49:53
entrepreneurs and the innovators and the disruptors
49:56
that come in and just set
49:58
something up that just out competes
50:00
because it's they've you know , they've
50:02
figured out the technology . It's just based on ai
50:04
. Maybe the , the
50:07
trust thing is still an issue , and so , you know
50:09
, certain entities need to rely
50:11
on , um , you know ,
50:14
corporate companies or PR companies
50:16
that have humans in the loop , but
50:18
it's sort of you know you
50:20
, you get this kind of lowering of the bar where
50:22
actually services like this
50:25
probably become accessible to a broader
50:27
group of people much cheaper as well
50:29
.
50:30
They don't need economies of scale either , do they ? Like
50:32
, they don't need economies of scale . You could actually argue that
50:35
a one-man band or a small company can
50:37
do things more efficiently because they don't have all the overheads
50:39
of a big PR firm . No , exactly
50:41
, yeah .
50:43
I think there's potential for massive disruption
50:45
in that kind of sense as well
50:47
, and I don't think we've seen that
50:49
yet , I think we're still . You
50:52
know , there are some companies that have set up , that have done
50:55
that , are doing this kind of one-man band sort of thing
50:57
, and I know it's been talked around that
50:59
you know , in the next , maybe in the next
51:01
five , ten years , we'll see the first sort
51:03
of you know company that's just one person
51:06
and a bunch of ais , who becomes
51:08
worth you know billions of dollars
51:10
, because of exactly that
51:12
, because they can just automate everything , but it's
51:14
, it'll be interesting to find out perhaps
51:17
leading into that .
51:17
I think , you know , obviously
51:19
PR is an industry in isolation
51:22
and , you know , the other side
51:24
of the coin is the mass
51:26
media market and you know , I think sort
51:28
of if you're going to have a conversation about , about
51:30
sort of pr , then you need to
51:32
have a conversation about , you know , what environment
51:34
is PR operating in ? And , you know , the wider
51:37
, you know the wider media environment
51:39
and you know that's probably a discussion for another
51:41
podcast about . You know how will people consume
51:43
media and what , what
51:45
will media be and how ai
51:47
will influence that . Um
51:49
, you know , and you know what
51:52
is the role of journalism sort of in the ai world
51:54
and , as I said , you know in some ways
51:56
that will build into the kind of , you may
51:58
actually see a sort of a reaction where , because of
52:01
the proliferation of content and
52:03
, you know , imagery and video ,
52:06
it's
52:08
very , very hard to sort of , uh
52:11
, pinpoint where it's originated , and , you know ,
52:13
concerns around misinformation and deepfakes
52:15
. That may
52:17
mean you're in a kind of
52:19
a virtuous
52:22
circle where , you know , people will
52:25
never fully trust the media and therefore you'll always need
52:27
, you'll always need , sort of PR
52:29
, I guess a PR industry , to
52:31
help , you know , companies
52:33
, organisations , but also just the wider public , to
52:35
navigate , you know , to navigate that sort of
52:37
uncertainty . So , yes , I mean . So
52:40
trust for me is the kind of the , the sort of the
52:42
inhibiting factor , and it'd
52:44
be interesting to get your guys views on at
52:47
what point do people start to trust ?
52:49
AI ? I mean , I think I
52:51
think never , and I
52:53
agree with you . More so , I've
52:55
been thinking about it a lot the last two
52:58
days , since we kind of exchanged notes actually and
53:01
I think , yeah , I think it's potentially
53:03
the biggest barrier
53:05
or sort of challenge for AI to overcome
53:07
. Something
53:13
that really stood out to me I can't remember who it was , it was a member of the general
53:15
public or a comment on a board but someone had said we
53:17
never asked for this . They were talking about AI
53:19
and they were saying we never asked for this . No one asked us whether
53:22
we wanted this . And okay
53:24
, yeah , jimmy said yeah , well , that's the same with everything
53:26
. You know , we don't ask for it and I agree
53:28
. But you know we're being kind of given
53:31
this thing that we're told , hey , this is going to change the world
53:33
, you just got to accept it , it's going to happen
53:35
to you and so , yes , we are
53:37
going to have to accept it , and it is
53:39
going to be part of our lives and it's going to make fantastic
53:42
positive changes and it's going to potentially threaten
53:44
, you know , the existence of humanity
53:47
and all of these challenges we're going to have to face . But
53:49
if you start on that basis that people feel
53:52
this is being imposed on them . And
53:55
then you take the distrust that we have in the
53:57
world at the moment you know , I think
53:59
quite rightly in institutions
54:01
and authority , and you put those
54:03
two things together and then you
54:05
say you have to trust this thing . So
54:07
let's , for a second , throw out the
54:10
fact that we're talking about trusting a
54:12
superintelligent , you know , form that
54:14
has a level of intelligence
54:16
we've never seen before , you know , at some
54:19
point in the future , and may have its own wants and wills
54:21
and desires . Forget that . You know , that's a way
54:23
off at the moment . But someone's controlling
54:26
AI . You know big
54:28
tech firms , governments , whoever it is , the
54:30
military , whoever's got control of those
54:32
. I think the assumption for most
54:34
people is that ai is being
54:36
run by them , whoever
54:38
they are , and therefore , how
54:41
do you overcome that trust issue ? It's
54:43
fine when you're using it for things
54:45
which are , you know , potentially
54:47
fun , or
54:49
frivolous , or , you know , semi-useful
54:52
, or even for things where , you know , there seems
54:54
to be a lot of trust in science and healthcare , that
54:56
it will be a positive there . But when
54:58
you're having to make a decision
55:00
that affects the potential
55:03
future of your organization , for example , or
55:05
the future of your safety . You
55:07
know , getting in an autonomous vehicle you
55:10
might get in an autonomous vehicle as a normal
55:12
private citizen , but if you're someone who knows
55:14
that there are people who are , you know , out
55:17
to get you , are you ever going to get in an
55:19
autonomous vehicle ? And I think it's
55:21
the sort of the fears of society
55:24
in general , the distrust that's out there . That
55:27
, for me , is why we will probably never
55:29
overcome that barrier of trust . I
55:31
say never . I think we always say on the podcast
55:33
we should never say never . We're talking
55:36
in a finite amount of time
55:38
, so let's talk in our lifetimes . But
55:40
yeah , I think trust is absolutely
55:43
the biggest barrier and I think you raise it in the PR
55:45
industry . I think you're right , because you
55:47
are putting your business's future
55:49
and you're putting a crisis
55:52
or you're putting the reputation of your
55:54
business in the hands of a person
55:56
or an organization or an ai tool
55:59
.
55:59
But it does translate across the entire spectrum
56:02
, I think . Yeah , and I think you might
56:04
also then have a kind
56:06
of a dual-track world where you know
56:08
there's an acceptance , that sort of a lot of the information
56:10
that's out there is produced by ai , but actually there's
56:12
a kind of almost like a quality mark that
56:14
goes with it , you know : this
56:16
article , this
56:18
product , this content
56:20
has been produced 100%
56:23
by a human being , um
56:25
. And I think
56:27
you're already seeing
56:30
news websites . I
56:33
think well , I don't know whether people are being
56:35
compelled to or whether it's just , you know , an
56:37
ethical
56:39
decision to indicate where
56:41
articles have been written with the help of AI
56:43
, and that it may become necessary , you
56:46
know I don't know whether through regulation or otherwise
56:48
to indicate where information has
56:50
been produced with the help of AI and to what extent
56:53
.
56:53
I think in the EU's AI Act that
56:55
will be covered , and
56:58
what usually happens with a lot of territories
57:00
is that they follow the EU because the
57:03
EU's the strictest and so
57:05
we may as well just follow what they put in . But I think that
57:07
I'm not 100% , but I'm pretty sure that
57:09
in the Act is exactly that that
57:11
you will need to whether it's images
57:13
, you know stories , articles everything
57:15
will need to be labeled to be quite clear that it's produced
57:18
by AI . And you're right in terms of , you
57:20
know organizations taking that decision
57:22
. So The Economist , for example , has
57:24
taken a kind of
57:26
you know editorial policy where they will quite
57:29
clearly state what has used ai and what
57:31
hasn't . I think a lot of , I'd say , reputable
57:34
organizations will choose to do the same
57:36
thing . Don't you think , though sorry
57:38
to jump in , but as
57:40
ai becomes more and more ubiquitous
57:43
?
57:43
we talked um in the kind of introduction
57:45
to the episode about actually how few people have
57:47
used ai or heard of it or
57:49
various things , and it's actually still
57:52
relatively low numbers . Do
57:54
you not think , as ai becomes more and more ubiquitous
57:57
, though , what isn't
57:59
going to have had AI
58:02
used in its production
58:04
, because I feel
58:06
like it's going that way . I mean , okay , to give you
58:08
a concrete example :
58:10
Google are building AI into their search function
58:13
right now , and so if you use
58:15
Google search to assist
58:17
you with finding information , does
58:19
that mean you have to label it as AI
58:22
assisted ? I'm curious
58:24
about this , because I genuinely think it hasn't
58:26
created it .
58:27
I think the issue here is about whether it's a creation
58:29
of AI , as opposed to AI using or
58:31
assisting you in
58:34
carrying something out . I think the issue here is about
58:36
intellectual property and whether an
58:39
article or a piece of music
58:41
or an image is AI created
58:43
. I think that's where it is , because it comes back to what Dan
58:46
mentioned about deep fakes and stuff . I think that's
58:48
probably at the heart of it , isn't it About how
58:50
do you make sure that people know
58:53
what's AI and what's not ? Actually , if the
58:55
search is helping you do it , but the AI is not
58:57
producing the content , it's just
58:59
helping you with the process , I don't
59:01
see what the issue would be with that .
59:03
But I'm still not clear . Like , if
59:05
you create an article using AI but
59:08
you just tweak a few bits , then
59:14
is it not created by AI ? I think it's a really grey area actually
59:16
.
59:16
Yeah , you're right . No , you're bang on there . I just read
59:18
it and I guess it will .
59:19
There will be someone , you know , there'll
59:22
be lawyers and judges who probably argue where the fine line
59:24
lies between assistance
59:26
and creation . You
59:29
know , that will be the key
59:31
issue . I think that it will be a badge of honour to
59:33
say , you know , that 100%
59:35
of this article or 100% of this content
59:37
was produced .
59:39
You know , in an analog way , I wanted
59:42
to finish the episode on a on a quote
59:44
that you sent
59:47
me yesterday . I thought it was longer ago than
59:49
that , but it was yesterday actually . So
59:51
we were talking about the
59:54
sort of conversation today , what we might talk about
59:56
, and you mentioned trust , and you said
59:58
, ultimately , if trust wasn't an issue , there's
1:00:00
no part of communications that couldn't be done
1:00:02
by ai , and I guess , I think
1:00:04
not now , but , you know
1:00:06
, two , three , five years in the future . I
1:00:09
think that applies across almost
1:00:11
every job and probably almost every task
1:00:14
, and that's why I think , you know , the
1:00:16
point that we've made here is that trust
1:00:18
is an issue . So if
1:00:20
trust wasn't an issue , we could do lots of things . But trust
1:00:22
is an issue and therefore I
1:00:25
do think you know you're quite right , it will
1:00:27
be a barrier for certain roles in communications
1:00:29
, but it would be a barrier for a lot of things . And I
1:00:32
still think that sort of self-driving
1:00:34
vehicles thing is a great example
1:00:36
. Would you trust the self-driving
1:00:38
vehicle ? It's not about the
1:00:40
technology of the vehicle , is it ? It's
1:00:42
your trust that you're putting in who's
1:00:44
got control of that vehicle or what
1:00:47
has got control of that vehicle , and that's
1:00:50
what I think is the issue with trust . It's
1:00:52
not about the technology
1:00:54
, it's not about the ability
1:00:56
of AI to do the role . It's about the biases
1:00:58
and it's about the motivations and
1:01:01
who's controlling it and what are their motivations
1:01:03
. And we're in a , you know , post-truth , post-trust
1:01:06
era . I think you're bang
1:01:08
on . I think trust is going to be a thing
1:01:10
that we're probably going to
1:01:12
touch on more and more , I think , um
1:01:14
, over the next however many months and
1:01:16
years that we do this podcast . But , um , thank
1:01:19
you , dan . It's been an absolute pleasure
1:01:21
, really interesting conversation . So thanks for giving
1:01:23
us your time this evening . Thank you very much . So
1:01:25
that's it for this week . Um
1:01:27
, I want to just finish off .
1:02:02
could have if it's not managed and
1:02:04
developed properly . So I want to ask
1:02:06
people who listen to this show this
1:02:09
week to please ask three
1:02:11
people just recommend our show to
1:02:13
three people three friends , three family , whoever
1:02:15
it is but actually not
1:02:17
just recommend them the show . Recommend
1:02:20
them a particular episode that you think would
1:02:22
be of interest to them and help us to try
1:02:24
and grow the show . It's something we're going to focus on in
1:02:27
the next few weeks and months . This is
1:02:29
not about generating money for us
1:02:31
. This is about generating an impact and if we
1:02:33
don't have an audience , then we can't get that
1:02:35
message across . So it's just a bit of a request
1:02:37
for me that people do that . Three people get
1:02:40
them to listen to an episode . Ideally
1:02:42
they'll subscribe and they'll follow the show . If they're not interested
1:02:45
, then that's fine . But the thing that
1:02:47
we would ask you is just to let people have a listen and hopefully get them
1:02:49
to be involved
1:02:51
, so we'll finish , as always
1:02:53
, with a song . So
1:02:56
thank you , Jimmy and Suno , for that
1:02:58
and we will hopefully
1:03:00
have you listening again next week . Thanks again
1:03:02
, Dan , thanks Jimmy , and take care everyone
1:03:05
.
1:03:14
See you soon . People talk in shadows
1:03:16
, gossip
1:03:18
fills the air , stories
1:03:22
told in countless echoes , but
1:03:24
AI wouldn't
1:03:27
dare . It's
1:03:30
a task of subtlety . Trust
1:03:34
is hard to gain . Whisper
1:03:38
words and empathy AI
1:03:41
can play that game . Ai
1:03:46
won't replace us . Communications
1:03:50
need touch Tapped
1:03:53
into the human force
1:03:56
. Ai just ain't
1:03:58
got that much . Understanding
1:04:25
hearts and minds isn't data or code . In every word
1:04:28
, compassion finds a hand to lighten
1:04:30
the load , and white
1:04:33
lies as lost without a sign
1:04:35
While we see through
1:04:37
all disguise . Ai
1:04:41
won't replace us . Communications
1:04:45
need a touch Tapped
1:04:48
into the human pulse
1:04:51
. Ai just ain't
1:04:53
got that much . I'm
1:04:55
that much
1:04:57
, I'm
1:05:01
that much , I'm that
1:05:03
much , I'm
1:05:07
that much . Thank you , you
1:05:09
.