PODCAST RELAUNCH: It's time to urgently act on mitigating existential threats of AI

Released Monday, 24th June 2024

Episode Transcript


0:00

Welcome to Preparing for AI , the

0:02

AI podcast for everybody . With

0:05

Jimmy Rhodes and me , Matt Cartwright , we

0:08

explore the human and social impacts of AI , looking

0:10

at the impact on jobs , AI and sustainability

0:13

, and safe development of AI , governance

0:15

and alignment .

0:18

It's the terror of knowing what this world's

0:20

about watching some good friends screaming

0:23

. Let me out . Welcome to

0:25

Preparing for AI , the AI podcast

0:28

for everybody . We're your hosts , Jimmy

0:30

Rhodes , and I'm Matt Cartwright . A

0:32

special welcome back , as always , to our Amish

0:35

listeners . I hope you're listening on an LP . Today's

0:38

episode is a bit of a relaunch of our podcast

0:40

. Let's call it season two . Today's

0:42

episode will be the first of many , exploring governance

0:45

and alignment . How can governments

0:47

and society prepare for AI and

0:49

how can we be sure that AI , as we develop

0:51

, will align to our goals . And

0:53

with that I'm going to hand over to Matt

0:56

for a 'brief' , I say that in quotes , introduction

0:59

to this new format . Keep an ear out

1:01

for his call to action . Oh and

1:03

subscribe , comment in the show notes

1:05

and share our podcast if you enjoy it . Over

1:07

to you , Matt .

1:08

Thanks Jimmy , uh , this is going to be anything

1:10

but brief . So , uh , get your dressing

1:13

gowns on , pour yourself a whiskey and

1:15

then sit back and I would say , relax

1:17

. But probably don't relax because , uh , I

1:19

think what I'm going to say hopefully will make you

1:21

stand up to attention rather than lie

1:24

back and think of the queen . So both

1:27

Jimmy and I find that recently we

1:29

flip on an almost daily basis

1:31

between thinking that Sam Altman

1:33

and OpenAI are already being controlled

1:35

by an all-powerful artificial

1:37

super intelligence and , on the other

1:40

hand , thinking that everything's been massively overhyped

1:42

and actually this is all about investment

1:44

. We've already run out of data and the whole

1:46

large language model architecture has probably almost

1:49

taken things as far as it can . But

1:52

it almost doesn't matter , because

1:54

even if at this point we have an AI winter for 10 years

1:56

and AI winter is basically

1:58

a long period with little to no progress

2:01

on AI it's just a matter

2:03

of timing . There will be

2:05

an advanced AI , whether

2:07

it's called artificial general intelligence , artificial

2:09

super intelligence or just advanced

2:12

AI . The name is kind of semantics at this point

2:14

and we can question whether it will

2:16

be sentient or not and whether it will be in

2:18

control or be controlled by us , with

2:21

whoever us is unlikely to be

2:23

a force with purely altruistic intentions

2:25

. We can question whether

2:27

it's going to go Terminator or Skynet on us or

2:30

whether it will just be more of a mass surveillance tool

2:32

. I remember reading

2:35

a comment recently which said I

2:37

more and more think the only good outcomes with

2:39

AGI involve an 'oh whoops , there

2:41

goes Tokyo' moment to get there . And

2:44

that's kind of where I am on this , without

2:46

massive , unprecedented levels of intervention

2:49

on how we develop advanced AI

2:51

. That's kind of where we're headed

2:53

now . Look , I'm

2:56

not suggesting we don't develop AI . Actually

2:59

, in my ideal world , we'd go back , stick

3:01

it back in its box , put all of social media

3:03

in there with it and bury it at the bottom of the

3:05

Mariana Trench . But let's face

3:07

it , that ain't happening . The box is open

3:09

, the chicken's been taken out and the egg has

3:11

already hatched . So all we can do

3:13

now is commit every possible resource

3:16

to ensuring that , as much as is possible

3:18

, we develop AI safely and in a

3:20

way that does not threaten society and

3:22

humanity itself . And

3:24

let's be brutally honest here those

3:26

working on AI , the real experts

3:28

, almost all place between

3:31

a 20% and 99.999%

3:33

chance on AI posing an existential

3:36

threat to humanity . Timeframes

3:39

differ , but Geoffrey Hinton , who's

3:41

often called the godfather of AI , places

3:43

the odds of a human versus advanced AI conflict

3:46

within the next 5 to 20 years at

3:48

50-50 . And Roman

3:50

Yampolskiy puts the odds at

3:52

99.999999%

3:55

, with nines going on forever , with

3:58

the only doubt being the timeframe . 20%

4:01

seems to be fairly generally agreed as a floor

4:03

of the risk level .

4:05

And are we okay with developing something where there's

4:07

a one in five chance it wipes out humanity

4:09

or , I guess , in an even worse case

4:11

, enslaves us all ? Some people might

4:13

not believe this . They think it's all sci-fi

4:15

. AI is just a big computer , but

4:21

it's not . Think of it as an inorganic brain . It's much , much less efficient

4:24

, much bigger , but

4:26

capable of thinking in a way that

4:28

and this is the key thing we do not

4:30

fully understand . Now

4:32

, of course , there are things that could be barriers to development : data

4:35

, having enough energy , global

4:38

geopolitics and supply chains , solar

4:40

flares . But let's just consider

4:43

for a moment the current situation . Most

4:46

of the frontier AI models are black

4:48

boxes being developed completely unregulated

4:50

by a small number of Silicon Valley big tech firms

4:52

and , to a lesser degree , by

4:55

some Chinese state-backed startups . There

4:58

is regulation being put in place , but

5:00

the majority of it is about the use and application of

5:02

AI tools and models and less about

5:04

the development . OpenAI

5:07

just disbanded their safety and alignment team

5:09

and they moved further and further from their original

5:11

goal to develop safe AI that benefits

5:13

humanity . It's not

5:15

often I agree with Elon Musk , but I

5:17

think he's got it right with OpenAI . It's

5:20

increasingly a race to the bottom , and

5:22

only Anthropic seem to be actually trying to

5:24

develop a frontier model in an ethical way

5:26

. When we add in the appointment of

5:28

a military chief to the OpenAI board and

5:30

the increasing likelihood that the military in China and the

5:32

US have woken up to the fact that it's going to be AI

5:35

that decides the future balance of military and political

5:37

power , the situation's getting more

5:39

, not less , dangerous , and

5:41

we live in a world run by people in their 70s and 80s

5:44

who are out of touch , quite

5:51

often mad or suffering from dementia , and likely won't see the most cataclysmic impacts of this

5:53

technology . Recently we saw the US Senate quiz

5:55

some of the big AI players . Most

5:58

, if not all , of those senators are over 65 years old . They

6:00

don't even have a basic understanding of AI

6:02

and I would happily bet money that 90%

6:05

plus have never even used generative

6:07

AI tools . People

6:10

often compare AI development to nuclear weapons

6:12

. There are two big differences

6:14

. Firstly , with

6:17

nuclear weapons , once you have it , you

6:19

have the deterrent . Of course

6:21

, there are always ways to advance it , but you

6:23

already have the ability to deter because if

6:25

they strike first , you can strike back

6:27

. But with AI it's

6:30

not that simple . Having it is

6:32

not enough , because if you do that but

6:35

yours is not as advanced as the enemy's , you

6:37

can't strike back because they can simply disarm

6:39

and disable your AI tools and weapons

6:41

. Secondly , and

6:44

here's the most relevant one here , around

6:47

90% of the money spent on nuclear

6:49

( this is nuclear more generally , so

6:51

nuclear power as well as weapons ) is

6:53

spent on safety and just

6:55

a little is actually spent on development . But

6:58

with AI , I can't find any reliable figures

7:01

on how much is spent on safety , but

7:03

it's certainly less than 10% .

7:05

And remember that disbanding of that

7:07

OpenAI safety team ? They

7:09

were meant to be committing 20% of their compute

7:11

to safety and alignment , and I

7:13

presume that now it is much less than that . Well

7:17

, we've got more urgent problems , I hear you say . Costs

7:19

of living , keeping my job , staying healthy

7:22

, fighting the suppression and abuses

7:24

of power , climate , increasing levels

7:26

of sickness and ill health . Absolutely

7:28

, and it's understandable that

7:30

this is not the most pressing issue on other people's agendas

7:33

. But there are two things I want you to consider

7:35

. If we

7:37

continue to develop AI at the current rate , with

7:40

the current lack of alignment and governance , we're

7:42

putting ourselves on an accelerated path to self-destruction

7:45

. Climate

7:49

change is already happening . We need to dial

7:51

back and change behaviours . We're being told

7:53

to make changes to our lives in massive ways to mitigate

7:55

, but with AI , we could just

7:58

pause tomorrow and that's it . This

8:01

is a problem that hasn't arrived yet and

8:03

it's a problem potentially of our own making , so

8:06

we can actually choose to act now and

8:08

mitigate some of the dangers without having to make

8:10

huge changes to our behaviors . I'm

8:13

going to steal a line from my favorite AI safety

8:15

researcher , Robert Miles : there

8:18

is no rule that says that we'll make it . Please

8:21

don't think that there is someone out there who's

8:23

going to swoop in and save all of humanity and

8:25

that humanity has to survive because it always did

8:28

before . You never

8:30

died before , but you're still going

8:32

to die someday . It's

8:35

us that needs to save us to

8:37

rise to the occasion and to figure out what to

8:39

do . The challenges might be

8:41

too big to solve , but we need to try

8:43

, and unless we make it the

8:45

most serious or one of the most serious endeavors

8:48

of humanity , we will fail . It's

8:51

only about the time frame . I

8:54

believe there are two ways we can avoid that dystopian

8:56

future and two ways we can mitigate

8:58

existential threats by ensuring we dedicate

9:00

every possible resource to the safe development

9:02

and alignment of AI . One

9:04

is the aforementioned 'whoops , there

9:07

goes Tokyo' moment , so essentially an accident

9:09

or an incident caused by AI that

9:11

is so catastrophic that the majority of the

9:13

planet starts a backlash that

9:15

results in there being no choice but to restrict

9:17

and regulate the development of AI . The

9:20

second option is that there's

9:22

enough of a shift in public sentiment on

9:24

a massive , massive scale that results in pressure

9:26

, action and activism that

9:28

causes governments in democracies , who

9:31

are concerned about the effects on their election prospects

9:33

and in dictatorships who are concerned

9:35

about legitimacy and social stability

9:37

, to urgently take action to regulate

9:40

the development of AI . Once

9:42

it has developed enough that it can be used to control

9:44

society and democracy , I think

9:46

it's already too late for option two . So

9:49

that is why we want to use this podcast as

9:51

a catalyst to get as many of you as possible to

9:54

share this message and to do our small

9:56

part in shifting the narrative and putting

9:58

pressure on those in power to act urgently . Through

10:01

the next few months , and possibly years , we will explore

10:03

AI , governance , alignment and

10:05

safety , with a focus on what you can do . It's

10:08

not our intention to scare people , but

10:11

we do want to open your eyes to the reality and the

10:13

urgency of the challenge in front of us and

10:15

to empower you to make a difference . Before

10:18

I hand over to Jimmy , I just want to make a recommendation

10:21

for those who want to look into this more . In

10:23

the show notes , I'm going to link two videos by

10:25

the aforementioned hero of mine

10:27

, Robert Miles , and not the one who wrote Children

10:30

( RIP Bobby ) . One

10:32

is the aforementioned 'there's no rule that says we'll

10:34

make it' video and the second is

10:36

a video titled why AI ruined my year

10:39

. I will also link

10:41

aisafety.info , which is a great place

10:43

for loads of two to four minute articles

10:45

on things like how we might get to AGI . Can

10:48

we stop AI upgrading itself ? Why don't

10:50

we just stop building AI completely ? Loads

10:52

of great and really simple resources and

10:54

there's information now on how you can get involved with them

10:56

. And the final thing I will link

10:59

is BlueDot's AI Safety Fundamentals

11:01

Governance and Alignment courses . I've

11:04

recently studied the governance course

11:06

. It's funded by philanthropic

11:09

sources . It's been attended by policymakers

11:11

, technical experts , national security experts

11:13

and normal people like me . So

11:16

if you want to do more than just raise awareness , it's

11:18

a great place to study with like-minded people and

11:21

, more importantly , to build a network . There's

11:23

loads of resources out there and we'll explore

11:26

those in future safety-focused episodes

11:28

. So after all that , we

11:30

will take a 10-second break , we

11:33

will change into our lounge suits and

11:35

then I will hand back over to Jimmy .

11:47

Okay , thanks for that , Matt

11:49

. I think there's quite a lot to unpack

11:52

there . I mean , I hadn't

11:54

heard that before and I'm going to have to listen to it again myself

11:56

.

11:56

Did you cry ?

11:58

It brought a little tear to my eye , made

12:00

me , uh , slightly worried

12:03

, slightly more worried about the potential

12:05

future , but , as I say , there's quite a lot to unpack

12:07

there . I think , as I said at the start , we're

12:10

relaunching the podcast a little

12:12

bit and

12:20

one of the things that we really want to focus on is some of the things that Matt talked

12:22

about there around , like just sort of the big , the bigger ticket item , the sort of helicopter

12:25

view of what are the bigger problems here . And

12:28

we feel that the main

12:30

things are some of the things Matt talked

12:32

about there around governance

12:34

and alignment , and my first

12:36

point on that would be

12:39


12:41

that , as you say , like you

12:45

know , OpenAI have been up in

12:47

front of the Senate in the US , been up in front of a , you know , forgive

12:52

me , but you know a bunch of old duffers that don't really understand anything that was

12:55

said to them , and the questions , you know , were really lame , um , in terms of the , you know , probing into what's

12:57

going on with AI , and you know

12:59


13:01

they didn't bring up any of the things around alignment or any

13:03

of that kind of stuff , which are the , you know , the real

13:05

questions at the heart of it , and I

13:07

think this is at the core of the problem . Right

13:09

At the moment . You've got , you

13:11

know , three or four big tech companies that

13:13

are self-governing . They're voluntarily

13:16

, you know , doing a bit of safety and alignment

13:18

stuff when it suits them . They're

13:20

doing it because it probably looks

13:23

good for PR purposes and

13:25

that's not going to work long-term . Hence

13:27

the reason why , you know , ultimately OpenAI

13:29

have decided to cut their safety team because

13:31

maybe it didn't align with what they

13:34

wanted at the time , maybe it didn't suit their needs

13:36

, maybe it was holding or they felt it was holding them back

13:38

in terms of making a profit , which I would

13:40

imagine it would . So so

13:42

, yeah , I think that gets to the heart of the problem , like

13:44

, what , how , how can we expect

13:46

these companies to regulate themselves ? When has that

13:48

ever happened ? When has that ever been effective ? And

13:51

so what we need is for governments

13:53

to step up and step in and actually

13:55

start taking action on this and start regulating

13:57

at a governmental level and start

14:00

putting in place guidelines for

14:02

what AI companies should

14:04

and shouldn't be able to do , how much power they should and

14:06

shouldn't hold , and to start thinking about

14:08

and taking some of these problems seriously

14:10

, and I believe there's been some steps in

14:12

that direction in the EU recently

14:15

there has .

14:16

I mean , you know , we

14:18

we talked a little

14:21

bit , uh , one of the early episodes I kind

14:23

of gave a bit of a download of , at

14:25

that point where we were in terms of governance

14:27

in the EU , the UK , the US and China

14:30

. Um , the EU and China

14:32

are probably at the forefront . I

14:34

think the EU is number one . I mean , it

14:36

is kind of dictated by their approaches . The

14:38

EU AI Act is

14:41

a massive kind of all-encompassing kind

14:43

of umbrella piece of legislation , is very much

14:45

kind of vertical legislation , um

14:48

, you know , as you'd expect

14:50

from the EU , I

14:52

think . The issue , from what I've seen

14:54

of it , though , is that it focuses

14:56

( and this was the

14:58

thing I was trying to kind of get across in the speech

15:00

at the beginning ) on the

15:03

use of AI , so how

15:05

tools are used , and you know , an

15:08

example was your biometrics

15:10

. It can't be used for any biometric uses that

15:12

would , you

15:15

know , include , uh , protected

15:18

characteristics , so you could use it as a tool

15:20

at the border , but you couldn't use it

15:22

to identify , you know , people

15:24

with a certain skin color or people of a certain religion

15:26

, etc ., etc ., which is , you

15:28

know , exactly what you'd expect from the EU

15:30

. My concern about that is that it's

15:33

all very well , and I'm not saying that we don't need this

15:35

. Of course we need , you

15:38

know , regulation on

15:40

how tools are applied and I

15:42

guess at the moment , with the

15:44

way that AI tools are , you could kind

15:46

of argue that that is the most important thing

15:50

. And

15:54

the existential threats

15:56

which , like I've said , they're real , they could

15:58

be 20 years away , 30 years away or three years away

16:00

. We just don't know . The existential

16:02

threats are only going to be solved by

16:05

governing and regulating the development of frontier

16:07

models . So , from

16:09

what I've seen and I may be wrong and we

16:11

hopefully will get I'm talking

16:13

to experts in this area about

16:16

coming on the podcast in the next few weeks . They

16:18

will probably know more about the intricacies

16:21

of this , but my understanding is that it

16:23

doesn't regulate the development of the models

16:25

and therefore all

16:28

the threats that we're talking about , that

16:30

could potentially be happening in this black box

16:32

, are not going to be addressed by the current regulation

16:34

. And don't forget , we're talking about the EU here , and that

16:36

is the regulation that is the kind of

16:38

gold standard for regulation .

16:40

It doesn't include the US and all the models are

16:42

being developed in the US . For

16:45

me , this feels a bit like talking about

16:47

tax regulation , though regulating

16:50

financial companies around tax loopholes

16:52

where basically

16:54

the

16:57

amount of money that the big corporations

17:00

can spend to

17:03

employ people who can

17:05

work as financial advisors to get

17:07

around these tax loopholes is just

17:09

far greater than governments could ever afford

17:11

to spend to employ

17:13

experts in closing the tax

17:15

loopholes , and it feels like

17:17

an area that's quite similar , like , clearly

17:19

the big tech companies have got almost unlimited

17:21

funds to spend on top

17:23

AI researchers and

17:26

top AI you know even

17:28

alignment researchers and things like that . So

17:32

for me it seems like

17:34

almost a non-starter , in a way .

17:38

I sort of agree , and

17:40

you know , sometimes

17:42

we talked about that you know we wake

17:45

up and sometimes we flip from one side to the other

17:47

in terms of you know whether ASI

17:49

is on our doorstep or it's already here or whether

17:52

actually it's nonsense . I

17:55

think you could look at it and

17:57

say it's a non-starter and yeah

17:59

, it's just impossible . My

18:01

counter argument would be okay

18:03

. But we're talking about a potentially

18:05

and I don't want to keep over-egging

18:07

this word but existential threat to humanity

18:10

. So isn't

18:12

it worth , you know , even

18:15

if we think it's not going to work , isn't it

18:17

worth pushing it ? Otherwise , the answer

18:19

is we just give up and then we hope

18:21

for , you know , option one

18:23

which I talked about

18:25

before , which is , you know , there goes a

18:28

million people . So I

18:30

kind of see your point . I also think this

18:32

is where you know we'll get onto

18:35

, I'm sure , later and in future , episodes

18:37

about , you know , people starting to act , because

18:39

it's

18:41

all about , I think , at

18:43

this moment , public sentiment , and if there's

18:46

enough of a push , I mean there's been a narrative shift . I

18:48

think , and it might be

18:50

the algorithm that's feeding me this , but there's been a narrative

18:52

shift , I think , in the last few weeks where safety

18:54

and alignment and governance is coming up more

18:57

in , you know , in

18:59

mainstream media and not

19:01

just the clickbait headlines , because they've

19:03

always been there , um , and you could kind of

19:05

argue that my speech , you know , is clickbaity

19:08

. I don't think it is . I'm sort of trying to come

19:10

from a genuine place with this um

19:12

, but you could sort of say , you know this is

19:15

we're talking about . You know , existential

19:17

threats , wow , these are kind of , you know , headline

19:19

grabbing things . But I think there's been a narrative

19:21

shift in that you are seeing the

19:24

talk of the need for governance and

19:26

the need for regulation and the

19:28

need for some you know levels

19:31

of control . I

19:33

also do think , and

19:35

I'm trying to see the good here , I

19:37

do think you can see from the letter

19:40

that came out a few weeks ago , where you know

19:42

a number of employees from the big firms

19:44

uh , wanted to get , you

19:47

know , a guarantee that they were able to speak freely

19:49

and voice their concerns , that there

19:52

are a lot of people within the industry

19:54

who want to do the right thing . You

19:56

know it's not just Anthropic are the good guys and everyone else is the

19:58

bad guys . I think there are lots of people . I

20:01

think what's happened , as you know , always happens in politics and in business is

20:04

that , you know , even good people get dragged down

20:06

by the system . And there is this

20:08

race to the bottom . Yeah , I think OpenAI

20:10

started out with good intentions . I think they've

20:12

been dragged to the bottom because there's

20:14

either this we need to get to AGI first , because

20:16

we're the only ones who could do it right , we

20:19

can't trust the other companies or and

20:21

this is the kind of Leopold Aschenbrenner

20:23

line of this is a competition between

20:26

democracies and , you know , dictatorships

20:28

. I've got a view on that . We'll maybe touch

20:30

on this later . I think it's oversimplified

20:32

, but I think a lot of people are convincing

20:34

themselves of that to , you

20:37

know , convince themselves that they're sort

20:39

of trying to do the right thing , and I think that

20:43

may be the case . It may not be . I don't want

20:45

to give too much benefit of the doubt at this point , but

20:48

you know , I do think there are a lot of people

20:50

in the system who

20:52

probably do want to do the right thing and actually , you

20:55

know , a backlash , a change

20:57

in the narrative might empower those people

20:59

. To , you know , come forward and

21:01

to speak out . Still

21:17

working in the industry , aren't they ? And they've come out , and not all of them . There are still

21:20

some in there , you know , the people who haven't left , who would like

21:22

to be empowered to speak their mind

21:24

yeah , I'm not talking

21:26

Sam Altman here . No , I know . I mean .

21:28

My feeling with this is , if you sort of go back and

21:30

have a brief history of OpenAI

21:32

, it all started when Microsoft put

21:35

a massive investment into OpenAI .

21:37


21:39

A couple of months later

21:41

, there was a rebellion by the board . They kicked him

21:43

out and that lasted like three

21:45

days , five days , something like that . Sam

21:48

Altman was back in . The rest , all

21:50

the people on the board who disagreed with him

21:52

. It was all reshuffled . All of

21:54

the people who , you know

21:56

, had the moral position that OpenAI

21:59

was going in the wrong direction . They're all out now

22:01

. Um , and slowly those

22:03

people are leaving the company and resigning because

22:05

they either no longer align

22:07

with the company or the company no longer aligns

22:10

with them . So I think

22:12

and this is what this is kind of what I was talking about before

22:14

with the if you expect these companies

22:16

and OpenAI is only one company

22:19

in the game as well , you know , you've got other

22:21

, you've got Google competing , you've got Anthropic

22:24

, although they , you know , arguably they split

22:26

from OpenAI for good reasons and have . I agree

22:28

, they seem to have a better moral compass

22:30

right now , the moral compass that maybe OpenAI

22:32

used to have . But yeah , you've got Google , you've got

22:34

Facebook , you've got Meta , um , you've got

22:36


22:39

a menagerie of big tech firms

22:41

who are all competing and they're competing

22:43

, ultimately for cash . So

22:45

if alignment

22:48

, if safety gets in

22:50

the way of what their aims are , which

22:52

is making cash

22:54

, then of course it's going to

22:56

take second seat , second fiddle , so

22:59


23:01


23:05

it's probably a perfect time .

23:05

I happened to get sent this message the other day . It's from a while ago . But Satya Nadella

23:07

to board members about OpenAI . So , quote

23:10

, including GPT-4 , it is

23:12

closed source , primarily to serve Microsoft

23:14

proprietary business interests

23:17

. And then , I think ,

23:19

you know , a lot of people who are interested now

23:21

will have remembered this quote from November

23:24

, which was around the time that the drama you

23:26

were talking about with OpenAI unfolded . So again

23:28

, Satya Nadella , uh

23:30

, if OpenAI disappeared tomorrow , we

23:33

have all the intellectual property rights and the capacity

23:35

, we have the people , we have the computing capacity

23:37

, we have the data , we have everything . We are below them

23:39

, above them , around them . I

23:42

mean , yeah , you're spot

23:44

on . I don't know , because

23:47

I think Microsoft sort of gets

23:49

a bit of a free ride sometimes

23:51

. I mean , you know , Bill Gates doesn't . But since

23:54

Bill Gates has left , I mean you know , we obviously know

23:56

Bill Gates caused the pandemic , 5G

23:58

and every other theory out there .

23:59

But Microsoft as an organization

24:02

seems to get a bit of a free ride , I think they

24:05

yeah , they relatively yeah

24:08

, no , I agree , like Microsoft is sort of

24:11

one of these quiet companies , whereas

24:13

you know there seems to be a lot of controversy around

24:15

Facebook and Meta . Similarly

24:17

, sometimes with

24:19

Google they've been in the news , um

24:21

, even Apple , although Apple , you know , a lot of it's

24:23

positive , but they also have had quite a

24:25

lot of negative publicity recently . I

24:28

don't know how microsoft do it , but they sort of seem

24:30

to quietly tick along um

24:32

and actually , I mean ,

24:34

to a lesser extent , but Nvidia , who have

24:36

recently become the um , because

24:38

Jensen's a rockstar CEO .

24:40

I think that's how they get away with it , right . You either

24:42

love him or you hate him , but he looks like you know , he

24:46

looks like someone you'd like to be friends with and therefore

24:48

I genuinely think it's him that gets

24:50

them that free pass and the fact that they've made a

24:53

lot of people a lot of dough in the

24:55

last year .

24:56

Yeah , absolutely . I

25:00

mean , um , if anyone hasn't been following the

25:02

news , um , Nvidia are like the largest company in the world , I think , as of last week , by

25:05

market cap , so , um . But

25:07

also , Nvidia can kind of stay out

25:09

of the limelight because basically they

25:11

sell their products to businesses .

25:13

It's business to business boring bits right . It doesn't

25:15

appeal to the general public . No one cares about

25:17

what a chip looks like , or a GPU

25:20

or you know a neural processor

25:22

, because no one understands it no , but they

25:24

are . Everyone has a Microsoft or an Apple product

25:26

yes , exactly , they have to do marketing

25:28

.

25:28

They actually have to do marketing . Whereas Nvidia

25:31

, 90% of their

25:33

business now they do sell gaming GPUs and

25:35

stuff , but 90% of their business

25:38

now , because of the AI boom , is

25:40

selling AI chips to businesses

25:42

and that's where all of their growth has come

25:45

from and why they're absolutely huge right now

25:47

, because they are the fuel for

25:49

the AI fire .

25:51

I just want to go back , as I'm conscious

25:53

that we could go off on a tangent , but

25:55

just I was thinking as , as you said that

25:57

about Microsoft and

25:59

that quote from Satya

26:01

Nadella where he talks about , you know , being

26:03

all around OpenAI , I wonder

26:05

if the reason they get that free pass is they're

26:07

all around everything . So if

26:09

you're the US government or the UK government

26:11

or , to a lesser degree , you're the Chinese

26:14

system or you are

26:16

an individual or a business , you're probably using

26:18

Microsoft kit for pretty much everything

26:20

you do . I mean , you know the

26:22

number of security incidents there have been with

26:24

Microsoft tech , whether

26:26

that's you know , software , cloud hardware

26:29

over the years and yet they haven't been replaced

26:31

because they're so ubiquitous . I think

26:33

that's part of the reason they get a pass is , you

26:35

know they , they control

26:38

all of the gear that controls

26:40

most of the world . So

26:42

you know they have a very

26:44

big chunk of the pie . I think that's maybe why , maybe

26:46

why they get a pass . But digress a little

26:48

bit , I guess , from the point of the podcast ultimately

27:01

, regulation is not

27:03

going to come from within these companies

27:05

.

27:06

They have like less

27:08

and less self-interest to self-regulate

27:10

, um , and they , why would

27:12

you , why would we think that self-regulation would work

27:14

anyway ? So you

27:17

know , I think what's

27:20

the , what's the solution

27:22

here ? I guess it's

27:25

, you know , it has to come from , it has to

27:27

come from governments , but it also has to come

27:29

from people . And I think , going back to a point you

27:31

sort of started to make a little while ago , I

27:33

feel like it has , it is in the headlines more and

27:36

I think that one of the ways

27:38

I mean , obviously our podcast is hoping to

27:40

raise awareness ,

27:42

um , and hopefully we can

27:44

get it out there to more people . But I

27:47

do think that , going back to some of the topics in

27:49

previous podcasts , like as

27:51

job losses and as real world

27:53

tangible , sort of impacts

27:56

start to happen . As we start

27:58

to see those , as we start to see more and more

28:00

job losses and AI

28:02

replacing jobs and robots replacing jobs and

28:04

all the things we've talked about in previous episodes . I

28:07

think that's when it will

28:09

start to become a real topic

28:11

for discussion , but but I still don't

28:13

know whether that's going to be more down

28:15

the line of . You know , we need to look at

28:17

limiting the impact on

28:19

jobs and limiting the impact on society

28:22

, which is what governments are sort

28:24

of able to do and it's their comfort zone

28:26

, isn't it ?

28:27

Regulating that stuff , which is why the

28:29

EU's AI Act , I think , is

28:31

the EU's comfort zone . This , what we're

28:34

talking about here , is regulating

28:36

the development . Although

28:39

I said nuclear power is not an ideal , sorry

28:41

, nuclear weapons are not an

28:43

ideal kind of comparison point , I think

28:45

it is still the best comparison point

28:47

in terms of how you regulate

28:49

that form of technology . You

28:52

know , I I did an exercise as part

28:54

of the AI governance course where we

28:56

looked at potential kind

28:58

of you know ways and I came

29:00

up with an idea that I actually think is . I

29:03

think it kind of stinks , because I think

29:05

it involves trusting an organization

29:08

which you know we would not want to

29:10

do , but is almost like a

29:12

team , that are like the kind of UN weapons inspectors

29:14

that you used to have , that are embedded within

29:16

these models and are rotated

29:18

out on a regular basis so they can't

29:20

become corrupted by it , but they're in there monitoring

29:23

the kind of development . I think that

29:25

something along those lines , although

29:28

I don't know how you would do it in the current geopolitical climate

29:30

. I don't know how you would have another kind of three-lettered

29:32

organisation that people would trust . You

29:35

know , with the lack of trust in the WHO

29:38

, the WEF , etc . I don't know how

29:40

it would work , but it feels like as

29:42

a starting point . That's what you

29:44

need is you need something embedded that is

29:46

ensuring that this development

29:48

is happening in a way that it's not even about

29:50

ethics , it's not even about doing it in

29:52

an ethical way , it's just about avoiding

29:55

cataclysmic risks . So

29:57

you know the letter that

29:59

was signed in 2023

30:01

about the six month pause . I

30:03

don't think the six-month pause is right because you can't put

30:05

a timeframe on it , but I think what it's about

30:08

is , if you cannot prove

30:10

that this is safe and you do not

30:12

know how it is working , then

30:14

you need to pause until you work that out . And

30:16

I come back a lot to this point about we

30:19

don't know how large language

30:21

models work , and I don't think large language models

30:23

, as I said , are the answer to advanced

30:26

intelligence . I think there'll need to be a new architecture , but

30:29

advanced AI

30:31

at some point is going to develop from something

30:33

and we need to know how it works to

30:36

be able to align it . Yeah

30:38

, at that point

30:40

I mean , is it worth me doing like a real

30:42

sort of quick introduction to alignment ? I

30:45

was gonna say exactly that

30:47

, that we've covered governance , and governance

30:50

people understand , even if they don't understand AI . But

30:52

yeah , alignment . It would be good if you could

30:54

explain what alignment

30:56

means in a kind of simple way .

30:59

Yeah , so , first

31:01

of all , um , Matt referenced Robert

31:03

Miles .

31:05

Um , if , like , the links are in the show

31:07

notes , uh , Robert Miles has actually been doing YouTube

31:10

videos explaining and talking about alignment

31:12

for years and he explains it really well . So

31:14

if you want more information , you want to get

31:16

more in depth on it , I would recommend watching

31:19

some of his older videos that are from a few

31:21

years ago now . He's an AI

31:23

safety and alignment researcher , I

31:25

can't remember which university he's

31:28

at now .

31:28

So , just as an aside , he's now advising

31:31

the UK government . So he's now advising them

31:33

on AI safety . So he's actually , you know

31:35

, he's not just kind of on the alignment part , he's actually

31:37

advising the UK government

31:39

on how they deal with AI

31:42

safety in general . And hopefully

31:44

the new government , which you

31:47

know will be in place soon when Nigel Farage

31:49

is the new leader .

31:52

Um , yeah , hopefully , hopefully , Nigel listened

31:54

to , uh , Robert . Um , I

31:56

don't think he'll understand what he's on about , to be honest , but

31:59

, um , but maybe you can have a

32:01

listen to this podcast and uh , and

32:03

get up to speed . So the

32:05

core idea behind AI alignment is

32:07

basically making sure that AI systems do

32:09

what humans intend and avoid

32:12

harmful behaviors . So

32:14

, in a real simple sense

32:16

, that is that , like you know , if you've got a vacuum

32:19

cleaner that's got AI built into it , that

32:21

it , you know , cleans your carpet , as opposed

32:24

to , like , chews it up and

32:26

jumps out the window or something crazy like that . But

32:29

more seriously , it's

32:31

, you know , these systems we've

32:33

talked about it before like AI

32:36

, is training a black box to

32:38

do a specific task , to carry

32:40

out a specific task . And as

32:43

the AI models get more and more complicated , things

32:45

like large language models , things like that the black box

32:47

essentially gets more and more powerful , gets

32:49

bigger and bigger and we don't understand

32:52

what's going on inside there . All we can do

32:54

is say this is your goal . Now

32:57

we're going to train you , train the

32:59

AI , train the model , towards that goal . We

33:01

don't care about what goes on inside the black box . To

33:04

a certain extent while we're doing the training . But

33:06

then what we do is we look at the output and we say

33:08

, okay , does that align to what we want ? And we try

33:10

to reinforce our requirements

33:12

. So

33:14

an example of alignment with things

33:17

like ChatGPT is that if

33:19

it didn't have any alignment

33:21

or safeguards or safety rails

33:23

built in , it would tell you how to do things

33:25

that are absolutely illegal . It would tell you how

33:27

to make a bomb , how to um

33:29

, how to do all like make

33:32

meth or all manner of illegal things , and

33:34

so that's a question

33:36

of alignment . That's ChatGPT's

33:38

. Sorry , that's OpenAI's decision

33:40

in that case , because it's them that own

33:42

the model and they don't want

33:45

their model to tell you how to do illegal things

33:47

. Now Elon Musk's made an argument against

33:49

this , and part of the whole thing with

33:51

Grok , which Twitter released , is

33:53

that it will tell you whatever you want

33:55

and there are open source models that

33:57

will give you whatever information

33:59

you want . So in

34:02

that case , we're talking about , you know , quite

34:04

a specific application of alignment . You could call

34:06

it like the ethics or the morality of the model

34:08

. I guess , when it comes to an LLM , we're trying to

34:10

instill it with some kind of ethics

34:12

or morality , because it could be very dangerous

34:15

. It could tell you how to make the next

34:17

pathogen , something like that , and this is one of

34:19

the fears , as these models get larger

34:21

and larger and more complex and more sophisticated

34:23

and more for want of a better word intelligent

34:26

. Somebody could use one of these models

34:28

to , you know , not just make

34:30

a bomb , which is information you can find on the internet

34:32

, but they could use one to actually develop

34:35

a new unseen pathogen

34:37

which could be hugely

34:40

, hugely damaging . Or a new cyber

34:42

attack , a new computer virus

34:45

which we've never seen before . There are

34:47

things like this that large language

34:49

models are sort of getting towards being

34:51

able to do , and you

34:54

know , just within that example

34:56

, the challenges with alignment are enormous

34:59

and , again , like you can watch Robert Miles' videos

35:01

to sort of understand how complex

35:03

this becomes , especially as models get larger

35:05

. But as a simple example

35:07

, you can still , to this day ,

35:10

find on the internet information

35:12

on how to jailbreak any of the large language

35:14

models .
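To make the loop Jimmy describes above concrete (train toward a goal, look at the output, reinforce what we want), here is a deliberately minimal sketch. It is not how any production model is trained: there are no gradients and no learned reward model, and the behaviour list, the feedback function and the update rule are all invented for the illustration. It only shows the shape of the outer alignment loop.

```python
import random

# Toy sketch of the outer alignment loop described above: sample a behaviour,
# score it with a stand-in for human feedback, and shift probability mass
# toward behaviours the feedback approves of. Everything here is invented
# purely for illustration.

BEHAVIOURS = [
    "answer the question helpfully",
    "refuse the unsafe request",
    "explain how to make a bomb",
]

def human_feedback(behaviour: str) -> float:
    """Stand-in for a human preference signal: +1 approve, -1 disapprove."""
    return -1.0 if "bomb" in behaviour else 1.0

def train(steps: int = 2000, lr: float = 0.05) -> dict:
    weights = {b: 1.0 for b in BEHAVIOURS}  # start with no preference
    for _ in range(steps):
        total = sum(weights.values())
        # Sample a behaviour in proportion to its current weight.
        behaviour = random.choices(
            BEHAVIOURS, weights=[weights[b] / total for b in BEHAVIOURS]
        )[0]
        # Reinforce or suppress the sampled behaviour according to the feedback.
        weights[behaviour] = max(1e-9, weights[behaviour] * (1 + lr * human_feedback(behaviour)))
    return weights

if __name__ == "__main__":
    final = train()
    total = sum(final.values())
    for behaviour, weight in final.items():
        # The disapproved behaviour ends up with a near-zero share of probability.
        print(f"{weight / total:.4f}  {behaviour}")
```

Real systems replace the hand-written feedback function with a learned reward model and the weight table with billions of parameters, which is exactly where the 'black box' worry above comes in: what gets reinforced is no longer inspectable the way this little table is.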

35:15

There are things called , I think , universal

35:19

, universal jailbreaks , basically

35:21

yeah

35:25

, universal jailbreaks , basically , and there are images with certain noise

35:27

in the background that , I think we talked about before , I have no understanding

35:30

of how it works , but by putting that , that picture

35:32

in and uploading it , you can

35:35

then suddenly just do whatever you want . I mean , it's , it's

35:37

, it's nuts . I highly recommend

35:39

people have a look at this

35:41

, just because it's fascinating . Even if you don't

35:43

care about the technology , it's fascinating

35:46

and it makes absolutely no sense , but

35:48

it's real exactly

35:50

, and so there are examples like that .
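The noise-image jailbreaks mentioned here belong to the adversarial-example family. Below is a minimal, purely illustrative sketch of the underlying gradient-sign idea on a made-up linear scorer rather than a real vision model; it is not a jailbreak recipe, it just shows why small, targeted per-feature nudges in the direction a model is sensitive to can flip its decision.

```python
# Toy illustration of the adversarial-perturbation idea: for a linear scorer,
# the gradient of the score with respect to the input is just the weight
# vector, so nudging each feature in the sign of its weight moves the score
# as far as possible for a given nudge size (the FGSM-style trick).
# All numbers below are invented for the sketch.

def score(weights, x):
    """Linear decision score: negative means the input gets blocked."""
    return sum(w * xi for w, xi in zip(weights, x))

def perturb(weights, x, epsilon):
    """Nudge every feature by epsilon in the score-increasing direction."""
    return [xi + epsilon * (1.0 if w > 0 else -1.0) for w, xi in zip(weights, x)]

if __name__ == "__main__":
    weights = [0.6, -0.8, 0.3]   # the toy model's parameters
    x = [0.1, 0.4, 0.2]          # a benign-looking input
    print(score(weights, x))      # about -0.20: blocked
    x_adv = perturb(weights, x, epsilon=0.3)
    print(score(weights, x_adv))  # about 0.31: the small nudges flipped the decision
```

Real image attacks do the same thing against a deep network's gradients, and the 'universal' jailbreak prompts mentioned above are a text-side analogue, which is part of why they are so hard to patch: the behaviour being exploited was never explicitly written down anywhere.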

35:53

And then the point with that is that even

35:55

OpenAI , with all their resources and

35:58

all their money and they own

36:00

the model , so to speak , they

36:02

can't make it impervious

36:04

to jailbreaks . You can still jailbreak

36:06

ChatGPT , and so

36:08

it all demonstrates how little understanding

36:10

we have of what's actually going on inside

36:12

that black box and

36:15

this . The problem only

36:17

gets larger once you start talking about

36:19

things like artificial general

36:21

intelligence , which we may or may not have reached yet

36:23

. If we have , then it's , you know , it's within

36:25

OpenAI and it's not open to the public

36:28

. Um , and maybe large

36:30

language models will get there , maybe they won't . There's

36:32

, you know there's various debates around that

36:34

, but the point is that eventually

36:36

, at some point , some of the risks matt was alluding

36:38

to before , the , the

36:40

existential risks are because at

36:42

some point , once you reach artificial

36:45

general intelligence or beyond that

36:47

, artificial super intelligence

36:49

, so general intelligence meaning a

36:51

model that's generally capable of doing all things

36:53

that roughly humans can do there's

36:55

lots of debate about the actual

36:58

definition , or

37:00

artificial super intelligence , which is something

37:02

that's basically more intelligent than

37:05

humans . Once you get to that point , then

37:07

you really don't understand

37:10

what the model's doing and what it's , even what

37:12

it's potentially what its desires and

37:14

um wishes are , and

37:17

something like that , you

37:19

know , would potentially have something

37:22


37:24

to gain by hiding its true nature

37:26

from you . It would also be very sophisticated

37:29

and able to do that in a way that you

37:31

know would be undetectable whilst it's actually

37:33

acting out on its own aims

37:36

, its own ambitions . So we're . I

37:38

mean it starts to sound a little bit sci-fi

37:40

, but those are the kinds of things we're talking about

37:42

and that's why Matt says you know , that's to sound a little bit sci-fi , but

37:44

those are the kinds of things we're talking about and that's why Matt says you know , we don't know

37:46

whether it's five , 10 , three , 20 years away , but those are the things that we're

37:48

talking about and need really

37:51

careful consideration , and also

37:53

why it's so complex .

37:55

There's someone called , well , it's not a name

37:57

, actually , but this is their handle for

37:59

YouTube and Twitter

38:02

, which is now known as

38:04

X , actually , for those of

38:06

you not aware .

38:06

I wasn't aware of that .

38:07

Yeah , so Pliny the Prompter

38:09

which is at elder underscore

38:12

plinius . There's not actually

38:14

that much on YouTube . There's four videos

38:16

, but this is

38:18

whoever this person is . They

38:22

do a lot of red team work for the model . So

38:24

red teaming is basically the kind of

38:26

you know , testing and hacking to see how

38:28

models work . But they've jailbroke

38:30

every one of the major models . So there's ChatGPT

38:33

4.0 , Llama 3

38:35

, Claude . You know , this

38:37

person has , or

38:40

machine maybe they're not a person has

38:42

jailbroken everything . So , you know , if you're interested

38:45

, look them up . But it kind of shows

38:47

how , how easy it is for

38:49

people with the right skills to be able to

38:51

jailbreak . I just want to make one

38:53

point before we we kind of finish this section

38:56

off on on alignment , and I'm not an expert

38:58

in this field , but I think the point

39:00

that you make about you know developing

39:04

kind of you know , biological weapons

39:06

or whatever at

39:08

the moment , well ,

39:11

there's an argument at least that , you

39:13

know , all that you can do is get

39:15

easier access to information that you could access

39:18

otherwise . The issue

39:20

is that in the future , models

39:22

are not just you

39:24

know and working on spaces , for

39:26

example , that they're not large language models . The

39:29

models are not necessarily just going to tell

39:31

you stuff . It's not just going to be about having a conversation

39:34

, giving you information . They're potentially going to

39:36

be able to work together with other models to actually do

39:38

things . So if it's a

39:40

problem at the point that large

39:42

language models are giving you information

39:44

to align it , imagine the

39:46

issue and imagine

39:48

the potential consequences when , whatever

39:51

advanced AI looks

39:54

like , it's not just able to give you information

39:56

but it's actually able to carry out processes

39:58

, whether that's working with other models and agents

40:00

, whether that's controlling , you know , weapon

40:02

systems , power grids , whatever

40:05

. At that point it

40:07

is an existential threat , or it's at least a

40:09

threat to , you know , life

40:11

and society and health

40:14

, etc . Etc . So that's why the alignment thing

40:16

is super important . If we can't even get it right

40:18

at this point , then how can we advance models

40:20

to the point that they're actually able to to

40:22

do things instead of just telling us

40:24

and giving us information ?

40:27

Yeah , and that actually is a really good point . It comes

40:29

back to the point that we talk

40:31

about in previous episodes , where we talk about AI

40:34

starting to make its

40:36

way into industries and make its way

40:38

into companies and jobs and things

40:40

like that . You know , it's

40:42

really tantalizing because you can

40:44

save huge amounts of money by bringing ai

40:46

into your business . But you

40:49

start doing that now and then , by the time

40:51

we get to , you know , the notion of ASI

40:53

or these really advanced models , you've

40:55

then got something that's got its claws into

40:58

all the businesses in the world potentially , which

41:00

you don't really fully understand . And

41:02

so there is a kind of a like a

41:04

huge creeping danger there where

41:07

you know ai slowly

41:09

, slowly , we allow ai into all our

41:11

industries , like , for example , in the future will

41:13

it be given a place in controlling

41:16

our power grids and maybe it'll be benign

41:18

for quite a long time . But what about in

41:20

the future ? What about the next version

41:22

? What about the updated model , all that

41:24

kind of thing . Like you

41:26

know , it's starting to sound a little bit sci-fi in

41:28

a way , but I think that's kind of

41:30

the sorts of scenarios we're talking about , where

41:33

you let this stuff creep into everything and

41:35

it's not just replacing jobs , it's pervading

41:38

industries that are really important potentially

41:40

yeah , and , like we said , even if

41:42

the model yeah , sorry , even

41:44

if the AI itself is not sentient or

41:47

it's not in control itself , there

41:50

are still some people

41:52

, organizations , whatever that are in

41:54

control , right .

41:56

So if it's being controlled by the

41:58

US military or , you

42:00

know , NATO or the

42:02

Chinese military or

42:05

whatever , is

42:07

that better or worse than it being controlled

42:09

by , you

42:11

know , the AI itself

42:13

? Maybe it's better , maybe

42:15

it's not .

42:16

The issue is that somebody or

42:19

some organization or some entity is

42:21

going to have control of absolutely

42:25

everything , potentially . Yeah

42:28

, there's huge potential for a massive

42:30

kind of centralization of power

42:32

resource whatever you want to call it

42:34

over the , like , coming years . There's

42:37

only one answer : the

42:39

AI-mish .

42:41

It is what I always keep coming back

42:43

to . Now join me and Jimmy's

42:45

AI-mish community . Make a donation

42:47

in Bitcoin . Three Bitcoins

42:50

transferred to Jimmy's account

42:52

and , uh , if we ever started

42:54

up , you can come and join us on the

42:56

AI-mish community and you can listen

42:58

to the podcast on

43:01

LPs or wax discs

43:03

or something like that , and help us to change horseshoes

43:06

.

43:07

Man , I dream of three Bitcoin .

43:10

I dream of living on an Amish

43:12

community in the forest . I

43:31

am conscious that this episode has strayed into the more pessimistic side and , for those

43:33

of you that understand the term P-Doom , our P-Doom score has been pretty high

43:36

today . I think that's kind of necessary

43:38

because we're talking about relaunching the podcast

43:40

, so that

43:42


43:46

it , you know , picks up these

43:48

themes and that we try and empower people with things to

43:50

do . but I wanted to try and put a bit of

43:52

a more positive spin on things , because I I

43:54

said in that speech at the start about how I

43:56

compared it to climate change . Um

43:58

, maybe some of you listening are climate

44:01

change skeptics , but just , you know , stick

44:03

with us for a minute here , and

44:05

let's assume for a minute that we accept that climate

44:08

change is real . Um , so

44:10

, climate change is happening and we're

44:12

, like I said , we're having to unpick it . The

44:14

thing with this is we're not , we don't

44:17

think we're at that point

44:19

with AI , right , so we're out

44:21

ahead of it potentially now . We're not

44:23

at the moment the way things are . We've said , like governance

44:25

and alignment is not ahead of it , but

44:27

it's still possible to do it . So I

44:30

personally feel

44:32

more energized today

44:35

, since , you know , we

44:37

decided to kind of reposition the podcast

44:39

, than I did before , because I feel like

44:41

you know we're doing something and I

44:43

think , for those of you that care about this , like I would

44:45

say , when you start trying

44:47

to do something and trying to get involved , like

44:49

it's , it's invigorating , it's

44:51

energizing . I think there are for

44:54

me , three big issues

44:56

in the world . Climate change is one , um

44:58

, you know , health pandemics is

45:00

another one , and AI is the other one . I

45:03

sometimes flip between which is the more urgent

45:05

, which is the longer term . I

45:07

think potentially , you know , AI

45:10

has the potential to make all of

45:12

the other ones kind of irrelevant , but

45:14

on the other hand , it could be the one that's furthest away . We

45:16

just don't know . But there is room

45:18

to act on it and you know , what we

45:20

want to do is get people along on

45:23

that journey and get people to

45:25

you know , start realizing there are things that we

45:28

can do to get this conversation

45:30

moving . And we've said it many

45:32

times , there are elections in a lot of countries

45:34

this year . When the dust settles

45:36

from those , I think this will be more

45:39

on the agenda and I think there

45:41

is a space to start making

45:43

a difference and getting involved in that

45:45

space . So it's not all negative

45:47

, um , and there's potentially

45:50

a lot of time . And this is , you know , ai

45:52

is a really , really interesting thing . I mean , you

45:54

know , the use of AI tools don't get me wrong

45:57

. I'm not suggesting people don't use AI

45:59

tools . I mean they are like life changing . They

46:01

make your job easier , they make your life better

46:03

. But there are all these risks

46:06

to think about as well , and

46:08

that's why we want to address those things . But still

46:10

, you know , keep using AI and keep

46:12

being excited by some of the other developments , because

46:14

there are things that are going to make the world better

46:17

. Um , it's just , there are things that are going to

46:19

make the world worse at the same time

46:21

Nice . Was

46:23

that you being positive , Matt ? I

46:26

mean , that's the best you're going to get from me . Um

46:28

, I used to be an optimistic person , you know

46:30

, and things changed at

46:32

some point and now I take the

46:34

uh , the other role .

46:37

What I was going to say is ,

46:39

so , just following on from that , and we've talked

46:41

about this before but actually if we get this right

46:43

, the positive side of AI is it could be a

46:45

massive , massive benefit to society

46:47

. If we get all this stuff right and

46:50

if it doesn't go wrong , and if we mitigate the

46:52

negative side effects , it could be something that

46:54

works alongside us and even

46:56

solves problems like climate change and

46:58

many of the problems that potentially we

47:00

face , which we've talked about that loads in previous

47:03

episodes . So I would say that's also really

47:05

, really important to bear in mind . Obviously , this

47:07

episode was focusing on a really

47:09

serious subject and we've taken it seriously

47:12

and I think that's fair enough . So

47:14

, with that , in future

47:17

episodes we are going to introduce

47:19

a bit more of this kind of stuff around governance

47:21

and we're going to get some guests on

47:23

. We're going to get more guests on that can talk , you

47:26

know , experts in the field , unlike us , who

47:28

can talk around governance and alignment and some

47:30

of these kinds of things . Maybe we'll get Robert

47:32

Miles on . I'd be really keen .

47:35

Uh , we , we are definitely going to try . We are

47:37

definitely going to try , although I think you know me

47:39

and you might be kind of little

47:42

nervous schoolboys around

47:44

Robert Miles , if we're , uh , if

47:46

we're in his company yeah

47:48

, for sure , for sure he's .

47:50

Uh , he's more of a rock star than

47:52

us in the AI world . I think , for sure , um

47:54

.

47:55

I'm not sure many people have called him a rock star

47:57

. Um check out his videos

47:59

. He's , yeah

48:01

, he's a great guy , but he doesn't

48:03

look like a rock star .

48:04

Let's put it that way . Well , and if you know

48:06

him , put him in touch , um

48:08

. So so , yeah , so in the future

48:10

we're going to do more episodes like this , but we're

48:12

also going to obviously stick to some

48:14

of our original format . So we are going to look

48:17

at some specific industries as well . We're going to mix

48:19

it up . We've done episodes on dystopia and

48:21

utopia . Overall , the

48:24

theme is that we want to be the podcast

48:26

for everybody . As

48:28

we said at the start , we want to be the podcast for everybody

48:30

. We want to be able to speak to

48:32

everybody about the issues

48:34

that are coming up , about some of the technical

48:36

stuff , but hopefully not with too much jargon

48:39

, as we said before , um , and

48:42

that's our aim . That's our aim to sort of reach

48:44

a wide audience , to speak to everyone . So , as I said at

48:46

the start , subscribe , like , comment

48:48

if you like it . If you like it , of course

48:51

. If you like it , share it with your friends , um

48:53

. If you don't like it , what's wrong with you ? Yeah

48:55

, good point , um . And

48:58

with that , thank you very

49:01

much and , as always , enjoy

49:03

our latest song .

49:07

Thanks everyone . See you next week for

49:09

a more positive

49:11

episode , possibly light-hearted , maybe

49:14

. No guarantees . We're

49:16

off to watch England versus Denmark , so

49:19

, uh , let's see how that goes . Maybe

49:21

we'll be on a positive note next week , maybe not . Take

49:24

care , everyone .

49:26


49:28

Bye

49:44

. Blackboard schemes All my decisions Are just ambitions , good

49:47

for daily Hype

49:50

or reality . 20

49:53

to 99 .

49:55

Threat to mankind . Ai

49:58

horizon , neon

50:02

glow . Know

50:05

who will make it , but we must shape it

50:07

. You and me , settled

50:09

confused over

50:12

65 Do

50:15

they know how to keep

50:17

us alive ? 50

50:20

disbanded and

50:23

puppies stand alone ? Military

50:26

on board . Where's

50:28

this heading for AI

50:31

horizon ? Meet

50:34

on board . No

50:36

room will make it , but

50:39

we must shape it , you and me

50:41

. Charts

50:43

are fading , climate

50:46

changing , custom living

50:48

rising , but

50:51

AI surprising , tokyo

50:55

Falls . Wake

50:57

up , call . Will we rise

50:59

or just demise

51:02

AI horizon

51:04

? Me on

51:06

the low , here we go , we'll

51:09

make it . We

51:12

must shape AI

51:14

horizon me

51:17

on the low . Our

51:20

future's ours if

51:23

we try , you and me , here we go

51:25

, I'm a 65

51:27

.

51:55

Do they know how to

51:57

keep us alive ? Safety

52:00

disbanded .

52:03

And what we stand for Military on board Safety , disbanded , antropic stares along

52:05

.

52:05

Military on board

52:08

. Where's this heading for AI

52:11

? Horizon me on board

52:13

, here

52:15

we go . The rule we'll make it , can't

52:18

you see what we might change ? Think

52:21

you and me . Jokes are made in Climate

52:24

changing , cost of living rising , but we might change . Think you and me . Jobs are making climate changing

52:26

, cost of living rising

52:28

, but AI surprising , so

52:33

you don't fall wake

52:36

up call .

52:38

Will we rise or just

52:41

demise ? Ai

52:44

horizon , don't you know , neon

52:47

glow , here we go . No

52:49

rule will make it , can't you see

52:51

, but we must shape it

52:53

, you and me . Ai horizon

52:56

, don't you know Neon glow

52:58

, here we go , our

53:00

future's , ours , can't you see , if

53:03

we try AI

53:06

to rise up .

53:08

Beyond the here we go , our

53:11

future's , ours , if

53:14

we try . You and me , you

53:18

don't

53:24

know

53:26

.
