Rand Hindi: Zama - Fully Homomorphic Encryption in Blockchain Applications & Privacy

Released Friday, 24th November 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:13

Introducing the next generation of DYDX

0:16

and the next version of the DYDX token.

0:19

Welcome to the DYDX Chain. New

0:21

token mechanics mean you can stake to secure

0:23

the network. Staking is fully decentralized

0:26

and controlled by DYDX token

0:28

holders. All fees are distributed

0:30

to stakers. Earn rewards from

0:32

using the DYDX protocol with

0:35

rewards planned for traders and early adopters

0:37

too. Now governance means you

0:39

are in control. Trading has been democratized.

0:42

You can vote on protocol improvements, token

0:45

distributions and more. Bridge

0:47

your DYDX to seamlessly transition

0:49

to DYDX Chain. Bridge now

0:52

at bridge.dydx.trade

0:54

and contribute to the evolution of DYDX

0:57

Chain. Open source and community

0:59

driven. Run your own validator.

1:02

Validating is fully permissionless. Join

1:05

us on our mission to democratize access

1:07

to financial opportunity today.

1:14

Welcome to Epicenter, the show which talks

1:16

about the technologies, projects and people driving

1:19

decentralization in the blockchain revolution. I'm

1:21

Friederike Ernst and today I'm speaking with Rand Hindi,

1:24

the CEO and co-founder of Zama.

1:26

Zama leverages fully

1:28

homomorphic encryption for all kinds

1:30

of privacy stuff that we'll talk about

1:32

in just a second.

1:34

Before we start properly, Rand,

1:36

thanks for coming on. Can you tell us a little bit

1:38

about yourself? Sure.

1:41

Thank you for having me. So my name is

1:43

Rand. I'm an entrepreneur and investor

1:45

in Deep Tech. I started

1:47

coding when I was 10 years old, built my first company

1:50

as a teenager, then did a PhD

1:53

in AI in 2007 before it was actually a cool

1:55

thing to do. I

1:58

was then running an AI company focusing on... privacy

2:00

that got acquired in 2019. And

2:03

since 2020, I've been running Zama

2:05

with my co-founder Pascal Paillier, and

2:08

also do quite a bit of investing in deep tech.

2:10

So my sweet spot as an investor is companies

2:13

that most other investors cannot understand yet.

2:17

Ah, that sounds fascinating. What kind

2:19

of, can you give us some examples of companies

2:21

you've invested in or you're thinking of investing

2:23

in? Sure. At

2:25

the moment, I'm looking

2:26

a lot at biotech,

2:29

specifically psychedelics, longevity.

2:31

My PhD was in AI applied to biology,

2:34

so I can read bio papers

2:36

just as well as I can read AI papers. And

2:38

so I was able to get

2:40

in early on a lot of things that people thought

2:43

were like sci-fi medicine. So

2:45

for example, I invested in a company

2:47

that's doing like a miniature MRI.

2:49

It's an MRI

2:51

machine that's 10 times smaller and cheaper

2:54

than existing ones. So something

2:56

you can put to the back of like an ambulance and things

2:58

like that. When I invested, nobody

3:01

thought it was even possible to build it. And

3:03

now these guys just produced their first MRI

3:06

images.

3:07

Yeah, super cool. And obviously longevity

3:10

has also seen quite

3:12

the boom over the last couple of years. Any

3:15

notable investments there? I

3:19

mean, I've been investing

3:20

quite a few things. My most recent

3:22

investment is in a diagnostics company called

3:25

GlycanAge, where they basically

3:27

analyze some very specific markers in your blood

3:29

called Glycans,

3:32

which gives you a good estimation of your biological

3:34

age, but also of all kinds of different diseases

3:37

that might progress in

3:39

the future. A great company,

3:42

very strong science, 20 years of research, just

3:45

something most people overlooked. And so I always try

3:47

to find those things, you know, those sort of non-obvious bets where the science is very,

3:49

very strong, but nobody really thought about

3:54

making a company out of it. Super cool. I think it's a great company.

3:57

I

4:00

think we will kind of see how this

4:03

segues into privacy in just a bit. So

4:05

basically all of these biotech companies, kind

4:07

of the ones that kind of tell you your biological age

4:09

and kind of your risk markers and so on. I

4:12

would be actually thrilled to kind of

4:14

know where I stand. The

4:17

reason I've never taken any of these tests

4:19

is because I'm concerned about

4:21

the privacy of the data. So

4:24

going from that to kind of like a

4:26

privacy focused space

4:29

is quite a leap and then on some levels

4:32

it's not. So kind of tell us kind of what

4:36

kind of drove you to kind of found

4:38

something so intensely mathematical

4:41

and kind of like computation

4:43

based. So

4:46

basically, yeah,

4:48

what made you do it?

4:51

Basically that dates

4:52

back from when I was 14 years

4:55

old. So at the time

4:57

I was 14 in the 90s. So it was

4:59

the beginning of the internet, you know, people

5:01

were starting to build websites. And since

5:03

I was coding since I was a kid, for me

5:05

it was easy to start building websites. And

5:08

at the time with a friend we had built a social network

5:11

that was quite popular in France.

5:14

So the reason why I'm saying this is because at some point at school

5:17

I got bullied by an older kid,

5:19

right? I mean, you know, when you're 14, the guy is 16, he's

5:21

twice my size, what can I do about it, right?

5:24

And I got so fed up of this guy

5:27

that I thought, well, maybe he's

5:29

a user of my social network. So perhaps

5:32

if I looked into his private messages,

5:34

I could find something compromising and blackmail

5:36

him so that he would stop. So I

5:38

looked into the database and I found

5:41

amazing things about him, which

5:43

I challenged him with and said, hey, if you

5:45

ever mess with me again, everybody's going to know

5:47

about your dirty little secret and your secret crush.

5:50

He never,

5:52

ever spoke to me again or approached

5:54

me. So victory, I

5:56

bullied the bully,

5:57

but then I thought

5:58

something's wrong about what I just did.

6:01

Just because I'm the one operating

6:03

the service doesn't mean I should be able

6:05

to see everybody's data.

6:08

And so from that point, in 1999,

6:11

I knew that privacy was

6:13

gonna be a very important topic

6:15

for the future of the internet and

6:18

of just digitizing everything. So

6:20

as far as I can remember, I've always

6:22

tried to think about privacy just

6:25

as a necessity as something that you build

6:27

by design in everything that

6:30

you put out there.

6:31

That's very true. It

6:34

makes me feel bad for the bully, which

6:37

I almost never do, right?

6:39

But yeah,

6:41

you probably didn't even think about the

6:43

fact that you had cleartext access

6:46

to the messages users were sending.

6:48

Nobody thought about that in the 90s, right? Like

6:51

it wasn't something people talked about. And

6:53

yeah, and so I think

6:56

I don't feel bad about it because if anything,

6:59

that particular episodes eventually

7:01

made me focus on privacy,

7:03

made me create Xama. And if anything,

7:06

more people will benefit from privacy

7:09

because this guy bullied

7:11

me and forced me to confront at

7:13

the time the fact that I had access to

7:15

so much data and that this was wrong. So,

7:18

you know, Silver lining, he stopped bullying me

7:20

and hopefully a lot of people will end up benefiting

7:22

from that. So

7:24

you just said that you co-founded with

7:26

your CTO. Pascal, yeah.

7:30

Yeah. How did you guys meet?

7:34

Pascal, Pascal Paillier,

7:36

is super OG in homomorphic encryption.

7:38

You know, he's one of the early inventors of

7:40

homomorphic encryption. He was the first

7:43

one to invent a homomorphic

7:45

addition. So the ability to add encrypted

7:47

numbers in the 90s as well, actually.
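
For a concrete sense of what that "homomorphic addition" means, here is a minimal toy sketch of Paillier's scheme in Python (tiny hard-coded primes, purely illustrative, not secure): multiplying two ciphertexts decrypts to the sum of the two plaintexts.

```python
import math
import random

# Toy Paillier cryptosystem: tiny hard-coded primes, for illustration only.
# (A real deployment uses ~2048-bit primes; this is NOT secure.)
p, q = 17, 19
n = p * q                    # public modulus
n2 = n * n
g = n + 1                    # standard generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 100
# Multiplying ciphertexts adds the underlying plaintexts.
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b  # 142, recovered without ever seeing a or b
```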

7:50

So when I was doing my social network, he was

7:52

inventing that. And

7:54

him and I have been friends for a few years. You know, we

7:56

kind of like hang out and we kept

7:58

in touch.


7:59

He heard that I was selling my

8:02

previous company. He reached out and he said,

8:04

hey, I just had a breakthrough in homomorphic

8:06

encryption. I think that was the right time to build

8:08

a company. You want to do it together. And

8:11

quite frankly, initially I thought, I don't

8:13

know, man, I'm kind of like tired. I want

8:15

to do the whole like, sell my company, take

8:18

six months, travel the world kind of thing. But

8:20

then I thought, how often do you get a chance

8:23

to build a company at the right

8:25

time with the perfect co-founder

8:27

based on that new scientific breakthrough? And

8:30

so I just couldn't resist. So a week after

8:32

I sold my company,

8:33

we started Zama.

8:35

That was 2019?

8:38

Or was it before?

8:40

2020, officially 2020. 2020, sorry.

8:44

Before the ZK boom. So kind of, if

8:46

you look at kind of like

8:48

the trajectory of kind of ZK technology

8:50

and kind of the mind space it's taken up in

8:55

the ecosystem, that

8:57

was just the very, very beginning. And

8:59

you guys weren't even primarily working on ZK

9:02

stuff, but fully homomorphic encryption, which is kind

9:04

of like two pay

9:06

grades beyond, you know,

9:08

vanilla ZK. We'll

9:11

talk about kind of the differences between all of these

9:14

technologies in just a bit. Back then,

9:16

the only project that I'm aware of who was

9:18

aiming to use fully

9:19

homomorphic encryption was Numerai.

9:21

I don't know whether that's still going, but kind

9:24

of when you came

9:26

out and said, I will build

9:28

something on this, and basically,

9:30

fully homomorphic encryption has been around for

9:33

50 years or so. But basically making

9:35

it usable, this has kind of been

9:37

the whole challenge. It's not really

9:39

been done so far. So kind of what,

9:42

how did people react when you came out with this?

9:46

So fully homomorphic encryption

9:48

was actually only figured out in 2010.

9:51

Before that, you could only do either additions

9:53

or multiplications, but you couldn't

9:55

do both. 2010 is

9:58

the first time that someone invented a

10:00

homomorphic technology where you could

10:02

do any kind of computation. The

10:05

problem is it was extremely slow, very,

10:08

very slow. It was very hard to

10:10

use for things that required

10:12

complex computation. And unless

10:15

you had a PhD in cryptography, you couldn't really

10:17

use it. So the big contribution

10:19

from Zama is that we

10:22

actually created a homomorphic scheme that

10:24

is very fast that

10:26

can do any kind of computation. So

10:28

it can do any kind of thing you want to do

10:31

without any sort of difference compared to

10:33

data that was not encrypted. And it's

10:35

very easy to use. So as a developer,

10:37

you don't have to learn anything about cryptography.

10:41

And so I think, you know, from a mathematical perspective,

10:44

we've solved FHE. Now

10:47

it's really about making

10:49

it used by as many people as possible

10:52

and improving performance all

10:54

the time.

10:56

I think maybe this is the time to kind of explain all

10:58

of these terms in depth. Right. So

11:01

basically, in terms of privacy, people tend to think

11:03

in different tiers. So tier one is ZK proofs,

11:05

which we have talked about on the show many

11:08

times. So basically, it's

11:10

this idea that you can prove

11:13

your knowledge of something without revealing the

11:16

thing itself. And

11:19

typically generating the proof

11:21

is computationally expensive, but can

11:23

be done off chain and then anyone can verify

11:26

it on chain. Tier two that we

11:28

don't actually talk all that

11:30

often about is multi party

11:32

computation. And then in my head, kind of tier

11:35

three is kind of like the holy grail tier

11:38

for fully homomorphic encryption. Can

11:40

you explain the differences between ZKPs

11:44

and MPC and FHE?

11:47

That's a good question. They're fundamentally

11:50

very different technologies. And I believe

11:52

that all three of them have a role to play in

11:54

building a privacy protocol. Zero

11:58

knowledge proofs are great when

12:00

you want to prove something without

12:03

revealing the data. But it

12:05

doesn't actually allow you to compute on

12:08

the private data itself. So

12:10

whoever has to produce a proof has to

12:12

actually have the data. So for example,

12:14

you know, if I want to create a ZK

12:17

proof that I have enough tokens, me

12:20

the prover have to have access to the

12:22

actual balance. Otherwise, I cannot prove

12:24

anything about it. So if you wanted

12:26

to actually compute on an encrypted balance,

12:29

you wouldn't do that with ZK. That's really not what this

12:31

is about. ZK is a technology

12:33

that creates proof of

12:36

correctness. It's not a technology about

12:38

computing on private data. If

12:41

you want to compute on private data, you've

12:43

got basically MPC, multi-party computation

12:46

and FHE. At least if we talk about software-

12:48

based solutions. You have hardware-based solutions too, but let's stick

12:51

to software-based solutions. The idea

12:53

of multi-party computation is that instead

12:55

of having one machine do

12:57

the computation,

12:58

you basically split

13:01

the data and the program

13:03

to be executed on multiple machines, each

13:06

of them doing a piece of it and then putting

13:08

the result back together.

13:10

So as long as, you

13:12

know, a majority of those machines

13:15

are honest,

13:16

nobody can retrieve the original

13:19

data. They can only retrieve the final

13:21

results.

13:23

MPC is great. The only downside

13:25

is that you're limited by networking time.

13:28

So at some point, it doesn't matter how fast

13:30

the machines go because sending

13:32

data back and forth is going to be the bottleneck.
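
A toy sketch of that splitting idea (plain additive secret sharing in Python, illustrative only; real MPC protocols add authentication and malicious security): each machine holds one random-looking share, no single machine learns the inputs, yet local work on the shares combines into the real result.

```python
import random

# Toy additive secret sharing over a prime field.
P = 2**61 - 1   # prime modulus; every share looks uniformly random mod P

def share(secret, parties=3):
    """Split a secret into shares that sum to the secret modulo P."""
    shares = [random.randrange(P) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

x_shares = share(25)
y_shares = share(17)

# Each machine adds its own shares locally, with no communication;
# no minority of machines can recover x or y, but the shares of the
# sum reconstruct to x + y.
z_shares = [(xs + ys) % P for xs, ys in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```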

13:36

FHE

13:37

is basically running on a single server.

13:39

So you encrypt the data and

13:42

you compute on the encrypted data

13:44

itself without having to decrypt it. And

13:47

because this happens on one single server,

13:49

you could always throw more computational power

13:51

at it and make it faster, contrary to MPC. So

13:55

what we believe is the holy grail of

13:57

privacy, and what we actually do at Zama,

13:59

is we use FHE

14:01

for computing on the private encrypted data. We

14:07

use ZKP to make sure the

14:10

user is doing what they're supposed

14:12

to do on their end. And we

14:14

use multi-party computation to

14:16

secure the private key and decrypt

14:20

the result of the homomorphic computation whenever

14:22

we need to. So MPC is

14:24

great for managing the keys. ZK

14:26

is great for proving stuff about

14:28

what users are doing. And FHE is great

14:31

for doing the computation itself. And that's

14:33

really how people should think about combining them.

14:36

Okay, let me kind of see

14:39

whether I got this right by kind

14:41

of just recapping it. So kind of if you look

14:43

at fully homomorphic encryption, it's

14:46

kind of like by giving the example

14:48

of kind of say, my

14:50

sequenced genome. So I would kind

14:52

of encrypt the genome

14:55

that I had

14:57

someone sequence and send it to these

14:59

biotech institutes that kind of can tell

15:02

you what your risk factors are and

15:04

so on. And then they can do

15:06

computation on my genome

15:09

without knowing what exactly

15:11

they're looking at. They're just doing their

15:13

computation on it. And then kind of they send

15:16

me the result back and I can decrypt it with

15:19

my private key.

15:21

Correct, yes. That would be the simplest

15:23

way to think about FHE. So

15:25

if you wanted to do that with ZK, it would be the other

15:27

way around. So in the case of ZK,

15:30

the user, me, would

15:32

run the genomic analysis

15:35

on my own computer, on my

15:37

unencrypted genome sequence.

15:40

And I would then produce a proof that I've

15:42

correctly executed the program

15:45

and send back

15:46

the results alongside with the proof

15:49

so that I don't need to show them the actual genome

15:51

input. So this would be typically

15:53

if the research organization wants

15:57

only the result of the analysis, but they don't want

15:59

to see your individual data, that's how you

16:01

could potentially do it. So you see you're

16:03

swapping things around. In the case of ZKP,

16:06

it's not the company

16:08

that's providing the service doing the computation, it's

16:11

you, the user, doing the computation and just

16:13

sending them the result and a proof that this

16:15

result is from the correct

16:18

data.

16:19

Okay, so basically it's kind of like say, I want

16:21

to take out new health insurance, and kind

16:23

of they want proof that kind of I am generally

16:25

healthy and I kind of I want to be charged

16:28

like on the low tier, so I don't need to tell them

16:30

exactly what my ailments are but basically I

16:32

can tell them that I'm a generally healthy person by kind of

16:34

sending the

16:35

proofs. Exactly, exactly. But

16:37

you're the one doing the computation. In the case of FHE,

16:40

you would encrypt your data, send it to them, and they

16:42

would do the computation, and they would then

16:44

send you back the encrypted results.

16:45

That sounds like magic because as someone

16:47

who has processed large

16:49

batches of data before, typically

16:51

the first thing that you do with data is

16:54

you clean it up, right? And get it into the

16:56

right format. Does FHE

16:59

work even with data that's

17:01

not cleaned up? Say for instance, I take

17:03

a picture of a natural scene and kind

17:06

of I want image processing about kind of what

17:08

we see, say dog walking across the street

17:10

and kind of traffic light and whatever

17:13

kind of like automatic

17:15

processing usually

17:17

happens. Can this happen on

17:20

data that's not been tidied?

17:25

Absolutely. FHE is Turing

17:27

complete. So anything you can

17:29

do on unencrypted data, you can do

17:31

on encrypted data with FHE. The

17:33

question is how efficient is it going to be?

17:36

So oftentimes, you probably

17:39

want to do some of the pre-processing before

17:41

you encrypt the data and send it as FHE

17:44

because if you can, why not? It's cheaper,

17:46

right? At the end. But

17:48

for image processing, we have an example. You should go on the

17:50

Hugging Face. Zama has a space

17:53

where we actually have a demo of

17:55

an encrypted image filtering

17:57

application. So it's literally that way.

18:00

You upload an image, it encrypts it, sends

18:02

it to the server, the server applies different

18:04

filters on it and you get back the response.

18:07

So 100%, like

18:08

FHE does not have

18:10

any requirements on

18:13

what kind of data is being computed

18:15

on. You can do whatever you want.
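
As a sense of what that looks like for a developer, here is a sketch in the style of Zama's concrete-python library; the exact API (fhe.compiler, encrypt, run, decrypt) is an assumption based on the library's public documentation and may differ between versions.

```python
# Sketch in the style of Zama's concrete-python (API assumed, check the
# current docs): compile a plain Python function into an FHE circuit,
# then evaluate it on a ciphertext the server cannot read.
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def brighten(x):
    # An ordinary integer computation: a toy "image filter" on one pixel.
    return x + 20

inputset = range(0, 236)              # representative cleartext inputs
circuit = brighten.compile(inputset)

encrypted_pixel = circuit.encrypt(75)            # client side: encrypt
encrypted_result = circuit.run(encrypted_pixel)  # server side: compute blindly
assert circuit.decrypt(encrypted_result) == 95   # client side: decrypt
```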

18:18

Okay, I'm still kind of struggling.

18:20

This seems like magic. So basically

18:23

kind of if I think about it, say for instance,

18:25

I ask an LLM

18:28

something, kind of like an embarrassing

18:31

question that I don't want the makers of

18:33

OpenAI to know that I have. So

18:35

basically, let's say I send

18:38

an encrypted version of it. But

18:41

basically, the way that kind

18:43

of say LLMs work and also kind

18:45

of like these image processing

18:48

software is that they kind of quote

18:51

unquote understand something about

18:54

the data. Right? So basically kind of they

18:56

have context. So kind of how does the embedding

18:59

in the context work in

19:01

terms of FHE?

19:04

When you think about AI, or

19:06

any other application, you

19:08

have some inputs, there's some data going

19:11

into the model, this data

19:13

is then transformed, you're applying

19:15

all kinds of different algorithms

19:17

on it. So it could be, you know, looking

19:19

up value in a table for

19:21

your embeddings, or it could be applying

19:24

like an activation function on the data, right?

19:27

All of these things, you could do

19:29

an FHE in exactly the same way.

19:31

So the program itself

19:34

doesn't change.

19:36

The only difference

19:37

is that the inputs are

19:40

encrypted. And so if you

19:42

look at the input itself, you're only going to see

19:45

some gibberish, unless you have the key

19:47

to put things back in order, you're not going to understand

19:49

much. But the particularity of

19:51

the way that this was encrypted is that it conserves

19:54

some mathematical property. So

19:57

for example, adding two encrypted numbers

19:59

would result in the same thing

20:01

as adding two unencrypted numbers. So

20:04

a good analogy is like, imagine

20:06

you have a box,

20:07

okay?

20:08

So you cannot see what's inside the

20:10

box, you can put something inside a box, but you

20:12

cannot really see what's inside a box. But

20:14

you know, that box has a few buttons and a few

20:17

knobs that you can turn, you know, maybe one button,

20:19

squashes whatever is inside the box, maybe

20:22

one button sprays red paint on it. So

20:25

even though you don't know what's inside the box,

20:27

you can still press a button that will squash

20:29

it, you can still press the other button that would make it red.

20:32

And when you take out whatever object was in the

20:34

box, it will be squashed red,

20:37

right? So the person who applied the

20:39

transformation on the data, they

20:42

didn't have to actually see the data

20:44

itself that was in the box, you just had to know

20:46

that there was something in the box and just press those

20:48

buttons. It's exactly the same idea here,

20:51

right? Basically, encrypting

20:54

is just putting something into this box and giving

20:56

this box to you and you are the one pressing the buttons

20:58

on it. So you

21:00

know exactly what you did, you

21:02

just don't know what

21:05

you did it on.

21:08

I think on some level, to me, that makes sense,

21:10

but kind of, especially kind of when

21:12

you have

21:13

a large repository

21:16

that you kind of have trained your model

21:18

on, it seems intuitive

21:21

to me that kind of maybe you should

21:23

have had to do the same thing on the

21:25

data you used for training. So

21:27

that kind of you actually compare

21:30

apples with apples rather than apples with really

21:33

squished and green apples.

21:35

But it is the same

21:37

thing, right? It's just that it's

21:39

translated into a different language, but

21:41

it's fundamentally the same data. So

21:44

when you train your model, what

21:47

you're doing is you're telling it, if

21:49

that number is five, I want to multiply

21:52

it by two, if the number is six, I want to divide

21:54

it by three. Let's say something like that.

21:57

That logic, the application itself,

22:00

still holds as long as

22:02

the input that represents the

22:05

value you want to transform is

22:07

able to support those operations you want

22:09

to do on it.

22:10

So if you encrypt the data,

22:13

but the encrypted data can still be multiplied

22:16

and divided and things like that, then

22:18

it doesn't matter. It's like you're

22:21

basically just shifting the space that

22:23

you're operating in, but you're not changing

22:25

the operations that you're applying

22:28

on. Okay. And that was the big difficulty

22:30

of FHE is supporting all those

22:32

different operators on the encrypted

22:34

data. Okay,

22:36

but just to be

22:37

perfectly clear, the only thing that kind

22:39

of needs to be encrypted is kind of the data that

22:42

I send. Basically, you're using kind

22:44

of the plain text operations that

22:46

kind of I would use on unencrypted data

22:48

as well. Okay.

22:50

And that was exactly the challenge,

22:52

by the way. With FHE, for 50

22:54

years, the whole challenge was, can

22:56

we do any operation on the

22:59

data, not just additions or multiplications?

23:01

Okay, yeah. So I

23:04

hear you, and I think I've read this before, that

23:07

seems crazy to me. But

23:09

I mean, this is mentioned often,

23:11

which is exactly the reason why we put so much

23:13

effort into building good developer tools

23:15

so that people don't have to figure it out. Because

23:18

at the end of the day, people don't care about how

23:20

it works, right? They care about it working.

23:23

So the users want to know that the inputs they're

23:26

sending aren't readable, the

23:28

developers want to know the applications are building

23:30

are gonna behave as expected.

23:33

That's it. And as long as you can guarantee

23:35

that

23:36

the internals and the mathematics, which,

23:38

I agree, sound like black magic. Honestly,

23:42

most people are probably not going to care. I

23:44

mean, at this point, I'm the CEO of the company.

23:47

And I'm pretty good at math and coding.

23:50

I can't even keep track of everything happening

23:52

inside Zama. There's like there's so

23:55

much complexity in terms

23:57

of the underlying mathematics. And

23:59

it's okay. You know, it's not our

24:01

job to understand those things to use them.

24:05

And we've been

24:06

told time and time again that commercial

24:09

use of FHE is years

24:12

in the future. From

24:14

what you're saying, it sounds like you disagree

24:17

and it can be used for things now

24:19

or things soon. Where

24:21

do you think kind of this disconnect comes in?

24:26

Well, I think that people saying that aren't

24:28

the people working on it. So

24:30


24:33

it's very difficult to know what's possible

24:36

unless you're yourself in the field.

24:38

At some point it becomes evident to everyone. But

24:41

you know, ZK today seems obvious to

24:43

everyone, but it was only obvious to a small

24:45

group of people five years ago as well. Right?

24:48

FHE is the same. FHE today is obvious to

24:50

people working in FHE that this is working. It

24:53

hasn't yet transpired to

24:55

everybody else. So the

24:57

short answer is it depends who you're asking.

24:59

The longer answer is

25:01

they're kind of right because up

25:03

until now FHE was

25:05

not yet practical. You

25:07

know, there were three problems. It was limited

25:10

in terms of what you could do with it. It was very

25:12

difficult to use as a developer and

25:14

it was very slow. Zama

25:16

made it very easy to use. We've

25:19

made it such that you can do anything you

25:21

want with it. You don't have to worry about what's

25:23

coming in, what's coming out; we're taking care of everything. Performance

25:27

is the last mile.

25:29

So right now we can basically,

25:32

let's take blockchain as an example,

25:34

you know, using fully homomorphic

25:36

encryption in a blockchain. Right now

25:38

we can support between two and five transactions

25:40

per second. It's not bad considering

25:43

that most L2s on average

25:45

only actually have five TPS. So

25:48

FHE today already matches the average

25:51

load of most EVM chains. We

25:54

believe that we can 10x that number just

25:57

with better cryptography in the next 18 months.

26:00

So you're going to get to 15, 20 TPS in the next couple of years. That

26:06

basically is the same throughput as Ethereum.

26:09

For blockchain, FHE works, period.

26:12

Done.

26:13

If you want to have 1,000 transactions per second,

26:15

then you need some additional hardware

26:18

accelerators. You need a kind of GPU

26:20

for homomorphic encryption if you want to go beyond

26:23

this 10 or 20 transactions per second.

26:26

If you want to use it for

26:29

something that you would do with Ethereum,

26:32

done. It works.

26:34

If you want to use it for something you would do in Solana,

26:36

then you need this hardware accelerator. Okay.

26:39

Maybe it's time to kind of

26:41

go into the products and services that you currently

26:44

offer. So, what's on offer?

26:48

So Zama is a full stack company. We

26:50

basically have a solution

26:52

all the way down from, you know, FPGA

26:54

and GPU acceleration all the way up

26:56

to solutions for blockchain and machine

26:59

learning. At the core of everything

27:01

we do, there is a unique technology we built

27:03

called TFHE, for which we

27:06

basically have like an FHE

27:08

developer library. And on top

27:10

of that, we built one solution for machine

27:13

learning where you can take some Python code,

27:15

so an existing model that you wrote in Python,

27:18

and we automatically convert it into

27:20

a homomorphic equivalent that can work on

27:22

encrypted data. So as a data

27:24

scientist, you don't have to learn anything about cryptography,

27:27

you just write Python code and we take care of everything

27:29

else.
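
That workflow looks roughly like the following sketch in the style of Zama's Concrete ML (the class names and the fhe keyword argument are assumptions based on the library's documentation; versions differ): you train a scikit-learn-style model normally, then compile it to a homomorphic equivalent.

```python
# Sketch in the style of Zama's Concrete ML (API assumed, check the
# current docs): ordinary training, then FHE inference on encrypted data.
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)      # ordinary training on cleartext data
model.compile(X_train)           # convert to an FHE-executable equivalent
predictions = model.predict(X_test, fhe="execute")  # inference on ciphertexts
```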

27:32

On the blockchain side, things were a little bit more complicated. So

27:35

we basically created this protocol, the FHEVM,

27:38

which is a way to have confidential smart

27:40

contracts in EVM chains using

27:43

homomorphic encryption.

27:45

And so that particular protocol

27:47

works great,

27:49

but it's a little bit more than just FHE

27:51

because you have multiple users, you

27:53

know, interacting with each other, you need

27:55

composability between contracts.

27:58

And so this is where we're using MPC, for example,

28:01

for key management. So our FHEVM

28:03

protocol uses FHE for the on-chain

28:06

secret computation, but it also uses

28:08

MPC for managing

28:10

the secret key of the network.

28:14

And this FHEVM,

28:16

is it live today? Can I build on

28:18

it? Is it like its own chain? What

28:21

are the costs?

28:23

There is a set of precompiles that you need

28:25

to integrate into your EVM chain to support

28:27

homomorphic encryption. So it doesn't work on

28:29

Ethereum, it would work on any chain, any

28:32

EVM chain who just basically implements

28:34

those precompiles pretty much. So it's a very

28:36

easy integration, but it does

28:38

require effectively to at

28:41

least sort of fork the EVM itself. I

28:44

don't think Ethereum will ever use FHE just

28:46

because of the computational requirements. It's

28:49

just not in the spirit of Ethereum

28:51

of running that like on cheap hardware, right? I

28:54

think FHE still needs some pretty powerful hardware. So

28:57

there are a number of companies integrating the FHEVM.

29:00

The first one that's public who announced

29:03

it is called Fhenix. So Fhenix

29:05

is a new L2 based

29:08

on homomorphic encryption using our technology

29:10

that's built by the

29:12

team behind Secret Network. That's

29:15

already like a privacy network. So they're launching

29:17

this new protocol called Fhenix. Great

29:20

guys, incredible team, great

29:23

investors too. So they raised like a 7 million

29:25

seed round led by Multicoin. So

29:28

I think that's going to be probably one of the very

29:30

successful projects.

29:32

But let's just say that

29:34

if we do our job right,

29:36

homomorphic encryption will be a

29:38

commodity technology in blockchain.

29:41

Because when you think about it, right

29:43

now in a blockchain, everything is public. If

29:46

you want to build any kind of confidential

29:48

application, you're going to need something

29:51

like homomorphic encryption.

29:54

Yeah, absolutely. So kind of let's

29:56

talk. I mean, I, the use case

29:58

to me is absolutely I'm

30:01

still not clear on kind of the technical implementation.

30:04

Say I want to launch an

30:06

FHE-enabled

30:09

chain as an L2 on

30:12

Ethereum. First of all, does

30:15

everything need to be fully

30:18

homomorphically encrypted or can I do

30:20

like plaintext things and FHE

30:23

things

30:24

on the same chain? That's a great question. You

30:27

can do both at the same time. In fact, we

30:29

don't actually change the EVM itself. You

30:32

can take an existing EVM. So

30:34

let's say you take Go Ethereum. Okay, yeah,

30:36

you use that.

30:38

You can take that and you basically

30:40

add the Zama precompiled

30:42

libraries,

30:43

which are basically linking your EVM

30:46

to our FHE library so that you can

30:48

start doing FHE stuff in Solidity.

30:51

But the way you do that is that we expose

30:54

some new data type in Solidity. So

30:56

basically an encrypted integer, an encrypted Boolean

30:58

value. In your contract,

31:01

you can specify what's supposed to be encrypted,

31:03

what's not encrypted. So

31:05

you have full composability, not

31:07

just between encrypted FHE contracts,

31:10

but also between encrypted and non-encrypted

31:12

states. And that's a very important

31:14

part because you can take an existing chain that's

31:16

already running and without changing anything,

31:19

without breaking anything, you

31:21

can add FHE capabilities on top

31:23

of it.

31:25

Let's just say on the same chain. So basically

31:27

if everything happens on one chain, how

31:29

do you deal with composability

31:32

if some parts are plaintext and some parts

31:34

are encrypted?

31:36

FHE operations

31:40

can work between two encrypted values, between two

31:42

ciphertexts, but it can also work between

31:44

a ciphertext and a plaintext value. So

31:47

the operator in FHE basically

31:49

exists for both flavors. So

31:52

that part is, I would say, a natural

31:55

feature of FHE technologies. What's

31:59

really difficult... is actually composability

32:01

between encrypted states. Because

32:04

if you think about it, if you have multiple users

32:06

or multiple contracts interacting with each other,

32:09

it does imply that they've

32:12

all encrypted their data under the same

32:14

public key. Because if the data is encrypted

32:17

under different keys, it cannot be mixed,

32:19

right? It just won't work. So it has to be under one global

32:22

network key. And so

32:24

if there is one global network key that everybody's

32:26

encrypting under, the question is who

32:29

has a decryption key and how do you

32:31

selectively determine who's

32:34

allowed to see which encrypted value, right?

32:36

How do you decrypt what for who? And

32:39

this is where MPC comes in. The smart

32:41

contract itself can define access

32:43

control logic. It can say this user

32:45

who owns this balance can decrypt his

32:48

own balance.

32:49

Makes sense, right?

32:50

And the way this works is that the validators will

32:53

split the private key of the network into

32:55

different pieces. And you need

32:57

a majority threshold approval

33:01

for something to be decrypted. So it's called threshold

33:03

decryption. And by having

33:06

this threshold decryption combined

33:09

with your traditional blockchain, you

33:11

can actually have this decentralized

33:14

system where nobody has a private key and

33:16

where the smart contract dictates what

33:19

can be decrypted or not.
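
The key-splitting idea can be illustrated with a toy Shamir secret sharing sketch in Python (illustrative only; in a real threshold-decryption protocol the key is never reconstructed in one place, the parties produce partial decryptions instead).

```python
import random

# Toy Shamir secret sharing: any `threshold` of the validators' shares
# can recover the secret; fewer reveal nothing.
P = 2**127 - 1   # a Mersenne prime, used as the field modulus

def make_shares(secret, threshold=3, parties=5):
    """Hide the secret as f(0) of a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, parties + 1)]

def reconstruct(shares):
    """Lagrange-interpolate f(0); any `threshold` shares suffice."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

network_key = 123456789
shares = make_shares(network_key)               # one piece per validator
assert reconstruct(shares[:3]) == network_key   # any 3 of 5 can decrypt
assert reconstruct(shares[1:4]) == network_key  # ...any 3, not a fixed set
```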

33:21

I think I understand that. So

33:23

this means that kind of theoretically things

33:26

can't be stolen from you, but kind of

33:28

if enough of the MPC

33:30

key holders collude, the

33:33

network state could be frozen, right?

33:36

I mean, not

33:37

necessarily. If the MPC

33:39

nodes collude, it doesn't change anything to

33:41

the blockchain itself. It only means that

33:44

they would be able to decrypt anything they want.

33:47

So the worst is that you would lose confidentiality,

33:50

but you would never be able to double spend or anything

33:52

like that because of that. But I would argue

33:55

that if you don't

33:57

have an honest majority, your protocol

34:00

is broken anyway, so you probably shouldn't

34:02

use it. But having said

34:04

that, securing an MPC protocol is

34:06

a very, very tricky thing. Very, very tricky.

34:09

So most likely, we believe

34:11

that this is going to go through a combination

34:14

of this threshold MPC protocol

34:17

running inside some kind of secure enclave.

34:20

So for example, if each of the MPC node

34:22

participants are running the MPC software

34:25

inside an HSM, then

34:27

you would need to break multiple

34:30

HSMs

34:31

at the same time

34:32

faster than the keys are rotating.

34:36

And so arguably, this means that

34:39

I don't even think a government could do it, because if those nodes

34:41

are in different countries, nobody would have full access

34:44

to all of them at the same time. So you would

34:46

need some kind of global

34:48

international operation

34:50

where people synchronize

34:53

to break HSM in a few minutes. If

34:55

they can do that, they can break into any bank.

34:58

So the goal here is to make this protocol bank

35:01

grade security, right?

35:02

Okay, how fast are keys

35:04

rotated? That

35:06

you can determine pretty much

35:08

any way you want, I would say at least as

35:11

often as validators rotate.

35:13

Okay, and in terms of kind of like

35:15

MPC numbers,

35:18

so kind of how many parties do you

35:22

recommend? Because I mean, kind of you

35:25

need to cover like different

35:27

jurisdictions, you kind of need to have like geographical

35:30

independence and the operator

35:32

independence and so on. So kind of

35:36

just to make sure that you don't run

35:38

into like either a jurisdictional

35:44

catastrophe or kind of like a

35:46

dark DAO scenario.

35:47

These are very

35:49

important questions. These are very hard questions.

35:51

I think today, the hardest question

35:54

for FHE isn't FHE anymore. It's the key

35:59

management of the threshold protocol

36:03

for those composable multi-user

36:05

FHE use cases like blockchain. That's

36:09

a very hard problem, right? Because it's not

36:11

about cryptography anymore. This is about operational

36:15

security. Yeah. Exactly. OpSec, pretty

36:17

much. So we believe that

36:20

a combination... So first of all,

36:22

we believe that the threshold

36:24

MPC protocol is probably not going to be

36:27

run by the validators of the network itself because

36:29

it doesn't have to be. Secondly,

36:32

we believe that this KMS

36:34

will probably have much, much stricter

36:36

requirements in terms of what hardware should

36:38

be running on, who should be allowed to run

36:41

it. It might even need to be permissioned if

36:43

people want to have it really extra secure.

36:46

So it's possible that you might have one

36:48

permissioned threshold

36:51

network that everybody's using and the

36:53

people running that are going to be Apple,

36:56

Huawei, Zama. You

36:58

basically have companies from different countries

37:00

that have no incentive

37:03

to collude whatsoever running

37:05

those things. So it

37:07

could be five participants, it could be 10.

37:10

We even have a protocol for 50 participants.

37:12

So the number doesn't really

37:15

matter. It's really more about just

37:20

who's running it effectively.

37:23

Okay. But there's no way of

37:26

establishing a scheme such that you don't

37:29

need the same encryption

37:31

keys to kind of four things to be able to

37:34

operate, right?

37:36

There is something called multi-key homomorphic

37:38

encryption. The problem is that it requires

37:41

every participant to be online for

37:43

decryption. And the size

37:45

of the keys basically explode

37:48

quadratically with the number of users. So

37:51

if it's like three people, sure, why not,

37:53

right? If it's like 100 million people on Ethereum,

37:56

no way. And plus not all of them will be there.

37:58

So

37:59

no, the correct

37:59

way, 100%,

38:00

the correct and only way that this will

38:03

work is homomorphic

38:05

encryption using a public key that everybody's

38:07

sharing with some sort of

38:09

threshold protocol for

38:12

securing the private key. There is maybe

38:15

a longer term idea where

38:18

you could basically have what's

38:20

called functional encryption combined with some

38:22

kind of ZK proof. So if you can provide

38:25

a proof that the FHE computation was done

38:27

correctly, there is a technology

38:31

called functional encryption that

38:34

takes an encrypted input and produces

38:36

an encrypted output only if certain

38:38

conditions are satisfied. The

38:41

problem is that this technology

38:43

is so, so, so, so, so,

38:46

so, so slow and limited that it's basically

38:48

not possible right now to do it. But

38:50

maybe in 10 years, you're going to have

38:52

FHE running on the encrypted data,

38:55

ZK proving that the computation was done

38:57

correctly, and the proof of the

39:00

ZK protocol will be used in a

39:02

functional encryption scheme for decrypting

39:05

the actual ciphertext with the result

39:07

of the computation. And

39:09

here, you would have a completely

39:11

trustless decentralized, no threshold,

39:14

no MPC

39:15

protocol.

39:16

And that would be a holy grail of,

39:20

you know, FHE. Super

39:21

cool. So let's talk about kind of the

39:24

ways that it can be deployed today. Say I'm

39:26

deploying an L2 on Ethereum,

39:29

and I kind of I have all the necessary

39:31

precompiles to kind of enable the

39:33

FHEVM. Do I still

39:35

use like regular vanilla ZK

39:40

roll-up technology to kind of prove to Ethereum

39:42

that my state is correct?

39:45

Unfortunately, right now proving

39:47

an FHE computation is much more costly

39:49

than just redoing it. So it

39:51

doesn't really make sense to use a ZK roll-up

39:54

for scalability in FHE right now.

39:56

Doing an optimistic FHE roll-up

39:59

makes more sense. So

40:01

I think that's probably what we're going to see happening.

40:04

Okay. And how do you then

40:06

deal with fraud proofs that

40:09

you need for the optimistic rollup? I

40:11

mean, can everything that kind of I need to show

40:14

for fraud proof be done on

40:17

layer one without the pre-compiles?

40:20

That's a very good question. Not

40:24

with the optimism stack, the OP

40:26

stack, but with Arbitrum, you can

40:28

push a WASM executable, right?

40:31

And so theoretically, you could compile

40:33

your

40:35

contract, including our libraries,

40:38

into a WASM executable and

40:40

run that on the L1. That's

40:42

possible. How practical

40:45

it would be, I'm not sure. There are

40:47

people working on FHE rollups,

40:49

but it's not yet solved,

40:52

but it's doable, for sure doable, I think. So

40:54

I think Arbitrum style fraud proof

40:56

would work better than OP style ones. And

40:58

that's what Fhenix

41:01

is using, or how are

41:03

they setting this up?

41:06

So Fhenix just recently published

41:08

a white paper showing how you would actually

41:11

do FHE rollups. So they

41:13

have a prototype working, and

41:15

they're well on track to release that

41:18

at some point in 2024 in production. And

41:21

they use actually Arbitrum style

41:24

WASM fraud proof

41:25

for it.

41:27

If you kind of look at L2s

41:30

and L1s, obviously, there's

41:32

a huge spectrum of possibilities

41:35

how to configure this. So does this work with PoS,

41:38

PoW, PoA?

41:41

Can I just set up an

41:44

arbitrary EVM chain as long as it kind

41:46

of has the right pre-compiles?

41:50

It should work with any EVM.

41:53

There are some, I would

41:55

say, compromises if you're

41:58

using a consensus protocol that doesn't

42:00

have instant finality. And

42:03

the reason is that if you don't have instant

42:05

finality and you're requesting to

42:07

decrypt something of the state,

42:10

you might be decrypting something that gets rolled

42:12

back at some point. So you might be leaking

42:14

information that isn't actually final

42:16

state. So that's why we think

42:19

that, you know, instant finality

42:21

protocols like Tendermint, you know,

42:23

CometBFT and all these IBFT

42:26

sort of consensus are better

42:28

suited

42:28

for this. Okay, you don't want proof of work

42:30

or anything. Yeah.

42:32

Proof of work is probably not going to do it. But

42:35

proof of stake, proof of authority should

42:38

be fine.

42:39

Okay, yeah, super interesting. Do you expect

42:42


42:44

there to be specific chains that

42:47

kind of are FH

42:49

enabled? Or do

42:52

you think we will see the future of dApp

42:54

chains where kind of a dApp decides that kind

42:56

of this is really what they need, and they kind

42:58

of release their own chain? Because then

43:01

obviously kind of interoperability becomes

43:03

a problem again. And I mean,

43:06

with optimistic rollups, this

43:08

is a much bigger problem than with

43:10

ZK rollups. I mean, bridging

43:12

between different L2s

43:15

is not satisfactorily solved

43:17

at the moment. But I mean, I can

43:19

see us getting there with different

43:21

ZK rollups. Whereas with

43:24

optimistic rollups, it's

43:27

generally much

43:29

harder.

43:32

I think most likely given

43:34

the constraints in terms of computational

43:37

power needed for FHE, I

43:40

don't think that an L1 would

43:42

be running FHE natively

43:45

right now, I think FHE will be at

43:47

the L2 or as a side

43:49

chain or as an app chain. So

43:52

I think you know, you're probably going to have Ethereum,

43:55

Solana, Polygon, these guys

43:58

as plaintext unencrypted L1

43:59

ones

44:01

with FHE L2 and L3 applications

44:03

on them. And whether you run

44:06

that as app chains, as PoS

44:08

side chains, or as rollups, doesn't

44:11

really make much of a difference. In terms

44:13

of interoperability, it's actually not

44:15

that much of an issue. Because even though

44:18

every network has its own key,

44:20

you can re-encrypt from one

44:22

key to another. So when you're bridging,

44:25

all you have to do is re-encrypt

44:27

the value to bridge using

44:30

the public key of the network you're bridging into.

44:33

And that's perfectly fine. Like, this is, you can take

44:35

an existing bridging contract and just add that

44:37

particular feature and then you're done. So there shouldn't

44:40

be any more complexity for bridging

44:42

in FHE chains than you would

44:44

have in regular ones.

44:46

But that's only if you don't have

44:48

a contest period, right? And on

44:50

optimistic chains, you

44:52

inherently have to have one. So

44:55

basically, you could only do this after a week

44:57

or so. I

45:00

guess

45:02

people still use optimistic

45:04

rollups, right? And then they basically

45:06

swap on those, you

45:08

know, FHE markets, they

45:11

give away 10% of the value and then, you know, they

45:13

don't have to wait a week.

45:14

So

45:15

you could imagine that people might be okay with it. It

45:17

might not be the most secure

45:20

thing to do,

45:21

right?

45:22

But I think, you know, at the end of the day, the

45:24

user can choose if

45:26

they're fine and okay with the trade-off.

45:29

I guess the user is

45:31

taking a risk, not the protocol.

45:34

I agree, but kind of these bridge

45:37

liquidity providers, they

45:39

work by kind of

45:42

verifying that the claim is

45:45

correct and then kind of paying out on this without

45:47

taking part. But the contest period is- You mean in terms of like

45:49

a bridge? Yes. That week period

46:01

in your optimistic roll-up could

46:04

roll back spending, which is way

46:06

worse than losing confidentiality

46:09

to some extent. I

46:11

don't think it would be any different, to be

46:13

fair. Or at least I don't

46:15

see any particular problem right now.

46:18

Okay. And let's talk about things that are

46:20

actually currently being built. So kind

46:22

of, we already talked about chains who

46:24

may use this imminently.

46:27

What kind of dApps run

46:30

on them that kind of make it

46:32

necessary to have this level of

46:34

privacy?

46:37

You know, it's a completely new design space. So

46:39

we don't know yet what people are going

46:41

to build. But I can tell you what I see

46:44

people asking us if

46:46

it's buildable. So

46:48

there's a very big use case around DeFi,

46:52

obviously. The ability to

46:54

have confidential DeFi, whether

46:57

it's preventing MEV by

46:59

having your transaction encrypted up

47:01

until the point that it's executed. So

47:03

it's encrypted in the mempool, it stays encrypted during

47:05

execution in the contract, and it's only decrypted

47:08

in public once a block is finalized,

47:11

for example. That would be one example.

47:14

Confidential ERC20 tokens, keeping the balance

47:17

and the amounts transferred encrypted. So

47:20

you still have traceability. You still know that you and

47:22

I made a transfer. We just don't know for how much and

47:24

how much we each own. And related

47:26

to that, you have governance. In a

47:28

DAO right now, everybody sees

47:31

who's voting on what and with how many tokens.

47:34

A lot of blackmailing, bribery,

47:36

social pressure. If your vote

47:39

was encrypted and people didn't know what

47:41

you voted for and how many tokens you voted with,

47:44

you would have a much, much, much better

47:47

system for governance that wouldn't be

47:50

subject to peer pressure and things like

47:52

that. One thing

47:54

I'm particularly excited about is

47:57

compliance applications.

48:00

If you want to be compliant, let's imagine

48:03

something very simple. You want to transfer tokens to

48:05

someone else. Maybe

48:07

that person is in a different country, so there might be

48:09

regulations around that. Maybe

48:13

the government in your country should be allowed

48:15

to see the details of the transfer. You have

48:17

a lot of rules like that. With

48:19

homomorphic encryption, with the FHEVM, you

48:22

could have your identity

48:24

encrypted in a smart contract. So

48:27

let's say you go through some KYC. The

48:29

KYC provider does

48:31

your facial scan, takes your passport. They

48:34

encrypt your age, they encrypt

48:36

your name, your address, your citizenship. They put

48:38

that in a smart contract. And so whenever

48:40

you want to use a DeFi protocol, let's

48:43

say you have to prove that you're not American,

48:46

you could do that on-chain. You wouldn't

48:48

need any kind of off-chain attestation. There

48:51

wouldn't be any kind of off-chain thing. You do the

48:53

KYC once, they put it on the blockchain, and then from

48:55

that point you can use it to prove things about yourself.

48:58

What that means is that you can have composability

49:01

between identities and

49:04

applications.

49:05

And that I think is huge, right?

49:08

And I'm pretty certain that

49:10

that's going to create a kind of like

49:12

white market and dark market on the blockchain,

49:15

where you're going to have like a layer of compliant applications

49:18

where everybody's like KYC, but confidential,

49:20

so people don't know if you are, right? That's

49:23

going to be most of the volume, and then there's going to be like

49:25

a dark net on blockchains where

49:28

people do things without KYC.

49:31

It's going to put a very

49:33

clear price on money laundering. Well,

49:38

yeah, and I mean, you know, look at the end of the day,

49:41

what people want is confidentiality.

49:44

They want privacy. They don't want

49:47

non-compliance. Oh, yeah. And

50:00

that I think is really the trick

50:02

to make this work.

50:03

Absolutely. So kind of if you look at

50:05

the wider context of privacy over

50:08

the last, say, 20 or 30 years,

50:13

the expectation that kind of things that we

50:15

do are inherently private,

50:18

it

50:19

has kind of eroded away,

50:22

right? So basically, it's kind of like, just

50:25

because there's, we

50:27

generate massive amounts of data that

50:30

are analyzable. I mean, I think had

50:33

the situation been different

50:35

earlier, I think we

50:37

may have seen the same thing.

50:40

But basically, there

50:42

was no data collection on almost all

50:45

of the things that people said and did and so on.

50:47

And this is different now. And there's

50:50

already been this cultural shift

50:53

to kind of expect

50:57

that kind of data

50:59

that's generated can

51:01

be used by

51:03

law enforcement and other

51:06

agencies and can be monetized.

51:08

So people are no longer willing

51:10

to pay for services that

51:14

previously would have

51:16

had to charge something just because their

51:18

business model is that, you know, they

51:21

use your data to kind of generate income,

51:23

right? How do you think privacy

51:26

solutions kind of fit into the space

51:28

where kind of a lot of the harm has already

51:30

been done in terms of kind of expectations?

51:34

I think there is a distinction between accessing

51:37

the actual data and doing something with

51:39

the data. For

51:41

example, let's imagine, you

51:44

know, you want a government to

51:46

be able to prevent

51:50

transfers from one country to another.

51:53

Right now, they need to see all of the data from everybody making

51:55

a transfer, even those not making a transfer to a blacklisted

51:59

country.

52:01

But imagine for a second

52:03

that the data is encrypted using homomorphic

52:05

encryption. The government

52:08

could still apply a filter

52:10

on the encrypted stream of

52:13

financial transactions. They could say, I want

52:15

you to homomorphically check where

52:18

those transactions are going. And

52:21

if they're going to a blacklisted country,

52:23

just make them zero, right? So

52:25

basically you send zero instead of sending whatever

52:28

amounts you're supposed to send. Effectively, you're blocking the

52:30

transfer by doing that, right?
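
A toy version of that filter, in the style of the concrete-python sketches above (API assumed, check the current docs): the encrypted comparison produces a 0/1 flag that zeroes out the amount, without the server ever decrypting the destination or the amount.

```python
# Toy filter on encrypted financial data, in the style of concrete-python
# (API assumed): block transfers to a blacklisted destination blindly.
from concrete import fhe

BLACKLISTED = 7   # public code of the sanctioned country

@fhe.compiler({"dest": "encrypted", "amount": "encrypted"})
def filter_transfer(dest, amount):
    allowed = dest != BLACKLISTED    # encrypted boolean: 1 if allowed
    return allowed * amount          # the amount, or an encrypted zero

inputset = [(d, a) for d in range(16) for a in (0, 50, 100)]
circuit = filter_transfer.compile(inputset)

assert circuit.encrypt_run_decrypt(3, 100) == 100   # allowed destination
assert circuit.encrypt_run_decrypt(7, 100) == 0     # blocked destination
```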

52:32

That would work. The government would be able

52:34

to prevent

52:36

people from making transfers to, you

52:38

know, like Russia or whatever, without

52:41

seeing what the transfers

52:43

are. So you could still

52:45

argue, well, but you know, the government in that case

52:48

could apply any arbitrary filter. That's

52:50

true. And that's where the transparency of a blockchain

52:53

comes in.

52:54

If that filter

52:55

is a smart contract on a blockchain, everybody

52:58

can see which filters are being applied.

53:02

Right? So you can have transparency

53:04

of the regulation and the filter

53:06

the governments are applying on financial transactions

53:10

and still have confidential financial transactions.

53:12

That I think is like super powerful. Right?

53:15

And that I think is something that's uniquely enabled by

53:17

FHE because FHE

53:19

doesn't remove traceability. It doesn't

53:21

hide the application. It

53:24

hides the user data.

53:27

I think that's sad to a certain extent.

53:29

I mean, these rules can still be arbitrary.

53:32

And then I guess it's a policy fight.

53:35

But let's look at kind of business models

53:37

of kind of like big data companies,

53:39

right? So basically, what percentage

53:42

of the population do you think would be

53:44

willing to kind of pay for an

53:46

encrypted search, encrypted social

53:49

network, and so on?

53:52

Probably no one, or a very small percentage.

53:55

And I want to be clear, I don't think people care

53:57

about privacy. I don't think they will

53:59

care about privacy.

54:01

But you see, that's exactly the goal. The

54:03

goal is that nobody cares, not

54:06

because it's not important, but because

54:08

it basically becomes something

54:10

that's guaranteed by design. Privacy

54:13

is something that people shouldn't think about because it

54:15

shouldn't be a problem.

54:17

And so our goal at Zama

54:18

is to make that happen, right? We were

54:21

not trying to change people's opinion on privacy.

54:23

We're trying to change, if anything,

54:25

developers' opinion on the importance of

54:27

making their application private by design.

54:30

But even in cases where

54:33

there are good alternatives that offer

54:36

ostensibly the same service as the

54:39

data mining company: say, for instance, you

54:41

could use ProtonMail instead of

54:43

Gmail, or you could use DuckDuckGo

54:45

instead of Google. Despite

54:48

the fact that the marginal cost

54:51

of switching to this is basically

54:53

zero, people still don't,

54:55

right? So basically, it's not even harder to

54:57

use or more expensive to use. It

55:00

just seems a lot of people actually

55:02

value their right to privacy

55:04

at literally zero. That's

55:08

exactly why the person who should care is Google.

55:11

Google should make Gmail private by

55:13

design. The user shouldn't have to

55:15

think about switching. It's Google who should think

55:17

about enabling that. And it's not

55:19

the first time we've seen this. WhatsApp

55:22

turned on end-to-end encryption for

55:24

a billion people overnight. So

55:27

it's possible for a large

55:30

company with a lot of users to go

55:32

from zero privacy model

55:35

to a privacy by design model.

55:38

The entire business model of Google is

55:40

kind of like data mining.

55:43

They can still do that with FHE encrypted

55:45

data. They just wouldn't know what

55:47

they're mining and what they're serving. You

55:50

could take an encrypted user profile and you

55:53

could run an encrypted

55:55

advertising matching algorithm

55:58

on it. The user would still see an

56:00

ad; it's just Google wouldn't

56:02

know who the user is, what

56:05

the profile is, or what ads

56:07

were served.
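
A toy version of that blind ad matching, again sketched with concrete-python under assumed inputs: the profile is reduced to two encrypted interest scores and the "matching algorithm" is a single comparison, which is far simpler than any real ad system but shows the shape of the idea.

    from concrete import fhe

    @fhe.compiler({"sports_score": "encrypted", "work_score": "encrypted"})
    def match_ad(sports_score, work_score):
        # Encrypted comparison: 1 means "serve the sports ad",
        # 0 means "serve the office ad". The server evaluating this
        # never sees the scores or the outcome in the clear.
        return sports_score > work_score

    inputset = [(0, 0), (10, 3), (2, 9), (15, 15)]
    circuit = match_ad.compile(inputset)

    # Only the user (the key holder) decrypts the choice and renders the ad.
    choice = circuit.encrypt_run_decrypt(12, 4)
    print("sports ad" if choice else "office ad")  # sports ad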

56:08

So that's the thing that's

56:11

a little bit subtle:

56:13

it doesn't prevent any usage

56:13

of your data. It just prevents

56:16

the visibility of the data.

56:19

Okay.

56:20

Yeah, fair. I think I get the distinction.

56:23

So how does Zama

56:25

monetize all of this technology?

56:27

So how are you guys funded?

56:31

So we've been very lucky. We've raised a

56:33

lot of funding, a lot. So

56:35

we have runway for

56:37

multiple years. So

56:39

we don't have to think too much about any short term

56:41

issue. Having said that, you know, we are a

56:43

business. So clearly we're not a nonprofit,

56:46

we're not working for free. And even though everything

56:48

we do is open source, we do offer

56:51

commercial licenses. So

56:53

typically for a blockchain, we

56:55

would take a percentage of the token supply plus

56:57

a percentage of the block fees generated

57:00

by the network. If

57:02

there is a token, if there isn't any token, then

57:04

it would basically be some kind of fiat-based

57:06

licensing model. You know, we're not reinventing

57:08

the wheel, right? We're doing something super vanilla. We

57:11

do have a few ideas

57:13

of like hosted services long term, but

57:16

for now, effectively you can use

57:18

our technology as long as you get a license to use

57:20

it commercially. So the philosophy

57:22

is very simple. It's completely free, it's

57:25

completely open, but if you're going to make

57:27

money with our technology, we

57:29

should make money too.

57:30

That's it.

57:32

Okay, that's fair. So where can people

57:34

stay in touch with you guys,

57:36

kind of follow the news, join the community,

57:39

learn, you know, what can be built

57:40

on top of Zama or with Zama

57:43

protocols?

57:45

Our Twitter handle for Zama

57:48

is at Zama underscore FHE.

57:51

We also have a very active community called

57:53

FHE.org where people can learn

57:56

about FHE. There's a Discord server

57:58

as well that is very active with people excited

58:00

about FHE. I'm also

58:02

very easy to find and reach out online.

58:05

My Twitter is at RandHindi. I

58:08

try to answer as many DMs as possible, although

58:10

to be fair sometimes it's a little bit overwhelming. But

58:14

in general, if you're interested in building something

58:16

with FHE, we're seeing companies doing some

58:18

really cool science. Get in touch with

58:20

me or with my team and we'd love to help.

58:23

Fantastic. It's been a pleasure having you on Rand.

58:28

Thank you for joining us on this week's episode.

58:30

We release new episodes every week. You

58:32

can find and subscribe to the show on iTunes, Spotify,

58:35

YouTube, SoundCloud, or wherever you

58:37

listen to podcasts. And if you have a Google

58:39

Home or Alexa device, you can tell it to

58:41

listen to the latest episode of the Epicenter podcast. Go

58:44

to epicenter.tv slash subscribe for

58:46

a full list of places where you can watch and listen. And

58:49

while you're there, be sure to sign up for the newsletter so

58:51

you get new episodes in your inbox as they're released.

58:54

If you want to interact with us, guests, or other podcast

58:57

listeners, you can follow us on Twitter. And

58:59

please leave us a review on iTunes. It helps people to

59:01

find the show and we're always happy to read them. Thanks

59:04

so much and we look forward to being back next week.
