Episode Transcript
0:01
At Grainger, we're for the ones who specialize
0:03
in saving the day, and
0:05
for the ones who've mastered the art of
0:07
keeping business moving. We offer
0:09
industrial grade supplies for every industry, with
0:12
same-day pickup and next-day delivery on most
0:14
orders, all backed by real people ready
0:16
to help, so you can get
0:19
the right answers and products right when you need
0:21
them. Call, click Grainger dot
0:23
com, or just stop by. Grainger,
0:25
for the ones who get it done. Psst,
0:31
how'd you like to listen to Dot Net Rocks with
0:33
no ads? Easy, become
0:35
a patron. For just
0:37
$5 a month, you get access to
0:40
a private RSS feed where all the
0:42
shows have no ads. $20
0:44
a month will get you that and a special
0:46
Dot Net Rocks patron mug. Sign
0:49
up now at
0:51
patreon.dotnetrocks.com. Welcome
1:05
back to Dot Net Rocks. This is Carl Franklin.
1:07
And this is Richard Campbell. And this is the,
1:10
coming out on the 21st, so this
1:12
is the last Dot Net Rocks show
1:15
published before Christmas. Right. We've
1:17
got a few more to record. Just geek outs. How you
1:19
been, man? You know, I've been working on the scripts
1:22
for the geek outs. Turns out, you know, it's a
1:24
lot of stuff. It's been a busy year. Yeah,
1:26
sure has. How about
1:29
you? What are you
1:31
up to? Well, last night I did my first recording of
1:34
one of my songs in the studio, in
1:36
the new studio. Ooh, a new track. A
1:38
new, well, it's an older track that nobody's
1:41
heard yet. But
1:44
the first recording I did was all kind
1:46
of discombobulated. It's slower. It's
1:48
at 100 BPM. And
1:51
with acoustic strumming and picking and stuff.
1:54
And therefore it's very easy to get
1:56
off time. So I want to tell
1:58
you about the cool stuff
2:00
that I did this with. I recorded
2:02
the acoustic guitar and the bass first
2:04
and then I used Studio One's
2:06
Bend tool to quantize
2:09
the audio. What? Quantize
2:11
the audio. It basically
2:14
worked right out of the box and
2:16
I had tried it before but it
2:18
hadn't really worked well. Maybe
2:20
the default settings are good now but what
2:23
it does is it finds the transients
2:26
according to your quantized value, eighth notes
2:28
or whatever. Then it
2:31
essentially moves the transients and
2:33
either stretches or compresses
2:37
the audio between the transients and
2:39
basically turns it into quantized
2:42
like it's right on the beat. Okay, so
2:44
it's just like making the beat perfect. Yeah,
2:47
and it does not sound artifacted
2:50
at all to me
2:52
anyway. Then I did the same thing
2:54
with the drums. For the
2:57
drums I used my iPad at
2:59
the drum set because it's all the way
3:01
across the room, right? Right. On the iPad I
3:03
installed this thing called Studio One Remote. I
3:07
set markers in the three places where I
3:09
wanted to record drums and
3:11
I could just go back, undo,
3:13
record. I did all
3:16
the recording from the drum
3:18
set with my iPad on
3:20
a music stand. I did two
3:22
sections, one with brushes and one without. It
3:26
took a while to record but it was great because
3:28
I could just take another take, do another take until
3:31
it was right. And then I quantized the drums. Those
3:35
drummers are never on beat anyway. I
3:37
was pretty close but I wasn't
3:39
perfect and I really wanted this
3:41
to be tight and it really
3:43
is. So I'm very excited, everything
3:45
worked and yeah, what
3:48
can I say? Very happy. The
3:50
new studio. Yeah. Fully
3:52
operational. Nice. Hey,
3:55
let's get started with Better Know Framework. All
4:01
right, man. What
4:03
do you got? Well, I
4:05
found this post on boredpanda.com.
4:08
Have you ever seen that
4:10
crazy site? Yeah. Yeah. So
4:12
this is 30 of
4:15
the worst Christmas gifts people ever
4:17
received as shared in
4:19
this online group. One
4:23
of my favorites is a dish
4:25
towel. I was eight. I
4:33
was eight years old and my parents
4:35
gave me a dish towel. Nice. And it
4:37
wasn't even a new dish towel. Time for
4:39
you to learn. It was just like they
4:41
went to the kitchen, found a dish towel,
4:43
wrapped it up in paper and then gave
4:45
it to this eight year old kid. Yeah.
4:52
Bad Christmas presents. So hours
4:54
and hours of fun and
4:57
yucks and silliness. Yeah,
4:59
which we need once in a while, right?
5:01
Especially around the end of the year. It's
5:03
like we've been working hard all year. Now
5:06
let's take some time. Awesome. Anyway, that's what
5:08
I got. Who's talking to us, Richard?
5:10
Got a comment on show 1873, the
5:13
one we just published a little while ago
5:15
with Leah Milanino from NDC
5:17
in Porto. We were talking about
5:20
sustainable development. No, trust me. Thinking about the
5:22
energy. We got a great comment here from
5:24
Jackie, who said, yeah, hi, Carl and Richard.
5:26
I want to express my thanks for tackling
5:28
this important subject. This conversation resonated with me
5:30
a lot as I'm a software engineer working
5:32
with Azure cloud technologies. And on a
5:34
related note, I'd like to share my recent experience
5:36
with an iPhone. Yeah. I've always
5:38
been an Android user primarily due to
5:40
the perception that iPhones are overpriced, non-standard
5:42
devices. I think Apple pretty much
5:44
dictates the standard. Just look at what they did to RSS.
5:48
Often seen as more suitable for users seeking
5:50
opinionated UX like my mother or a symbol
5:53
of social status like my brother. Jackie,
5:55
like you're dissing the fam. I know, right?
5:58
It's Christmas. However,
6:01
when my Google Pixel broke, I had to
6:03
hand it in for service. I was left
6:05
with no choice but to use a backup:
6:07
an iPhone 5s, a decade-old model
6:09
offered by my brother. See, he may be
6:12
a social status seeker, but at least he'll give
6:14
you a spare. I thought that's funny.
6:16
Jackie's not a bad guy. Now you
6:18
have to reassess this relationship. And to my
6:21
surprise, the iPhone 5s was still
6:23
receiving security updates, and all my essential
6:25
mobile apps functioned flawlessly on it. Is that something
6:27
I could say about some five-year-
6:29
old low-end Android device? Not that
6:32
any iPhone is a low-end device.
6:34
The only drawback was battery life
6:36
yeah, well, a decade-old 5s battery is not
6:38
going to be great. It made
6:41
me realize that even though iPhones
6:43
may not be considered entirely sustainable due
6:45
to lack of adherence to certain standards
6:47
like USB-C or replaceable batteries,
6:50
their longevity is great. Right up till
6:52
you drop it. And therefore I've decided to
6:54
order an iPhone model with a USB-
6:57
C port. Thank you, EU: you
6:59
demanded that Apple start using USB-
7:01
C, which will allow me to utilize most of my
7:03
Android gadgets.
7:05
And thanks again for your dedication.
7:07
.NET Rocks on! Yes! It
7:10
has been a valuable source of knowledge and
7:12
inspiration for me during my career transition
7:14
to .NET and C#. Looking forward to future
7:16
episodes. And I can add to
7:18
that. That moves
7:20
on to: iPhones are inherently
7:23
more secure than Android phones, just
7:25
because Apple is such a closed system.
7:27
That's one of the benefits, actually, of
7:29
having sort of security by obscurity.
7:32
Nobody wants to try a hack. I
7:34
don't really know about obscurity, but security
7:36
by... Apple insists on control over
7:38
everything that goes on that phone. So
7:41
yeah, that's, that's one
7:44
of the reasons why I have an iPhone:
7:46
the security. This week Patrick Robinson
7:48
tweets, yeah, saying the same thing, that
7:50
the iPhone is more secure than an Android
7:53
phone. I mean, under the hood, Android is Linux, everybody
7:55
knows. And he says it's terrible. Certainly you're
7:57
set up so you can poke
8:00
at the Linux guy the whole day. That's what I'm going
8:02
to do. Is that what we're going to do? We're
8:04
going to poke at each other at Christmas time? Come
8:07
on. Hey, Jackie, thank you so
8:09
much for your comment. Glad you really liked this show
8:12
and a copy of Music to Code By is on its way to
8:14
you. And if you'd like a copy of Music to Code By, write
8:16
a comment on the website at dotnetrocks.com or
8:18
on the Facebook. We publish every show there. And if you
8:20
comment there and it's read on the show, we'll send you
8:22
a copy of Music to Code By. And you can follow us on Twitter
8:24
if you like, but the real fun happens on Mastodon. I'm
8:28
at CarlFranklin at techhub.social. And I'm
8:30
Rich Campbell at Mastodon.social. Send us
8:32
a toot. You might get
8:35
a mug if we read it on the show. I
8:37
don't think so. Pretty sure you won't. Yeah. Pretty
8:39
sure you get a copy of Music to Code By. Kind of how that
8:41
works. Did I say mug? You did. All
8:43
right. Let me say that again, Brandon. All right.
8:46
Send us a... Although that was kind of funny.
8:48
We might want to leave it in. It was kind
8:51
of funny. I'm pretty sure you won't. Let's leave it
8:53
in. It's Christmas time. What the hell? All right. Let
8:57
me bring on our guest, Daniel Marbach, as
9:00
a distinguished Microsoft MVP and
9:02
software maestro. At particular software,
9:05
Daniel knows a thing or two about code. By
9:08
day, he's a devoted .NET
9:10
Crusader espousing the virtues of
9:12
message-based systems. By night, he's
9:15
racing against his own mischievous router
9:17
hack, committing a bevy
9:19
of performance improvements before the
9:21
clock strikes midnight and he
9:23
turns into a pumpkin.
9:27
Yes, exactly that. What's
9:33
a router hack? What are you going to do? Load
9:35
DD-WRT? No. It's
9:39
a very simple sort of trick because I've
9:42
been contributing to open source and various
9:45
things. I
9:47
just like to spend some time with code because
9:49
I feel like it sharpens my understanding of
9:51
the stack that I'm working with. I
9:54
had a period in my life where I just
9:56
couldn't stop. Then
9:58
at first, it was like 1am, 2am,
10:00
3am in the morning, and
10:03
then luckily, basically my only one
10:05
rule was that I will not extend
10:07
my alarm clock till a later point in
10:09
the day. Otherwise my days would
10:12
have shifted. But at some point I
10:14
was like, okay, that's it, I need
10:16
to change something in my life. So
10:18
basically I'm switching off the internet around
10:21
midnight at my router. And that's
10:23
when, you know how it is,
10:25
usually when you're,
10:27
when you're like in the middle of
10:30
something, and you ask Google, Bing, or
10:32
whatever you're using, right, you're like, ah, the
10:34
internet doesn't work anymore, I have to stop
10:36
until tomorrow. And then I switch off,
10:38
go to bed. And it works that way for
10:41
me. I did pretty much
10:43
the same rule in place because I had
10:45
teenage daughters. You know,
10:47
you hear the audible groans at midnight.
10:49
You know, everybody's in bed. Sure they
10:52
are. Meanwhile it's like, somebody's just gonna,
10:54
you know, finish up a thing, and
10:56
they go to push the code, and it fails. Yeah,
10:59
ah, and it's me: I can
11:01
fix the router. So you're a performance
11:03
wonk, are you?
11:06
Well, actually, I work for a
11:08
company called Particular Software, as I
11:10
guess you heard in the
11:12
intro. Oh,
11:14
shoot, right. So, you know,
11:16
we have, you know, the Particular
11:18
Platform and a few others. And
11:20
yeah, so basically my day
11:22
to day job is, I'm busy building
11:24
robust and reliable frameworks
11:26
for people that sort of want
11:28
to sort of build distributed
11:30
systems primarily based on
11:32
message queueing stuff like Azure
11:34
Service Bus, SQS, SNS, storage
11:37
queues, and in the old days, God
11:39
forbid, MSMQ, right? But it's
11:41
still, it's still out there, thriving.
11:43
Surprise, surprise, it's quite heavily used
11:45
to this day.
11:47
Yeah, and that's one
11:49
of its great things: it's feature
11:51
complete, right? It's
11:54
just there if you're still
11:56
running on Windows, and not on
11:58
Linux like I do. And
12:00
DCOM. And
12:05
one of the things that we do
12:08
there is we want to make sure that
12:10
the customers that are using NServiceBus can
12:12
focus on just writing their business code.
12:14
They don't need to write any plumbing code.
12:17
And that stuff should run as
12:19
efficiently as possible. Right. So I
12:21
guess performance throughput
12:24
was always sort of right front and
12:26
center, sort of in my day to
12:28
day job. But I also care a
12:30
lot about it because I
12:32
believe, and it especially
12:35
came out in the comment that you read
12:37
out, Richard. There's like if you're targeting the
12:39
cloud or if you're shifting to the cloud
12:41
or if you're already are in the cloud,
12:44
sort of, you are basically billed
12:46
by the amount of resources that you're
12:48
using in the cloud. Yeah. There's
12:52
a direct revenue relationship to that
12:54
consumption, which at least gives us
12:56
some kind of incentive. Yeah,
12:59
correct. You put down your credit card and then you
13:01
get surprises at the end of the month.
13:05
Surprising yourself is one
13:07
thing. Surprising the CFO is another.
13:10
That's a very loud noise from a
13:12
large office. Yeah,
13:14
thanks. The question I mean, there's so
13:16
many ways to tweak performance. One is
13:18
just by using the latest dotnet stack.
13:20
Correct. And then, you
13:23
know, keeping your NuGet packages
13:25
updated. But on top of that,
13:27
you know, what kind of
13:29
knobs are you pulling? Are you pulling software
13:31
knobs, hardware knobs, all of the above? So
13:34
let me let me quick before I answer
13:36
your question, let me quickly go to what
13:39
you said about updating the dotnet version. That's
13:42
actually really interesting because Microsoft has
13:45
this blog series where they essentially
13:47
talk about their teams migrating to,
13:49
for example, from dotnet framework to
13:51
newer dotnet versions or from
13:54
dotnet 6 to dotnet 8. And one of the
13:56
cool things they did there is that
14:00
the Microsoft Teams infrastructure team,
14:02
they basically migrated from .NET framework
14:05
to .NET 6. Just
14:08
by basically migrating to that LTS
14:10
version of .NET, they were actually
14:12
able to reduce the
14:14
monthly cost and expenditure in Azure
14:16
by almost 24 percent,
14:19
which is pretty amazing if you
14:21
think about that. That's definitely one
14:23
way to do it. I always
14:25
encourage people to if they can
14:27
stay up to date with
14:30
the latest .NET versions, definitely. Yeah. I'm
14:32
a big fan of that series on
14:34
the .NET blog, just because they
14:37
talk about the Teams guys migrating to
14:39
new version of .NET and the benefits
14:42
they got from it, and also the
14:44
things they struggled with all that, like
14:46
what problems they had. But
14:48
to me more than anything, it's like, hey, these
14:51
guys got this benefit and we're able
14:53
to move that big an app. You're
14:56
going to be okay. You
14:58
can do it. Yeah, absolutely. But
15:01
to come back to your question, Carl, I
15:03
think one of the things that I try to
15:05
apply in my thinking is,
15:07
I want to make explicit trade-offs as
15:10
I'm going with things. That
15:13
means I want to be aware of, is
15:15
this code going to be executed on the
15:18
hot path or at scale? How
15:20
many times a second is that going to be, or
15:23
is it just something that runs on a
15:25
background job once a day, or twice
15:27
a day? Because then usually
15:29
if it's just executed once or twice
15:31
a day, it doesn't really
15:34
matter that much whether it's
15:36
super fast or not. But
15:39
then when it's executed on the hot path, potentially
15:42
hundreds and thousands of times per second,
15:44
then it's usually good to
15:46
become more performance aware.
15:50
But performance aware, unfortunately, this is
15:52
also something that gets thrown around
15:54
quite a lot in the industry.
15:56
Then everyone assumes, everyone knows what
15:58
performance awareness means. But one
16:01
of the things that I struggled with
16:03
was, where should I even
16:05
get started to become performance aware? Because
16:07
apparently, if you go down to the
16:10
literature of looking
16:12
at benchmarking performance optimizations,
16:15
you can actually go from just doing
16:17
little things up to setting
16:19
up your entire CI CD
16:21
pipeline with dedicated hardware, doing
16:24
regression testing. But
16:26
usually, we don't start there. Usually
16:30
we start somewhere else. That's
16:32
one of the things that I'm trying to apply.
16:34
Usually I ask myself a bunch of questions when
16:36
I look at the code. For
16:38
example, I go and look for, well,
16:41
what could be the CPU and memory characteristics?
16:43
What could that be for the specific
16:45
line that I'm looking at that I
16:47
know is on the hot path? And
16:50
then I usually start thinking about, are
16:52
there any sort of low hanging fruits
16:54
that I can apply to this? Maybe
16:56
do some more efficient string splitting options
16:58
and stuff like that that I know
17:00
from reading the performance blog post and
17:02
I can apply there.
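Here's a minimal sketch of that kind of low-hanging fruit (invented for illustration, not code from the show): span-based string processing avoids the array plus per-segment string allocations of string.Split.

```csharp
using System;

public static class CsvLine
{
    // Count comma-separated fields without allocating anything.
    public static int CountFields(ReadOnlySpan<char> line)
    {
        int count = 0;
        while (true)
        {
            count++;
            int comma = line.IndexOf(',');
            if (comma < 0) return count;    // no more separators: last field
            line = line[(comma + 1)..];     // slice past the comma, no allocation
        }
    }
}
```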
17:06
And I presume that's Stephen Toub's post.
17:09
Yes, of course. The
17:11
book of the Toub, right? The book
17:13
of Toub, yeah. Yeah, exactly.
17:15
And then most of the time, there
17:19
are a few tricks you can apply. For
17:21
example, if you're allocating a byte array, then
17:23
you know that you actually
17:25
just need it within each iteration. What
17:27
you can do is you can move
17:30
that away from the hot path, allocate
17:32
it once and clear it and then
17:34
you're no longer having allocations that you're
17:36
executing on the hot path.
17:38
And then the garbage collection doesn't have
17:40
to clean up a lot of things
17:43
and there are things that get better as well.
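A minimal sketch of the pattern being described (class name and buffer size are invented for illustration): hoist the allocation off the hot path, reuse the buffer, and clear it between uses.

```csharp
using System;

public sealed class PayloadProcessor
{
    // Allocated once, off the hot path, instead of new byte[...] per call.
    private readonly byte[] _buffer = new byte[4096];

    public void Process(ReadOnlySpan<byte> input)
    {
        // Hot path: no per-iteration allocation, so nothing for the GC to collect.
        input.CopyTo(_buffer); // assumes input.Length <= 4096 in this sketch
        // ... work with _buffer ...
        Array.Clear(_buffer, 0, _buffer.Length); // reset state for the next call
    }
}
```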
17:45
But then... How
17:48
do you measure that, Daniel? How
17:50
do you know it got better? That's
17:52
a tricky one, that the impact of
17:54
garbage collection, like, ooh, how would I
17:56
measure that I reduce the amount of
17:58
garbage collected? Yes. That's
18:01
a very good question. So
18:03
usually what I do is before I,
18:07
so I call this performance loop. So
18:10
for lack of a better term, I
18:12
call it performance loop. So what I
18:14
usually start with is, when I have
18:16
a hypothesis about a piece of
18:18
code, that it creates
18:22
garbage, then what I do
18:24
is I write a test harness. And
18:27
essentially what that harness sort of does, it
18:29
sort of takes whatever I'm looking
18:32
at, takes it into a specific context
18:34
of my suspicions. And then it executes
18:37
it in those specific
18:39
scenarios. And then I attach
18:41
profilers to that piece of code.
18:44
And what I usually do is I take
18:46
at least a memory snapshot
18:48
and also create a CPU
18:51
snapshot.
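A rough sketch of what such a harness can look like (the component under suspicion is a made-up stand-in; the profiler itself, say dotTrace or dotMemory, is attached from outside):

```csharp
using System;

// Stand-in for the code you suspect of allocating on the hot path.
public sealed class SuspectedHotPath
{
    public string Execute() => string.Join(",", "a", "b", "c"); // placeholder work
}

public static class Harness
{
    public static void Main()
    {
        var sut = new SuspectedHotPath();

        // Warm-up so JIT compilation doesn't dominate the snapshots.
        for (int i = 0; i < 1_000; i++) sut.Execute();

        Console.WriteLine("Attach the profiler, then press Enter...");
        Console.ReadLine();

        // Reproduce the suspected scenario many times so allocations
        // and CPU time stand out clearly in the memory and CPU snapshots.
        for (int i = 0; i < 1_000_000; i++) sut.Execute();
    }
}
```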
18:53
Because, of course, if you have an IO bound system,
18:56
like you have a database in place or
18:58
you have HTTP calls, stuff like that, you
19:00
also want to look at your IO, you
19:03
want to do IO based profiling
19:05
as an example, right? Because usually
19:07
when you look at your IO
19:09
system, like the database, that's, you
19:11
can basically achieve orders of magnitude
19:13
of performance improvement by tweaking your
19:15
SQL queries, right, before you start
19:17
thinking about memory allocations and stuff like
19:19
that. But assuming you
19:21
have sort of removed that part,
19:23
then essentially those sort of profilers
19:26
snapshots give you an indication
19:28
of where you could focus
19:30
on. And here comes the next problem.
19:33
when you attach your profiler:
19:35
you might see lots and lots of
19:38
allocations from different subsystems and
19:40
components on that specific call tree. And
19:42
where should you even start? So
19:46
I usually try to sort of
19:48
apply a combination of,
19:50
I call it the 1% improvement philosophy,
19:53
versus deliberate contextual based
19:55
optimizations that I want
19:57
to do. So
20:00
because for example, I believe that if
20:03
you do enough little performance optimizations
20:05
over time, that's the 1% improvement
20:07
sort of philosophy, then eventually they
20:10
will end up making a big
20:12
impact. And we can see that
20:14
Microsoft applies it as well to
20:16
the .NET runtime, right? They're doing
20:18
lots and lots and lots and
20:20
lots of small changes all
20:22
over the place. And the compounding effect of
20:25
these changes, they essentially are
20:27
massive when you look at them in
20:29
sort of the greatest scheme of things.
20:32
Right. And what profiling tools are
20:34
you using here? Is this just like the built-in
20:36
profiler? Okay. You
20:38
made a joke about me running on Linux, right? So I
20:40
– I knew it
20:42
wasn't a joke. Well,
20:46
so I'm a big fan of the
20:48
sort of JetBrains tools I've been
20:50
using for years. You're not alone. So
20:53
I'm using usually dotTrace
20:56
and dotMemory, sort of those
20:58
two tools. And Rider
21:00
also has some built-in analysis. So for
21:03
example, they do – when you execute
21:05
your tests or you execute your solution,
21:07
they also do some dynamic program analysis
21:09
where they show you sort
21:12
of the allocations that your stuff had,
21:14
or the CPU that it wasted. So
21:17
I try to use those tools, but primarily dotTrace
21:20
and dotMemory to get an
21:22
overview of what is actually
21:24
going on. Right.
21:27
I mean, yeah, there are built-in profilers, but
21:29
if you're willing to pay for one, there
21:31
are better ones. Yeah.
21:34
I mean, to be fair, Visual Studio
21:36
is great, right? There's like Visual Studio
21:38
has, depending on the license. I'm not
21:40
entirely familiar with the licensing terms there,
21:43
but it has great tooling built-in. Or
21:46
if you are sort of very
21:48
advanced and mostly on Windows, you
21:50
can also use PerfView, right? So
21:53
PerfView is a very powerful tool that
21:56
you can use. Although I struggle with
21:58
it a bit, I must say. Every time,
22:00
it's like using WinDbg. Every
22:03
time I use these tools, I have to sort of
22:06
get some cheat sheets onto my machine
22:08
in order to remember the complex commands.
22:12
That's one of the benefits. There
22:14
are tools you need to learn. And I
22:16
mean, I suspect you use
22:18
them more than most people. And if you can't
22:21
keep them in your head, then nobody can.
22:24
I mean, I've done
22:27
a lot of performance tuning over the years and people are
22:29
always surprised. It's like, why are you reading the docs?
22:31
Like, don't you know this? It's like, listen, there's
22:33
a lot of knobs on these things. And if
22:36
you don't go through the steps, you can waste
22:38
a lot of time. Yes. And
22:40
actually wasting a lot of time,
22:42
that's actually a very good sort
22:45
of comment that you made there.
22:47
Because I think even if you
22:49
are more familiar with performance optimization
22:51
and benchmarking, it's like, and
22:53
profiling, at the end of the day, that's
22:55
not my day job. My
22:58
day job is building robust and reliable
23:01
messaging frameworks and middlewares and the
23:03
platform in particular, and
23:06
not doing performance optimizations all day long.
23:08
That's not my job description. And
23:10
I guess that many people that are
23:13
also listening to that podcast have the
23:15
same thing. They would like to dive
23:17
into profiling, benchmarking, performance optimizations,
23:19
but they only have a
23:21
limited budget in order to
23:24
spend on those types of things. And that's
23:26
why I always recommend start
23:28
with a test harness, reproduce
23:31
a scenario, attach your profiler, and then
23:33
use your domain knowledge of the things
23:36
that you're working to basically sift through
23:38
the noise of allocations and
23:40
CPU on the call stack, and
23:42
then figure out, okay, probably here
23:44
is where we can make the
23:46
biggest impact on sort of reducing
23:48
the numbers of CPU cycle spend
23:50
or reducing the numbers of garbage
23:53
allocations that are happening there. But
23:56
sometimes, like I said before, it's also
23:58
where you sift through the noise toward the areas you
24:00
have the most knowledge in and then applying
24:02
the 1% improvement over
24:04
time in order to make things
24:07
better and better and not trying
24:09
to gold plate everything
24:12
out of an existing
24:14
code path. But
24:16
I guess we now have touched a
24:19
little bit on the how would I
24:21
even know where to get started. We
24:23
talked about profiling, we talked about doing
24:25
CPU and memory at least, always get
24:28
two views on the code base. But
24:31
the next thing is then, I mean,
24:33
improvements, of course, that goes
24:36
more into the territory of knowing your
24:38
stack, knowing your language, knowing the libraries
24:40
that you work with, like Carl said,
24:42
also looking for, has the library a
24:44
new release that we can pull in
24:46
or maybe reach out to the maintainers
24:49
and say, hey, by the
24:51
way, we did the profiling snapshots and
24:53
we found out that this library allocates
24:55
that much amount of memory. And
24:58
guess what? When you reach out
25:00
with profiler snapshots to
25:02
third party tooling
25:05
providers, they're like super happy because you're
25:07
then in the 1%
25:10
of customers, and then
25:12
it's like, hey, now we have data from
25:14
the customer, we can see what's actually going
25:16
on. You've now
25:18
described a workload in a meaningful
25:20
way to them. Correct, correct. Right.
25:24
And so, for example, I myself
25:26
have done that as well. I,
25:29
at some point, I stumbled over
25:31
sort of memory inefficiencies in
25:33
the Azure Service Bus SDK. And
25:37
then, I had a
25:39
hunch, I wrote the test harness,
25:41
attached a profiler, and was able to show
25:43
that when you access the body
25:45
of a service bus message, it
25:47
allocates unnecessary memory every time you
25:49
essentially access the body. I was
25:52
able to sort of show the profiler snapshots,
25:54
show the memory. And then,
25:57
because I was lucky, I already knew the library
25:59
a little bit, and I
26:01
was able to contribute a fix for
26:04
that memory allocation problem
26:07
to the Azure service bus.
26:10
Yeah, that's a good way to get an email from Clemens. Yeah,
26:21
I've had comments from Clemens originally
26:23
on some of my pull requests
26:25
as well. Right. And
26:29
therein lies the point, like you're getting into it. And
26:31
of course, the kicker, of course, is that it's open
26:33
source, so you could just contribute a fix. Yes,
26:35
yes, yeah. But I guess, I
26:37
mean, nobody expects you to, right? But
26:40
at least have a memory profile or a
26:42
CPU profile. And like you
26:44
said, Richard, showing what's going on
26:46
in production is a good thing. Yeah, I just
26:48
think it's always challenging to get down to the
26:50
brass tacks like that. To
26:53
me, most of my profiling
26:55
experience has been trying to optimize an e-commerce
26:57
site, where it's like we're
26:59
just, you know, we're running, we're now looking
27:01
at buying more servers because the
27:03
site's so busy. Like an optimization can mean
27:05
a lot of money. And
27:08
the profiler, I was an ANTS
27:10
guy at the time. And
27:12
that was the tool that showed me, I mean, this may
27:14
not have been, it was always that
27:17
balance between this is a very complicated method. And
27:19
so it's consuming a lot of resources. And it's
27:21
a very simple method, but it's called hundreds of
27:23
thousands of times. And so
27:25
the fact that the tool would help sort
27:28
out that weighting of often
27:31
called and so
27:33
worth minute optimizations that will make big
27:36
differences versus rarely called, but complex
27:38
enough that you will get some
27:40
return on that. I
27:43
never worried about optimizing admin calls
27:45
because they just didn't call that often.
27:48
But all of that mainstream shopping
27:50
cart recommendation engine,
27:53
custom render pieces, ad pieces, like those
27:55
are all the things where it's like
27:57
these get called a lot. even
28:00
though they don't look that big and just
28:03
playing with string concatenation, like those kinds
28:05
of things made a huge difference in
28:07
the end. But
28:09
the challenge, I think for a lot of folks is they just
28:11
want to get into the code. In this idea
28:13
of you snap the harness on first and
28:16
get a baseline set of profiles in
28:18
place. Then, as you said, you make
28:20
the hypothesis: if
28:22
we do an optimization here, it'll make a
28:24
difference. Now you go tinker, then
28:28
run the benchmarks again, and then say,
28:30
did we make a difference? And King,
28:33
and if you didn't revert.
28:35
Yes. Because there's
28:37
no performance code I've ever written
28:39
that was easier to read than
28:41
the original. Yes. Ever. Absolutely true.
28:44
I think what you said is
28:46
super crucial. That is the performance
28:48
loop that I apply. When
28:52
you have the harness, and then usually
28:55
that reproduces the scenario. Then like I
28:57
said, you do the improvements. That might
28:59
be several iterations of ideas that you
29:01
tinker around with. Then
29:03
you might execute several benchmarks
29:06
to look at those optimizations
29:08
that you're doing. Maybe
29:10
even several micro-benchmarks measuring little
29:12
improvements in the call stack
29:15
that you came up with
29:17
during that tinkering phase. Then
29:19
at the end, what you do
29:21
is you bring it back into
29:23
that harness. Then you look at
29:26
the end-to-end profile again, where you
29:28
look at again, the CPU and
29:30
memory profile at least, to actually
29:33
see the before and after. Then
29:35
you see on your graph, oh,
29:37
we spent 650 megabytes
29:40
of memory on that specific
29:42
scenario before. Now we are at 600. Now
29:45
you know that you actually have gained
29:47
something. You also have
29:49
the numbers from the benchmarks that
29:52
you run against each individual
29:54
part of the call stack that you
29:57
try to optimize. I
30:00
have one question to you Richard, you said
30:02
you used the ANTS profiler. Do
30:04
you also happen to sort of, because I had a
30:06
period where I used several
30:09
tools because I had sort of, I
30:11
know it when I see it type
30:13
of investigations. So I used several tools
30:16
that had different sort of dashboards and
30:18
overviews that sometimes gave me sort
30:20
of a slightly different view based
30:22
on the preferences of the tooling against the test
30:24
harness and then I was like, oh, there it
30:26
is. Yeah, I
30:28
mean, different problems
30:31
of different spaces. And
30:33
granted, a lot of my experiences are from a while
30:35
ago where there wasn't as many tools as there are
30:37
today. But yeah,
30:40
definitely there's a difference
30:42
between tweaking a piece of code that,
30:44
you know, sits in a call stack
30:46
for a web page and
30:48
understanding a sort of end to end run where
30:51
it's like, oh, the real
30:53
problem here is that there's a repeated
30:56
call to a database enough that it's
30:58
doing a re-authenticate in the middle or
31:00
it's forcing a recompile of a stored
31:02
procedure. By the
31:04
way, on the day you find one of those from
31:06
a method call and you got all the way down
31:08
to, but we call it this many times and so
31:10
it forces recompile and that creates this overhead. Like
31:13
that's a very good day
31:15
because those are hard to
31:17
find. Like just a
31:19
tough list to get to, but you know, your point's well
31:21
taken. Each
31:24
tool provides its own view to
31:26
that and we ended up, I
31:28
think it was a dying trace where we were finally able
31:30
to see, oh, this is
31:32
a multiple database interaction problem
31:35
before we really saw the behavior correctly.
31:39
And guys, I want to pause for just a
31:41
few moments for these very important messages. And
31:48
we're back. It's .NET Rocks. I'm Carl
31:50
Franklin. That's Richard Campbell. Hey.
31:52
That's Daniel Marbach. And we're talking about
31:54
performance, squeezing performance
31:58
out of our applications. And
32:00
Daniel, right before the break, you were going
32:02
to make a point about memory
32:05
allocations. Memory allocations, yeah.
32:07
So what I wanted to say is
32:10
I feel like I need to sort
32:12
of clarify one thing, because I talked
32:14
a lot about sort of memory allocations.
32:16
I also sort of highlighted a little
32:18
bit the CPU stuff, right? But people
32:21
might get sort of the message that all
32:24
that matters is memory allocations, and
32:26
I definitely don't want to say
32:28
it that way. Because I just
32:30
feel like for me, I've always
32:32
sort of started looking at memory
32:34
allocations, because I've seen that these
32:36
are the areas in the applications that I
32:38
worked with, in the systems that I worked
32:41
with, where I can make sort of the
32:43
biggest impact to reduce the GC overhead without
32:46
sort of going into sort of
32:48
the algorithmic complexity and stuff like
32:50
that, that sometimes comes with tweaking
32:53
algorithms where CPU cycles are spent.
32:55
And I remember, I don't know the
32:58
exact quote, but David Fowler once sort
33:00
of tweeted, or is it still
33:02
called tweet? I don't know. But he
33:06
shouted into the interwebs that essentially
33:08
apparently MemoryStream.ToArray and other
33:10
sorts of ToArray are still the
33:13
biggest source of memory allocations in
33:15
.NET systems out there, which
33:18
kind of shows how important sort
33:20
of thinking about memory
33:22
allocations actually is. And certainly in this
33:24
case of scale, that
33:26
when we're dealing with lots of iterations, again, I
33:28
come from the e-commerce space, the other thing I
33:31
ran into was we typically
33:33
had to build our load tests
33:35
to run for longer, because
33:37
you needed to get into
33:39
multi-generational memory to
33:42
actually understand behavior
33:44
in production. That
33:46
lighting up a load test that ran for 10
33:49
minutes and wrapped up didn't
33:51
give you the same results
33:53
as what was actually happening with your server, which
33:55
was two days in, because the...
34:00
The way that memory gets allocated over multiple
34:02
generations became a huge part of the problem.
34:04
Not that we had to wait two days,
34:06
but often we had to go a couple
34:08
of hours before you actually
34:10
get that Gen 2, Gen 3 reshuffling of
34:13
memory, which is to say, it's fully
34:16
fragmented and restacked memory a few times. And
34:18
that's when those orphan long
34:20
duration objects created problems for us. Like
34:22
there's this chunk of memory in the
34:24
middle of the pool and it's been
34:26
there for two hours and what the
34:28
hell is that? Because it's screwing up
34:31
every GC. But
34:33
you only found that from these
34:35
longer runs. Daniel, is there anywhere
34:37
that replacing a task with a
34:39
value task will be a
34:41
problem? I mean, that's a common
34:43
way to reduce Gen 0 allocations
34:45
is to use value tasks. Value
34:48
task is definitely an
34:50
interesting sort of a
34:52
newer-ish addition to .NET.
34:56
I know that this is a little
34:58
bit of a controversial topic because some
35:00
people are like, yeah, you should be
35:02
using value task everywhere. I'm
35:04
more sort of in the camp of use
35:06
it where it was designed for, which is
35:09
for IO bound paths where you
35:11
essentially have the majority of the
35:13
calls or sort of getting a
35:15
cached value. And
35:18
then only like, I don't know, out
35:20
of 10 calls, maybe one
35:22
or two are actually doing the actual
35:24
I/O operation. That's where I feel like value
35:27
task is sort of a neat sort
35:29
of approach to sort of make
35:31
sure that you don't have that
35:33
many allocations anymore.
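A minimal sketch of that cached-value pattern (the names are invented for illustration): the common case completes synchronously without allocating a Task, and only the rare miss actually awaits I/O.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class CachedLookup
{
    private readonly ConcurrentDictionary<string, string> _cache = new();

    public ValueTask<string> GetAsync(string key)
    {
        // Hot case, say 9 out of 10 calls: synchronous completion, no Task allocated.
        if (_cache.TryGetValue(key, out var value))
            return new ValueTask<string>(value);

        // Rare case: fall back to the real asynchronous I/O.
        return new ValueTask<string>(FetchAndCacheAsync(key));
    }

    private async Task<string> FetchAndCacheAsync(string key)
    {
        var value = await FakeIoAsync(key); // stand-in for a database or HTTP call
        _cache[key] = value;
        return value;
    }

    private static Task<string> FakeIoAsync(string key) =>
        Task.FromResult(key.ToUpperInvariant());
}
```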
35:35
But to be honest, I think
35:38
in most systems, when you have
35:40
to sort of do that type
35:42
of optimizations, then you're already super
35:44
far because I bet many
35:46
sort of applications, systems that
35:48
are running on top
35:50
of maybe asp.net Core or some others,
35:53
they have other problems like MemoryStream.ToArray,
35:56
unnecessary byte
35:58
array allocations, stringifying stuff
36:00
that doesn't need to be stringified, all
36:03
sorts of that stuff, instead of really thinking
36:05
about switching from task to value
36:08
task. How about switching
36:10
from Web API to gRPC? That's
36:17
a good one. Yeah,
36:20
I mean, even for example,
36:22
switching from Newtonsoft.Json
36:24
to System.Text.Json, right?
36:26
Or using source generated... I'm
36:29
sorry, which one's faster? Are you going to touch that?
36:32
No, let's not go there. Let's not. Well,
36:36
I mean, you open the door. You need
36:38
to walk through it. I
36:41
would much rather talk about, I don't know, Swiss
36:43
cheese or something like that than talking about that.
36:45
No. Well, I
36:47
mean, we don't want the listeners to
36:50
get the impression that just by
36:52
switching from Newtonsoft.Json to
36:54
System.Text.Json, you're going to
36:56
find a performance improvement. Did
36:58
you really mean that? So
37:00
we actually have seen
37:03
when we switched from Newtonsoft.Json
37:05
across the board,
37:08
quite hefty improvements,
37:10
especially when we combined System.Text.Json,
37:14
to your last
37:16
point, with source-generated approaches,
37:18
where you also sort of
37:20
are more AOT friendly, as
37:22
an example, you have faster
37:24
startup times on Azure Functions
37:26
or AWS Lambda. So
37:29
there is definitely merit
37:31
to that.
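For reference, a minimal sketch of the source-generated System.Text.Json approach mentioned here (the Order type is invented): the serializer uses compile-time generated metadata instead of runtime reflection, which is what helps AOT scenarios and cold starts.

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

public record Order(int Id, decimal Total);

// The source generator fills in this partial class at compile time.
[JsonSerializable(typeof(Order))]
public partial class OrderJsonContext : JsonSerializerContext { }

public static class OrderSerializer
{
    public static string Serialize(Order order) =>
        // Uses the generated metadata, no reflection at runtime.
        JsonSerializer.Serialize(order, OrderJsonContext.Default.Order);
}
```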
37:33
But I don't want to say Newtonsoft.Json
37:35
is bad. I think it
37:37
has its place. Well, the same guys
37:39
working on System.Text.Json. I mean,
37:41
it's James Newton-King. Yeah, exactly. And
37:44
gRPC web, right? Yeah, exactly. Yeah.
37:46
And I think being part of that newer
37:48
version of .NET and integrated with that team,
37:51
there's a lot of performance people there. You
37:53
just have the resources. If
37:55
the eye of Sauron that is Stephen Toub
37:57
is paying attention to your code. Your
38:00
code's going to be faster. Yeah.
38:03
That guy's amazing. But again, I mean, if
38:06
you have your harness and you
38:08
attach it and do some profiling and you find
38:10
out that the serialization subsystem
38:12
is actually the problem that
38:14
makes everything so much slower, then that
38:17
change makes sense. But I guess there
38:19
are also other areas of
38:21
improvement that you can sort of leverage. But
38:24
I would like to, if you don't mind, I
38:26
would like to switch a little bit into sort
38:28
of the benchmarking stuff if we still have some
38:30
time. Sure. Is that okay for
38:32
you? Yeah, go for it. Because Richard and I,
38:34
we talked about this as well recently
38:38
in Warsaw and in
38:40
Porto about benchmarking stuff. I think that was the
38:42
point where I said, all right, we need to
38:44
make a show about this. Yeah.
38:49
So one of the things that I
38:52
found really interesting is when the
38:54
first time I sort of got
38:56
into contact with benchmarking was I
38:59
read lots of blog posts about BenchmarkDotNet
39:01
and sort of I was looking at
39:03
sort of these benchmarks out there.
39:05
It's like, yeah, it's easy. It's like
39:08
a unit test, right? It's like I've
39:10
written plenty of unit tests with
39:12
xUnit, God forbid, MSTest or whatever.
39:14
Right. So it's like, I know this. It's
39:16
not going to be difficult. But
39:18
I was quite surprised that essentially
39:21
you're required
39:23
to have a different understanding in order
39:25
to have a good benchmark. For one,
39:28
a unit test has two
39:30
states, right? It's either red or it's
39:32
green. So it passed or
39:34
failed. Passed or failed, right? But a
39:36
benchmark is something really different because what
39:38
a benchmark does, it essentially, especially when
39:41
you use a benchmark on that, by
39:43
the way, excellent tool. Shout
39:45
out to all the people that have
39:47
been involved there. Indeed.
39:49
It's like you are executing
39:52
a given scenario under hundreds
39:54
and thousands of iterations. All
39:57
right. And so what it means is
39:59
there is no pass or fail. You
40:01
basically get sort of standard deviations, you
40:03
get GC, you measure
40:06
the GC involvement and stuff like
40:08
that. So that's what the
40:10
benchmark is, right?
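A minimal BenchmarkDotNet sketch (the string-building comparison is an invented example, echoing the before-and-after comparison mentioned a bit later): [MemoryDiagnoser] adds the allocation and GC columns alongside the timing statistics described here.

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds allocated bytes and GC counts to the report
public class StringBenchmarks
{
    [Benchmark(Baseline = true)]
    public string Concat()
    {
        var s = "";
        for (int i = 0; i < 100; i++) s += i; // allocates a new string each pass
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < 100; i++) sb.Append(i);
        return sb.ToString();
    }
}

public static class Program
{
    // BenchmarkDotNet runs many iterations and reports mean, error,
    // standard deviation, and memory, not a pass/fail result.
    public static void Main() => BenchmarkRunner.Run<StringBenchmarks>();
}
```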
40:13
But it doesn't just start with understanding what
40:15
the benchmark is, it also goes
40:17
to how should I even
40:19
put my code into a benchmark harness. And
40:23
that turned out to be the really tricky
40:26
part because I was only reading about this.
40:28
Oh, here is a before and
40:30
after comparison between string concatenation, a
40:33
string builder and the value string
40:35
builder, which is the fastest, right?
40:37
Super easy scenario. But when you're
40:40
actually taking your production
40:43
system's code that you had under a test
40:45
harness, and you sort of filtered in and
40:47
say, okay, there is a bunch of things
40:49
that we need to improve, and
40:52
then you want to measure the before and
40:54
after, how do you take this code, which
40:56
is probably not just a public method on
40:58
a static class that you can take and
41:01
then put into a benchmark, how do you
41:03
actually take all the, sort of make sure
41:05
that you can measure what's going on, remove
41:07
all the side effects that you don't want
41:09
to have in your code, so
41:12
that you have reliable sort of benchmarking
41:14
results and that you can compare the
41:16
before and after. And that was the
41:18
thing that I struggled with tremendously.
41:21
And I found my way sort
41:24
of to do it, especially sort of
41:26
when I was still sort of growing
41:28
before becoming performance aware. And
41:32
that was essentially I went with a very
41:34
simple approach. I essentially took sort of
41:36
the components that were sort of on that
41:38
code path. I usually copy
41:40
pasted the code into a
41:42
sort of a dedicated source
41:44
repository, stripped away all
41:47
the unnecessary stuff. For example, when I had
41:50
an IOC container in place, I
41:52
removed it and just newed up the
41:54
things that I wanted to new up. Or
41:56
if I had IO-bound stuff and I didn't want
41:58
to measure the IO-bound stuff, I
42:00
replaced it with a Task.CompletedTask, stuff
42:03
like that, right? And then I had sort of
42:05
a dedicated code base
42:07
that was in a specific state,
42:09
in a controllable state, where I knew
42:11
all the noise that I
42:13
don't want to sort of measure is
42:15
gone, and now I can focus on
42:18
that specific sort of benchmark
42:20
scenario that I'm looking at.
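A minimal sketch of that isolation step (interface and names invented for illustration): replace the real I/O dependency with a stub that returns Task.CompletedTask, so the benchmark measures only the CPU and allocation behavior you care about.

```csharp
using System.Threading.Tasks;

public interface IMessageSender
{
    Task SendAsync(byte[] payload);
}

// Stub used in the stripped-down benchmark code base instead of the real
// transport, so I/O time doesn't drown out the effect you're measuring.
public sealed class NoOpSender : IMessageSender
{
    public Task SendAsync(byte[] payload) => Task.CompletedTask;
}

public sealed class Pipeline
{
    private readonly IMessageSender _sender;

    // No IoC container in the stripped-down copy; just new things up.
    public Pipeline(IMessageSender sender) => _sender = sender;

    public Task PublishAsync(byte[] payload) => _sender.SendAsync(payload);
}
```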
42:24
But it is the unit-test-style test,
42:26
effectively. It was like, just bench this
42:28
piece, so that I know
42:30
I've gotten results from that. I'd
42:32
also say, as a corollary to this, that
42:36
you do get decreases in performance
42:38
with later versions because you're doing
42:40
more Mm-hmm. And
42:42
so often it's like, you know, the whole conversation
42:44
with the PM.
42:46
Because I've gotten into a situation where we literally
42:48
had benchmarks as part of the CI CD
42:50
pipeline. Now they're coming back and
42:52
saying, hey, the new version is slower
42:55
than the old version. Like, the old version
42:57
didn't do all these things you asked for.
42:59
Like, yeah, this is the overhead for the
43:01
feature you asked for. And
43:03
that was getting into SLA rules and things where
43:05
it's like, the customer expects us to deliver this
43:08
in X fractions of a
43:10
second. It's like, well, we're getting closer to the limit
43:12
because we were asked to do
43:14
more. Update the SLA. But
43:16
it's interesting that you mentioned that because I
43:18
think one of the sort of benefits
43:21
of the approach that I just described is that
43:23
you can easily get started with, not even
43:26
thinking about, how
43:28
can we actually capture regressions, or
43:31
where should we execute those tests?
43:33
What is a reliable CI CD
43:35
environment in order to have
43:37
sort of measurable and
43:40
statistically relevant results
43:42
from this environment? Right. But
43:44
what you're essentially talking about is
43:46
sort of more regression testing as well.
43:49
And there
43:51
is actually a lot of great guidance
43:53
a little bit hidden in the dotnet
43:55
performance repository I think
43:57
it was also driven by Adam Sitnik and
44:00
some other people from his team. I
44:03
actually talked to him a little
44:06
bit over the course of, I was preparing
44:08
for a talk about this specific topic. I
44:10
talked to him a lot about it
44:12
and they have a tool that
44:14
essentially allows you, when using
44:16
BenchmarkDotNet, what you can do is
44:18
you can actually execute BenchmarkDotNet against
44:20
a specific version of a benchmark
44:22
and a specific version of the
44:25
code. You can store the artifacts
44:27
and then you can execute it again against
44:30
the changed version, store the artifacts
44:32
and then you can use the compare tool
44:34
that they have to sort of create
44:37
a diff between the before and the
44:39
after version and then you can define
44:42
the threshold that sort of determines when
44:45
it was an unacceptable sort of
44:47
regression. Then for example, you
44:49
can fail your CI CD
44:52
pipeline because of
44:54
performance. But CI CD is also interesting
44:58
because Andrey Akinshin wrote
45:00
an excellent blog post about
45:03
this topic because he did this part of
45:05
sort of investigations. He looked for example at
45:08
GitHub action runners and
45:11
the result there is that
45:13
you essentially cannot use
45:15
GitHub action runners to
45:18
actually do regression testing
45:20
for performance because they're
45:22
just so unreliable. You
45:25
basically see up to a three
45:28
times difference in memory and CPU
45:30
between builds. It's
45:41
insane. Wow, that's crazy.
45:43
Yeah, it's
45:46
quite fascinating. It just means if you're
45:48
going to benchmark this way, you have
45:50
to control your CI CD pipeline. Correct.
45:53
Then you have to have dedicated hardware
45:55
that you sort of put somewhere or
45:57
rent basically bare metal hardware,
46:00
where you then have to run your own infrastructure
46:02
that sort of allows you to sort of
46:05
put those tests onto that reliable hardware.
46:07
Sure. I mean,
46:09
you're saying the cloud's not an option here?
46:12
Well, it definitely is an option,
46:14
right? Yeah, it's just that you can't
46:16
use the built-in infrastructure for this you have
46:18
to set it up you have to write
46:20
your own YAML. I've
46:25
done this. Like, one of the issues we
46:27
were dealing with is we were up to I don't know
46:29
three or four thousand different web tests we
46:31
wanted to do and they took a day
46:34
and we needed under 15 minutes and so
46:37
we would light up 20 instances of the
46:39
site in the cloud and parallelize all the
46:41
tests. My goal was always by the time
46:43
you got back to your desk from coffee
46:46
the test results were back, and if you had
46:48
failed, right,
46:50
that was fine, because it's
46:52
still in your head. What
46:55
we realized was that every
46:57
minute that goes by after they've pushed, the
47:01
code is leaking from their mind, and
47:03
if it's a day, it could be
47:05
anybody's. Like, they're gonna have to start over picking it
47:07
back up again but if we could
47:09
get it to them in under an hour
47:12
like 50 minutes was the magic number they
47:14
knew exactly what it was. Oh, I know what
47:16
that error is and off they go again
47:18
like it just saves so much remediation. Yeah
47:21
so I jacked up the test bill because it
47:23
saved the dev bill. Amazing
47:25
yeah so and again I think
47:27
it's all when we talk about
47:29
Azure DevOps Runners or GitHub Action
47:31
Runners usually there's a shared sort
47:33
of infrastructure that you
47:35
have there. Yeah, so
47:37
it's just not good for benchmarking;
47:39
you don't know what performance you're getting.
47:41
Yeah, no repeatable results, which oddly
47:43
are necessary. But
47:46
if you're talking pass/fail, that's fine, you
47:48
don't care if it ran twice as long
47:50
half as long it was just pass fail
47:52
for functionality that's just fine. Correct. But
47:54
again my message here is I
47:57
want to hammer this home: I think becoming
48:00
performance aware is a journey, and
48:02
you don't need to basically end up,
48:04
although after a while
48:06
of doing this you will end up eventually there,
48:08
at what we just talked about, with having
48:10
your own hardware because you need to
48:13
do regression testing. But already just having your
48:15
harness in place, understanding the profilers,
48:17
so that you can zoom in on where you should
48:19
actually make those sort of performance improvements, then
48:21
using a tool like BenchmarkDotNet, which
48:24
saves you a lot of headaches because it
48:26
sort of decides
48:28
to sort of mitigate the most common
48:30
stuff you didn't have on the radar yourself.
48:33
Correct. So there's a bunch of smart
48:35
people working on it, correct? Yeah, and it mitigates a
48:37
thousand gotchas. And then you can
48:39
start. You can maybe copy-paste some
48:41
code, at the beginning isolate the things
48:43
that you wanna do, get started there,
48:46
and then at a later point in
48:48
time, when your company is already
48:50
more performance aware, you can start
48:52
slowly introducing sort of more sort
48:54
of mature ways of actually doing performance
48:57
testing and regression testing all
48:59
the way along. Yeah, but you know,
49:01
pioneers
49:03
like you are pretty far down the
49:05
path at that point. Yes. To
49:08
me, the big thing here is
49:10
when does performance creep into the
49:12
requirements? Because a
49:14
lot of folks, you know, in the early days
49:16
of a project, it's barely even on the radar,
49:19
right? But, you know, to actually
49:21
file performance problems as a bug, to
49:23
get them on the sprint, to
49:26
be part of the conversation at all,
49:28
like, that's already, that's arguably the starting point
49:30
of any of this path, is
49:33
that it's like, it's bubbled up to
49:35
the point where the business cares about it.
49:37
In other words, the line I used to
49:39
use when I did these talks was,
49:42
performance is like air: you only care about
49:44
it when you don't have any air.
49:46
I'm a big believer in having
49:49
nonfunctional requirements in the
49:51
design and the architecture, sort of
49:53
built in, and having explicit discussions about
49:55
nonfunctional requirements, and also prioritizing them with
49:58
your business stakeholders in
50:00
order to make the right trade-offs. Well, and
50:02
the sneaky part about that is let them
50:05
tell you it isn't important. So later when
50:07
they decide it is important, but you said...
50:10
Because again, performance
50:12
is one of those things where nobody cares about
50:14
it until they do. If
50:17
I suck the air out of the room, you're suddenly really interested
50:19
in air. Absolutely. But...
50:22
And it's the same thing. It's like you never
50:24
thought about... You can debate all day our render
50:26
time of two seconds versus four seconds. I'm sorry
50:28
I'm so web-centric on this stuff. And
50:31
that's not a big deal. Everybody knows that 30
50:33
seconds is bad. And
50:37
so that's sort of these kinds of thresholds. And
50:39
it's hard to talk about that out of
50:43
context. You kind of have to
50:45
make a slow page for everybody
50:47
to start getting, hey, slow page
50:49
bad. So set requirements for minimum
50:51
performance and figure out where they are. And the
50:53
good news is, of course, there's lots of written
50:55
documentation on this. You don't have to invent the
50:57
field. And one of the things that I also
51:00
really appreciate by doing these types
51:02
of investigations is... And
51:04
small improvement is you learn a ton
51:06
about the code path in question. And
51:09
that gives you a lot of insights
51:11
into a potential redesign in the future. Because
51:13
so many people are just throwing out there, I'll
51:15
just rewrite this. But they have to make it
51:17
faster. You're not going to make it
51:19
better if you don't know what they like. I
51:23
also find that most folks who spent time
51:25
in the tuning part understand the
51:27
execution path, like the behavior of software in a
51:29
deeper level than a lot of folks
51:31
that wrote it in the first place. Because often you're
51:34
just trying to get to the deliverable. Does
51:36
the feature do the feature requirements that were there?
51:38
I know we only have a few
51:41
minutes left. But at what point in
51:43
your investigation do you
51:45
consider re-architecture, which
51:47
is obviously the
51:50
most expensive and risky
51:52
way to improve performance? But
51:56
as you're going through a project and
51:58
looking... or a tool
52:00
or something and looking at every
52:02
little thing that you can squeeze out. And
52:05
something jumps out at you, oh, well, this
52:08
should be refactored or maybe even
52:11
completely re-architected. How
52:15
often does that happen? It's a difficult
52:17
question to answer generically, but I can
52:19
give you a concrete example. So I
52:21
worked a lot, contributed a
52:23
lot to the Azure Service Bus .NET SDK.
52:26
And essentially, I think it was around sort
52:29
of 20 pull requests on
52:31
sort of the path where
52:34
the Azure Service Bus SDK sort of takes, you
52:36
get the byte arrays from Azure Service Bus
52:38
and hands them over to Azure Functions or to
52:40
your code that is running. Where I
52:43
did lots of tiny improvements until
52:46
I actually understood sort of how
52:48
the body management of the byte
52:50
payloads actually really works. And
52:53
then, only then, I came up with a
52:56
better idea how to sort of manage
52:58
that body work from different aspects.
53:00
And that then led to sort
53:02
of even more orders of magnitude
53:05
of improvement, how the body is
53:07
sort of managed and less
53:09
allocations, more efficient in CPU cycles.
53:11
But it was like, I
53:14
think, okay, I contributed in my free
53:16
time, whatever that means these days when
53:18
you're constantly online, right? But I contributed
53:20
in my free time, I guess, over
53:23
a year to this code base until
53:26
together with the team, we realized, oh, there
53:30
are actually things we can sort
53:32
of really refactor and make things even faster.
53:34
So I get, I'm actually, I have the
53:36
tendency to go a very long time on
53:39
a specific code path before
53:41
I even reconsider re-architecting or
53:43
redesigning. Yeah. Of course, small improvements. That's
53:45
what I figured too. Can also sort
53:47
of sometimes mean you're not
53:49
newing up something, you're making something a
53:51
singleton that previously wasn't a singleton, something
53:53
like that, right? But... And
53:56
if it's still not performing enough,
53:58
that's when you think about... Rearchitecture.
54:00
Correct. Because what's also great
54:03
is when it's running in production, it's
54:05
making money, right? And it gives you
54:07
insight about what it's doing. And
54:09
when you're architect and redesigning, for that
54:12
period of time, you have no validation
54:14
whether the stuff that you're changing towards
54:17
will work. And with the small
54:19
improvements, you have constant feedback loop. And
54:22
I think I feel that's super, super
54:24
important. So
54:26
what's next for you, man? What's in your inbox? Well,
54:30
Christmas time, of course. Drinking
54:32
way too much beer probably over Christmas
54:34
time. A delicious stout or something like
54:36
that. What is this
54:39
'too much'? But
54:45
next year, I will be at .NET Days
54:49
Romania, at the conference in Iași. I
54:51
will be delivering a workshop
54:54
about reliable messaging
54:56
in Azure. Deep diving into Azure
54:58
Service Bus, storage queues, event hubs
55:00
and event grids. That's going to
55:02
be really interesting. And
55:05
what else? I don't know yet from
55:07
a conference perspective. But definitely
55:10
increase a little bit more my contributions
55:12
to open source stuff. Because I still
55:14
have a few ideas. Good.
55:19
And your open source libraries. Yeah.
55:22
Yeah, it sounds good. Well, Daniel, thanks for spending this
55:24
hour with us. Great.
55:26
Thank you. All right. We'll
55:29
see you next time on .NET Rocks. .NET
55:52
Rocks is brought to you by Franklins.Net
55:54
and produced by Pwop Studios. A
55:57
full service audio, video and post production
55:59
facility, located physically in New
56:01
London, Connecticut, and of course in the
56:03
cloud. Online at
56:06
pwop.com. Visit our
56:08
website at dotnetrocks.com for RSS
56:10
feeds, downloads, mobile apps, comments,
56:13
and access to the full
56:15
archives going back to show
56:17
number one recorded in September
56:19
2002. And make sure you
56:21
check out our sponsors. They
56:23
keep us in business. Now
56:26
go write some code. See you next time.