Sesse__ 7 months ago

> E.g. you could have done 10s or 100s of "small optimizations" but yet there is no measurable impact on the E2E runtime performance.

My experience actually diverges here. I've had cases where I've done a bunch of optimizations in the 0.5% range, and then when you go and benchmark the system against the version from three months ago, you actually see a 20% increase in speed.

Of course, this is on a given benchmark, which you have to hope is representative; it's impossible to say exactly what every user's workload is going to look like in the wild. But if you accept that the goal is to do better on a given E2E benchmark, it absolutely is possible (and again, see SQLite here). You do sometimes have to be able to distinguish between hope and what the numbers are telling you, though; it really sucks when you have an elegant optimization and you just have to throw it in the bin after a week because the numbers just don't agree with you. :-)
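
To make the compounding concrete (illustrative arithmetic, not measured data): independent speedups multiply rather than add, so roughly forty 0.5% wins are enough to reach 20% overall. A minimal sketch:

    #include <cstdio>

    int main() {
        // Each independent win makes the program 0.5% faster;
        // wins compound multiplicatively, like compound interest.
        double speedup = 1.0;
        for (int wins = 1; wins <= 40; ++wins) {
            speedup *= 1.005;
            if (wins % 10 == 0)
                printf("%2d wins -> %4.1f%% faster\n",
                       wins, (speedup - 1.0) * 100.0);
        }
        // Around 37 wins of 0.5% each already pass 20% overall.
        return 0;
    }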

menaerus 7 months ago

> My experience actually diverges here. I've had cases where I've done a bunch of optimizations in the 0.5% range, and then when you go and benchmark the system against the version from three months ago, you actually see a 20% increase in speed.

Yeah, not in my experience, really. First, I don't know how to measure reliably at 0.5% resolution. Second, this would imply that year over year we should be able to see [~20, ~20+x]% runtime improvement in the software we are working on, and this doesn't resemble my experience at all - it's usually the other way around, and it's mostly about "how to add a new feature without making the rest of this ~5 MLoC codebase regress". Big optimization wins were quite rare.
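
To illustrate the measurement problem (hypothetical numbers; run_benchmark is an invented stand-in, not a real harness): with typical 1-2% run-to-run noise, the standard error of the mean over a realistic number of runs is the same order as the 0.5% effect you are trying to detect.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Hypothetical stand-in for one E2E benchmark run: ~100s nominal
    // runtime with 1.5% run-to-run noise (invented numbers).
    double run_benchmark(std::mt19937& rng) {
        std::normal_distribution<double> dist(100.0, 1.5);
        return dist(rng);
    }

    int main() {
        std::mt19937 rng(42);
        const int n = 30;
        std::vector<double> samples(n);
        for (double& s : samples) s = run_benchmark(rng);

        double mean = 0.0;
        for (double s : samples) mean += s;
        mean /= n;

        double var = 0.0;
        for (double s : samples) var += (s - mean) * (s - mean);
        var /= (n - 1);

        // To resolve a 0.5% change you want the standard error of
        // the mean well below 0.5% of the mean; here it is on the
        // order of 0.3%, so a 0.5% effect is under two sigma even
        // after 30 full benchmark runs.
        double sem = std::sqrt(var / n);
        printf("mean=%.2f  sem=%.2f (%.2f%% of mean)\n",
               mean, sem, 100.0 * sem / mean);
        return 0;
    }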

Amdahl's law says that the "overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used", so throwing a bunch of optimizations at the code does not, in most cases, result in an overall improvement. I can replace my notorious use of std::unordered_map with absl::flat_hash_map and still see no improvement at all.
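
Putting illustrative numbers on that (the 2% fraction is invented): if the hash map only accounts for a small slice of total runtime, Amdahl's law caps the overall win at that slice, no matter how much faster the replacement is.

    #include <cstdio>

    // Amdahl's law: overall speedup when a fraction p of runtime
    // is made s times faster.
    double amdahl(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main() {
        // If hash-map operations are 2% of total runtime (invented
        // fraction), even a much faster map barely moves the needle.
        printf("2%% of runtime, 2x faster:   %.3fx overall\n",
               amdahl(0.02, 2.0));    // ~1.010x
        printf("2%% of runtime, 100x faster: %.3fx overall\n",
               amdahl(0.02, 100.0));  // ~1.020x
        return 0;
    }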

> it really sucks when you have an elegant optimization and you just have to throw it in the bin after a week because the numbers just don't agree with you. :-)

It really does, and I've been there many times. I have, however, learned to understand this as "I have code that I thought would improve our runtime, but I found no signal to support my theory". That automatically makes such changes difficult to merge, especially considering that most optimizations don't follow "clean code" practice.

Sesse__ 7 months ago

I see Amdahl's Law as an opportunity, not a limit. :-) If you optimize something, the remainder is now even more valuable to optimize, percentage-wise. In a way, it's like compound interest.
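
A toy illustration of that framing (invented 80/20 split): once the big part shrinks, the very same optimization of the remainder buys a much larger percentage of the total.

    #include <cstdio>

    int main() {
        // Toy program: part A takes 80s, part B takes 20s.
        double a = 80.0, b = 20.0;

        // Halving B up front: 100s -> 90s, about an 11% speedup.
        printf("halving B up front: %.1f%% faster\n",
               100.0 * ((a + b) / (a + b / 2) - 1.0));

        // After a 4x win on A, B is half of the remaining 40s, so
        // the same halving of B is now worth about 33%.
        a /= 4.0;
        printf("halving B after A:  %.1f%% faster\n",
               100.0 * ((a + b) / (a + b / 2) - 1.0));
        return 0;
    }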