RedNifre 4 days ago
Okay, but how about checking the 99.9% policy first, and if it doesn't match, run Ken's solution?
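Sketched in Python (the predicate and handler names here are made up, not from the story), that's just a predicate-first dispatch:

    # Hypothetical sketch: check the cheap 99.9% case first,
    # fall back to the clever algorithm for the rest.
    def matches_common_policy(record) -> bool:
        return record.get("type") == "common"   # covers ~99.9% of inputs

    def fast_path(record):
        return "cheap answer"

    def kens_solution(record):
        return "clever, expensive answer"       # the optimized 0.1% path

    def handle(record):
        if matches_common_policy(record):
            return fast_path(record)
        return kens_solution(record)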
fifilura 4 days ago
You'd be better off with some stupid code that junior devs (or Grug-brained senior devs) could easily understand for the 0.1% cases.
TrapLord_Rhodo 4 days ago
That's what he said they did. That's the joke.

> so they changed the existing code to just check that predicate first, and the code sped up a zillion times, much more than with Ken's solution.

But since you've now spent all this time developing some beautiful solution for the 0.1% of the actual data flow, it's not the best use of dev time, essentially.
lispitillo 4 days ago
If a case doesn't match, then speeding up the remaining 0.1% is not going to change the overall speed much. Hence, a non-optimized algorithm may be enough. But when speed and speed variance are critical, optimization is a must.
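Back-of-the-envelope, with made-up costs: even if the rare path is 10x as expensive and you speed it up 100x, the overall average barely moves when 99.9% of cases take the fast path.

    # Illustrative numbers only. Fast path handles 99.9% of cases at cost 1;
    # the slow path costs 10x. How much does optimizing the slow path help?
    fast_share, fast_cost, slow_cost = 0.999, 1.0, 10.0

    def avg_cost(slow_speedup):
        return fast_share * fast_cost + (1 - fast_share) * slow_cost / slow_speedup

    print(avg_cost(1))     # baseline average: ~1.009
    print(avg_cost(100))   # slow path 100x faster: ~0.999, only ~1% better overall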
hinkley 4 days ago
The devil is in the details.

A pair of my coworkers spent almost two years working on our page generation to split it into a static part and a dynamic part, similar to Amazon's landing pages: a static skeleton, but lots of lazy-loaded blocks. When they finally finished it, the work was mediocre for two reasons.

One, they ended up splitting one request into several (but usually two), and they showed average response times dropping 20% but neglected to show that the overall request rate also went up by more than 10%. They changed the denominator. Rookie mistake. (There's a back-of-the-envelope version of this at the end of this comment.)

The second reason takes some explaining. Meanwhile, one of my favorite coworkers smelled a caching opportunity in one of our libraries that we use absolutely everywhere, and I discovered a whole can of worms around it. The call was too expensive, which was most of the reason a cache seemed desirable. My team had been practicing Libraries Over Frameworks, and we had been chipping away for years at an NIH framework that a few of them, and a lot of people who had since left, created in-house. At some point we lost sight of how we had sped up the "meat" of workloads at the expense of higher setup time initializing these library functions over, and over, and over again.

Example: the logging code depended on the feature toggle code, which depended on a smaller module that depended on a smaller one still. Most modules depended on the logging and the feature toggle modules. Those leaf-node modules were being initialized more than fifty times per request.

Three months later I had legitimately cut page load time by 20% by running down bad code patterns. A third of that gain was two dumb mistakes. In one, we were reinitializing a third-party library once per instantiation instead of once per process (also sketched below).

So on the day they shipped, they claimed a 20% response reduction (which was less than the proposal expected to achieve), but a lot of their upside came from avoiding work I'd already made cheaper. They missed that the last of my changes went out in the same release, so their real contribution was 11%, but because of the extra requests it only resulted in a 3% reduction in cluster size, after three man-years of work. Versus 20% and 7% respectively for my three months of work, spread over five.

Always pay attention to the raw numbers and not just the percentages in benchmarks, kids.
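The back-of-the-envelope version of the denominator trick, using the 20%-latency / 10%-more-requests figures from above (the absolute numbers are made up):

    # Average latency dropped 20%, but the split page issues ~10% more requests.
    # Cluster size tracks total work, not average latency.
    old_avg_ms, old_requests = 100.0, 1_000_000   # hypothetical baseline
    new_avg_ms = old_avg_ms * (1 - 0.20)          # "20% faster" per request
    new_requests = old_requests * 1.10            # ...but 10% more requests

    old_work = old_avg_ms * old_requests          # 100,000,000 ms of work
    new_work = new_avg_ms * new_requests          #  88,000,000 ms of work
    print(f"real reduction in total work: {1 - new_work / old_work:.0%}")  # 12%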
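And a minimal sketch of the initialization mistakes (Python for illustration; the class and function names are hypothetical). The same once-per-process memoization addresses both the per-instantiation reinit and the leaf modules being set up dozens of times per request:

    import functools

    class ExpensiveClient:
        """Stand-in for a third-party library with costly setup."""
        def __init__(self):
            pass  # imagine config parsing, connection pools, etc.

    # Buggy pattern: every consumer pays the setup cost again.
    class BuggyLogger:
        def __init__(self):
            self.client = ExpensiveClient()   # once per instantiation

    # Fixed pattern: initialize once per process and share the instance.
    @functools.lru_cache(maxsize=None)
    def get_client() -> ExpensiveClient:
        return ExpensiveClient()              # runs only on the first call

    class FixedLogger:
        def __init__(self):
            self.client = get_client()        # cached thereafter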