▲ | vlovich123 | 5 days ago
Hyperoptimizing for the fast path today and ignoring that hardware and usage patterns change is the reason modern software is so slooow :) A more robust strategy would at least be to check whether the rule is the same as the previous one (or use a small hash table) so that the system is self-healing. Ken’s solution is at least robust, and for that property alone I would prefer it: it’s just as fast but doesn’t have weird tail latencies, since requests outside your cached distribution are as fast as the ones in it.
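A minimal sketch of the one-entry-cache idea, with made-up names (`rule_t`, `find_rule`, `rule_matches`) since I don't have the original code in front of me:

```c
/* Illustrative only: rule_t, find_rule() and rule_matches() are
 * hypothetical names, not the article's actual code. */
#include <stddef.h>

typedef struct rule rule_t;

rule_t *find_rule(const char *request);          /* full (slow) lookup */
int     rule_matches(const rule_t *r, const char *request);

static rule_t *last_rule = NULL;                 /* one-entry cache */

rule_t *lookup(const char *request) {
    /* Fast path: the rule that matched last time is likely to match
     * again; if traffic shifts, the cache simply re-learns. */
    if (last_rule && rule_matches(last_rule, request))
        return last_rule;

    last_rule = find_rule(request);              /* slow path refreshes the cache */
    return last_rule;
}
```

Unlike a hard-coded check for one specific rule, this stays fast for whatever rule happens to dominate the traffic at the moment.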
▲ | Jean-Papoulos | 4 days ago | parent
You were shown an example of exactly why this thinking is incorrect, but you still insist... Also, it's trivial to keep Ken's implementation as the slow path: if request patterns change, dig up the new fast path and demote the old one into Ken's slow-path code. Most of the performance will still come from the initial `if`.
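Something like this, with illustrative names (`is_common_case`, `general_lookup`, `common_rule` are all assumptions, not the actual code):

```c
/* Sketch of a hard-coded fast path with the general implementation
 * kept as the fallback; all names here are hypothetical. */
typedef struct rule rule_t;

extern rule_t common_rule;                     /* today's dominant rule     */
int     is_common_case(const char *request);   /* cheap check for that rule */
rule_t *general_lookup(const char *request);   /* Ken's general (slow) path */

rule_t *lookup(const char *request) {
    if (is_common_case(request))               /* the initial `if`          */
        return &common_rule;
    return general_lookup(request);            /* fall back to the slow path */
}
```

If the dominant rule changes, only the `is_common_case` check needs updating; correctness is always guaranteed by the fallback.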
▲ | necovek | 5 days ago | parent
Nobody is hyperoptimizing the fast path today. Ken's solution was stated to have been slower than the alternative optimization.