cogman10 6 hours ago
Uhh, I knew I wasn't going to like this one when I read it.

> Premature Optimization (Knuth's Optimization Principle)

> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.

This is the exact example I'd choose of people wrongly, almost obstinately, applying the "premature optimization" principle. I'm not saying that you should write a custom hash table whenever you need to search. I am saying that there's a 99% chance your language has a built-in hash table in its standard library. The code to use that data structure vs. an array is nearly identical, and not the least bit hard to read or understand.

And the reason you should just do the optimization is that when I've had to fix performance problems, it's almost always been because people put in nested linear searches, turning what could have been O(n) into O(n^3).

But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that would have been exactly the sort of thing he wrapped into "good design". When Knuth wrote about not doing premature optimizations, he was living in an era where compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid a branch instruction, or hand-inlining functions to avoid call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are the optimizations compilers today do by default.

I really hate that people have taken this to mean "never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.
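To make the point concrete, here's a minimal sketch (in Python, with made-up `orders`/`users` data) of the pattern described above: a nested linear search versus the same logic using the standard library's built-in hash table. The code is barely any longer or harder to read.

```python
# Hypothetical data: two lists of records joined by "id".
orders = [{"id": i, "total": i * 10} for i in range(1000)]
users = [{"id": i, "name": f"user{i}"} for i in range(1000)]

# Nested linear search: O(n * m). This is the pattern that quietly
# turns into a performance problem as the data grows.
slow = [(u["name"], o["total"])
        for o in orders
        for u in users if u["id"] == o["id"]]

# Same logic using a dict (the standard library's hash table): O(n + m).
by_id = {u["id"]: u for u in users}          # build index once
fast = [(by_id[o["id"]]["name"], o["total"]) for o in orders]

assert slow == fast
```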
krust 3 hours ago | parent
> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.

To be fair, a linear search through an array is, most of the time, faster than a hash table for sufficiently small data sizes.
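This is easy to check rather than guess at. A rough sketch (in Python, with an arbitrary tiny collection) of how one might measure the crossover on a given machine; the actual numbers vary by hardware and data, so no winner is asserted here:

```python
from timeit import timeit

small = list(range(8))      # a handful of elements
small_set = set(small)

# For very small collections, a linear scan over a contiguous list can
# beat a hash lookup (no hashing, cache-friendly memory access).
t_list = timeit(lambda: 7 in small, number=100_000)
t_set = timeit(lambda: 7 in small_set, number=100_000)

# Results are machine-dependent; the crossover point is usually small,
# so measure before reaching for either conclusion.
```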
bigfishrunning 5 hours ago | parent
Yeah, for every Knuth there are 10,000 copies of Schlemiel the Painter.