kristopolous | 6 days ago
This was the same interview where some guy was asking me about "big-O" - the thing you teach 19-year-olds - and I was saying that parallelization matters, I/O matters, quantization matters, and whether you can run it on the GPU matters. The simple big-O number doesn't account for whether you need to push terabytes over the bus for every operation, and on actual computers moving terabytes around - shockingly, I know - affects performance. And if you have a dual-EPYC board with 1,024 threads, being able to parallelize a solution and design for cache optimization isn't meaningless. Big-O is a weak classifier - if you really think I'm going to do a lexical sort in O(n^3) like some kind of clown, I don't know what you're hiring for. Found out later he scored me "2/5". Alright, cool.
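To make the cache point concrete, here is a minimal sketch (not from the thread, array size and timing harness are arbitrary choices): two traversals of the same array with identical O(n^2) complexity, where the memory access pattern alone changes the wall-clock time by roughly an order of magnitude on typical hardware.

    /* Sketch: same big-O, very different real performance due to cache behavior. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 8192  /* 8192 x 8192 ints ~= 256 MB, far larger than any cache */

    static int *grid;

    /* Row-major traversal: sequential access, cache- and prefetcher-friendly. */
    static long long sum_rows(void) {
        long long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += grid[(long)i * N + j];
        return s;
    }

    /* Column-major traversal: same O(n^2), but strided access that misses
       cache on nearly every load. */
    static long long sum_cols(void) {
        long long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += grid[(long)i * N + j];
        return s;
    }

    static double seconds(clock_t a, clock_t b) {
        return (double)(b - a) / CLOCKS_PER_SEC;
    }

    int main(void) {
        grid = malloc((size_t)N * N * sizeof *grid);
        if (!grid) return 1;
        for (long i = 0; i < (long)N * N; i++)
            grid[i] = (int)(i & 0xff);

        clock_t t0 = clock();
        long long a = sum_rows();
        clock_t t1 = clock();
        long long b = sum_cols();
        clock_t t2 = clock();

        printf("row-major: %lld in %.2fs\n", a, seconds(t0, t1));
        printf("col-major: %lld in %.2fs\n", b, seconds(t1, t2));
        free(grid);
        return 0;
    }

Both loops do exactly the same number of additions; the asymptotic analysis is identical. The difference in runtime comes entirely from how the data moves through the memory hierarchy, which is the dimension the interview question ignored.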
kiitos | 6 days ago | parent
"big o" usually refers to algorithmic complexity, which is something entirely orthogonal to all of the dimensions you mentioned obviously all of this stuff matters in the end but big-o comes before all of those other things | ||||||||||||||||||||||||||