godelski 3 days ago

I think you're right. Early on I did HPC and scientific computing. No one talked about Big O. Maybe that's because a lot of people were scientists, but still, there was a lot of focus on performance. Really, the way people optimized was with a profiler. You looked at what kind of data was being processed and how, and found the right way to do things based on that; people didn't do the reductions and simplifications you see in Big O.

Those simplifications are harmful when you start thinking about parallel processing. There are things you might want to do that would look silly in a serial program. O(2n) can be better than O(n) because you care about the actual functions. Say you have a loop that does y[i] = f(x[i]) + g(x[i]). If f and g are heavy, you may want to split it into two loops, y[i] += f(x[i]) and y[i] += g(x[i]), since addition is associative (so the two passes don't block on each other).
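Roughly what I mean, as a minimal sketch (assuming OpenMP, y zeroed beforehand, and two hypothetical heavy functions f and g):

    double f(double x);   /* hypothetical heavy function */
    double g(double x);   /* hypothetical heavy function */

    /* Fused: "O(n)", one pass over y. */
    void fused(double *y, const double *x, int n) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] = f(x[i]) + g(x[i]);
    }

    /* Split: "O(2n)", touches y twice, but each pass is an independent
       parallel loop the runtime can schedule on its own, and since we
       only accumulate, the order of the passes doesn't matter. */
    void split(double *y, const double *x, int n) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] += f(x[i]);
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] += g(x[i]);
    }

Whether the split actually wins depends on the machine and on how heavy f and g really are, which is exactly why you reach for a profiler instead of the asymptotics.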

Most of the work was really about I/O. That was almost always the bottleneck, and Big O won't help you there. You have to write things with awareness of where the data sits in memory and what kind of memory it's in. It's all about how you break things apart, operate in parallel, and what you can run asynchronously.
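A toy illustration of the memory point (assuming a hypothetical row-major n x n matrix): both functions below do the same amount of arithmetic, but the access pattern decides which one is actually fast.

    #include <stddef.h>

    /* Row-wise walk: contiguous, cache-friendly. */
    double sum_rowwise(const double *a, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++)
                s += a[i * n + j];
        return s;
    }

    /* Column-wise walk: strided, misses cache constantly. */
    double sum_colwise(const double *a, size_t n) {
        double s = 0.0;
        for (size_t j = 0; j < n; j++)
            for (size_t i = 0; i < n; i++)
                s += a[i * n + j];
        return s;
    }

Same "complexity", very different runtime, and nothing in the Big O tells you that.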

Honestly, I think a big problem these days is that we still operate as if a computer has 1 CPU and only one location for memory.