matheusmoreira | 4 days ago
> Too often people don't consider big O because it works fine with their 10-row test case... and then it grinds to a halt when given a real problem.

The reverse also happens frustratingly often. One can spend a lot of time obsessing over theoretical complexity only for it to amount to nothing: carefully choose a data structure and algorithm based on those theoretical properties, and then discover that in practice they get smoked by dumb contiguous arrays simply because the arrays fit in cache.

The sad fact is that the sheer brute force of modern processors is enough in the vast majority of cases, so long as people avoid accidentally making things quadratic. Sometimes people don't even manage that, and we get things like the GTA 5 dumpster fire, where quadratic parsing of a JSON file added minutes to every online-mode load.
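A minimal Python sketch of the "accidentally quadratic" trap (names are illustrative, not from the thread): membership tests against a list inside a loop are O(n) each, making the whole loop O(n^2), while the same logic against a set is O(n) overall.

    import time

    def dedupe_quadratic(items):
        # list membership scans the list, so the loop is O(n^2) overall
        seen = []
        for x in items:
            if x not in seen:
                seen.append(x)
        return seen

    def dedupe_linear(items):
        # set membership is O(1) on average, so the loop is O(n)
        seen = set()
        out = []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    data = list(range(20_000))
    t = time.perf_counter(); dedupe_quadratic(data); print(time.perf_counter() - t)  # seconds
    t = time.perf_counter(); dedupe_linear(data);    print(time.perf_counter() - t)  # milliseconds

Both versions look fine on a 10-row test case; only the second survives contact with real data.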
janalsncm | 4 days ago
Slightly related: in ML I write a lot of code that will be executed exactly once. Data analysis, creating a visualization, one-off ETL tasks. There are a lot of times when I could spend mental energy writing "correct" code that trades off space for time and so on. Sometimes it's worth it, sometimes not. But it's better to spend an extra 30 seconds of CPU time running the code than an extra 10 minutes carefully crafting a function no one will ever see again, or that someone will see but will find harder to understand. Simpler is better sometimes.

What big O gives you is the ability to assess the tradeoffs. Computers are fast, so a lot of the time quadratic time doesn't matter for small N. And you can always optimize later.
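One way to assess that tradeoff is a back-of-envelope estimate before deciding whether the careful version is worth writing. A sketch, assuming very roughly 10^7 simple operations per second for a tight pure-Python loop (the constant is a guess; the shape of the growth curve is the point):

    def quadratic_seconds(n, ops_per_sec=1e7):
        """Rough wall-clock estimate for an O(n^2) pure-Python loop."""
        return n * n / ops_per_sec

    for n in (100, 10_000, 1_000_000):
        print(f"n={n:>9,}: ~{quadratic_seconds(n):,.3f} s")

    # n=      100: ~0.001 s       -- fine for a one-off script
    # n=   10,000: ~10 s          -- annoying but survivable
    # n=1,000,000: ~100,000 s     -- "optimize later" has arrived

If the estimate says 30 seconds, ship the simple version; if it says 30 hours, spend the 10 minutes.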