| ▲ | Sankozi 6 days ago |
| Yes, lots of these problems assume a fantasy infinite world. Big O notation also suffers from this - it's almost useless for real world problems. |
|
| ▲ | virgilp 6 days ago | parent | next [-] |
| > Big O notation also suffers from this - it's almost useless for real world problems. This is quite a bold claim. Yeah, "in the real world" you often need to take the constant factors into consideration... but to go so far as to say that Big O is useless? It's very useful for figuring out what would likely work (or not) for your problem, given that you know the data size - I've used it for exactly that, many, many times. |
| |
| ▲ | Sankozi 6 days ago | parent [-] | | Let's say you have 2 algorithms: constant time O(1) and exp time O(2^n). It seems constant time is better. But in fact it always runs for 42 years, while the exp one only starts to run longer than a millisecond for sizes bigger than the number of particles in the universe. Big O says how fast complexity grows, but it doesn't say what that complexity exactly is. Does something like that happen in the real world? Yes, but of course not in such drastic ways. | | |
| ▲ | forrestthewoods 6 days ago | parent | next [-] | | The fact that Big O notation is sometimes misleading or not helpful is not evidence that it is not generally useful. https://www.tumblr.com/accidentallyquadratic | | |
| ▲ | qludes 6 days ago | parent [-] | | As a concept it is pretty useful for me, in a handwavy astrophysics-math way, because otherwise I wouldn't necessarily know how to make my naive solution fit real-world constraints. |
| |
| ▲ | jhanschoo 6 days ago | parent | prev | next [-] | | If you are writing everyday programs, your constant factors are going to be very comparable and proportional to the size of your program, unless you have hardcoded numerical constants somewhere in your program like "run this loop 1 trillion times". | |
| ▲ | hnfong 5 days ago | parent | prev | next [-] | | If we're talking about "fantasy world" vs "real world", the "always runs 42 years" algorithm is surely closer to the fantasy world than most reasonable Big-O analyses... If you need to pull an example from a fantasy world to illustrate your point about Big-O notation not being useful "in the real world", you're probably committing the same alleged mistakes as the computational theorists... | |
| ▲ | virgilp 5 days ago | parent | prev | next [-] | | > While the exp one only starts to run longer than a millisecond for sizes bigger than the number of particles in the universe. You don't really know how this works, do you? I can guarantee that there are more than 1M particles in the universe. And if you iterate 2^1M times, doing nothing, on a current computer... that's basically indistinguishable from an infinite loop. | |
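For scale, a rough back-of-the-envelope check of the point above (a sketch in Python; the ~1 ns per empty iteration is an assumption, roughly one operation per cycle on a few-GHz machine and generous for an interpreted language):

    NS_PER_ITER = 1e-9  # assumption: ~1 "do nothing" iteration per nanosecond

    for n in (40, 60, 80, 100):
        seconds = (2 ** n) * NS_PER_ITER
        print(f"n={n}: {seconds:.3g} s")

    # n=40  -> ~1.1e3 s   (about 18 minutes)
    # n=60  -> ~1.2e9 s   (about 37 years)
    # n=80  -> ~1.2e15 s  (tens of millions of years)
    # n=100 -> ~1.3e21 s  (the universe is roughly 4.4e17 s old)
    # So 2^1M iterations, as in the comment above, is effectively forever.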
| ▲ | qazxcvbnm 6 days ago | parent | prev | next [-] | | A very exaggerated example to use exp time… Whatever your constant, for exp time it’s going to run for longer than the life of the universe on probably any n > 10… Since, you know, log time is practically constant. | |
| ▲ | pyfon 6 days ago | parent | prev [-] | | Most of the time the constants are going to be similar. And by similar I mean within, say, 3 orders of magnitude. So a factor of 2^9, or adding 9 to N, eats that up pretty quickly. |
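A quick sanity check of that arithmetic (a sketch in Python; the toy cost models and the 1000x constant gap are made up for illustration):

    import math

    # A ~3-orders-of-magnitude constant is about 2^10, so for an
    # exponential-time algorithm it is wiped out by adding ~10 to n:
    print(2 ** (30 + 10) > 1000 * 2 ** 30)  # True

    # Even between polynomial classes the crossover comes at a modest n.
    # Toy models: A is O(n^2) with constant 1, B is O(n log n) with constant 1000.
    n = 2
    while n * n <= 1000 * n * math.log2(n):
        n += 1
    print(n)  # roughly 13,700: past that, the 1000x constant no longer saves A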
|
|
|
| ▲ | MattPalmer1086 6 days ago | parent | prev | next [-] |
| You find Big O notation useless for real world problems? I find it useful for characterising algorithmic performance, e.g. O(1), O(n), O(n^2), etc. What do you find useless about it - and do you know of something that works better? |
| |
| ▲ | MrBuddyCasino 6 days ago | parent | next [-] | | Simpler algorithms are usually faster for small N, and N is usually small. The Big O assumption is a fantasy world where N is always large and constant factors that slow down an algorithm can be ignored. | | |
| ▲ | nottorp 6 days ago | parent | next [-] | | https://news.ycombinator.com/item?id=26296339 Small N, right? | | | |
| ▲ | MattPalmer1086 6 days ago | parent | prev | next [-] | | Yeah, constant factors are not represented and often make a big real world difference. Good point. | |
| ▲ | coldtea 6 days ago | parent | prev [-] | | >Simpler algorithms are usually faster for small N, and N is usually small. This mentality is how we ended up with cpu and memory hogging Electron apps... | | |
| ▲ | JohnKemeny 6 days ago | parent | next [-] | | That's not an accurate description. For example, it is common in sorting algorithms to use an n² algorithm like bubble sort when the list to sort is small (e.g. <50), whereas when it's larger, we use an n log n algorithm like merge sort. The issue is that merge sort does recursion, which comes with some extra cost, so an n² algorithm beats an n log n algorithm, provided n is small. It has nothing to do with your criticism. | | |
| ▲ | Al-Khwarizmi 5 days ago | parent [-] | | You can (and probably should) add a threshold to recursive algorithms like mergesort so that they don't end up making recursive calls on very small arrays. For arrays smaller than the threshold, insertion sort is faster than bubble sort. And if you really don't want recursion for large arrays to begin with, you have heapsort or Shell sort. I don't think there is any practical case where using bubble sort makes sense, except "efficiency doesn't matter for my use case and this is the code I already have". | | |
| ▲ | graycat 5 days ago | parent [-] | | As in one of the volumes of Knuth's The Art of Computer Programming, heapsort is in-place and achieves the Gleason bound, so it can be regarded as not beaten by quicksort. |
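For concreteness, a minimal sketch of the small-array cutoff described a couple of comments above (Python; the threshold of 32 is illustrative, not tuned):

    THRESHOLD = 32  # illustrative cutoff; real libraries tune this per platform

    def insertion_sort(a, lo, hi):
        """Sort a[lo:hi] in place; fast for small ranges despite being O(n^2)."""
        for i in range(lo + 1, hi):
            x = a[i]
            j = i - 1
            while j >= lo and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x

    def merge_sort(a, lo=0, hi=None):
        """O(n log n) merge sort that hands small subarrays to insertion sort."""
        if hi is None:
            hi = len(a)
        if hi - lo <= THRESHOLD:
            insertion_sort(a, lo, hi)
            return
        mid = (lo + hi) // 2
        merge_sort(a, lo, mid)
        merge_sort(a, mid, hi)
        merged, i, j = [], lo, mid
        while i < mid and j < hi:
            if a[i] <= a[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(a[j]); j += 1
        merged.extend(a[i:mid])
        merged.extend(a[j:hi])
        a[lo:hi] = merged

The same idea shows up in production sorts; CPython's Timsort, for example, handles short runs with a binary insertion sort.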
|
| |
| ▲ | MrBuddyCasino 6 days ago | parent | prev [-] | | Electron a) isn’t really slow for what it does; b) introduces many layers of abstraction, leading to larger memory consumption compared to „native“ apps; and c) is certainly not algorithmically unoptimized - it runs on a software stack that has been tuned like few others, with billions of dollars behind it. You just loosely associated words and concepts that occupy a similar emotional niche. |
|
| |
| ▲ | Sankozi 6 days ago | parent | prev [-] | | Better - estimate the complexity concretely, e.g. "this algorithm will take 1234 + 2n + 500m operations" for our use case. Much better - research it (not always possible). Best - benchmark it using realistic data from your use case. | | |
| ▲ | bawolff 6 days ago | parent [-] | | > Better - estimate the complexity concretely, e.g. "this algorithm will take 1234 + 2n + 500m operations" for our use case. I feel like that is never the level of detail you want. Either it is way too much detail, and big-O notation abstracts away the meaningless stuff; or, if this level of detail matters, then you really need to go much further: think about cache hit ratios, memory layout issues, the different speeds of different operations, etc. Counting operations is a weird middle ground that seems pretty useless. |
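For the "benchmark it using realistic data" option above, a minimal sketch using Python's standard timeit module (the data generator and the two candidate functions are placeholders for whatever you are actually comparing):

    import random
    import timeit

    def make_realistic_input():
        # Placeholder: substitute data shaped like your production workload.
        return [random.randrange(1_000_000) for _ in range(10_000)]

    def candidate_a(data):
        return sorted(data)            # placeholder algorithm A

    def candidate_b(data):
        out = list(data)
        out.sort()                     # placeholder algorithm B
        return out

    data = make_realistic_input()
    for name, fn in (("A", candidate_a), ("B", candidate_b)):
        total = timeit.timeit(lambda: fn(data), number=100)
        print(f"{name}: {total / 100 * 1e3:.3f} ms per call")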
|
|
|
| ▲ | suddenlybananas 6 days ago | parent | prev | next [-] |
| > Big O notation also suffers from this - it's almost useless for real world problems. It's not the only way to optimize things, but there's a reason no one sorts with bubble sort. |
| |
| ▲ | Sankozi 6 days ago | parent | next [-] | | Yes, bubble sort will usually take more time, but big O notation does not say that quicksort will be better for your real-world problem. Complexity estimates, or just benchmarks, are much better at that. You should never limit yourself to big O notation when comparing algorithms. | | |
| ▲ | suddenlybananas 6 days ago | parent [-] | | >You should never limit yourself to big O notation when comparing algorithms. Sure, but it's still an incredibly useful tool for considering how your algo will scale. |
| |
| ▲ | LegionMammal978 5 days ago | parent | prev [-] | | Not quite bubble sort, but many recursive sorting implementations will switch to an O(n^2) algorithm like insertion sort once they reach a small number of elements. Simple but poorly-scaling methods usually have a place as base cases of hybrid algorithms. In other news, there's a reason no BLAS implementation multiplies matrices with Strassen's algorithm, and it's not that the people writing those are too dumb. | | |
| ▲ | Al-Khwarizmi 5 days ago | parent [-] | | Still, in quicksort, big O analysis is what tells you that you can further improve efficiency by leaving small subarrays unsorted and then performing a single insertion sort pass over the whole array at the very end, rather than one insertion sort call for each individual small subarray. This result is far from obvious without big O analysis. So it's still useful, even there. |
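A sketch of that variant, assuming Python (the cutoff of 16 is illustrative; median-of-three pivoting and other production details are omitted):

    CUTOFF = 16  # illustrative; subarrays this small are left for the final pass

    def quicksort_partial(a, lo=0, hi=None):
        """Quicksort that simply stops recursing on small subarrays."""
        if hi is None:
            hi = len(a) - 1
        if hi - lo < CUTOFF:
            return  # leave this small range (nearly) unsorted
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        quicksort_partial(a, lo, j)
        quicksort_partial(a, i, hi)

    def final_insertion_pass(a):
        # After the partial quicksort, every element sits inside the small
        # block that contains its final position, so no element has to move
        # more than ~CUTOFF places and this single pass is about O(n * CUTOFF).
        for i in range(1, len(a)):
            x = a[i]
            j = i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x

    def sort(a):
        quicksort_partial(a)
        final_insertion_pass(a)

Because each element never moves far, the final pass stays close to linear, which is the non-obvious result the comment above attributes to the analysis.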
|
|
|
| ▲ | nottorp 6 days ago | parent | prev | next [-] |
| Please read Accidentally Quadratic from time to time... |
|
| ▲ | mkleczek 5 days ago | parent | prev [-] |
| Which is evidenced every day when optimizing database performance by creating indexes /s |