ogogmad 6 hours ago
In the most extreme case of this, you can use Counting Sort. Tangential to this, Spaghetti Sort makes me wonder which parts of CS (especially the data structures, like arrays) are objective and which are just accidental. The transdichotomous model looks interesting.
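(Not from the comment, just a minimal Counting Sort sketch in Python for concreteness, assuming small non-negative integer keys, which is the regime where it beats comparison sorts:)

    def counting_sort(xs, max_key):
        # One bucket per possible key value; O(n + max_key) time and space,
        # so no comparisons are ever made between elements.
        counts = [0] * (max_key + 1)
        for x in xs:
            counts[x] += 1
        out = []
        for key, c in enumerate(counts):
            out.extend([key] * c)
        return out

    print(counting_sort([3, 1, 4, 1, 5], max_key=5))  # [1, 1, 3, 4, 5]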
qsort 5 hours ago | parent
If we're taking the pure math view, complexity analysis is agnostic to this. Strictly speaking you'd have to define how your abstract machine works: O(n log n) is (again, strictly speaking) literally just a set of functions. In practice we usually hand-wave this away, not without problems: arithmetic taking O(log n) time and hash table lookups taking O(1) time are not consistent with each other!

The unstated implication is that the theory tracks real-world behavior. This is more or less the case for simple problems: your O(n^2) algorithm won't scale, your database query without an index will take forever, and so on. It's definitely a useful and high-signal way of thinking about computational problems. Obviously modern hardware isn't much like a 70s machine, and things like "all memory accesses are O(1)" are so wrong it's not even funny.

It's a pure thought experiment with no possible counterfactual, but I think if you tried to develop basic CS theory from a purely mathematical angle (e.g. consider a machine defined so and so with basic operations costing 1, and run with that without caring much about the real world) you'd naturally end up with some, but not all, of the concepts we're used to. For example, arrays and buffers are very natural: we need more space than our basic unit of memory can accommodate, so let's use several cells in a row. Pointers also follow very nicely, and with them structures like linked lists. Other stuff like B-trees, and to some extent hash tables and friends, much less so; those are definitely "imported" from real-world usage.
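(To make the "pointers fall out of flat memory" point concrete, here's a toy Python sketch of my own, not something from the comment: memory is one flat array, a "pointer" is just an index into it, and a linked list is pairs of cells, with -1 standing in for null.)

    # Toy unit-cost RAM: a flat array of cells, pointers are plain indices.
    memory = [0] * 16
    free = 0  # next unused cell, bump-allocated

    def cons(value, next_ptr):
        # Allocate two adjacent cells: [value, pointer-to-next].
        global free
        addr = free
        memory[addr], memory[addr + 1] = value, next_ptr
        free += 2
        return addr

    def walk(ptr):
        # Follow the chain of indices until the null marker.
        while ptr != -1:
            yield memory[ptr]
            ptr = memory[ptr + 1]

    lst = cons(1, cons(2, cons(3, -1)))
    print(list(walk(lst)))  # [1, 2, 3]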