qsort 8 hours ago

I struggle to see the point. The paper in question doesn't claim to be practically faster or to want to "replace" Dijkstra; they are just saying "we got a better big-O", and I don't see any reason to doubt that they're right about that.

It's actually common for algorithms with a lower asymptotic complexity to be worse in practice; a classic example is matrix multiplication.

Also please, please, can we stop with the "eww, math" reactions?

> The new approach claims order (m log^(2/3) n) which is clearly going to be less for large enough n. (I had to take a refresher course on log notation before I could even write that sentence with any confidence.)

I'm sure the author is just exaggerating (he's clearly very competent), but it's a sentence with the vibes of "I can't do 7x8 without a calculator."
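
For anyone else who wanted to re-anchor the notation, here's a throwaway script (mine, not from the paper or the post) that just evaluates the per-edge factors, with binary-heap Dijkstra's O(m log n) as the point of comparison. The log base only shifts constant factors, so log base 2 is an arbitrary choice.

    import math

    # Per-edge factor in each bound, all constants ignored.
    # Compare log n (binary-heap Dijkstra) with (log n)^(2/3) (the new bound).
    for n in (10**3, 10**6, 10**9):
        log_n = math.log2(n)
        print(f"n = {n:>13,}   log n = {log_n:5.1f}   (log n)^(2/3) = {log_n ** (2/3):4.1f}")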

yborg 7 hours ago | parent | next [-]

The Quanta article on the paper was considerably more breathless in describing a fine piece of work in mathematics. The author here points out that one of the things that makes Dijkstra's result iconic is that it could be used practically in a straightforward way. As an engineer, I find beautiful mathematics useless if I can't convert it to running code.

tialaramex 6 hours ago | parent [-]

Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that.

e.g. for two-body gravity I can just do the math and get exact answers out. But for N > 2 bodies that doesn't work, and it's not that I need to think a bit harder or crack out some graduate textbooks to find a formula; if I did, the grad books would hopefully say "Three body problem generally not amenable to solution". I will need to do an approximation; exact answers are not available (except in a few edge cases).
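
To make "do an approximation" concrete, here's a minimal sketch I put together (purely illustrative; the masses, initial conditions, and timestep are all made up) of the usual workaround: step the system forward numerically, in this case with a fixed-timestep leapfrog integrator.

    import numpy as np

    G = 1.0                                  # gravitational constant in made-up units
    mass = np.array([1.0, 1.0, 0.5])         # three bodies, arbitrary masses
    pos = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0]])   # arbitrary 2D positions
    vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.2, 0.0]])   # arbitrary velocities
    dt = 1e-3

    def accelerations(pos):
        """Pairwise Newtonian gravity; O(N^2) is fine for three bodies."""
        acc = np.zeros_like(pos)
        for i in range(len(mass)):
            for j in range(len(mass)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
        return acc

    acc = accelerations(pos)
    for _ in range(10_000):                  # kick-drift-kick leapfrog steps
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos)
        vel += 0.5 * dt * acc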

tzs 4 hours ago | parent [-]

> Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that

That's actually given as a reason to study NP-completeness in the classic 1979 book "Computers and Intractability: A Guide to the Theory of NP-Completeness" by Garey & Johnson, which is one of the most cited references in computer science literature.

Chapter one starts with a fictional example. Say you have been trying to develop an algorithm at work that validates designs for new products. After much work you haven't found anything better than exhaustive search, which is too slow.

You don't want to tell your boss "I can't find an efficient algorithm. I guess I'm just too dumb".

What you'd like to do is prove that the problem is inherently intractable, so you could confidently tell your boss "I can't find an efficient algorithm, because no such algorithm is possible!".

Unfortunately, the authors note, proving intractability is also often very hard. Even the best theoreticians have been stymied trying to prove commonly encountered hard problems are intractable. That's where the book comes in:

> However, having read this book, you have discovered something almost as good. The theory of NP-completeness provides many straightforward techniques for proving that a given problem is “just as hard” as a large number of other problems that are widely recognized as being difficult and that have been confounding the experts for years.

Using the techniques from the book you prove the problem is NP-complete. Then you can go to your boss and announce "I can't find an efficient algorithm, but neither can all these famous people". The authors note that at the very least this informs your boss that it won't do any good to fire you and hire another algorithms expert. They go on:

> Of course, our own bosses would frown upon our writing this book if its sole purpose was to protect the jobs of algorithm designers. Indeed, discovering that a problem is NP-complete is usually just the beginning of work on that problem.

...

> However, the knowledge that it is NP-complete does provide valuable information about what lines of approach have the potential of being most productive. Certainly the search for an efficient, exact algorithm should be accorded low priority. It is now more appropriate to concentrate on other, less ambitious, approaches. For example, you might look for efficient algorithms that solve various special cases of the general problem. You might look for algorithms that, though not guaranteed to run quickly, seem likely to do so most of the time. Or you might even relax the problem somewhat, looking for a fast algorithm that merely finds designs that meet most of the component specifications. In short, the primary application of the theory of NP-completeness is to assist algorithm designers in directing their problem-solving efforts toward those approaches that have the greatest likelihood of leading to useful algorithms.

shermantanktop 7 hours ago | parent | prev | next [-]

I read it as a musing on the folly of improvements that don’t deliver benefits within the practical bounds of actual problems, which is a lesson seen everywhere in physical systems and manufacturing. Is it an amazing insight? No, but it’s a lesson that everyone relearns several times.

gowld 7 hours ago | parent | prev | next [-]

More important is that the new algorithm's log factor multiplies m (the number of edges), so it only comes out ahead for extremely sparse graphs.

If m > n (log n)^{1/3}, then the new algorithm is slower.

For 1 million nodes, if the average degree is > 3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
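
A back-of-the-envelope check of that crossover (my own sketch: it assumes the comparison is against Dijkstra with a Fibonacci heap, i.e. m + n log n, treats average degree as m/n, ignores all constant factors, and uses log base 2):

    import math

    def new_cost(n, m):
        return m * math.log2(n) ** (2 / 3)          # m * (log n)^(2/3)

    def dijkstra_cost(n, m):
        return m + n * math.log2(n)                 # m + n log n (Fibonacci heap)

    n = 1_000_000
    for avg_degree in (2, 3, 4, 6, 10):
        m = avg_degree * n
        winner = "new algorithm" if new_cost(n, m) < dijkstra_cost(n, m) else "Dijkstra"
        print(f"average degree {avg_degree:>2}: {winner} has the smaller operation count")

With those assumptions the crossover lands between average degree 3 and 4, consistent with the ~3.5 figure above.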

bee_rider 7 hours ago | parent | next [-]

Yeah, just based on this article that really stood out. It seems to be for a different use-case than Dijkstra’s. An average degree of 3.5 seems like an extremely practical and useful use-case in real life, so I just don’t see any reason to put it and Dijkstra’s against each other in a head-to-head comparison.

usrusr 7 hours ago | parent | prev [-]

"Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;)

gowld 5 hours ago | parent [-]

A linked list is sparse by the metric of minimum maximum degree: a path achieves the smallest possible maximum degree (2) for a connected graph.

A maximally sparse connected graph by mean degree (edge/node ratio ~ 1) is any tree, not necessarily a linked list.
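
A tiny illustration of that distinction (my own sketch): a path (the linked-list shape) and a star are both trees on the same number of nodes, so they have the same edge/node ratio, but very different maximum degrees.

    # Two trees on n nodes: a path (linked-list shape) and a star.
    # Both have n - 1 edges, hence the same edge/node ratio, but max degree 2 vs n - 1.
    n = 1_000
    path_edges = [(i, i + 1) for i in range(n - 1)]
    star_edges = [(0, i) for i in range(1, n)]

    def stats(edges):
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        return len(edges) / n, max(degree.values())

    for name, edges in (("path", path_edges), ("star", star_edges)):
        ratio, max_deg = stats(edges)
        print(f"{name}: edge/node ratio = {ratio:.3f}, max degree = {max_deg}")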

mightyham 7 hours ago | parent | prev [-]

> I struggle to see the point. The paper in question doesn't claim to be practically faster...

I struggle to see the point of your comment. The blog post in question does not say that the paper claims to be faster in practice. It is simply examining whether the new algorithm has any application in network routing; what is wrong with that?