gowld · 7 hours ago
More important is that the new algorithm has a multiplicative factor in m (the edge count): it runs in O(m log^{2/3} n), versus O(m + n log n) for Dijkstra with a Fibonacci heap, so it only wins on extremely sparse graphs. If m > n·(log n)^{1/3}, the new algorithm is slower. For 1 million nodes, that means the new algorithm has worse complexity once the average degree exceeds roughly 3.5 (ignoring unstated constant factors).
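A quick back-of-the-envelope script makes that crossover concrete. It assumes the two complexities above, base-2 logs, and no constant factors, so the exact figure it prints is only ballpark (the log base and hidden constants shift it):

```python
import math

# Back-of-the-envelope crossover (all constant factors ignored):
#   new algorithm:              ~ m * (log n)^(2/3)
#   Dijkstra + Fibonacci heap:  ~ m + n * log n
# Setting the two equal and solving for the average degree m/n gives
#   m/n = log n / ((log n)^(2/3) - 1)

n = 1_000_000
log_n = math.log2(n)  # base-2 is an assumption; a different base shifts the result
degree = log_n / (log_n ** (2 / 3) - 1)
print(f"crossover average degree (m/n) for n={n}: ~{degree:.2f}")  # ~3.14
```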
bee_rider · 7 hours ago
Yeah, just based on this article that really stood out. It seems to be for a different use case than Dijkstra's. An average degree under 3.5 seems like an extremely practical and useful use case in real life; I just don't see any reason to pit it and Dijkstra's against each other in a head-to-head comparison.
usrusr · 7 hours ago
"Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;) | ||||||||