No, this is not clickbait (maybe). Researchers Ran Duan, Jiayi Mao, Xiao Mao, Xinkai Shu and Longhui Yin recently published a paper presenting a deterministic $$$O(m\log^{2/3} n)$$$ algorithm for the directed Single-Source Shortest Path problem on sparse directed graphs, breaking almost 70 years of Dijkstra's reign in this field. I have yet to read it (looking forward to it, and hopefully I can understand it), but it disproves the common misconception that all interesting algorithms have already been discovered.
Also, if the name Xiao Mao sounds familiar — it did to me as well. It's the real name of matthew99, former LGM and ICPC World Champion — not sure if that's him, but it would be cool. Also found a GM Jiayi Mao on CF — AbstractKangaroo. Again — I have no idea how common these names are, but it wouldn't surprise me if it's them :)

Considering their email address is matthew99a@gmail.com, almost certainly him.
Great catch, now I can remove the (maybe) from my clickbait title
Really nice! I feel like most of the recent algorithmic breakthroughs are randomized, so seeing a deterministic one like this makes it all the more special. It would be cool to try this algorithm on some problems to see how it compares to Dijkstra in practice.
P.S. New candidates for interviews for Olympicode?
It would certainly be fun to talk to a CPer turned researcher — especially one so successful in both
This is a genius idea, but I'm concerned about the actual use case.
Wouldn't a large enough testcase spend 15 seconds on the I/O alone and get TLE?
Not necessarily, no. This algorithm outperforms Dijkstra on sparse directed graphs specifically. On a sparse directed graph with, say, $$$10^8$$$ nodes, Dijkstra (at $$$O(m+n\log{n})$$$) would almost certainly TLE. However, assuming the input only lists the edges rather than the nodes, and provided the graph is sparse enough, this algorithm's $$$O(m\log^{\frac{2}{3}}{n})$$$ running time lets you efficiently handle much larger numbers of nodes than Dijkstra can.
A little correction: the value of $$$n$$$ here is effectively at most $$$O(m)$$$, as each edge introduces at most $$$2$$$ new nodes. This means you can simply discard all nodes with no incident edges.
Still, on a sparse directed graph (where $$$n$$$ is close to $$$O(m)$$$), Dijkstra's runs in $$$O(m \log m)$$$, which is indeed asymptotically slower than this new algorithm. Although I don't really know how much more efficient the new algorithm is in practice, since I don't know its constant factor relative to Dijkstra's.
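For reference, the $$$O(m \log m)$$$ baseline being discussed here is plain binary-heap Dijkstra. A minimal sketch (not the paper's algorithm), which also shows the "discard nodes without edges" point above by relabeling only the nodes that appear in the edge list:

```python
import heapq

def dijkstra(n, adj, src):
    """Binary-heap Dijkstra: O(m log m) with non-negative weights.

    adj[u] is a list of (v, w) pairs. Returns shortest distances
    from src (inf for unreachable nodes).
    """
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    pq = [(0, src)]  # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale heap entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def compress(edges):
    """Relabel nodes 0..k-1, keeping only nodes with an incident edge
    (so effectively n = O(m), as noted above)."""
    ids = {}
    for u, v, _ in edges:
        for x in (u, v):
            ids.setdefault(x, len(ids))
    adj = [[] for _ in ids]
    for u, v, w in edges:
        adj[ids[u]].append((ids[v], w))  # directed edge u -> v
    return ids, adj

# Example: sparse directed graph given purely as an edge list
edges = [(10, 20, 1), (20, 30, 2), (10, 30, 5)]
ids, adj = compress(edges)
print(dijkstra(len(adj), adj, ids[10]))  # [0, 1, 3]
```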
Oh yeah, it seems I had a severe lapse in judgement. My message is very wrong, reading back what I wrote. I would delete it, but apparently you can't do that on CF.
These things are invented not so much to actually find shortest paths faster, but to understand the fundamental nature of the problem better.
Happy to see our work being mentioned here! It will be interesting to see if the algorithm is useful for competitive programming. Note that our algorithm is the first breakthrough in the comparison-addition model, namely a model where we only allow comparisons and additions on edge weights. For the more general RAM model (which is closer to the model we take for granted in competitive programming), an $$$O(m + n\log \log C)$$$-time algorithm for integer weights at most $$$C$$$ has been known for 20 years.
Since the core of that algorithm is an $$$O(\log \log C)$$$-time integer heap, do you think it's possible to combine it with your procedure to improve the shortest path problem on word RAM?
It would be very challenging. Our algorithm does not rely on heaps.
I'm sure this is a cool algorithm, but I'm a bit disappointed that CS researchers still haven't figured out how to deal with negative edge weights.
There are some faster algorithms for negative weights (as mentioned in our paper):
yall seem to have a really loose definition of near-linear
Yeah, this is some jargon in this field: "near-linear" is usually understood as $$$n \operatorname{polylog} n$$$, while $$$n^{1+o(1)}$$$ is referred to as "almost-linear".
would you consider this time complexity good enough? :)
What will Tarjan say?
Knuth would say that if you don't use sorting then make meme 'One can't simply sort an array and find SSSP in sparse graph', Dijkstra would say — oh that's fast and without goto here we go again, Tarjan would say the things recently Hinton told — you guys so long in CP and legends of that be afraid of superintelligence and make results because without results you are simply legends
is this an impersonation of gpt-2?
Well, as a beginner, my classmates and I have always been curious about how "Tarjan" is pronounced. It seems like the letter j should be pronounced /dʒ/, but I've heard that it should actually be /j/.