A friend shared this paper with me. It was published today, and seems to suggest that max-flow/min-cut can be exactly computed in almost linear time in the number of edges for general graphs.

I haven't gotten much time to read through it, but I thought it would be interesting to post it here for discussion.

Am I correct if I rephrase that as follows? For any $$$\alpha > 1$$$, this algorithm is strictly faster asymptotically than $$$O(m^\alpha)$$$.

It's not something that can be just done from scratch during a contest so I expect an optimised library code. I looked through the paper for performance comparisons of this implementation to standard algorithms but didn't find any. That prompts a question: how fast is it in practice? It's like matrix multiplication: nobody uses weird-ass asymptotically best matmul because it's super complex and not that fast on normally sized matrices and modern computers.

Re matmul: it's possible to set up multiplications of $$$10^4$$$-sized matrices in ways where Strassen pays off... although you do need hardware-dependent optimizations / parallelization to truly compare.

For max-flow, existing codes are all really fast (about $$$10^6$$$ edges per second). Looking at the code of HiPr (hierarchical push-relabel; anyone got a link?) was actually a major motivation for this type of approach (lots of trees, no linear systems).

For experimental comparisons, the 'right' problem is probably min-cost flow. My understanding is that current codes already get quite slow around $$$2 \cdot 10^4$$$ edges. However, I have not systematically investigated benchmarks there; does anyone know what the latest is on that?

ilyaraz, xyz2606, Saurabh, Yihe, and I did look a bit at the performance of optimal transport a while back: OPOT. If you are interested in reviving that it would be awesome.

Elaborating a bit on what rpeng is saying. First, Strassen's $$$O(n^{\log_2 7}) \approx O(n^{2.81})$$$-time matrix multiplication algorithm is (to my understanding) used in practice. This is because it's still pretty explicit.
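For the curious, Strassen's trick is explicit enough to fit in a few lines. Here is a sketch of one level of the recursion in Python (using NumPy for the block arithmetic; a real implementation would recurse on the seven products and switch to the naive method for small blocks):

```python
import numpy as np

def strassen_2x2_block(A, B):
    """One level of Strassen's recursion: compute A @ B using 7 block
    products instead of the naive 8. A and B are n x n with n even;
    the sub-blocks are multiplied with ordinary matmul here, whereas a
    full implementation would recurse."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    # The seven Strassen products.
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    # Reassemble the four result blocks.
    C = np.empty_like(A)
    C[:n, :n] = M1 + M4 - M5 + M7
    C[:n, n:] = M3 + M5
    C[n:, :n] = M2 + M4
    C[n:, n:] = M1 - M2 + M3 + M6
    return C
```

Counting products: one level turns 8 block multiplications into 7, which is exactly where the $$$\log_2 7$$$ exponent comes from.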

And yes, max-flow has great heuristic algorithms in practice, such as various forms of push-relabel, possibly with tree-based heuristics. On the other hand, this seems to work less well for min-cost flow.

An amazing part of this numerical / linear-algebraic approach to designing outer loops for flow algorithms is that we never see the difference between max-flow and min-cost flow in our analysis at all! In fact, our algorithm works for

all flow problems: you can put your favorite convex cost function on each edge and we can still give a working algorithm (i.e., min-cost flow is a linear cost function on each edge, but you can put any convex cost function you want and our algorithm still runs in almost-linear time).

Correct me if I'm wrong, but I think the matrix multiplication algorithms Xellos is talking about are ones like this, which run in $$$O(n^{2.37})$$$ time: asymptotically faster than Strassen, but the constant is so large they're unusable in the real world. Quoting the authors, "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."

Yeah, the ones beyond Strassen are purely theoretical (the algorithms are much less explicit than Strassen).

Yes, that's the type I'm talking about. It's really the point at which theoretical complexity becomes irrelevant in practice; instead we focus on things like cache misses, pipeline stalls and (sometimes) parallelisation costs incurred by our code on modern hardware.

The practical picture with flow algorithms is quite complex. Even for the somewhat less-studied min-cost flow problem, the competing methods are:

My suspicion is that for most real-world data, you'd want to try one of the three above first. Such data are far, far from worst case: for max-flow, variants of Dinic do great in practice.
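As a point of reference, the Dinic variant most CPers carry around is tiny. A minimal Python sketch (BFS level graph plus DFS blocking flow with the usual current-arc pointer; not the tuned C++ you'd actually race with, just to show the shape):

```python
from collections import deque

class Dinic:
    """Minimal Dinic's max-flow. O(V^2 E) in theory, typically much
    faster on real instances. Edge i and edge i^1 are a forward/reverse
    residual pair stored in flat arrays."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # adj[v] = list of edge ids
        self.to, self.cap = [], []

    def add_edge(self, u, v, c):
        self.adj[u].append(len(self.to)); self.to.append(v); self.cap.append(c)
        self.adj[v].append(len(self.to)); self.to.append(u); self.cap.append(0)

    def _bfs(self, s, t):
        # Build the level graph over residual edges.
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for e in self.adj[u]:
                v = self.to[e]
                if self.cap[e] > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, f):
        # Push flow along level-increasing residual edges.
        if u == t:
            return f
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v = self.to[e]
            if self.cap[e] > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(f, self.cap[e]))
                if d > 0:
                    self.cap[e] -= d
                    self.cap[e ^ 1] += d
                    return d
            self.it[u] += 1  # current-arc optimization: never retry this edge
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n
            while (f := self._dfs(s, t, float('inf'))):
                flow += f
        return flow
```

Usage: `Dinic(4)` with edges `0->1` (cap 3), `0->2` (cap 2), `1->2` (cap 1), `1->3` (cap 2), `2->3` (cap 3) gives `max_flow(0, 3) == 5`.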

Then there is this tree-based OI meme build. I can imagine some kind of low-stretch-tree-based network simplex doing well, but one probably needs to get a benchmark/data repo for min-cost flow going first.

While most of us CPers are still using Dinitz's $$$\mathcal O(n^2 m)$$$ max-flow algorithm and the $$$\mathcal O(n m f)$$$ min-cost max-flow algorithm, it's stunning that academia has made such great progress from $$$\tilde{\mathcal O}(m^{1.5 - \frac{1}{328}})$$$ (gg I misspelled it) to $$$\mathcal O(m^{1 + o(1)})$$$!

rpeng, can you show us the technical details of the algorithm?

Lol I was wondering how 328 and 58 (=(3+2)8) were going to get misspelled...

The build is basically dynamic trees adapted to tree-based graph approximators. Brief guide to technical details from an OI data structures perspective:

Elaborating a bit on what rpeng said. To me, the main unintuitive part of this algorithm for CPers is that the problems we solve during each iteration are mostly undirected. This contrasts with more classical approaches to max-flow, where we store directed graphs all the time.

The subproblems to solve are: given an undirected graph $$$G$$$ with edge gradients (these are signed, directed along each edge) and edge lengths (these are positive), find a cycle with approximately maximized ratio. Here, the ratio of a cycle is its total gradient divided by its total length. When calculating the total gradient, if the cycle traverses an edge opposite to its direction, we count its gradient as negative, and if it traverses it in the same direction, we count it as positive.

The other possibly unintuitive part is that it's fine if the ratio of the returned cycle is a polylog(n) multiplicative factor off the maximum. There is a (nontrivial but) standard algorithm for doing this in nearly linear time. Then, using a lot of data structures / trees / etc. (as rpeng described), we show how to solve it dynamically as the gradients and lengths change slowly over the course of the algorithm.
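To make the sign convention concrete, here is a tiny Python sketch of the ratio of a cycle under the convention described above (the edge-id arrays and the `(edge, forward)` cycle encoding are my own illustrative choices, not notation from the paper):

```python
def cycle_ratio(cycle, gradient, length):
    """Ratio of a cycle: total signed gradient / total length.

    `cycle` is a list of (edge_id, forward) pairs; forward=True means the
    cycle traverses the edge along its reference direction, so its
    gradient counts positively, otherwise negatively.
    `gradient[e]` is the signed gradient of edge e; `length[e]` > 0.
    """
    g = sum(gradient[e] if fwd else -gradient[e] for e, fwd in cycle)
    ell = sum(length[e] for e, _ in cycle)
    return g / ell
```

For example, a 3-edge cycle with gradients `[2, -1, 3]` and lengths `[1, 1, 2]`, traversing edge 1 backwards, has total gradient $$$2 + 1 + 3 = 6$$$ over total length $$$4$$$, i.e. ratio $$$1.5$$$. The hard part the paper addresses is finding a near-maximizing cycle, and maintaining one dynamically; this snippet only evaluates a given cycle.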

Is it useful in competitions?

Most likely no.

Tagging akaiNeko and desert97.

Please note that this is a preliminary version, and has not been thoroughly verified yet. Any questions/thoughts about it will be greatly appreciated though.

In the meantime, maybe we (the authors with CF ids) should go do some team contests and get rekt by flows / trees ... (in my experience, one is surprisingly bad at topics that one has written papers on...)

But I haven't written any papers...

Why write papers when you can print them :p?

On a more serious note, if you'd like to try some research, now might actually be a good time to get into these graph algorithms, especially in the overlap with ML / network science.

Uhoh this blog is now on Twitter...

Why is there no open source implementation with benchmarks?

https://paperswithcode.com should really be a requirement for all CS research.

Welcome to the world of theoretical computer science: no code necessary. TCS research does not always have an experimental part; the important thing is that the math is correct.

Nah. Not sure how much you are into research, but for the majority of algorithmic papers it doesn't really make sense to implement things. For some it does, though.

As long as they are theoretically sound, you probably don't need one. In the AI / data science fields, however, there are tons of experiments reported in papers that are not reproducible in any universe lol

Yes, that's true. For ML you definitely need experimental results

Great paper!

~~Waiting for someone to implement it so the rest of us mortals can use it in competitions.~~

May I ask what level is needed to be qualified for scientific research work? It must be cool to be like the authors!

As long as you know the material you can do research; a degree isn't required to learn it, since nowadays everything is available on the internet. The same applies to every field.

Tangentially related: According to Donald Knuth (see the TAoCP fascicle and some slides from one of his talks), it is an open problem to determine whether Hopcroft-Karp (or Dinic) is an $$$O(m)$$$ algorithm for bipartite matching. This was really surprising to me!

Doesn't the example described here require about $$$\sqrt N$$$ rounds for the standard Hopcroft Karp? https://cs.stackexchange.com/questions/116476/an-example-where-the-algorithm-of-hopcroft-and-karp-performs-poorly

The example graph basically has many connected components, each component being a path of length $$$i$$$, for $$$i$$$ from $$$1$$$ to $$$k$$$.
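If I understand the construction from the linked question correctly, it can be generated in a few lines. A Python sketch (the function name and encoding are mine): disjoint paths, one with $$$i$$$ edges for each $$$i$$$ from $$$1$$$ to $$$k$$$, so there are $$$m = k(k+1)/2$$$ edges total and the augmenting-path lengths span roughly $$$\sqrt{m}$$$ distinct values:

```python
def hk_bad_instance(k):
    """Family of disjoint paths, one with i edges for each i = 1..k.
    Vertices alternate between the two sides of the bipartition, so
    every path edge goes between the sides.
    Returns (left_vertices, right_vertices, edges)."""
    left, right, edges = [], [], []
    nxt = 0  # next unused vertex id
    for i in range(1, k + 1):
        # A path v0 - v1 - ... - v_i with i edges.
        path = list(range(nxt, nxt + i + 1))
        nxt += i + 1
        for j, v in enumerate(path):
            (left if j % 2 == 0 else right).append(v)
        edges += [(path[j], path[j + 1]) for j in range(i)]
    return left, right, edges
```

For $$$k = 4$$$ this yields $$$1+2+3+4 = 10$$$ edges on $$$2+3+4+5 = 14$$$ vertices; the claim in the linked question is that an ordering-adversarial run of Hopcroft-Karp on this family needs on the order of $$$k \approx \sqrt{m}$$$ phases.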

At least, I am pretty sure that the standard version of Dinitz / Hopcroft-Karp I've seen in various team notebooks could be fooled by such a test case with adversarial ordering of the vertices. Such a test case is trivially sped up by almost any extra pruning (like splitting into connected components first), but I don't think that many team notebook implementations take such special extra care.

I have just skimmed it, but this text claims to actually construct worst-case graphs for the Hopcroft Karp bound (and for other standard algorithms as well): https://people.mpi-inf.mpg.de/~mehlhorn/AlgEng/bipartite_card_matching_ext.ps

The test case seems to be similar to the previous one, since it incorporates many chains of length $$$i$$$ for all $$$i$$$, but it is made more robust by adding extra nodes that connect everything.

Wow, this is amazing, congratulations to the authors. I'll have to try to implement this, seems like the kind of data structure bash I like :P

I haven't read that far yet, but it seems that a part from the end of "Informal theorem 1.4" on page 3 is missing. It might be just my pdf reader, but it randomly ends with ", and", which seems unintended.

Hi, Is there any open-source implementation for this paper?