[Tutorial] Graph Potentials, Johnson's Algorithm, and Hungarian Algorithm


In this tutorial, I'll introduce potentials as they apply to directed graphs and discuss a few of their applications. I haven't seen many resources about this topic, so I decided to write one.

Prerequisites:

Introduction

For notation, let's say there is a weighted, directed graph $$$G=(V,E)$$$. Let $$$n=|V|$$$ and $$$m=|E|$$$. For simplicity, let's assume there are no loops or multiple edges between the same pair of nodes in the same direction. For an edge $$$u\to v$$$, we will use $$$w(u\to v)$$$ to denote its weight.

Let's motivate the idea of a potential, so the definition doesn't seem to come out of nowhere. You may be aware that negative edges create a lot of issues for shortest paths. First, it's possible that negative cycles exist. In that case, a path can have arbitrarily small weight, and we say that $$$\mathrm{dist}(u\to v)=-\infty$$$. (By the way, we allow a path to revisit vertices. If we required simple paths, the problem would be NP-hard.)

Even if there are no negative cycles, the negative edges are still problematic for us. Dijkstra's algorithm can give incorrect results, and we typically use Bellman-Ford instead, which has a worse complexity. Dijkstra's is $$$O(m\log n)$$$ while Bellman-Ford is $$$O(mn)$$$. Technically, Dijkstra's algorithm can be improved to $$$O(m+n\log n)$$$ with Fibonacci heaps, but it's not very important for the purposes of this blog.

One possible idea to deal with negative edges is to reweight the graph so that all weights are non-negative, while still encoding the information of shortest paths in the original graph. Obviously, we can't just replace the weights with whatever we want. One way we can modify weights is to pick a vertex $$$v$$$, increase the weights of incoming edges to $$$v$$$ by some real value $$$x$$$, and decrease the weights of outgoing edges from $$$v$$$ by the same amount $$$x$$$. This way, any path that passes through $$$v$$$ will have the same weight as before. Therefore, the only affected paths are the ones that begin or end at $$$v$$$. But we can observe that any path beginning at $$$v$$$ is offset by $$$-x$$$ and any path ending at $$$v$$$ is offset by $$$+x$$$, so a shortest path remains shortest.
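
As a concrete illustration (a made-up mini-example): a path $$$a\to v\to b$$$ has weight $$$w(a\to v)+w(v\to b)$$$ before the change and $$$(w(a\to v)+x)+(w(v\to b)-x)$$$ after it, which is identical. A path ending at $$$v$$$, such as $$$a\to v$$$, changes by exactly $$$+x$$$, and a path starting at $$$v$$$ changes by exactly $$$-x$$$.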

Now, we have the motivation to define a potential. The idea is that we want to apply this kind of vertex offset for all vertices independently in such a way that all the new weights are non-negative.

Definition: A function $$$p:V\to \mathbb{R}$$$ is called a potential of graph $$$G$$$ if for every edge $$$(u\to v)\in E$$$, it holds that $$$w(u\to v)+p(u)-p(v)\ge 0$$$.
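
To make the condition concrete, here is a tiny check in C++. This is a sketch of my own; the edge-list representation with $$$(u, v, w)$$$ triples is an assumption, not something fixed by this blog.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true if p is a potential of the graph: every edge u -> v
// must satisfy w(u -> v) + p[u] - p[v] >= 0.
// edges is an assumed representation: one (u, v, w) triple per directed edge.
bool is_potential(const vector<tuple<int, int, long long>>& edges,
                  const vector<long long>& p) {
    for (auto [u, v, w] : edges)
        if (w + p[u] - p[v] < 0) return false;
    return true;
}
```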

Johnson's Algorithm

Given our graph $$$G$$$, we would like to compute the length of the shortest path from $$$u$$$ to $$$v$$$ for all pairs $$$(u, v)$$$. For simplicity, let's assume there are no negative cycles in the graph. Perhaps the best algorithm you know for this problem is Floyd-Warshall, which works in $$$O(n^3)$$$. It works correctly with negative edges, which you might not have known.
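
For reference, a minimal Floyd-Warshall sketch (my own illustration; the adjacency-matrix setup and the INF sentinel are assumptions). It handles negative edge weights correctly as long as there is no negative cycle:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll INF = 1e18;  // sentinel for "no edge / unreachable"

// dist[u][v] starts as w(u -> v) if the edge exists, 0 if u == v, INF otherwise.
// After the call, dist[u][v] is the shortest path length (assuming no negative cycles).
void floyd_warshall(vector<vector<ll>>& dist) {
    int n = dist.size();
    for (int k = 0; k < n; k++)
        for (int u = 0; u < n; u++)
            for (int v = 0; v < n; v++)
                if (dist[u][k] < INF && dist[k][v] < INF)
                    dist[u][v] = min(dist[u][v], dist[u][k] + dist[k][v]);
}
```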

Johnson's Algorithm solves this problem more efficiently for sparse graphs, and it uses the following steps:

  1. Compute a potential $$$p$$$ for the graph $$$G$$$.
  2. Create a new weighting $$$w'$$$ of the graph, where $$$w'(u\to v)=w(u\to v)+p(u)-p(v)$$$.
  3. Compute all-pairs shortest paths $$$\mathrm{dist}'$$$ with the new weighting.
  4. Convert the distances back to the original graph, with the equation $$$\mathrm{dist}(u\to v)=\mathrm{dist}'(u\to v)-p(u)+p(v)$$$.

We need to go into more detail about how steps 1 and 3 can be done. Step 3 is easier to understand, so let's address it first. The weighting $$$w'$$$ is non-negative everywhere, so we can use Dijkstra's algorithm to get correct distances. We simply run Dijkstra's algorithm once from every vertex $$$u$$$ to get the distances from $$$u$$$ to every other vertex. Since we run Dijkstra's algorithm $$$n$$$ times, and each run takes $$$O(m\log n)$$$, the complexity of this stage is $$$O(nm\log n)$$$. Technically, it can be improved to $$$O(n(n\log n+m))=O(n^2\log n+nm)$$$ if you use Fibonacci heaps.
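
Here is a sketch of steps 2–4 in C++, assuming a valid potential $$$p$$$ has already been computed (step 1 is discussed next). The adjacency-list layout and the function name all_pairs are my own choices for illustration, not part of the original algorithm description:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll INF = 1e18;

// Steps 2-4 of Johnson's algorithm, assuming a valid potential p is already known.
// adj[u] holds pairs (v, w) for every edge u -> v with its ORIGINAL weight w.
// Returns distances in the original weighting (INF where a vertex is unreachable).
vector<vector<ll>> all_pairs(int n, const vector<vector<pair<int, ll>>>& adj,
                             const vector<ll>& p) {
    vector<vector<ll>> dist(n, vector<ll>(n, INF));
    for (int s = 0; s < n; s++) {
        vector<ll>& d = dist[s];
        d[s] = 0;
        // Steps 2-3: Dijkstra from s using w'(u -> v) = w(u -> v) + p[u] - p[v] >= 0.
        priority_queue<pair<ll, int>, vector<pair<ll, int>>, greater<pair<ll, int>>> pq;
        pq.push({0, s});
        while (!pq.empty()) {
            auto [du, u] = pq.top();
            pq.pop();
            if (du != d[u]) continue;  // skip outdated queue entries
            for (auto [v, w] : adj[u]) {
                ll nd = du + w + p[u] - p[v];
                if (nd < d[v]) {
                    d[v] = nd;
                    pq.push({nd, v});
                }
            }
        }
        // Step 4: dist(s -> v) = dist'(s -> v) - p(s) + p(v).
        for (int v = 0; v < n; v++)
            if (d[v] < INF) d[v] += p[v] - p[s];
    }
    return dist;
}
```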

Now, how do we do step 1? The key idea is that shortest path distances themselves satisfy the requirement of a potential! In fact, let's fix some vertex $$$s$$$ and assign $$$p(v)=\mathrm{dist}(s\to v)$$$ for all vertices $$$v$$$. Assume for contradiction that $$$p$$$ is not a potential. Then there is some edge $$$u\to v$$$ such that $$$w(u\to v)+p(u)-p(v)<0$$$, i.e. $$$\mathrm{dist}(s\to v)>\mathrm{dist}(s\to u)+w(u\to v)$$$. This means that if we take the shortest path from $$$s$$$ to $$$u$$$ and then the edge from $$$u$$$ to $$$v$$$, the total weight is smaller than the shortest distance from $$$s$$$ to $$$v$$$. Clearly, this is impossible, so $$$p$$$ is in fact a potential.

Actually, I lied to you. The above proof is only correct if the chosen vertex $$$s$$$ can reach every vertex $$$v$$$. Otherwise, some potential values would be infinite, which the definition does not allow. But we're almost there. Let's simply create a brand new vertex $$$s$$$ and add an edge $$$s\to v$$$ with weight $$$0$$$ for every vertex $$$v$$$. We define the potential based on the shortest distance from this $$$s$$$, and it will be correct. Note that the property of a potential remains valid if we delete some vertices and edges, so it's not a problem to remove $$$s$$$ after we compute the potential.

Now, you might be thinking, "wait, I thought the point of a potential is to help us compute shortest paths, and now you're telling me we need to use shortest paths to compute the potential? I want my money back!" But remember, we're trying to compute all-pairs shortest paths. Running an algorithm like Bellman-Ford from every starting vertex would be expensive. But here, we're only running such an algorithm once, from the new vertex $$$s$$$. Then we use a more efficient algorithm (Dijkstra) for the all-pairs part. So for this step 1, we just run Bellman-Ford once on the graph with the new vertex $$$s$$$. The complexity of step 1 is $$$O(nm)$$$.
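
A sketch of step 1, again with assumed names and the same $$$(u, v, w)$$$ edge-list representation as before. Instead of materializing the extra vertex $$$s$$$, we can start every distance at $$$0$$$, which is equivalent to having already relaxed the zero-weight edges out of $$$s$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Step 1: compute a potential via Bellman-Ford from the virtual source s.
// Starting with p[v] = 0 for all v simulates the 0-weight edges s -> v.
// Assumes the graph has no negative cycles.
vector<ll> compute_potential(int n, const vector<tuple<int, int, ll>>& edges) {
    vector<ll> p(n, 0);
    // Shortest paths from s use at most n-1 original edges (they are simple
    // when there are no negative cycles), so n-1 relaxation rounds suffice.
    for (int round = 0; round < n - 1; round++)
        for (auto [u, v, w] : edges)
            if (p[u] + w < p[v]) p[v] = p[u] + w;
    return p;
}
```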

Overall, step 3 is the bottleneck, and Johnson's algorithm has complexity $$$O(nm\log n)$$$ (or $$$O(n^2\log n + nm)$$$ if you use Fibonacci heaps).
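
Putting the two hypothetical helpers above (compute_potential and all_pairs) together on a small made-up graph with a negative edge:

```cpp
int main() {
    int n = 3;
    // Edges: 0 -> 1 (weight 2), 1 -> 2 (weight -5), 0 -> 2 (weight 1).
    vector<tuple<int, int, ll>> edges = {{0, 1, 2}, {1, 2, -5}, {0, 2, 1}};
    vector<vector<pair<int, ll>>> adj(n);
    for (auto [u, v, w] : edges) adj[u].push_back({v, w});

    vector<ll> p = compute_potential(n, edges);  // step 1
    auto dist = all_pairs(n, adj, p);            // steps 2-4
    cout << dist[0][2] << '\n';                  // prints -3, via 0 -> 1 -> 2
}
```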

