Hi everyone!
Today I want to write about the Gale-Ryser Theorem and some of its applications.
Gale-Ryser Theorem
We have an array of $$$n$$$ non-negative integers $$$a_1, a_2, \ldots, a_n$$$ and $$$m$$$ positive integers $$$b_1, b_2, \ldots, b_m$$$, each at most $$$n$$$. The array $$$b$$$ is a sequence of operations: in the $$$i$$$-th operation we must decrease $$$b_i$$$ positions by $$$1$$$, formally pick $$$b_i$$$ distinct indices $$$j_1, j_2, \ldots, j_{b_i}$$$ and decrease $$$a_{j_p}$$$ by $$$1$$$ for each $$$1 \le p \le b_i$$$. We want to know whether it is possible to perform all the operations so that every $$$a_i \ge 0$$$ at the end.
Without loss of generality $$$b_1 \ge b_2 \ge \ldots \ge b_m$$$. The theorem says that it is possible iff $$$\sum \limits_{i=1}^n \min(a_i, k) \ge \sum \limits_{j=1}^k b_j$$$ for all $$$1 \le k \le m$$$. The right-hand side is how many decrements the $$$k$$$ largest operations require, and the left-hand side is how many of them the array can absorb: position $$$i$$$ can serve each of these $$$k$$$ operations at most once, and at most $$$a_i$$$ times in total.
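The inequalities translate directly into code. Here is a minimal Python sketch (the function name `feasible` and the straightforward $$$O(nm)$$$ loop are my own illustration, not from the original):

```python
def feasible(a, b):
    """Check whether every operation can be performed while keeping
    all a_i >= 0, via the Gale-Ryser inequalities."""
    b = sorted(b, reverse=True)           # b_1 >= b_2 >= ... >= b_m
    need = 0                              # sum of b_1..b_k
    for k in range(1, len(b) + 1):
        need += b[k - 1]
        # capacity the array offers to the k largest operations
        have = sum(min(x, k) for x in a)
        if have < need:
            return False
    return True

print(feasible([2, 2, 1], [3, 2]))  # True: {1,2,3}, then {1,2}
print(feasible([3, 1, 1], [3, 2]))  # False: after the first operation
                                    # only one position is positive
```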
The proof by induction also suggests a construction strategy, which turns out to be the first greedy that comes to mind: on each operation, select the $$$b_i$$$ currently largest positions. Now you know the proof behind this greedy. While it doesn't follow from this proof, it is actually not necessary to sort $$$b$$$ to run the greedy. This will also be proven later in the blog; for now, just remember that we can process the operations in any order, always picking the largest positions.
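The greedy can be sketched with a max-heap (an illustrative Python version; the helper name `construct` and the interface are mine). It processes $$$b$$$ in the given order, which is fine per the remark above, and returns the chosen index sets, or `None` when some operation cannot find enough positive positions:

```python
import heapq

def construct(a, b):
    """Greedy: for each operation pick the b_i currently largest
    positions (Python's heapq is a min-heap, so values are negated)."""
    heap = [(-x, i) for i, x in enumerate(a) if x > 0]
    heapq.heapify(heap)
    ops = []
    for need in b:
        if len(heap) < need:      # not enough positive positions left
            return None
        picked = [heapq.heappop(heap) for _ in range(need)]
        ops.append([i for _, i in picked])
        for neg, i in picked:     # neg + 1 is the negated value after -1
            if neg + 1 < 0:
                heapq.heappush(heap, (neg + 1, i))
    return ops

print(construct([2, 2, 1], [3, 2]))  # [[0, 1, 2], [0, 1]]
print(construct([3, 1, 1], [3, 2]))  # None
```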
An important special case: all $$$b_i = t$$$. That is, $$$t$$$ positions are decreased $$$m$$$ times. Then it suffices to check the single inequality $$$\sum \limits_{i=1}^n \min(a_i, m) \ge mt$$$: the function $$$f(k) = \sum_{i=1}^n \min(a_i, k) - kt$$$ is concave with $$$f(0) = 0$$$, so $$$f(m) \ge 0$$$ implies $$$f(k) \ge 0$$$ for all $$$1 \le k \le m$$$.
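For this equal-operations case the check becomes one line. A hedged sketch (function name mine), using the $$$k = m$$$ inequality $$$\sum_i \min(a_i, m) \ge mt$$$, which by concavity of $$$\sum_i \min(a_i, k) - kt$$$ in $$$k$$$ implies all the others:

```python
def feasible_equal(a, t, m):
    """m operations, each decreasing t distinct positions by 1.
    A single Gale-Ryser inequality (k = m) suffices here."""
    return sum(min(x, m) for x in a) >= m * t

print(feasible_equal([2, 2, 1], 2, 2))  # True:  5 >= 4
print(feasible_equal([100, 0], 2, 1))   # False: only one positive position
```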
Problem from AtCoder ABC that uses the theorem
Another example is 1774B - Coloring. Statement: we have $$$n$$$ cells and $$$m$$$ colors, and each cell must be colored. For each color $$$i$$$ there must be exactly $$$a_i$$$ cells painted with that color ($$$\sum a = n$$$). Also, no window of $$$k$$$ consecutive cells may contain two cells of the same color.
The same idea is used in 1893D - Colorful Constructive.
There is a symmetrical case to this problem. Suppose on each operation we decrease **at most** $$$b_i$$$ elements by $$$1$$$, but the goal now is to get all $$$a_i = 0$$$. Just swap $$$a$$$ and $$$b$$$ and we get the same problem as before! Originally each $$$b_j$$$ required $$$b_j$$$ positions in $$$a$$$, and each position $$$a_i$$$ could be picked at most $$$a_i$$$ times. Here each $$$a_i$$$ must be picked exactly $$$a_i$$$ times (we need to pick exactly $$$a_i$$$ positions in $$$b$$$), and each $$$b_j$$$ can be picked at most $$$b_j$$$ times.
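The swapped check can be written directly. A hedged Python sketch (function name mine), where the sorted $$$a$$$ now plays the role the sorted $$$b$$$ played before and $$$b$$$ supplies the capacities:

```python
def feasible_zero(a, b):
    """Each operation j decreases AT MOST b[j] distinct positions by 1;
    can we reach all a_i = 0?  Swap roles: position i must be picked
    exactly a_i times, operation j can be picked at most b_j times."""
    a = sorted(a, reverse=True)
    need = 0                              # sum of the k largest demands
    for k in range(1, len(a) + 1):
        need += a[k - 1]
        if sum(min(x, k) for x in b) < need:
            return False
    return True

print(feasible_zero([2, 1, 1], [2, 2]))  # True: ops {1,2} and {1,3}
print(feasible_zero([3, 1], [2, 2]))     # False: a_1 needs 3 operations
```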
Here are a couple of special cases I've seen.
All operations are $$$2$$$: we want to find the minimal number of operations needed to make all $$$a_i = 0$$$. I see this one from time to time, but right now I can't recall a specific problem. For the proofs, assume that $$$a$$$ is non-increasing.
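For the all-$$$2$$$ case, the classical closed form (stated here as a hedged claim, since the blog gives no proof) is $$$\max\left(\max a_i, \left\lceil \frac{\sum a_i}{2} \right\rceil\right)$$$: both quantities are clear lower bounds, the first because the maximum element must be hit once per operation, the second because each operation removes at most $$$2$$$ in total.

```python
def min_ops_pairs(a):
    """Minimum number of 'decrease at most 2 distinct positions by 1'
    operations to zero out a (classical result, assumed here)."""
    return max(max(a), (sum(a) + 1) // 2)  # ceil(sum / 2)

print(min_ops_pairs([3, 1, 1]))     # 3: {1,2}, {1,3}, {1}
print(min_ops_pairs([1, 1, 1, 1]))  # 2
```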
All operations are $$$n - 1$$$: we want to find the minimal number of operations needed to make all $$$a_i = 0$$$. This can be used in 2181G - Greta's Game.
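For the all-$$$(n-1)$$$ case the same style of argument suggests $$$\max\left(\max a_i, \left\lceil \frac{\sum a_i}{n-1} \right\rceil\right)$$$ (my derivation via the swapped inequalities, not stated in the blog): with $$$m$$$ operations every prefix constraint for $$$k \le n - 1$$$ reduces to $$$\max a_i \le m$$$, and the $$$k = n$$$ constraint gives $$$\sum a_i \le m(n-1)$$$.

```python
def min_ops_skip_one(a):
    """Minimum number of 'decrease at most n-1 distinct positions by 1'
    operations to zero out a (n = len(a), n >= 2); hedged derivation."""
    n = len(a)
    return max(max(a), -(-sum(a) // (n - 1)))  # ceil(sum / (n-1))

print(min_ops_skip_one([2, 2, 2]))  # 3: skip each position once
print(min_ops_skip_one([5, 1, 1]))  # 5: position 1 is in every operation
```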
Application of this idea in Meta Hacker Cup
Lastly, there is a problem from the Open Olympiad 24-25 XIX Open Olympiad in Informatics - Final Stage, Day 2 (Unrated, Online Mirror, IOI rules), which motivated me to understand this theorem. I will only state facts from it without providing proofs.
This greedy gives the lexicographically largest sorted array among all arrays achievable with the operations. The proof also doesn't use the fact that the operations are done in non-increasing order.
If all of the operations are equal ($$$m$$$ operations, each decreasing $$$k$$$ positions), it is possible to find any prefix sum of the resulting array in $$$O(\log m \cdot T)$$$, where $$$T$$$ is the time to compute the required sums.