Hi everyone!
Today I want to write about the Gale-Ryser Theorem and some of its applications.
Gale-Ryser Theorem
We have an array of $$$n$$$ non-negative integers $$$a_1, a_2, \ldots, a_n$$$ and an array of $$$m$$$ positive integers $$$b_1, b_2, \ldots, b_m$$$, with $$$b_i \le n$$$. The array $$$b$$$ describes a sequence of operations: in the $$$i$$$-th operation we decrease the values at $$$b_i$$$ distinct positions by $$$1$$$; formally, we pick $$$b_i$$$ distinct indices $$$j_1, j_2, \ldots, j_{b_i}$$$ and decrease $$$a_{j_p}$$$ by $$$1$$$ for each $$$1 \le p \le b_i$$$. We want to know whether it's possible to have all $$$a_i \ge 0$$$ after all the operations.
Without loss of generality $$$b_1 \ge b_2 \ge \ldots \ge b_m$$$. The theorem says that it's possible iff $$$\sum \limits_{i=1}^n \min(a_i, k) \ge \sum \limits_{j=1}^k b_j$$$ for all $$$1 \le k \le m$$$.
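The condition is easy to check directly. A minimal sketch (the helper name `feasible` is mine, and the $$$O(nm)$$$ double loop is deliberately left unoptimized):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Gale-Ryser check: after sorting b in non-increasing order, verify
// sum_{i=1..n} min(a_i, k) >= b_1 + ... + b_k for every 1 <= k <= m.
bool feasible(vector<long long> a, vector<long long> b) {
    sort(b.rbegin(), b.rend());                   // b_1 >= b_2 >= ... >= b_m
    long long prefB = 0;
    for (long long k = 1; k <= (long long)b.size(); ++k) {
        prefB += b[k - 1];
        long long lhs = 0;
        for (long long x : a) lhs += min(x, k);   // sum of min(a_i, k)
        if (lhs < prefB) return false;            // condition fails at this k
    }
    return true;
}
```

With $$$a$$$ sorted and prefix sums this can be brought down to $$$O((n+m)\log(n+m))$$$, but the direct version is enough to see the statement.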
The proof by induction also suggests a construction, which turns out to be the natural greedy strategy: in the $$$i$$$-th operation, select the $$$b_i$$$ largest values. So now you also know why this greedy works. Interestingly, although it doesn't follow from this proof, sorting $$$b$$$ is actually not needed to run the greedy. This will also be proven later in the blog; for now, simply note that we can process the operations in any order, always picking the largest values.
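To illustrate, here is a direct simulation of that greedy (naming mine; resorting on every operation keeps the sketch short at the cost of an extra log factor). Per the remark above, $$$b$$$ is processed in the given order:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy construction: for each operation, decrease the b_i currently
// largest values. Returns the final array, or nullopt if some value
// would have to go below zero (i.e. the instance is infeasible).
optional<vector<int>> applyGreedy(vector<int> a, const vector<int>& b) {
    for (int need : b) {                         // operations in given order
        vector<int> idx(a.size());
        iota(idx.begin(), idx.end(), 0);
        sort(idx.begin(), idx.end(),
             [&](int i, int j) { return a[i] > a[j]; });
        for (int p = 0; p < need; ++p) {
            if (a[idx[p]] == 0) return nullopt;  // fewer than b_i positive values
            --a[idx[p]];
        }
    }
    return a;
}
```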
An important special case: all $$$b_i = t$$$, that is, in each of the $$$m$$$ operations exactly $$$t$$$ positions are decreased. Then it suffices to check the single inequality $$$\sum \limits_{i=1}^n \min(m, a_i) \ge mt$$$ (the $$$k = m$$$ case of the condition above; the inequalities for smaller $$$k$$$ follow from it).
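In code (helper name mine), this is just the $$$k = m$$$ prefix inequality of the general condition:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Uniform case: m operations, each decreasing exactly t distinct
// positions. Feasible iff sum_i min(m, a_i) >= m*t (together with
// t <= n, which the statement already guarantees).
bool feasibleUniform(const vector<long long>& a, long long m, long long t) {
    long long lhs = 0;
    for (long long x : a) lhs += min(x, m);
    return lhs >= m * t;
}
```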
Problem from AtCoder ABC which uses the Theorem
Another example is 1774B - Coloring. Here's the short statement: we have $$$n$$$ cells and $$$m$$$ colors, and each cell must be colored. For each color $$$i$$$ there must be exactly $$$a_i$$$ cells painted with that color ($$$\sum a = n$$$). Also, no window of $$$k$$$ consecutive cells may contain two cells of the same color.
Here is a problem where you can try applying the idea yourself: 1893D - Colorful Constructive.
Dual
There is also a dual version of this problem. Suppose on each operation we decrease at most $$$b_i$$$ elements by $$$1$$$, but the goal now is to get all $$$a_i = 0$$$. Just swap $$$a$$$ and $$$b$$$ and we get the same problem as before! Originally, each $$$b_j$$$ required $$$b_j$$$ positions in $$$a$$$, and each position $$$a_i$$$ could be picked at most $$$a_i$$$ times. Here, each $$$a_i$$$ must be picked exactly $$$a_i$$$ times (we need to pick exactly $$$a_i$$$ positions in $$$b$$$ for it), and each $$$b_j$$$ can be picked at most $$$b_j$$$ times.
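A sketch of the dual check after the swap (naming mine): sort $$$a$$$ non-increasingly so it plays the role the sorted $$$b$$$ played before, and apply the same prefix condition with the roles exchanged:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Dual check: m operations, the j-th decreasing at most b_j distinct
// elements, and we must reach a_i = 0 exactly. After the swap, the
// condition reads: for every 1 <= k <= n (a sorted non-increasingly),
//   sum_j min(b_j, k) >= a_1 + ... + a_k.
bool dualFeasible(vector<long long> a, const vector<long long>& b) {
    sort(a.rbegin(), a.rend());                  // a_1 >= a_2 >= ... >= a_n
    long long prefA = 0;
    for (long long k = 1; k <= (long long)a.size(); ++k) {
        prefA += a[k - 1];
        long long lhs = 0;
        for (long long x : b) lhs += min(x, k);  // sum of min(b_j, k)
        if (lhs < prefA) return false;
    }
    return true;
}
```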
Sometimes all of the operations are the same. Usually in this case we want to find the minimum number of operations $$$m$$$ needed to make all $$$a_i = 0$$$. Suppose each operation decreases at most $$$t$$$ positions. How do we find this $$$m$$$?
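One well-known answer, which can be read off the dual condition: $$$m = \max(\max_i a_i, \lceil \frac{\sum_i a_i}{t} \rceil)$$$. Both terms are clearly lower bounds (an element can be decreased at most once per operation, and each operation removes at most $$$t$$$ in total), and decreasing the $$$t$$$ largest values each time attains them. A sketch (function name mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of operations, each decreasing at most t distinct
// positions by 1, needed to make every a_i zero:
//   m = max(max_i a_i, ceil(sum_i a_i / t)).
long long minOperations(const vector<long long>& a, long long t) {
    long long mx = 0, sum = 0;
    for (long long x : a) { mx = max(mx, x); sum += x; }
    return max(mx, (sum + t - 1) / t);           // ceiling division
}
```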
Right now I can only remember this problem, 2181G - Greta's Game, which uses this fact, but the idea comes up from time to time, usually with $$$t = 2$$$.
There is also an application of this idea in Meta Hacker Cup, problem B.
Lastly, there is a problem from Open Olympiad 24-25 XIX Open Olympiad in Informatics - Final Stage, Day 2 (Unrated, Online Mirror, IOI rules), which motivated me to understand this theorem. The fact below is crucial for the solution, so it's under a spoiler as well.