Recently, while working on setting problems for various competitions (including CerealCodes), I ran into the problem of computing PIE weights in $$$O(n^2)$$$.
Here's the problem:
Given values $$$count_k$$$ for all $$$k$$$ from $$$0, 1, \cdots, n$$$ such that $$$count_k = \sum_{i = 0}^n \binom{k}{i}ans_i$$$, compute $$$ans_i$$$ for all $$$i$$$ from $$$0, 1, \cdots, n$$$. (You can probably see the applications of such a method when using PIE to solve a problem, hence why I call these PIE weights.)
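For example, if $$$n = 2$$$ and $$$(ans_0, ans_1, ans_2) = (1, 2, 3)$$$, then $$$count_0 = 1$$$, $$$count_1 = 1 + 2 = 3$$$, and $$$count_2 = 1 + 2 \cdot 2 + 3 = 8$$$; the task is to recover $$$(1, 2, 3)$$$ from $$$(1, 3, 8)$$$.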
Now, we can develop a system of equations. From the premise of the problem (and noting that $$$\binom{k}{i} = 0$$$ whenever $$$i > k$$$, so the system is lower triangular), we have that
\begin{alignat*}{5} \binom{0}{0}ans_0 & {}+{} & \binom{0}{1}ans_1 & {}+{} & \cdots & {}+{} & \binom{0}{n}ans_n & {}={} & count_0 \newline \binom{1}{0}ans_0 & {}+{} & \binom{1}{1}ans_1 & {}+{} & \cdots & {}+{} & \binom{1}{n}ans_n & {}={} & count_1 \newline \cdots \newline \binom{n}{0}ans_0 & {}+{} & \binom{n}{1}ans_1 & {}+{} & \cdots & {}+{} & \binom{n}{n}ans_n & {}={} & count_n \end{alignat*}
Then, we can write
\begin{align} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \newline 1 & 1 & 0 & \cdots & 0 \newline 1 & 2 & 1 & \cdots & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline 1 & n & \frac{n(n-1)}{2} & \cdots & 1 \end{pmatrix} \begin{pmatrix} ans_0 \newline ans_1 \newline ans_2 \newline \vdots \newline ans_n \end{pmatrix} = \begin{pmatrix} count_0 \newline count_1 \newline count_2 \newline \vdots \newline count_n \end{pmatrix} \end{align}
Thus, we can write
\begin{align} \begin{pmatrix} ans_0 \newline ans_1 \newline ans_2 \newline \vdots \newline ans_n \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \newline 1 & 1 & 0 & \cdots & 0 \newline 1 & 2 & 1 & \cdots & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline 1 & n & \frac{n(n-1)}{2} & \cdots & 1 \end{pmatrix}^{-1} \begin{pmatrix} count_0 \newline count_1 \newline count_2 \newline \vdots \newline count_n \end{pmatrix} \end{align}
So it suffices to calculate
\begin{align} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \newline 1 & 1 & 0 & \cdots & 0 \newline 1 & 2 & 1 & \cdots & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline 1 & n & \frac{n(n-1)}{2} & \cdots & 1 \end{pmatrix}^{-1} \end{align}
in $$$O(n^2)$$$ time.
To find this, let's break it down into elementary row operations. Notice that, because of Pascal's identity, subtracting the previous row from each row results in an extremely similar matrix: every binomial coefficient shifts down and to the right by one. Writing this row operation as a lower bidiagonal matrix $$$L$$$ multiplied on the left, so that $$$LP = P'$$$ where $$$P$$$ is our matrix and $$$P'$$$ is the shifted one, we get $$$P^{-1} = (P')^{-1} L$$$. In fact, we can see that
\begin{align} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \newline 1 & 1 & 0 & \cdots & 0 \newline 1 & 2 & 1 & \cdots & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline 1 & n & \frac{n(n-1)}{2} & \cdots & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \newline 0 & 1 & 0 & \cdots & 0 \newline 0 & 1 & 1 & \cdots & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline 0 & 1 & n-1 & \cdots & 1 \end{pmatrix}^{-1} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \newline -1 & 1 & 0 & \cdots & 0 & 0 \newline 0 & -1 & 1 & \cdots & 0 & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \newline 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix} \end{align}
Repeating the same step on the successively shifted matrices, we can break the desired inverse up into a product of such row-operation matrices (note that the full bidiagonal matrix, which is applied first to the $$$count$$$ vector, ends up rightmost):
\begin{align} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \newline 1 & 1 & 0 & \cdots & 0 \newline 1 & 2 & 1 & \cdots & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline 1 & n & \frac{n(n-1)}{2} & \cdots & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 \newline 0 & 1 & \cdots & 0 & 0 \newline \vdots & \vdots & \ddots & \vdots & \vdots \newline 0 & 0 & \cdots & 1 & 0 \newline 0 & 0 & \cdots & -1 & 1 \end{pmatrix} \cdots \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \newline 0 & 1 & 0 & \cdots & 0 & 0 \newline 0 & -1 & 1 & \cdots & 0 & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \newline 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \newline -1 & 1 & 0 & \cdots & 0 & 0 \newline 0 & -1 & 1 & \cdots & 0 & 0 \newline \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \newline 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix} \end{align}
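As a quick sanity check, for $$$n = 2$$$ the decomposition reads
\begin{align} \begin{pmatrix} 1 & 0 & 0 \newline 1 & 1 & 0 \newline 1 & 2 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 \newline 0 & 1 & 0 \newline 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \newline -1 & 1 & 0 \newline 0 & -1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \newline -1 & 1 & 0 \newline 1 & -2 & 1 \end{pmatrix} \end{align}
and one can verify directly that multiplying the rightmost matrix by the original binomial matrix gives the identity.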
Now, we just need to optimize this to $$$O(n^2)$$$. Let $$$M_k$$$ be the product of the first $$$k$$$ matrices in this product (so $$$M_n$$$ is the entire inverse). Then, the following properties are satisfied:
The only cells of $$$M_k$$$ that differ from the identity matrix are in the bottom-right $$$(k+1) \times (k+1)$$$ submatrix (in fact, that submatrix is exactly the inverse of the $$$(k+1) \times (k+1)$$$ version of our binomial matrix).
If you remove the last row and the last column of $$$M_{k+1}$$$, the bottom-right $$$(k+1) \times (k+1)$$$ submatrix of what remains is the same as the bottom-right $$$(k+1) \times (k+1)$$$ submatrix of $$$M_k$$$.
Using these two observations, we can solve the problem: we iterate over $$$k$$$, storing only the bottom-right $$$(k+1) \times (k+1)$$$ submatrix of $$$M_k$$$. To compute $$$M_{k+1}$$$ from $$$M_k$$$, we only need to compute the new last row, and each element of that row depends on $$$O(1)$$$ elements of the previous row, so each step takes $$$O(k)$$$ time, for a total of $$$O(n^2)$$$ time.
Sample code for this:
// pie[i][j] ends up being (-1)^(i-j) * C(i, j), the (i, j) entry of the inverse matrix.
// sub(a, b) is assumed to be modular subtraction: (a - b) mod p.
pie[0][0] = 1;
for(int i = 1; i <= n; i++){ // rows 0..n of the (n+1) x (n+1) inverse
    // Each new row follows from the previous one by the signed Pascal recurrence.
    pie[i][0] = sub(0, pie[i-1][0]);
    for(int j = 1; j <= i; j++) pie[i][j] = sub(pie[i-1][j-1], pie[i-1][j]);
}
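Once the table is built, recovering the answers is just a triangular matrix-vector product. A minimal sketch, assuming count[] holds the given values and add/mul are modular helpers analogous to sub:
// ans_i = sum_{j <= i} pie[i][j] * count_j, one O(n) dot product per answer.
for(int i = 0; i <= n; i++){
    ans[i] = 0;
    for(int j = 0; j <= i; j++) ans[i] = add(ans[i], mul(pie[i][j], count[j]));
}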
In fact, the solution code to the CerealCodes problem Cereal Trees uses this idea.
Disclaimer: this is not the only solution; there is a much simpler one that manipulates binomial coefficients directly to find the answer. I thought this method was much cooler, though, since it requires no binomial coefficients at all, just addition and subtraction, and it can also be generalized further.
I think the method of manipulating binomial coefficients is just directly performing Gaussian elimination on the inverse matrix?
I think this is better known as forward substitution for a triangular matrix (https://en.wikipedia.org/wiki/Triangular_matrix#Forward_and_back_substitution)
Yes, this is a special case of forward substitution. However, that method does not actually calculate the inverse matrix; it only calculates the answer vector.
The inverse matrix (which I use to eventually find the answer) is actually extremely useful; if you look at the CerealCodes problem, directly finding the answer will TLE, but precomputing the inverse matrix allows you to find each answer in $$$O(n)$$$.
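For comparison, a minimal sketch of that forward substitution, assuming a precomputed table C[k][i] of binomial coefficients mod p; it solves the triangular system row by row without ever forming the inverse:
// The k-th equation is count_k = sum_{i <= k} C(k,i) * ans_i, and C(k,k) = 1,
// so ans_k = count_k - sum_{i < k} C(k,i) * ans_i.
for(int k = 0; k <= n; k++){
    ans[k] = count[k];
    for(int i = 0; i < k; i++) ans[k] = sub(ans[k], mul(C[k][i], ans[i]));
}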
The sequence $$$count$$$ is the binomial transform of $$$ans$$$. This problem is known as finding the inverse binomial transform, and it can be computed using the same coefficients as the binomial transform, multiplied alternately by $$$1$$$ and $$$-1$$$.
This transform behaves very nicely under composition: if you define the $$$q$$$-analog for this transform as the $$$q$$$-fold composition of it, you get $$$F^{(q)}(a)_n = \sum_{i = 0}^n q^{n - i} \binom{n}{i} a_i$$$. The property $$$F^{(a)} \circ F^{(b)} = F^{(a + b)}$$$ holds for arbitrary reals $$$a, b$$$ if you define $$$F^{(q)}$$$ by the previous definition.
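The composition property is easy to check numerically. Here is a quick self-contained sketch (the helper name transformQ is mine, and it uses doubles rather than modular arithmetic for simplicity):
#include <bits/stdc++.h>
using namespace std;

// Apply F^{(q)}: b_n = sum_{i=0}^n q^(n-i) * C(n, i) * a_i.
vector<double> transformQ(const vector<double>& a, double q){
    int m = a.size();
    // Binomial coefficients from Pascal's triangle.
    vector<vector<double>> C(m, vector<double>(m, 0));
    for(int i = 0; i < m; i++){
        C[i][0] = 1;
        for(int j = 1; j <= i; j++) C[i][j] = C[i-1][j-1] + C[i-1][j];
    }
    vector<double> b(m, 0);
    for(int k = 0; k < m; k++)
        for(int i = 0; i <= k; i++) b[k] += pow(q, k - i) * C[k][i] * a[i];
    return b;
}

int main(){
    vector<double> a = {1, 2, 3, 5};
    // F^{(1.5)} composed with F^{(-0.5)} should match F^{(1)}.
    auto lhs = transformQ(transformQ(a, -0.5), 1.5);
    auto rhs = transformQ(a, 1.0);
    for(size_t i = 0; i < a.size(); i++) printf("%.6f %.6f\n", lhs[i], rhs[i]);
}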
It becomes much easier to deal with the binomial transform if you realize that it is the Riordan array $$$\left(\frac{1}{1 - x}, \frac{x}{1 - x}\right)$$$.
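(Here the Riordan array $$$(g(x), f(x))$$$ denotes the lower-triangular matrix whose $$$(n, k)$$$ entry is $$$[x^n]\, g(x) f(x)^k$$$; with $$$g(x) = \frac{1}{1-x}$$$ and $$$f(x) = \frac{x}{1-x}$$$ that entry is $$$[x^{n-k}] (1 - x)^{-(k+1)} = \binom{n}{k}$$$, which is exactly our matrix.)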
Generally, if we just look at EGFs, we'll see that the whole transform is a single multiplication. If we write $$$C(x) = \sum\limits_{k=0}^n \frac{count_k}{k!} x^k$$$ and $$$A(x) = \sum\limits_{i=0}^n \frac{ans_i}{i!} x^i$$$, the identity rewrites simply as $$$C(x) = e^x A(x)$$$ (modulo $$$x^{n+1}$$$). Of course, retrieving $$$A(x)$$$ from $$$C(x)$$$ may then be done as $$$A(x) = e^{-x} C(x)$$$, hence the alternating $$$1$$$ and $$$-1$$$, coming from the expansion of $$$e^{-x}$$$ instead of $$$e^x$$$.
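Extracting coefficients from $$$A(x) = e^{-x} C(x)$$$ gives back the explicit alternating-sign formula: $$$\frac{ans_n}{n!} = \sum\limits_{k=0}^n \frac{(-1)^{n-k}}{(n-k)!} \cdot \frac{count_k}{k!}$$$, i.e. $$$ans_n = \sum\limits_{k=0}^n (-1)^{n-k} \binom{n}{k} count_k$$$, matching the $$$pie$$$ table above.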
What the heck is that?