https://mzhang2021.github.io/cp-blog/berlekamp-massey/
Special thanks to alien_lover and gabrielwu for proofreading.
Thank you for the blog! The text is very easy to read in my opinion. I would like to add something.
You provided one algorithm for finding the $$$k$$$-th term of a linear recurrence, which works in $$$O(M(n)\log{k})$$$ time, where $$$M(n)$$$ is the time complexity of polynomial multiplication. It is actually no longer the fastest algorithm; check out this paper: what you described seems to be Fiduccia's algorithm, and it takes $$$3M(n)\log{k} + O(n\log{k})$$$ time (according to that paper). Its authors propose another algorithm, which runs in $$$2M(n)\log(k+1) + M(n)$$$ time. It works as follows:
Let $$$Q(x)$$$ be the characteristic polynomial of our recurrence, written in reversed form so that $$$Q(0) = 1$$$, and let $$$F(x) = \sum_{i=0}^{\infty}a_ix^i$$$ be the generating formal power series of our sequence. Then it can be seen that all nonzero terms of $$$F(x)Q(x)$$$ have degree at most $$$n-1$$$. This means that $$$F(x) = P(x)/Q(x)$$$ for some polynomial $$$P(x)$$$. Moreover, we know what $$$P(x)$$$ is: it consists of the first $$$n$$$ terms of $$$F(x)Q(x)$$$, so it can be found with one multiplication of $$$a_0 + a_1x + \ldots + a_{n-1}x^{n-1}$$$ by $$$Q(x)$$$, followed by trimming to degree less than $$$n$$$.
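As a quick sketch of this step (my code, not from the paper: it assumes coefficients modulo the prime $$$998244353$$$, a dense coefficient representation, and `buildP` is a hypothetical helper name):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;  // assumed prime modulus

// P(x) = (a_0 + a_1 x + ... + a_{n-1} x^{n-1}) Q(x) mod x^n:
// one polynomial multiplication, then trimming to degree < n.
vector<long long> buildP(const vector<long long>& a, const vector<long long>& Q) {
    int n = a.size();
    vector<long long> P(n, 0);
    for (int i = 0; i < n; i++)                            // coefficient of x^i
        for (int j = 0; j <= i && j < (int)Q.size(); j++)
            P[i] = (P[i] + a[i - j] * Q[j]) % MOD;
    return P;
}

int main() {
    // Fibonacci: a_i = a_{i-1} + a_{i-2}, so Q(x) = 1 - x - x^2
    vector<long long> a = {0, 1}, Q = {1, MOD - 1, MOD - 1};
    for (long long c : buildP(a, Q)) cout << c << ' ';     // prints: 0 1
}
```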
So what remains is to find $$$[x^k]\dfrac{P(x)}{Q(x)}$$$ (recall that $$$[x^k]G(x)$$$ is the $$$k$$$-th coefficient of a formal power series or a polynomial $$$G(x)$$$). We do this recursively:

$$$$$$\frac{P(x)}{Q(x)} = \frac{P(x)Q(-x)}{Q(x)Q(-x)} = \frac{S(x)}{T(x^2)} = \frac{S_0(x^2) + xS_1(x^2)}{T(x^2)}.$$$$$$
Here $$$S(x) = P(x)Q(-x)$$$, $$$S_0(x)$$$ and $$$S_1(x)$$$ are the polynomials composed of the even- and odd-indexed terms of $$$S(x)$$$ respectively, and we notice that $$$Q(x)Q(-x)$$$ is an even function, so its polynomial representation contains only even-indexed terms; those terms form the polynomial $$$T(x)$$$.
Now, if $$$k$$$ is even, then

$$$$$$[x^k]\frac{P(x)}{Q(x)} = [x^{k/2}]\frac{S_0(x)}{T(x)},$$$$$$

otherwise

$$$$$$[x^k]\frac{P(x)}{Q(x)} = [x^{(k-1)/2}]\frac{S_1(x)}{T(x)}.$$$$$$
Thus we divide $$$k$$$ by $$$2$$$ at the cost of only two multiplications. Moreover, one can probably use even fewer FFTs to compute $$$Q(x)Q(-x)$$$ and maybe apply other tricks to speed this up.
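Here is a minimal self-contained sketch of the whole procedure (my code, not from the paper: the modulus $$$998244353$$$ is assumed, and a naive $$$O(n^2)$$$ `multiply` stands in for NTT, so the $$$2M(n)\log(k+1)$$$ bound only kicks in once fast multiplication is plugged in):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;  // assumed prime modulus

// naive polynomial multiplication mod MOD (stand-in for NTT)
vector<long long> multiply(const vector<long long>& a, const vector<long long>& b) {
    vector<long long> c(a.size() + b.size() - 1, 0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size(); j++)
            c[i + j] = (c[i + j] + a[i] * b[j]) % MOD;
    return c;
}

// [x^k] P(x)/Q(x), assuming Q(0) = 1
long long kthTerm(vector<long long> P, vector<long long> Q, long long k) {
    while (k > 0) {
        vector<long long> Qneg = Q;                 // Q(-x)
        for (size_t i = 1; i < Qneg.size(); i += 2)
            Qneg[i] = (MOD - Qneg[i]) % MOD;
        vector<long long> S = multiply(P, Qneg);    // S(x) = P(x) Q(-x)
        vector<long long> prod = multiply(Q, Qneg); // Q(x) Q(-x) = T(x^2)
        vector<long long> P2, T;
        for (size_t i = k & 1; i < S.size(); i += 2)
            P2.push_back(S[i]);                     // S_0 if k even, S_1 if k odd
        for (size_t i = 0; i < prod.size(); i += 2)
            T.push_back(prod[i]);                   // even part -> T(x)
        P = P2, Q = T;
        k /= 2;                                     // k/2 or (k-1)/2, matching parity
    }
    return P.empty() ? 0 : P[0];                    // k = 0: P(0)/Q(0) = P(0)
}

int main() {
    // Fibonacci: F(x) = x / (1 - x - x^2), so P = {0, 1}, Q = {1, -1, -1}
    vector<long long> P = {0, 1}, Q = {1, MOD - 1, MOD - 1};
    cout << kthTerm(P, Q, 10) << '\n';              // prints 55
}
```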
Woah that's a neat trick, thanks for sharing!
Interestingly, this approach (per Schönhage's article, inspired by Graeffe's method) also applies to reciprocals.
If we want to calculate the result $$$\bmod x^{n+1}$$$, only the coefficients of $$$\frac{1}{T(x)}$$$ up to $$$x^{\lfloor n/2\rfloor}$$$ matter, which allows a half-sized reduction when calculating $$$\frac{1}{Q(x)}$$$. Seemingly there are no significant constant-factor improvements here, but it might be simpler to comprehend than the typical Newton iteration.
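A sketch of that reduction (again my code under the same assumptions: prime modulus, naive multiplication, and `inverse` is a hypothetical helper name). It uses $$$\frac{1}{Q(x)} = \frac{Q(-x)}{Q(x)Q(-x)} = \frac{Q(-x)}{T(x^2)}$$$, so inverting $$$Q$$$ mod $$$x^{n+1}$$$ reduces to inverting $$$T$$$ mod $$$x^{\lfloor n/2\rfloor + 1}$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;  // assumed prime modulus

vector<long long> multiply(const vector<long long>& a, const vector<long long>& b) {
    vector<long long> c(a.size() + b.size() - 1, 0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size(); j++)
            c[i + j] = (c[i + j] + a[i] * b[j]) % MOD;
    return c;
}

long long modpow(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// first n+1 coefficients of 1/Q(x), assuming Q(0) != 0
vector<long long> inverse(const vector<long long>& Q, int n) {
    if (n == 0) return {modpow(Q[0], MOD - 2)};      // base case: 1/Q(0)
    vector<long long> Qneg = Q;                      // Q(-x)
    for (size_t i = 1; i < Qneg.size(); i += 2)
        Qneg[i] = (MOD - Qneg[i]) % MOD;
    vector<long long> prod = multiply(Q, Qneg);      // Q(x) Q(-x) = T(x^2)
    vector<long long> T;
    for (size_t i = 0; i < prod.size() && i <= (size_t)n; i += 2)
        T.push_back(prod[i]);                        // T(x) mod x^{floor(n/2)+1}
    vector<long long> R = inverse(T, n / 2);         // half-sized recursion
    vector<long long> R2(2 * R.size() - 1, 0);       // R(x^2)
    for (size_t i = 0; i < R.size(); i++) R2[2 * i] = R[i];
    vector<long long> res = multiply(Qneg, R2);      // 1/Q(x) = Q(-x) R(x^2)
    res.resize(n + 1);
    return res;
}

int main() {
    vector<long long> Q = {1, MOD - 1};              // Q(x) = 1 - x
    for (long long c : inverse(Q, 4)) cout << c << ' ';  // prints: 1 1 1 1 1
}
```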
In the first code shown, you write that if $$$n$$$ is $$$1$$$ (which means $$$f(x) = x - c_0$$$), then $$$a = [c[0]]$$$, without explaining it. For some of us it might not be trivial that $$$x^k \equiv c_0^k \pmod{x - c_0}$$$. If someone else didn't get that, notice that $$$x^k - r^k = (x-r)(x^{k-1} + x^{k-2}r + x^{k-3}r^2 + \dots + xr^{k-2} + r^{k-1})$$$. This can be extended to arbitrary polynomials to get $$$f(x) \equiv f(r) \pmod{x-r}$$$, which is known as the polynomial remainder theorem.
Good catch, I did forget to explain that part of the code. You can also think of it as $$$x^k \bmod (x - c_0) = (x \bmod (x - c_0))^k \bmod (x - c_0) = c_0^k \bmod (x - c_0)$$$. $$$x \bmod (x - c_0) = c_0$$$ by doing one round of long division.
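As a concrete instance of this identity (my numbers, not from the thread): take $$$c_0 = 2$$$ and $$$k = 3$$$. Then

$$$$$$x^3 - 2^3 = (x - 2)(x^2 + 2x + 4),$$$$$$

so $$$x^3 \bmod (x - 2) = 8 = 2^3$$$, exactly as the factorization predicts.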
At the end of the proof, shouldn't $$$L_i = i - m + L_m$$$ be $$$L_i = i - m + L_{m - 1}$$$?
Because we are taking the previous version of the linear recurrence relation, right?
It's been a while since I've looked at this proof and I've forgotten a lot of the fine details, but I think you're right, since I wrote earlier that $$$L_{m-1} = m + 1 - L_{i-1}$$$ but then substituted that for $$$L_m$$$ instead of $$$L_{m-1}$$$. I'll make the change. Indexing in this proof is a pain D:
By the way, to prove that a linear recurrence for $$$A^m$$$ implies a linear recurrence for a specific entry of $$$dp[m]$$$ (let's say we want $$$dp[m][i]$$$), maybe we can just manipulate the Cayley-Hamilton theorem a bit, like this?
($$$A$$$ is an $$$n \times n$$$ matrix, and $$$b_i$$$ is an $$$n \times 1$$$ vector whose entries are all $$$0$$$ except the $$$i$$$-th row, which is $$$1$$$.)
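Spelling out one way the algebra can go (my reconstruction, with $$$c_1, \dots, c_n$$$ the coefficients given by the characteristic polynomial of $$$A$$$): Cayley-Hamilton gives $$$A^n = c_1 A^{n-1} + c_2 A^{n-2} + \dots + c_n I$$$, and multiplying both sides by $$$A^{m-n}$$$ gives, for every $$$m \ge n$$$,

$$$$$$A^m = c_1 A^{m-1} + c_2 A^{m-2} + \dots + c_n A^{m-n}.$$$$$$

Since $$$dp[m] = A^m \cdot dp[0]$$$, sandwiching this identity between $$$b_i^T$$$ on the left and $$$dp[0]$$$ on the right yields

$$$$$$dp[m][i] = c_1\,dp[m-1][i] + c_2\,dp[m-2][i] + \dots + c_n\,dp[m-n][i],$$$$$$

so the same recurrence holds for each fixed entry of the $$$dp$$$.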
Hi, I saw your other blog earlier containing an explanation for how we get a linear recurrence of the $$$dp$$$ from a linear recurrence on the transition matrix, and I just realized you commented the same explanation under this blog.
Anyways, the math seems to check out to me. Not sure how I didn't think of just doing more algebra on top of the result from Cayley-Hamilton when I wrote this article a while back. Do you mind if I link to your comment in the article for future reference?
Sure, it's my pleasure:D