Lain's blog

By Lain, history, 3 years ago, In English

Consider the degree-$$$n$$$ polynomial $$$P(x) = \sum_{i=0}^n p_ix^i$$$. I call the polynomial $$$Q(x)$$$ its "prefix sum polynomial" if it satisfies the following condition:

$$$ Q(x) = \sum_{y=0}^x P(y) $$$

Essentially, $$$Q(x)$$$ is a prefix sum on the values of $$$P(x)$$$. So,

$$$ \begin{align} Q(x) &= \sum_{y=0}^x P(y) \\ &= \sum_{y=0}^x \sum_{i=0}^n p_i y^i \\ &= \sum_{i=0}^n p_i \sum_{y=0}^x y^i \end{align} $$$

Now, all we need is a formula for the prefix sum of $$$y^i$$$. This is given by Faulhaber's Formula:

$$$ \sum_{y=0}^x y^i = \frac{1}{i+1} \sum_{j=0}^i \binom{i+1}{j} B_j x^{i+1-j} $$$

where $$$B_j$$$ is the $$$j$$$th Bernoulli number, with the convention $$$B_1 = +\frac{1}{2}$$$ (the convention matters; see the note about the generating function below). The formula only applies for positive values of $$$i$$$, so we have to handle $$$[x^0]P(x)$$$ separately. Moreover, we can tell from this formula that the degree of $$$Q(x)$$$ will be $$$n+1$$$.
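
To sanity-check the formula (and the $$$B_1 = +\frac{1}{2}$$$ convention) numerically, here is a short Python snippet with exact rationals; `bernoulli_plus` is a made-up helper name, and it uses the quadratic recurrence $$$\sum_{j=0}^{m} \binom{m+1}{j} B_j = m + 1$$$, which is fine for a test:

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    # B_0..B_m with B_1 = +1/2, from the recurrence sum_{j<=m} C(m+1,j) B_j = m+1
    # (the O(n^2) recurrence; see below for the O(n log n) approach)
    B = [Fraction(1)] + [Fraction(0)] * m
    for i in range(1, m + 1):
        B[i] = (i + 1 - sum(comb(i + 1, j) * B[j] for j in range(i))) / (i + 1)
    return B

# verify Faulhaber's formula against brute-force prefix sums for small i and x
B = bernoulli_plus(6)
for i in range(1, 6):
    for x in range(8):
        rhs = sum(comb(i + 1, j) * B[j] * Fraction(x) ** (i + 1 - j)
                  for j in range(i + 1)) / (i + 1)
        assert rhs == sum(Fraction(y) ** i for y in range(x + 1))
```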

Just using the formula above is enough to find $$$Q(x)$$$ in $$$O(n^2)$$$, but let us go further. From what we know so far, the coefficients of $$$Q(x)$$$ for $$$i \geq 1$$$ would be:

$$$ \begin{align} [x^i]Q(x) &= \sum_{j=i-1}^n \frac{p_j}{j+1} B_{j+1-i} \binom{j+1}{j+1-i} \\ &= \sum_{j=i-1}^n \frac{p_j}{j+1} B_{j+1-i} \frac{(j+1)!}{i! \cdot (j+1-i)!} \\ &= \frac{1}{i!} \sum_{j=i-1}^n (p_j j!) \cdot \frac{B_{j+1-i}}{(j+1-i)!} \\ \end{align} $$$

This looks suspiciously like a convolution. Let us define $$$A(x)$$$ and $$$B(x)$$$ as:

$$$ A(x) = \sum_{i=0}^n \frac{B_{n-i}}{(n-i)!} x^i $$$
$$$ B(x) = \sum_{i=1}^n (p_i \cdot i!)x^i $$$

Then,

$$$ [x^i] Q(x) = \frac{[x^{i+n-1}](A \cdot B)}{i!} $$$

One caveat: the coefficient formula above was derived for $$$i \geq 1$$$, so the convolution's value at $$$i = 0$$$ should be discarded — every Faulhaber polynomial has zero constant term, so before accounting for $$$p_0$$$ we simply have $$$[x^0]Q(x) = 0$$$. We can then include the contribution of $$$[x^0]P(x)$$$ by adding it to $$$[x^0]Q(x)$$$ and $$$[x^1]Q(x)$$$, since the prefix sum of the constant $$$p_0$$$ is $$$p_0 \cdot (x+1)$$$. This way, we can calculate $$$Q(x)$$$ in $$$O(n \log n)$$$ using FFT.
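
To make this concrete, here is a minimal reference implementation in Python (exact rationals and a plain $$$O(n^2)$$$ convolution for clarity; substituting an NTT over $$$\mathbb{Z}/p$$$ gives the promised $$$O(n \log n)$$$). The helper names are mine, not from any of the referenced solutions:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_plus(m):
    # Bernoulli numbers B_0..B_m, convention B_1 = +1/2 (as in the snippet above)
    B = [Fraction(1)] + [Fraction(0)] * m
    for i in range(1, m + 1):
        B[i] = (i + 1 - sum(comb(i + 1, j) * B[j] for j in range(i))) / (i + 1)
    return B

def prefix_sum_polynomial(p):
    # coefficients q[0..n+1] of Q(x) = sum_{y=0}^{x} P(y), given p[0..n]
    n = len(p) - 1
    Bern = bernoulli_plus(n)
    A = [Bern[n - i] / factorial(n - i) for i in range(n + 1)]           # A_i = B_{n-i}/(n-i)!
    Bp = [Fraction(0)] + [p[i] * factorial(i) for i in range(1, n + 1)]  # B_i = p_i * i!
    C = [Fraction(0)] * (2 * n + 1)              # plain O(n^2) convolution for clarity;
    for a in range(n + 1):                       # an NTT over Z/p makes this O(n log n)
        for b in range(n + 1):
            C[a + b] += A[a] * Bp[b]
    q = [Fraction(p[0])]                         # [x^0]Q = Q(0) = p_0 (formula needs i >= 1)
    q += [C[i + n - 1] / factorial(i) for i in range(1, n + 2)]
    q[1] += p[0]                                 # prefix sum of the constant p_0 is p_0*(x+1)
    return q

print(prefix_sum_polynomial([Fraction(0), Fraction(1), Fraction(1)]))
# -> [0, 2/3, 1, 1/3], i.e. sum_{y=0}^{x} (y + y^2) = x^3/3 + x^2 + 2x/3
```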

Finally, we need to tackle the problem of calculating the Bernoulli numbers themselves. We could use the recursive formula from the Wikipedia page (the one the sketches above use), but that's $$$O(n^2)$$$. Instead, we can use the exponential generating function of the Bernoulli numbers:

$$$ \frac{x}{e^x-1} = \left(\sum_{i \geq 0} \frac{x^i}{(i+1)!} \right)^{-1} $$$

Note that this EGF produces the convention $$$B_1 = -\frac{1}{2}$$$; since every other odd-index Bernoulli number is zero, flipping the sign of $$$B_1$$$ yields the $$$B_1 = +\frac{1}{2}$$$ convention that Faulhaber's formula above requires. The series inverse can be computed in $$$O(n \log n)$$$ with Newton's method.

Now, we can solve the entire problem in $$$O(n \log{n})$$$.
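
Here's a sketch of that computation (a hypothetical `bernoulli_via_egf` helper; naive $$$O(n^2)$$$ series inversion with exact rationals for readability, where a real solution would use Newton's method over $$$\mathbb{Z}/p$$$):

```python
from fractions import Fraction
from math import factorial

def bernoulli_via_egf(n):
    # B_0..B_n via B_i = i! * [x^i] (sum_{k>=0} x^k/(k+1)!)^{-1},
    # then a sign flip on B_1 to switch to the B_1 = +1/2 convention
    f = [Fraction(1, factorial(i + 1)) for i in range(n + 1)]  # (e^x - 1)/x mod x^{n+1}
    g = [Fraction(0)] * (n + 1)                                # g = 1/f mod x^{n+1}
    g[0] = 1 / f[0]
    for i in range(1, n + 1):                                  # naive O(n^2) inversion;
        g[i] = -sum(f[j] * g[i - j] for j in range(1, i + 1)) / f[0]  # Newton gives O(n log n)
    B = [g[i] * factorial(i) for i in range(n + 1)]
    B[1] = -B[1]
    return B

assert bernoulli_via_egf(5) == [1, Fraction(1, 2), Fraction(1, 6), 0, Fraction(-1, 30), 0]
```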

I'd like to thank neal, since I only found this technique by looking at his submission for ARC 104 E. I later found out that the same idea is used in the problem SERSUM by chemthan, but I decided to write this blog anyway to introduce it to more people.

Edit: I've also put this on my personal blog. I'll probably put less CF-blog-worthy things there in the future.


»
3 years ago, # |
Rev. 3

This is closely related to polynomial input-shift (Taylor shift) calculation in $$$O(n \log{n})$$$, since of course $$$P(x) = Q(x) - Q_{-1}(x)$$$, where $$$Q_t(x) := Q(x + t)$$$ is a shifted version of $$$Q$$$. That leads to a nice intuitive explanation both for why this works out to a cross-correlation in the $$$\frac{x^i}{i!}$$$ basis and for where the EGF of the Bernoulli numbers comes from.

Notice that $$$Q_0(x) = Q(x)$$$ and $$$\frac{\partial}{\partial t} Q_t(x) = \frac{\partial}{\partial x} Q_t(x) = \mathrm{D}(Q_t)(x)$$$, where $$$\mathrm{D}$$$ is the linear operator that takes a polynomial to its derivative. And since $$$\mathrm{D}$$$ is linear, this is just a linear ODE initial-value-problem for the polynomials $$$Q_t$$$, and the solution to this kind of problem is well-known:

$$$ \displaystyle Q_t = e^{t \mathrm{D}} \cdot Q = \left( \sum_{i=0}^{\infty} \frac{(t \mathrm{D})^i}{i!} \right) \cdot Q $$$

Now, working in the $$$\frac{x^i}{i!}$$$ basis makes perfect sense: It turns differentiation into left-shift, making the sums-of-derivatives that comprise this series into a cross-correlation.

Returning to the original problem, we can reason that $$$P = Q - Q_{-1} = (1 - e^{-\mathrm{D}}) \cdot Q$$$. We can't simply solve for $$$Q$$$ by applying the inverse $$$(1 - e^{-\mathrm{D}})^{-1}$$$, because the operation $$$Q \mapsto Q - Q_{-1}$$$ isn't invertible: it can't distinguish $$$Q(x) = 0$$$ from $$$Q(x) = 1$$$. But we can solve for every higher-order term of $$$Q$$$, which is the same as solving for $$$\mathrm{D}Q$$$, and that gives this satisfyingly succinct description of the algorithm:

$$$ \displaystyle \mathrm{D}Q = \left( \frac{\mathrm{D}}{1 - e^{-\mathrm{D}}} \right) \cdot P $$$
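
For example, with $$$P(x) = x$$$, the series $$$\frac{\mathrm{D}}{1 - e^{-\mathrm{D}}} = 1 + \frac{\mathrm{D}}{2} + \frac{\mathrm{D}^2}{12} - \cdots$$$ (the EGF of the Bernoulli numbers with $$$B_1 = +\frac{1}{2}$$$) gives

$$$ \displaystyle \mathrm{D}Q = \left( 1 + \frac{\mathrm{D}}{2} + \frac{\mathrm{D}^2}{12} - \cdots \right) x = x + \frac{1}{2}, $$$

which matches $$$Q(x) = \frac{x^2+x}{2}$$$; the constant term that the operator cannot see is pinned down by $$$Q(0) = P(0)$$$.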
  • »
    »
    3 years ago, # ^ |

    You missed $$$t$$$ in your expansion of $$$e^{tD}$$$.

    The formula is nice, but how does one compute the result of applying $$$\frac{D}{1-e^{-D}}$$$ to a polynomial?

    • »
      »
      »
      3 years ago, # ^ |

      Right; thank you. The key observation for calculating this efficiently is that differentiation $$$\mathrm{D}$$$ is a left-shift in the $$$\frac{x^i}{i!}$$$ basis, so any formal power series in $$$\mathrm{D}$$$ is a cross-correlation in this basis. To be more explicit, if $$$\frac{\mathrm{D}}{1 - e^{-\mathrm{D}}} = \sum_{i=0}^{\infty} c_i \mathrm{D}^i$$$ as a formal power series in $$$\mathrm{D}$$$ and $$$P(x) = \sum_{i=0}^{N} p_i \frac{x^i}{i!}$$$, then

      $$$ \begin{array}{rl} \displaystyle \left(\frac{\mathrm{D}}{1 - e^{-\mathrm{D}}}\right) (P)(x) & \displaystyle = \sum_{j=0}^{N} c_j \sum_{i=0}^{N - j} p_{i+j} \frac{x^i}{i!} \\ & \displaystyle = \sum_{i=0}^{N} \left( \sum_{j=0}^{N-i} c_j p_{i+j} \right) \frac{x^i}{i!}. \end{array}$$$

      These coefficients are just the cross-correlation between $$$c$$$ and $$$p$$$.

      • »
        »
        »
        »
        3 years ago, # ^ |

        Yeah, it's enough to compute

        $$$R(D) = \left(\frac{1-e^{-D}}{D}\right)^{-1} \pmod{D^{N+1}},$$$

        because $$$D^{N+1} P(x)=0$$$, so it's safe to work modulo $$$D^{N+1}=0$$$... And the same approach would work to compute $$$R(D) P(x)$$$ for an arbitrary series $$$R(D)$$$. That's interesting.
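
        A minimal sketch of applying such an $$$R(D)$$$ (hypothetical helper, exact rationals; the cross-correlation is written naively in $$$O(N^2)$$$, but reversing $$$c$$$ turns it into an ordinary convolution, hence FFT-friendly):

```python
from fractions import Fraction
from math import factorial

def apply_series_in_D(c, p):
    # apply R(D) = sum_j c[j] D^j to P(x) = sum_i p[i] x^i;
    # in the x^i/i! basis, D is a left shift, so R(D) acts as a cross-correlation
    N = len(p) - 1
    f = [p[i] * factorial(i) for i in range(N + 1)]      # P in the x^i/i! basis
    g = [sum(c[j] * f[i + j] for j in range(min(len(c), N + 1 - i)))
         for i in range(N + 1)]
    return [g[i] / factorial(i) for i in range(N + 1)]   # back to monomial coefficients

# first terms of D/(1 - e^{-D}) = 1 + D/2 + D^2/12 - ...  (Bernoulli EGF, B_1 = +1/2)
c = [Fraction(1), Fraction(1, 2), Fraction(1, 12)]
print(apply_series_in_D(c, [Fraction(0), Fraction(1)]))
# -> [1/2, 1]: D/(1 - e^{-D}) applied to P(x) = x gives DQ = 1/2 + x
```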

»
3 years ago, # |

Kinda related: https://mirror.codeforces.com/blog/entry/60756

Also, it is much simpler to work in the basis of falling/rising factorials instead of just powers: for example, the prefix sum polynomial for $$$(x)_n$$$ is $$$(x+1)_{n+1}/(n+1)$$$.
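
That identity is the hockey-stick identity $$$\sum_{y=0}^{x}\binom{y}{n} = \binom{x+1}{n+1}$$$ scaled by $$$n!$$$; a quick exact-arithmetic check (hypothetical snippet):

```python
from fractions import Fraction

def falling(x, n):
    # (x)_n = x (x-1) ... (x-n+1)
    r = Fraction(1)
    for k in range(n):
        r *= x - k
    return r

# prefix sums of (y)_n match (x+1)_{n+1} / (n+1) for small n and x
for n in range(5):
    for x in range(8):
        assert sum(falling(y, n) for y in range(x + 1)) == falling(x + 1, n + 1) / (n + 1)
```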

  • »
    »
    3 years ago, # ^ |

    "it is much simpler to work in the basis of falling/rising factorials instead of just powers"

    How do you do this in $$$O(n \log n)$$$? Even calculating the required basis seems like it would need $$$O(n^2)$$$.

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2

      Let

      $$$P(x) = a_0 + a_1 x + \dots + a_n x^n = b_0 + b_1 (x)_1 + \dots + b_n (x)_n,$$$

      where $$$(x)_k = x(x-1)\dots(x-k+1)$$$. Note that $$$(x)_{k} = (x)_t \cdot (x-t)_{k-t}$$$, thus

      $$$P(x) = A(x) + B(x-t) \cdot (x)_t,$$$

      where

      $$$\begin{gather} A(x) = b_0 + b_1 (x)_1 + \dots + b_{t-1} (x)_{t-1},\\ B(x) = b_t + b_{t+1}(x)_1 + \dots + b_n (x)_{n-t}. \end{gather}$$$

      This essentially allows you to divide and conquer to make a transition in any direction. Though, this way it would probably take $$$O(n \log^2 n)$$$...
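
      A sketch of that divide and conquer (hypothetical helpers, exact rationals, naive multiplication and shift for brevity; with FFT-based multiplication and an $$$O(n \log n)$$$ Taylor shift, each level costs $$$O(n \log n)$$$, for $$$O(n \log^2 n)$$$ total). This direction goes from the $$$b_i$$$ to the $$$a_i$$$; the reverse direction uses division with remainder by $$$(x)_t$$$, as described a couple of comments below:

```python
from fractions import Fraction
from math import comb

def poly_mul(f, g):
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def poly_shift(f, t):
    # f(x + t); naive O(n^2), but doable in O(n log n) as a single convolution
    g = [Fraction(0)] * len(f)
    for i, fi in enumerate(f):
        for j in range(i + 1):
            g[j] += fi * comb(i, j) * Fraction(t) ** (i - j)
    return g

def falling_factorial_poly(t):
    # monomial coefficients of (x)_t = x (x-1) ... (x-t+1)
    f = [Fraction(1)]
    for k in range(t):
        f = poly_mul(f, [Fraction(-k), Fraction(1)])
    return f

def falling_to_monomial(b):
    # b[i] = coefficient of (x)_i; returns monomial coefficients of the same polynomial
    if len(b) == 1:
        return [Fraction(b[0])]
    t = len(b) // 2
    A = falling_to_monomial(b[:t])           # b_0 (x)_0 + ... + b_{t-1} (x)_{t-1}
    B = falling_to_monomial(b[t:])           # b_t (x)_0 + ... + b_n (x)_{n-t}
    res = poly_mul(poly_shift(B, -t), falling_factorial_poly(t))  # B(x-t) * (x)_t
    for i, ai in enumerate(A):
        res[i] += ai
    return res

print(falling_to_monomial([Fraction(0), Fraction(0), Fraction(1)]))
# (x)_2 -> [0, -1, 1], i.e. x^2 - x
```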

      • »
        »
        »
        »
        3 years ago, # ^ |

        I do not understand why it allows us to calculate $$$b_i$$$ using $$$a_i$$$, so let me explain the way I know to do it.

        Consider the formal power series $$$\exp(t) = \sum_{i=0}^{\infty}\frac{t^i}{i!}$$$. Taking its $$$k$$$-th derivative, we get:

        $$$\exp(t) = \exp^{(k)}(t) = \sum_{i=0}^{\infty}\frac{(i)_kt^{i-k}}{i!}.$$$

        Therefore, multiplying by $$$t^k$$$ and then by $$$\exp(-t)$$$,

        $$$t^k = \exp(-t)\cdot t^k\exp(t) = \exp(-t)\cdot\sum_{i=0}^{\infty}\frac{(i)_kt^i}{i!}.$$$

        Taking the linear combination of such equalities with coefficients $$$b_k$$$ (recall that $$$P(i) = \sum_{k=0}^n b_k(i)_k$$$), we obtain:

        $$$\sum_{k=0}^nb_kt^k = \exp(-t)\cdot\left(\sum_{k=0}^nb_kt^k\exp(t)\right) = \exp(-t)\cdot\sum_{i=0}^{\infty}\frac{P(i)t^i}{i!}.$$$

        In other words, one can calculate all $$$P(i)$$$ for $$$0\leq i\leq n$$$ (in $$$O(n\log^2{n})$$$), then construct both power series on the right-hand side (obviously, it doesn't make sense to compute their coefficients past the $$$n$$$-th) and multiply them.

        That being said, both transitions can be made in $$$O(n\log^2{n})$$$, and I do not know if any of them can be performed faster. But the point of my first comment was also about maintaining the polynomial in the falling-factorial basis in the first place instead of transforming it back and forth. Say, if your problem requires turning a polynomial into its prefix sum polynomial and applying some other linear transformations, then it is probably more convenient to do everything in the falling-factorial form.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          If you precompute $$$(x)_t$$$ for key $$$t$$$ values, you can compute

          $$$A(x) \equiv P(x) \pmod{(x)_t}$$$

          and

          $$$B(x-t) = \frac{P(x) - A(x)}{(x)_t}$$$

          in $$$O(n \log n)$$$, then switch from $$$B(x-t)$$$ to $$$B(x)$$$ and solve recursively.

          Interpolation and evaluation in $$$0,\dots,n$$$ is also possible; I was a bit late with my comment below...

    • »
      »
      »
      3 years ago, # ^ |

      Another way to go is to compute $$$P(0), \dots, P(n)$$$ and interpolate the polynomial from these values. For $$$a_0,\dots,a_n$$$, it's classical polynomial evaluation and interpolation. For $$$b_0,\dots,b_n$$$, we can see that

      $$$P(k) = \sum\limits_{i=0}^k b_i (k)_i = \sum\limits_{i=0}^k b_i \frac{k!}{(k-i)!}.$$$

      Let $$$Q(x) = \sum\limits_{k=0}^n \frac{P(k)}{k!} x^k$$$, then

      $$$Q(x) = \sum\limits_{k=0}^n x^k \sum\limits_{i=0}^k \frac{b_i}{(k-i)!} \equiv e^x \cdot R(x) \pmod{x^{n+1}}$$$

      where $$$R(x) = b_0 + b_1 x + \dots + b_n x^n$$$. Of course, the reverse formula also holds:

      $$$R(x) \equiv e^{-x} \cdot Q(x) \pmod{x^{n+1}},$$$

      thus interpolation and evaluation at $$$\{0,\dots,n\}$$$ in the falling factorial basis are computable in $$$O(n \log n)$$$ by simply multiplying with $$$e^x$$$ or $$$e^{-x}$$$ and taking the first $$$n+1$$$ coefficients of the result.
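
      A small sketch of this last trick (hypothetical helpers; exact rationals and naive truncated multiplication standing in for NTT):

```python
from fractions import Fraction
from math import factorial

def exp_series(n, s):
    # e^{s x} mod x^{n+1}
    return [Fraction(s) ** k / factorial(k) for k in range(n + 1)]

def series_mul(f, g, n):
    # f * g mod x^{n+1}, naive O(n^2) for clarity
    h = [Fraction(0)] * (n + 1)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            h[i + j] += f[i] * g[j]
    return h

def eval_falling(b):
    # falling-factorial coefficients b_0..b_n  ->  values P(0), ..., P(n)
    n = len(b) - 1
    q = series_mul(exp_series(n, 1), b, n)             # Q = e^x * R mod x^{n+1}
    return [q[k] * factorial(k) for k in range(n + 1)]

def interp_falling(values):
    # values P(0), ..., P(n)  ->  falling-factorial coefficients b_0..b_n
    n = len(values) - 1
    q = [Fraction(values[k], factorial(k)) for k in range(n + 1)]
    return series_mul(exp_series(n, -1), q, n)         # R = e^{-x} * Q mod x^{n+1}

# P(x) = 3 + 2 (x)_1 + (x)_2 takes values 3, 5, 9, 15 at x = 0, 1, 2, 3
assert eval_falling([Fraction(3), Fraction(2), Fraction(1), Fraction(0)]) == [3, 5, 9, 15]
assert interp_falling([3, 5, 9, 15]) == [3, 2, 1, 0]
```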