A problem collection of ODE and differential technique
These problems might be well-known in some countries, but how would other countries learn about them if nobody posed them?
For those who are interested in well-known problems in China.
Thanks to Elegia and djq_cpp for developing this technique, and thanks to Rewinding for reviewing this article.
Chain Reaction in UOJ Round 3 By vfleaking
Statement
You are given a set $$$A$$$; you need to compute $$$g_{i} = \frac{1}{2} \sum_{j,k}{i-1 \choose j}{i-1-j \choose k} g_jg_k$$$, where the sum runs over pairs with $$$i-1-j-k \in A$$$.
Solution
Let the EGF of $$$g$$$ be $$$x(t)$$$ and the EGF of $$$A$$$ be $$$a(t)$$$. Thus $$$x'(t)=\frac{1}{2} a(t) x^2(t)+1$$$. We can solve this equation by D&C and FFT in $$$O(n\log^2 n)$$$. But there is also a faster solution in $$$O(n\log n)$$$.
For a polynomial equation $$$f(x(t))=0$$$, we can use Newton's method to solve it. Suppose we have found a polynomial $$$x_n$$$ such that $$$x \equiv x_n \pmod {t^n}$$$. Since $$$(x-x_n)^2 \equiv 0 \pmod {t^{2n}}$$$, we can derive the following equation by the Taylor series:

$$$0=f(x) \equiv f(x_n) + f'(x_n)(x-x_n) \pmod {t^{2n}}$$$
Thus $$$x_{2n} = x_n - \frac{f(x_n)}{f'(x_n)} \pmod {t^{2n}}$$$.
For an ODE $$$\frac{d}{dt} x =f(x)$$$, we have

$$$x' \equiv f(x_n) + f'(x_n)(x-x_n) \pmod {t^{2n}}$$$

Let $$$r= e^{-\int f'(x_n) dt}$$$. Then $$$\left(r\,(x-x_n)\right)' \equiv r\left(f(x_n)-x_n'\right) \pmod {t^{2n-1}}$$$, so

$$$x_{2n} = x_n + \frac{1}{r}\int r\left(f(x_n)-x_n'\right) dt \pmod {t^{2n}}$$$
In this problem, $$$f(x)=\frac{1}{2} a(t) x^2(t)+1$$$. Since exp, multiplication and division take $$$O(n \log n)$$$ time, we have the time complexity $$$T(n)=T(n/2) + O(n\log n)$$$, which is $$$O(n\log n)$$$.
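To make the step concrete, here is a small Python sketch of the Newton iteration for $$$x'=\frac{1}{2}a(t)x^2+1$$$ with the assumption $$$x(0)=0$$$; it uses naive $$$O(n^2)$$$ series arithmetic over rationals instead of FFT, so it only illustrates the structure of the doubling step, not the $$$O(n\log n)$$$ complexity.

```python
from fractions import Fraction

def pad(p, n):
    return (list(p) + [Fraction(0)] * n)[:n]

def mul(p, q, n):
    # naive O(n^2) truncated product; a real solution would use FFT here
    r = [Fraction(0)] * n
    for i, pi in enumerate(p[:n]):
        if pi:
            for j, qj in enumerate(q[:n - i]):
                r[i + j] += pi * qj
    return r

def deriv(p, n):
    return pad([(i + 1) * c for i, c in enumerate(p[1:])], n)

def integ(p, n):
    # formal integral with constant term 0
    return pad([Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)], n)

def exp_series(g, n):
    # E = exp(g) with g[0] = 0, via E' = g'E  =>  m e_m = sum_k k g_k e_{m-k}
    g = pad(g, n)
    e = [Fraction(0)] * n
    e[0] = Fraction(1)
    for m in range(1, n):
        e[m] = sum(Fraction(k) * g[k] * e[m - k] for k in range(1, m + 1)) / m
    return e

def inv_series(p, n):
    # q = 1/p with p[0] != 0, by the convolution recurrence
    p = pad(p, n)
    q = [Fraction(0)] * n
    q[0] = 1 / p[0]
    for m in range(1, n):
        q[m] = -sum(p[k] * q[m - k] for k in range(1, m + 1)) / p[0]
    return q

def solve_ode(a, n):
    # x' = a(t) x^2 / 2 + 1 with x(0) = 0, solved to n terms by Newton doubling:
    # x_{2m} = x_m + r^{-1} Integral(r (f(x_m) - x_m')),  r = exp(-Integral f'(x_m))
    x = [Fraction(0), Fraction(1)]           # x = t + O(t^2) since x'(0) = 1
    m = 2
    while m < n:
        m = min(2 * m, n)
        f = [c / 2 for c in mul(a, mul(x, x, m), m)]
        f[0] += 1                            # f(x) = a x^2 / 2 + 1
        fp = mul(a, x, m)                    # f'(x) = a x
        r = exp_series([-c for c in integ(fp, m)], m)
        rhs = mul(r, [fi - di for fi, di in zip(f, deriv(x, m))], m)
        x = [xi + ci for xi, ci in
             zip(pad(x, m), mul(inv_series(r, m), integ(rhs, m), m))]
    return pad(x, n)
```

The direct coefficient-by-coefficient solution of the same ODE can be used as a cross-check of the doubling step.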
Note: I think we can generalize this method to solve higher-order ODEs or systems of ordinary differential equations, but the constant factor will be much worse.
Gnutella Chessmaster in MIPT contest Ptz camp 2018, Summer
Statement
I think the author of this problem is Endagorion.
This is a simplified version of this problem: we define the weight of a sequence $$$a_1, a_2, \dots, a_k$$$ as $$$\prod_{i=1}^k (a_i+i)$$$. You are given a sequence $$$c_1, c_2, \dots, c_n$$$. Compute the sum of the weights of all subsequences of $$$c$$$ of length $$$k=1, 2, \dots, n$$$.
You can submit this problem here Little Q's sequence. However, you only need to compute the sum of all the subsequences here.
Solution
First, we consider the naive dp: $$$f_{i,j}=f_{i-1,j} + f_{i-1,j-1} \cdot (j+a_i)$$$. Let $$$g_{i,j} = f_{i,i-j}, b_i=a_i+i$$$, we have $$$g_{i,j}=g_{i-1,j-1} + g_{i-1,j} (b_i-j)$$$.
Let $$$g_i(x)$$$ be the OGF of $$$g_i$$$, we have $$$g_i(x)=x g_{i-1}(x)+b_i g_{i-1}(x) - x g'_{i-1}(x)=x(g_{i-1}(x)-g'_{i-1}(x))+b_i g_{i-1}(x)$$$.
Let $$$h_i=g_i(x) e^{-x}$$$; then $$$h_i = -x {h'_{i-1}}+b_i h_{i-1}$$$, i.e. $$$h_{i,j}=-j h_{i-1,j}+b_i h_{i-1,j}=(b_i-j) h_{i-1,j }$$$. Thus $$$h_{n,j}=h_{0,j} \prod_{i=1}^n (b_i-j)$$$. It's a multipoint evaluation of the polynomial $$$P(x)=\prod_{i=1}^n (x+b_i)$$$ at $$$x=0,-1,-2,\dots$$$, which can be solved in $$$O(n\log ^2 n)$$$.
If we only need to compute the sum, there is also an alternative way to solve it. $$$g_i=(x - x \frac{d}{dx} +b_i) g_{i-1}(x)$$$. Let $$$P(x) = \prod_{i=1}^n (x+b_i)=\sum_{k=0}^n p_k x^k$$$. Then $$$g_n=\sum_{k=0}^n p_k (x-x\frac{d}{dx})^k(1)$$$. By induction, $$$(x-x\frac{d}{dx})^k (1)=\sum_{j=0}^k \lbrace ^k_j \rbrace (-1)^{k-j} x^j$$$. Let $$$q_k=\sum_{j=0}^{k} \lbrace ^k_j \rbrace (-1)^{k-j}$$$. We have $$$\sum_{k\geq 0} q_k \frac{x^k}{k!}=e^{1-e^{-x}}$$$ (see OEIS). Thus we can compute $$$q_k$$$ in $$$O(n\log n)$$$.
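As a sanity check of the closed form, the following Python sketch computes $$$q_k$$$ both from the Stirling-number definition and from the series $$$e^{1-e^{-x}}$$$ (read as an EGF; the exact normalization matters), using naive $$$O(n^2)$$$ arithmetic over rationals.

```python
from fractions import Fraction

N = 12

# q_k = sum_j S(k, j) (-1)^(k-j), where S is the Stirling number of the second kind
S = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
S[0][0] = Fraction(1)
for n in range(1, N + 1):
    for k in range(1, n + 1):
        S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
q_direct = [sum(S[k][j] * (-1) ** (k - j) for j in range(k + 1))
            for k in range(N + 1)]

# EGF check: sum_k q_k x^k/k! = exp(1 - e^{-x}); exp(g) via E' = g'E with g(0) = 0
fact = [Fraction(1)]
for i in range(1, N + 1):
    fact.append(fact[-1] * i)
g = [Fraction(0)] + [Fraction((-1) ** (k + 1)) / fact[k] for k in range(1, N + 1)]
e = [Fraction(0)] * (N + 1)
e[0] = Fraction(1)
for m in range(1, N + 1):
    e[m] = sum(k * g[k] * e[m - k] for k in range(1, m + 1)) / m
q_egf = [e[k] * fact[k] for k in range(N + 1)]
```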
Note: Due to the special property, the original version can also be solved in $$$O(n \log n)$$$ by a combinatorial method.
Exp in Gennady Korotkevich Contest 5 By tourist
Statement
You are given a polynomial $$$P(x)$$$, you need to find the first $$$m$$$ coefficients of $$$Q=P^n (x)$$$ in $$$O(m|P|)$$$.
Solution
There are many similar problems, like Lucky Tickets, or computing Catalan numbers and large Schröder numbers. We will also discuss the relation between ODEs and P-recursive sequences further in a later part.
We have

$$$Q=P^n \Rightarrow Q' = nP^{n-1}P'$$$

Thus $$$n Q P'= Q'P$$$. Consider the coefficient of $$$x^i$$$ in both parts, we have

$$$n\sum_{j=1}^{k} j\,p_j q_{i-j+1} = \sum_{j=0}^{k} (i-j+1)\,p_j q_{i-j+1}, \quad \text{i.e.} \quad (i+1)\,p_0\,q_{i+1}=\sum_{j=1}^{k} \left((n+1)j-(i+1)\right) p_j q_{i-j+1}$$$

where $$$k=\deg P$$$.
We can compute $$$q_{i+1}$$$ from $$$q_{i-k+1}, q_{i-k+2}, \dots, q_{i}$$$ in $$$O(k)$$$.
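A minimal Python sketch of this recurrence (derived from $$$nQP'=Q'P$$$; exact rational arithmetic, integer $$$n$$$, and $$$p_0\neq 0$$$ assumed), checked against naive repeated multiplication:

```python
from fractions import Fraction

def pow_series(p, n, m):
    # first m coefficients of Q = P(x)^n for integer n, from n*Q*P' = Q'*P:
    # (i+1) p_0 q_{i+1} = sum_{j=1}^{k} ((n+1) j - (i+1)) p_j q_{i-j+1},  k = deg P
    k = len(p) - 1
    q = [Fraction(0)] * m
    q[0] = Fraction(p[0]) ** n
    for i in range(m - 1):
        s = Fraction(0)
        for j in range(1, min(k, i + 1) + 1):
            s += ((n + 1) * j - (i + 1)) * p[j] * q[i - j + 1]
        q[i + 1] = s / ((i + 1) * p[0])
    return q
```

Each new coefficient costs $$$O(k)$$$, so the first $$$m$$$ coefficients cost $$$O(mk)$$$ in total, as claimed.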
BTW, a more intuitive way to derive this formula is to take $$$\ln$$$ of both sides and then differentiate: $$$\ln Q=n\ln P \Rightarrow Q'/Q=nP'/P$$$.
Note: This method also works when $$$n$$$ is not an integer. For example, you can compute $$$\sqrt{P(x)}$$$ in the same manner. Another example: compute $$$G=\sum_{i=0}^k \frac{F^i}{i!}$$$. We have $$$G'=F'(G-\frac{F^k}{k!})$$$, so we can compute $$$F^k$$$ using the above method and then solve this equation in a similar way.
Dirichlet $$$k$$$-th root in EC Final By jqdai0815
Statement
You are given a number theory function $$$g$$$ and $$$k$$$, you need to find a function $$$f$$$ such that $$$g=f^k$$$. The multiplication is Dirichlet convolution here.
Solution
The intended solution is $$$O(n\log^2 n)$$$. This improved solution is due to _rqy and Elegia.
Let $$$F(s)=\sum_{n\geq 1} \frac{f(n)}{n^s}$$$, which is the Dirichlet generating function. We also denote the Dirichlet generating function of $$$g$$$ by $$$G(s)$$$. $$$F'(s)=\sum_{n\geq 1} \frac{ f(n) \ln n}{n^s}$$$, thus $$$f'(n)=f(n) \ln n$$$.
Using the argument of the previous problem, we have $$$k G F'= G'F$$$. Considering the $$$n$$$-th coefficient, we have $$$\sum_{d|n} f'(d) g(\frac{n}{d})=\frac{1}{k} \sum_{d|n} f(d) g'(\frac{n}{d})$$$. Since $$$g(1)=1, g'(1)=0$$$, we can compute $$$f'(n)$$$ and then $$$f(n)$$$, in $$$O(n\log n)$$$ time in total.
The remaining problem is how to deal with $$$\ln n$$$ in modular arithmetic. Intuitively, we can replace $$$\ln n$$$ with $$$\Omega(n)$$$, the number of prime factors of $$$n$$$ counted with multiplicity, since it is completely additive just like $$$\ln n$$$. For a rigorous proof, define a transformation $$$T$$$ of $$$f$$$ by $$$(Tf)(n) = \Omega(n) f(n)$$$; then $$$T (fg)=(Tf)g+f(Tg)$$$, so we can simply replace the derivative of the DGF in the above argument with this transformation.
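A small Python sketch of the whole procedure with $$$\Omega(n)$$$ in place of $$$\ln n$$$ (naive divisor enumeration and exact rationals, so this is a correctness sketch rather than the $$$O(n\log n)$$$ implementation):

```python
from fractions import Fraction

def Omega(n):
    # number of prime factors counted with multiplicity
    c, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            c += 1
        d += 1
    return c + (1 if n > 1 else 0)

def dirichlet_mul(a, b):
    n = len(a) - 1
    c = [Fraction(0)] * (n + 1)
    for i in range(1, n + 1):
        for j in range(1, n // i + 1):
            c[i * j] += a[i] * b[j]
    return c

def dirichlet_root(g, k):
    # find f with f^k = g under Dirichlet convolution, assuming g[1] == 1
    n = len(g) - 1
    om = [Omega(i) for i in range(n + 1)]
    gd = [g[i] * om[i] for i in range(n + 1)]   # "derivative" g'(n) = g(n) * Omega(n)
    f = [Fraction(0)] * (n + 1)
    fd = [Fraction(0)] * (n + 1)
    f[1] = Fraction(1)
    for m in range(2, n + 1):
        divs = [d for d in range(1, m + 1) if m % d == 0]
        # k * sum_{d|m} g(m/d) f'(d) = sum_{d|m} g'(d) f(m/d);
        # isolate f'(m) using g(1) = 1, g'(1) = 0
        rhs = sum(gd[d] * f[m // d] for d in divs if d > 1)
        lhs = sum(g[m // d] * fd[d] for d in divs if d < m)
        fd[m] = rhs / k - lhs
        f[m] = fd[m] / om[m]
    return f
```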
Rhyme By Elegia
Statement
Compute the sum of $$$\frac{n!}{\prod_{i=1}^k x_i!}$$$ where $$$\sum_{i=1}^k x_i=n$$$ and $$$d|x_i$$$ for all $$$i (1\leq i\leq k)$$$.
$$$n\leq 10^9, k\leq 2000, d=6$$$
Solution
The EGF of each term is $$$\sum_{j\geq 0} \frac{x^{jd}}{(jd)!}=\frac{1}{d}\sum_{j=0}^{d-1} e^{\omega^j x}$$$, where $$$\omega$$$ is a primitive $$$d$$$-th root of unity. Let $$$f(x)= \left(\frac{1}{d}\sum_{j=0}^{d-1} e^{\omega^j x}\right)^k$$$.
Since $$$\omega^2=\omega-1$$$ (for $$$d=6$$$), we have $$$f(x)=\sum c_{a,b} e^{(a+b\omega)x}$$$ where $$$-k\leq a,b\leq k$$$. And then we can sum over $$$c_{a,b} (a+b\omega)^n$$$ to get the answer. The question is how to compute $$$c_{a,b}$$$ efficiently. Since $$$1$$$ and $$$\omega$$$ are independent, we can regard $$$\frac{1}{d}\sum_{j=0}^{d-1} e^{\omega^j x}$$$ as a bivariate (Laurent) polynomial $$$F(u,v)$$$ where $$$u=e^x, v=e^{\omega x}$$$. We need to compute $$$G(u,v)=F(u,v)^k$$$. If we use FFT directly, the time complexity is $$$O(k^2 \log k)$$$, which might not be fast enough. However, we have $$$F \frac{\partial G}{\partial u}=k \frac{\partial F}{\partial u}G$$$. Thus we can compute $$$G$$$ in $$$O(k^2)$$$ in the same manner. We omit the details here.
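Some of the omitted details for $$$d=6$$$, as I understand them: clearing denominators, $$$\tilde F = uvF$$$ is an ordinary polynomial with six terms (the term set below is my own bookkeeping derived from $$$\omega^2=\omega-1$$$), $$$\tilde G=\tilde F^k$$$, and the PDE $$$\tilde F\,\partial_u \tilde G=k\,\partial_u\tilde F\,\tilde G$$$ determines each coefficient of $$$\tilde G$$$ in $$$O(1)$$$ from earlier ones. A Python sketch, checked against naive multiplication:

```python
from fractions import Fraction
from collections import defaultdict
from math import comb

K = 3  # the exponent k; kept small so the naive check is fast
# Ftilde = u*v*F(u,v) for d = 6: exponents a + b*omega of e^{omega^j x}, shifted by (1,1)
FT = {(2, 1): Fraction(1, 6), (1, 2): Fraction(1, 6), (0, 2): Fraction(1, 6),
      (0, 1): Fraction(1, 6), (1, 0): Fraction(1, 6), (2, 0): Fraction(1, 6)}

def poly_mul(A, B):
    C = defaultdict(Fraction)
    for (a, b), x in A.items():
        for (c, d), y in B.items():
            C[(a + c, b + d)] += x * y
    return dict(C)

# reference answer: Ftilde^K by naive multiplication
ref = {(0, 0): Fraction(1)}
for _ in range(K):
    ref = poly_mul(ref, FT)

# PDE: Ftilde * dG/du = K * dFtilde/du * G, i.e. for every (a, b):
#   sum_{(s,t)} FT[s,t] * ((a-s+1) - K*s) * G[a-s+1, b-t] = 0
# Row 0 is G[0, .] = ((v + v^2)/6)^K; solve row a+1 from the equations at row a.
G = defaultdict(Fraction)
for i in range(K + 1):
    G[(0, K + i)] = Fraction(comb(K, i), 6 ** K)
for a in range(2 * K):
    for bp in range(2 * K + 1):
        b = bp + 1                 # use the equation at (a, bp + 1)
        acc = Fraction(0)
        for (s, t), c in FT.items():
            if (s, t) == (0, 1):
                continue           # this term carries the unknown G[a+1, bp]
            i, j = a - s + 1, b - t
            if i >= 0 and j >= 0:
                acc += c * ((a - s + 1) - K * s) * G[(i, j)]
        G[(a + 1, bp)] = -acc / Fraction(a + 1, 6)
```

Each of the $$$O(k^2)$$$ coefficients is produced from $$$O(1)$$$ earlier ones, which is the claimed $$$O(k^2)$$$ total.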
Discussion about ODE and P-recursive sequence
It is known that every algebraic generating function is D-finite, and the coefficient is P-recursive.
First, there are some known results from Enumerative Combinatorics Volume 2. If $$$w$$$ is a formal power series over $$$K$$$, let $$$V_w$$$ denote the vector space over $$$K(x)$$$ spanned by $$$w, w', \dots$$$. We say $$$w$$$ is D-finite (written $$$w \in \mathcal{D}$$$) if the dimension of $$$V_w$$$ is finite.
If $$$u,v \in \mathcal{D}$$$, then $$$w=au+bv \in \mathcal{D}$$$, $$$\dim V_w \leq \dim V_u+\dim V_v$$$
If $$$u,v \in \mathcal{D}$$$, then $$$w=uv \in \mathcal{D}$$$, $$$\dim V_w \leq \dim V_u\dim V_v$$$
If $$$u,v \in \mathcal{D}$$$, then $$$w=u * v \in \mathcal{D}$$$, $$$\dim V_w \leq \dim V_u\dim V_v$$$, $$$u* v$$$ is the Hadamard product here.
If $$$u \in \mathcal{D}$$$, $$$v$$$ is algebraic and $$$v(0)=0$$$, then $$$w=u(v(x)) \in \mathcal{D}$$$
Thus we can construct ODEs for the generating functions of P-recursive sequences. Here are some examples:
$$$g_1(x)=\sum_{i\geq 0} \frac{x^i}{i!} \Rightarrow g_1 = g'_1$$$
$$$g_3(x)=\sum_{i\geq 0} x^i i! \Rightarrow g_3=g'_3x^2+g_3x+1$$$
$$$g_4(x)=\sum_{i\geq 0} \frac{x^i}{i!(i+k)!} \Rightarrow g_4=g''_4x+(k+1)g'_4$$$
$$$g_5(x)=\sum_{n\geq 0} \frac{1}{(k-1)n+1} {kn \choose n} x^{(k-1)n+1} \Rightarrow g_5=\frac{k x g'_5}{1+(k-1) g'_5}$$$
$$$g_6(x) = (1+x)^a (1-x)^b \Rightarrow g'_6 = \frac{ag_6}{1+x} - \frac{bg_6}{1-x}$$$
$$$g_8(x)=f^n \Rightarrow ng_8 f'= f g'_8$$$
Other examples: we can also construct ODEs for truncations of P-recursive series.
$$$g_2(x) = \sum_{i=0}^k \frac{x^i}{i!} \Rightarrow g_2=g'_2+\frac{x^k}{k!}$$$
$$$g_7(x)=\sum_{i=0}^k {n\choose i} a^i b^{n-i} \Rightarrow ng_7(a'+b')-g'_7(a+b)=n{n-1 \choose k} (a' a^k b^{n-k}-b'a^{k+1}b^{n-k-1})$$$
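For instance, clearing denominators in the equation for $$$g_6$$$ gives $$$(1-x^2)g_6'=((a-b)-(a+b)x)g_6$$$, hence an $$$O(n)$$$ recurrence for the coefficients of $$$(1+x)^a(1-x)^b$$$. A quick Python check against the binomial expansion:

```python
from fractions import Fraction
from math import comb

def coeffs_ode(a, b, n):
    # [x^i] (1+x)^a (1-x)^b for i <= n, from (1-x^2) g' = ((a-b) - (a+b)x) g:
    # (i+1) c_{i+1} = (a-b) c_i + (i - 1 - (a+b)) c_{i-1}
    c = [Fraction(0)] * (n + 1)
    c[0] = Fraction(1)
    for i in range(n):
        prev = c[i - 1] if i >= 1 else Fraction(0)
        c[i + 1] = ((a - b) * c[i] + (i - 1 - (a + b)) * prev) / (i + 1)
    return c

def coeffs_direct(a, b, n):
    # direct expansion, valid for nonnegative integers a, b
    c = [Fraction(0)] * (n + 1)
    for i in range(min(a, n) + 1):
        for j in range(min(b, n - i) + 1):
            c[i + j] += comb(a, i) * comb(b, j) * (-1) ** j
    return c
```

The recurrence itself makes sense for arbitrary (even non-integer) exponents $$$a, b$$$, which is exactly where it beats direct expansion.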
Universal Judge Aircraft in UOJ Round 19 By ridiculos
Statement
There are $$$n$$$ variables $$$x_1, x_2, \dots, x_n$$$ and two given parameters $$$a,b (a>b)$$$. Initially, all the variables are set to $$$0$$$.
Every time, you choose a variable $$$x_i$$$ uniformly at random among the variables that are less than $$$a$$$, and increase it by $$$1$$$. The process terminates when all variables are at least $$$b$$$. Compute the expected number of variables that are equal to $$$a$$$.
$$$n, a, b \leq 250$$$
Solution
The intended solution is $$$O(na^2 \log n)$$$. This solution is due to djq_cpp.
We omit the routine generating function manipulations here. The hardest part of this problem is that let $$$P(x)=\sum_{i=0}^{b-1} \frac{x^i}{i!}$$$, you need to compute $$$P(x), P^2(x), \dots, P^{n-1}(x)$$$. If we use FFT directly, the time complexity is $$$O(na^2 \log n)$$$. Since the degree of $$$P$$$ is not small, the method in problem Exp doesn't help a lot here.
Let $$$R=\frac{x^{b-1}}{(b-1)!}, S=P^{k-1}, T=P^k, U=RS$$$. We notice that $$$P'=P-\frac{x^{b-1}}{(b-1)!}=P-R$$$. Thus $$$T'=kP'S=k(P-R)S=kT-kU \Rightarrow (i+1)t_{i+1}=k (t_i-u_i)$$$. Since $$$U$$$ can be computed from $$$S$$$ in linear time, $$$P^k$$$ can be computed from $$$P^{k-1}$$$ in linear time. We remove the $$$O(\log n)$$$ factor here.
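A Python sketch of this linear-time step (note that the factor $$$k$$$ multiplies $$$U$$$ as well: $$$T'=k(T-U)$$$), using exact rationals and checked against naive truncated products:

```python
from fractions import Fraction

def truncated_exp_powers(b, kmax, deg):
    # P = sum_{i<b} x^i/i!; returns [P^1, ..., P^kmax] truncated to degree `deg`
    # (deg >= b-1 assumed), each step in O(deg) via
    # T' = k (T - U),  U = R * P^{k-1},  R = x^{b-1} / (b-1)!
    fact = [Fraction(1)]
    for i in range(1, deg + 1):
        fact.append(fact[-1] * i)
    out = []
    S = [Fraction(0)] * (deg + 1)
    S[0] = Fraction(1)                  # S = P^0
    for k in range(1, kmax + 1):
        T = [Fraction(0)] * (deg + 1)
        T[0] = Fraction(1)              # P(0)^k = 1
        for i in range(deg):
            u = S[i - b + 1] / fact[b - 1] if i >= b - 1 else Fraction(0)
            T[i + 1] = Fraction(k, i + 1) * (T[i] - u)
        out.append(T)
        S = T
    return out
```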
Generalization and Discussion
I have been thinking about generalizing this technique to other problems, but I haven't found amazing results. The following is from a discussion with Rewinding.
For example, let $$$P(x)=\sum_{i=0}^n \frac{x^i}{i!i!}$$$; we can only get a similar recurrence involving the products $$$P^{i} (P')^j$$$. If we need to compute $$$P^1,P^2, \dots, P^k$$$, we need to compute all terms $$$P^i (P')^j$$$ for $$$i+j\leq k$$$, so the time complexity is $$$O(k^2 \cdot nk)$$$. It may perform well when $$$k$$$ is extremely small and $$$n$$$ is very large. There is another direction: I conjecture that the $$$in, in+1, \dots, i(n+1)$$$-th terms of $$$P^k$$$ form a P-recursive sequence. I do believe it's P-recursive, but I don't know how large the degree and the order can be.
Another direction of generalization is to compute the first $$$n$$$ terms of the $$$k$$$-th power of $$$P(x)=\sum_{i=0}^a \frac{x^i}{i!}$$$ more efficiently. I don't think we can improve it to linear time, but it might be possible to improve it to $$$O(n \log a)$$$, or just $$$O(n\log n)$$$ without $$$\exp$$$ and $$$\ln$$$.
Chinese Elephant Chess By djq_cpp
Statement
Find the number of binary matrices with $$$n$$$ rows and $$$m$$$ columns such that each row and each column contains at most $$$2$$$ ones.
$$$n,m \leq 5\times 10^4$$$.
Solution
You can regard it as a bipartite graph with $$$n$$$ vertices in one part and $$$m$$$ in the other. The degree of every vertex is at most $$$2$$$, so the graph consists of even cycles, even chains, and odd chains.
The EGFs of even cycles, even chains and odd chains are

$$$F(x,y)=\sum_{j\geq 2} \frac{(xy)^j}{2j}=\frac{1}{2}\left(\ln\frac{1}{1-xy}-xy\right)$$$,

$$$G(x,y)=\sum_{j\geq 1} (xy)^j=\frac{xy}{1-xy}$$$,

$$$H(x,y)=\frac{1}{2}\left(x+\frac{x}{1-xy}\right)+\frac{1}{2}\left(y+\frac{y}{1-xy}\right)$$$,

respectively.
And the answer is $$$n!m![x^ny^m] e^{F+G+H}$$$.
It's hard to deal with bivariate EGFs, so we transform them into univariate EGFs. Since the degrees of $$$x$$$ and $$$y$$$ are the same in the EGFs of even cycles and even chains, we can transform them into univariate EGFs directly. For odd chains, the number of chains with both endpoints in the left part is less than the number with both endpoints in the right part by exactly $$$m-n$$$. So we can enumerate the number of odd chains in the left part, and transform it into a univariate EGF.
The new EGFs are

$$$F(x)=\frac{1}{2}\left(\ln\frac{1}{1-x}-x\right)$$$,

$$$G(x)=\frac{x}{1-x}$$$,

$$$H(x)=\frac{1}{2}\left(x+\frac{x}{1-x}\right)$$$.
The answer is $$$n!m! [x^m] e^{F+G} \sum_{i=0}^n \frac{H^iH^{m-n+i}}{x^i i! (m-n+i)!}$$$. Let $$$P(x)=\sum_{i\geq 0} \frac{x^i}{i! (m-n+i)!}, Q=\frac{H^2}{x}$$$. The formula can be simplified to $$$n!m![x^m] e^{F+G} H^{m-n} P(Q)$$$.
We know the coefficients of $$$P$$$ are P-recursive, so we can construct an ODE for $$$P$$$: $$$P=P''x+(m-n+1)P'$$$.
We also have that $$$(P(Q))'=P'(Q)\,Q'$$$ and $$$(P(Q))''=P''(Q)\,Q'^2+P'(Q)\,Q''$$$. So

$$$P'(Q)=\frac{(P(Q))'}{Q'}, \qquad P''(Q)=\frac{(P(Q))''\,Q'-(P(Q))'\,Q''}{Q'^3}$$$.
Then we can get the ODE of $$$P(Q)$$$:

$$$Q'^3\,P(Q) = Q\,Q'\,(P(Q))'' + \left((m-n+1)\,Q'^2-Q\,Q''\right)(P(Q))'$$$,

which is of the form $$$P(Q)\cdot A=(P(Q))' \cdot B+(P(Q))'' \cdot C$$$, where $$$A,B,C$$$ are three polynomials (after clearing denominators). So we can use D&C and FFT to solve this equation in $$$O(n\log^2 n)$$$.
Note that $$$H=\frac{1}{2}(x+\frac{x}{1-x})$$$, so $$$Q=\frac{H^2}{x}$$$ is a rational function; if we multiply both sides of the ODE of $$$P(Q)$$$ by $$$(1-x)^6$$$, we can get a polynomial recurrence for $$$P(Q)$$$ with small order and degree. So $$$P(Q)$$$ can be computed in $$$O(n)$$$ here. It's not hard to see that $$$H^{m-n}$$$ and $$$e^{F+G}$$$ are also P-recursive, so the convolution should also be P-recursive. Then we can solve this problem in linear time. But I think the order and degree will be too large if we try to find the recurrence of the answer directly, so a more effective way might be to solve these recurrences separately and compute the answer by FFT.
Generalization and Discussion
It appears that polynomial modular composition can be computed in $$$\tilde{O}(n)$$$ in academia, but I don't know whether it's practical. The most popular algorithm is $$$O(n^2 + n^{1.5} \log n)$$$. You can also read the discussion and implementation of the Brent-Kung algorithm here. However, it is a bit slower than the $$$O(n^2)$$$ one when $$$n=20000$$$.
It's not hard to see that the above method can be applied to compute the polynomial composition $$$F(G(x))$$$ when $$$F$$$ is D-finite. We can construct the ODE of $$$F(G(x))$$$ and solve the equation using D&C and FFT in $$$O(n\log ^2 n)$$$. However, when the ODE of $$$F$$$ is too complicated, there is a severe constant-factor issue.
Combined with Newton's method, we can compute the composition inverse of a D-finite series. A good application is to compute the number of simple permutations (a.k.a NEERC18 I) with length $$$1,2, \dots, n$$$ in $$$O(n\log^2 n)$$$.
Pearl in CTSC 2019 By laofudasuan
Statement
Compute $$$\frac{n!}{2^d} [x^n]\sum_{p=0}^{k} {m \choose p} (e^x-e^{-x})^p (e^x+e^{-x})^{m-p}$$$.
$$$n\leq 10^9, m\leq 10^5$$$
Solution
This solution credits to djq_cpp.
Let $$$Q(x)=\sum_{p=0}^k {m \choose p} (x-1)^p (x+1)^{m-p}$$$. So

$$$\sum_{p=0}^{k} {m \choose p} (e^x-e^{-x})^p (e^x+e^{-x})^{m-p}=e^{-mx}\,Q(e^{2x})$$$.

And, applying the identity for truncated binomial sums above with $$$a=x-1, b=x+1$$$, we can simplify it to

$$$mQ-xQ'=m{m-1 \choose k}(x-1)^k(x+1)^{m-k-1}$$$.
The right-hand side is P-recursive, thus $$$Q$$$ can be computed in $$$O(m)$$$. So the answer is $$$\frac{n!}{2^d} [x^n] Q(e^{2x}) e^{-mx}$$$. The whole problem can be solved in $$$O(m(1+\log_m n))$$$.
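A Python sketch of the $$$O(m)$$$ computation of $$$Q$$$, using the truncated-binomial identity with $$$a=x-1, b=x+1$$$, which rearranges (my own bookkeeping) to $$$(m-i)\,q_i=m\binom{m-1}{k}[x^i](x-1)^k(x+1)^{m-k-1}$$$ together with $$$q_m=\sum_{p\leq k}\binom{m}{p}$$$; checked against direct expansion:

```python
from fractions import Fraction
from math import comb

def Q_coeffs(m, k):
    # q_i of Q(x) = sum_{p<=k} C(m,p) (x-1)^p (x+1)^{m-p} in O(m), for k < m
    a, b = m - k - 1, k        # (x-1)^k (x+1)^{m-k-1} = (-1)^k (1+x)^a (1-x)^b
    c = [Fraction(0)] * m      # coefficients of (1+x)^a (1-x)^b, degree m-1
    c[0] = Fraction(1)
    for i in range(m - 1):
        prev = c[i - 1] if i >= 1 else Fraction(0)
        c[i + 1] = ((a - b) * c[i] + (i - 1 - a - b) * prev) / (i + 1)
    lead = Fraction(m * comb(m - 1, k) * (-1) ** k)
    q = [lead * c[i] / (m - i) for i in range(m)]
    q.append(Fraction(sum(comb(m, p) for p in range(k + 1))))  # leading coefficient
    return q

def Q_direct(m, k):
    # naive expansion for the check
    q = [Fraction(0)] * (m + 1)
    for p in range(k + 1):
        for i in range(p + 1):
            for j in range(m - p + 1):
                q[i + j] += comb(m, p) * comb(p, i) * (-1) ** (p - i) * comb(m - p, j)
    return q
```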
Note: We need to compute $$$1^n, 2^n, \dots, m^n$$$, but the map $$$i \mapsto i^n$$$ is completely multiplicative, so it suffices to use fast exponentiation only at the primes. Thus we can compute them in $$$O(m\log_m n)$$$.
Your maths is tooooooooooo strong.
Ahhh, that great feeling when you read the entire blog and you understand that you do not understand anything at all. :)
The blog would benefit from clarifying the notation you're using and not overloading variables so much. E.g. right at the beginning, you use $$$f$$$ 3 different times with 3 different meanings and the 3rd of them seems to be an operator from the space of functions to reals, since it takes $$$x$$$ as an argument and $$$x$$$ clearly has to be a function from context. I'm sure you meant something else, but not based on what you wrote.
I know this blog is not meant for me but this blog has completely demoralized me.
Me too bud
Weird flex but OK.
Me who understood nothing and scrolled whole the blog for upvote
CTSC => CTS
How to calculate the coefficients $$$p_k$$$ in Gnutella Chessmaster ($$$\sum p_kx^k=\prod (x+c_i)$$$) in $$$O(n\log n)$$$? Or does the second solution not use it?
It's $$$O(n\log ^2 n)$$$, but it's much faster than multipoint evaluation.
Thank you!
In general you can use D&C. For that specific problem you can observe that $$$0 \le c_i \le 2$$$ (or in different signs idk). So there is no need to compute polynomials or anything. You can just maintain a count of occurrence for $$$c_i$$$ and exponentiate.
This tutorial is like someone's fading memory of a town.
Will I understand this blog if I watch all 3Blue1Brown videos?
In the Dirichlet k-th root problem, what does $$$f'(n)$$$ denote? (how do you define the derivative of such a function?)
$$$F(x)$$$ is the generating function of $$$f(n)$$$, so $$$F'(x)$$$ is the generating function of $$$f'(n)$$$
Is there a Chinese version?I'm not used to reading English.
So you expect a post that's supposed to share Chinese problems to the rest of the world to be in Chinese?
Because MiFaFaOvO is Chinese, and I guess he had written a Chinese version before he wrote this. I just ask if there is one in some Chinese communities, such as cnblog, csdn or his own blog, which is better.
For somebody interested in such combinatorics, here is something quite interesting related to graphs for you
Is there somewhere I can submit "Rhyme" and "Chinese Elephant Chess"?
Rhyme is available in LOJ, with $$$d\in \{4, 6\}$$$ and modulo $$$19491001$$$.
For "Chinese Elephant Chess", I think this problem only appeared in our discussion with djq_cpp currently :)
You can find "Chinese Elephant Chess" on NFLSOJ. (The problem ID is 534 and you need to login in order to view and submit the problem.)
Wow! but i didn't understand a thing.
Very helpful, thanks (:
In the problem Rhyme, how can we compute $$$G$$$ in $$$O(k^2)$$$ ? Fastpower need $$$O(\log n)$$$. And it seems not possible to use Euler Sieve, 'cause what we need to calc is $$$(i\omega+j)^n$$$.
Please forgive my poor English.
I think $$$O(k^2)$$$ is hard, but we can achieve $$$O(k^2\log_k n)$$$. Like what we did on $$$\mathbb Z$$$, you just need to calculate the power of those irreducible $$$a+b\omega \in \mathbb Z[\omega]$$$. For $$$d=4$$$ it's Gaussian integer and for $$$d=6$$$ it's Eisenstein integer. There are $$$\Theta(k^2/\log k)$$$ Gaussian primes or Eisenstein primes with $$$|p|\leq k$$$.
"Let the EGF of $$$g$$$ be $$$x(t)$$$ and EGF of $$$A$$$ be $$$a(t)$$$. Thus $$$x′(t)=\frac{1}{2}a(t)x^2(t)+1$$$. We can solve this equation by D&C and FFT in $$$O(n\log^2n)$$$."
Do you have a resource that goes into more detail on this technique?
I think this blog is not for me as its level is too above....only one line for this "Head Over for me".
jqdai0815 I think cf broke your blog :(
original if anyone interested to see the original blog
Are you able to see it? I tried but of no use, can you add a screenshot of that?
Strange it's working for me. anyway here's the full page image. Image
PS: wayback doesn't have latex support so kinda hard to read
MikeMirzayanov Please take a look, thank you!
Available only for Premium users.
Lol I think I'll pay if sb monetizes well known problems in china...
Now this blog pretty much sums up why i can't even aim for higher ratings in my wildest dreams ?
So is this what it takes to be among the top programmers?
orz
Bit of a note on the choice of $$$\ln$$$ in the Dirichlet $$$k$$$-th root problem. In general, you can define $$$f'(n) = f(n) a(n)$$$ where $$$a$$$ is any completely additive function.
If $$$F(s)$$$ is the Dirichlet series corresponding to $$$f(n)$$$, define $$$F'(s)$$$ to be the series corresponding to $$$f'(n)$$$.
You get the same product rule as for the usual derivative:

$$$(FG)'(s) = F'(s)\,G(s) + F(s)\,G'(s)$$$
So, you can pick $$$a(n)$$$ to be, say, $$$\nu_2(n)$$$ (the largest $$$k$$$ such that $$$2^k$$$ divides $$$n$$$) and the recurrence

$$$\sum_{d|n} f'(d)\, g\left(\frac{n}{d}\right)=\frac{1}{k} \sum_{d|n} f(d)\, g'\left(\frac{n}{d}\right)$$$

would still hold. However, the issue is that you can't always recover $$$f(n)$$$ from $$$f'(n)$$$ using $$$f(n) = f'(n) / a(n)$$$ since $$$a(n) = \nu_2(n) = 0$$$ for odd $$$n$$$.
So for computing $$$f$$$, you can pick $$$a(n)$$$ to be any completely additive function such that $$$a(n) \neq 0$$$ for all $$$n > 1$$$. So something like $$$a(n) = $$$ sum of primes dividing $$$n$$$ (counting multiplicities) works just as well.
Sorry for necro-commenting, I think there is typo in solution of Gnutella Chessmaster.
Instead of $$$h_i = x h'_{i - 1} + b_i h_{i - 1}$$$ it should be $$$h_i = -x h'_{i - 1} + b_i h_{i - 1}$$$
Recently I was rethinking this problem and it turns out we really could.
1. Let $$$F = P^k$$$, we have the differential equation $$$F' P = kF P'$$$.
It gives a recursive formula to compute $$$F$$$; moreover, one of the two sequences in the relaxed convolution has length only $$$a$$$, so we can solve it by $$$O(n/a)$$$ relaxed convolutions of length $$$2a$$$. This first gives an $$$O(n \cdot \mathsf R(a) / a)$$$ algorithm.
2. Then we consider refining the above approach. Suppose we have computed $$$F$$$ up to degree $$$< m$$$, say $$$F = F_0 + x^m F_1$$$ and we have $$$F_0$$$. The differential equation gives

$$$(F_0 + x^m F_1)'\,P = k\,(F_0 + x^m F_1)\,P'$$$

Note that the first $$$m$$$ terms of $$$F$$$ are correct, then $$$F_0' P = k FP'$$$ is still correct up to an error of $$$O(x^{m-1})$$$, i.e.,

$$$F_0'\,P - k\,F_0\,P' = x^{m-1} R$$$

for some $$$R$$$. The differential equation of $$$F_0 + x^m F_1$$$ simplifies to

$$$x P F_1' + (m P - k x P')\,F_1 = -R$$$
Then one can solve the above first-order linear ODE up to $$$a$$$ terms each time, this gives an $$$O(n \log a)$$$ algorithm.
3. We still didn't use the special form of $$$P$$$, so the above algorithm is quite general. When $$$P$$$ is the truncation of $$$\exp$$$, we can assume $$$P = e^x$$$ because we only compute $$$a$$$ terms each time. Then the differential equation is simplified to

$$$x F_1' + (m - kx)\,F_1 = -R\,e^{-x}$$$

The solution is

$$$F_1 = -x^{-m} e^{kx} \int x^{m-1} e^{-(k+1)x} R \, dx$$$
which can be implemented by several convolutions. This gives an $$$O(n \log a)$$$ algorithm without $$$\exp$$$ and $$$\ln$$$.
how to be unbelievable like u ?