Thanks for participating!
The official solution to E runs in $$$O(n)$$$. If your solution to E has a higher complexity, I recommend reading the tutorial.
The optimal sequence of operations is very simple.
The optimal sequence of operations is to add $$$k-1$$$ 1-s to the multiset in each operation, while decreasing $$$n$$$ (the remaining large element) by $$$k-1$$$. This implies that the answer is $$$\lceil \frac{n-1}{k-1}\rceil$$$.
I failed to find a Div2-A level proof. If you have a simpler proof, please share it in the comments.
Consider the number of elements in the multiset that are $$$\equiv 1\pmod{k-1}$$$. This count increases by at most $$$k-1$$$ in each operation, and the aforementioned sequence of operations achieves this maximum increment every time. Since the multiset starts with at most one such element and must end with $$$n$$$ of them, at least $$$\lceil \frac{n-1}{k-1}\rceil$$$ operations are needed.
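A minimal sketch of this formula in code (the input format — $$$t$$$ test cases, each a pair $$$n,k$$$ with $$$k\ge 2$$$ — is an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;                         // assumed format: t test cases, each a pair n, k (k >= 2)
    while (t--) {
        long long n, k;
        cin >> n >> k;
        // ceil((n - 1) / (k - 1)) in integer arithmetic; n = 1 correctly gives 0
        cout << (n - 1 + k - 2) / (k - 1) << '\n';
    }
    return 0;
}
```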
"Most sequences" can be transformed into $$$[1]$$$. Conditions for a sequence to be un-transformable is stringent.
Find several simple substrings that make the string transformable.
We list some simple conditions for a string to be transformable:
- If 111 exists somewhere (as a substring) in the string, the string is always transformable.
- If 11 appears at least twice in the string, the string is always transformable.
- If the string both begins and ends with 1, it is always transformable.
- If the string begins or ends with 1 and 11 exists in the string, it is always transformable.
These can be found by simulating the operation for short strings on paper.
Conversely, if a string meets none of the four conditions, it is never transformable. This can be proved by induction (left as an exercise).
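A minimal sketch of a checker built directly from the four conditions above (the input format — $$$t$$$ test cases, each one binary string — and the Yes/No output are assumptions):

```cpp
#include <bits/stdc++.h>
using namespace std;

// True iff the binary string s satisfies at least one of the four conditions above.
bool transformable(const string& s) {
    bool has111 = s.find("111") != string::npos;
    int cnt11 = 0;                                   // occurrences of "11" (overlaps counted)
    for (size_t i = 0; i + 1 < s.size(); i++)
        if (s[i] == '1' && s[i + 1] == '1') cnt11++;
    bool first1 = !s.empty() && s.front() == '1';
    bool last1  = !s.empty() && s.back()  == '1';
    return has111                                    // condition 1
        || cnt11 >= 2                                // condition 2
        || (first1 && last1)                         // condition 3
        || ((first1 || last1) && cnt11 >= 1);        // condition 4
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        string s;
        cin >> s;
        cout << (transformable(s) ? "Yes" : "No") << '\n';
    }
    return 0;
}
```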
1988C - Increasing Sequence with Fixed OR
The answer is a simple construction.
The maximum length for $$$2^k-1\ (k>1)$$$ is $$$k+1$$$.
TODO
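As an illustration of this bound only, here is a sketch of one length-$$$(k+1)$$$ construction for $$$n=2^k-1$$$ with $$$k>1$$$, under the assumption that the task asks for a strictly increasing sequence in which the bitwise OR of every two adjacent elements equals $$$n$$$: take $$$n$$$ with each single bit cleared, then $$$n$$$ itself.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int k = 5;                                   // example: n = 2^k - 1 = 31
    unsigned long long n = (1ULL << k) - 1;
    vector<unsigned long long> seq;
    // n with one bit cleared, from the highest bit down: these values are increasing
    for (int b = k - 1; b >= 0; b--) seq.push_back(n ^ (1ULL << b));
    seq.push_back(n);                            // finally n itself, k + 1 numbers in total
    // sanity check: strictly increasing and every adjacent pair ORs to n (assumed condition)
    for (size_t i = 0; i + 1 < seq.size(); i++)
        assert(seq[i] < seq[i + 1] && (seq[i] | seq[i + 1]) == n);
    for (unsigned long long x : seq) cout << x << ' ';
    cout << '\n';                                // prints 15 23 27 29 30 31 for k = 5
    return 0;
}
```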
1988D - The Omnipotent Monster Killer
Formulate the problem.
Some variables can't be large.
Suppose monster $$$i$$$ is killed in round $$$b_i$$$. Then, the total health decrement is the sum of $$$a_i\times b_i$$$. The "independent set" constraint means that for adjacent vertices, their $$$b$$$-s must be different.
Observation: $$$b_i$$$ does not exceed $$$\lfloor \log_2 n\rfloor+1$$$. Under this problem's constraints, $$$b_i\le 19$$$ holds in at least one optimal assignment of $$$b$$$.
Let the mex of a set be the smallest positive integer that does not appear in it. Note that in an optimal arrangement, $$$b_i=\mathrm{mex}_{(j,i)\in E} b_j$$$.
Consider a vertex $$$x$$$ whose $$$b$$$-value is maximum, say $$$b_x=u$$$. Root the tree at $$$x$$$. The vertices adjacent to $$$x$$$ must take all $$$b$$$-values from $$$1$$$ to $$$u-1$$$. Denoting by $$$f(u)$$$ the minimum number of vertices needed for some vertex to attain $$$b$$$-value $$$u$$$, we have $$$f(1)=1$$$ and $$$f(u)=1+\sum_{i=1}^{u-1}f(i)$$$, so $$$f(u)=2^{u-1}$$$. Since $$$f(u)\le n$$$, it follows that $$$u\le\lfloor\log_2 n\rfloor+1$$$.
This ends the proof.
With a tree dp over (vertex, value of $$$b$$$), we can find the answer in $$$O(n\log n)$$$ or $$$O(n\log^2 n)$$$, depending on whether you use prefix/suffix minimums to optimize the "best value over all other rounds" part of the transition.
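A sketch of the simpler $$$O(n\log^2 n)$$$ version of this dp, assuming the goal is to minimize $$$\sum_i a_i b_i$$$ over assignments where adjacent vertices get different $$$b$$$-values (the input format is also an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n;
    cin >> n;                                   // assumed input: n, then a_1..a_n, then n-1 edges
    vector<ll> a(n + 1);
    for (int i = 1; i <= n; i++) cin >> a[i];
    vector<vector<int>> g(n + 1);
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        cin >> u >> v;
        g[u].push_back(v);
        g[v].push_back(u);
    }

    const int B = 20;                           // rounds 1..20 are enough: the bound above gives b_i <= 19
    const ll INF = LLONG_MAX / 4;

    // order vertices so that every child appears after its parent (BFS from vertex 1)
    vector<int> par(n + 1, 0), order;
    vector<bool> vis(n + 1, false);
    queue<int> bfs;
    bfs.push(1); vis[1] = true;
    while (!bfs.empty()) {
        int u = bfs.front(); bfs.pop();
        order.push_back(u);
        for (int v : g[u]) if (!vis[v]) { vis[v] = true; par[v] = u; bfs.push(v); }
    }

    // dp[v][t] = minimum total damage inside v's subtree if monster v dies in round t + 1
    vector<array<ll, B>> dp(n + 1);
    for (int idx = n - 1; idx >= 0; idx--) {
        int v = order[idx];
        for (int t = 0; t < B; t++) dp[v][t] = a[v] * (t + 1);
        for (int c : g[v]) {
            if (c == par[v]) continue;
            for (int t = 0; t < B; t++) {
                ll best = INF;
                for (int s = 0; s < B; s++)
                    if (s != t) best = min(best, dp[c][s]);   // adjacent monsters die in different rounds
                dp[v][t] += best;
            }
        }
    }
    cout << *min_element(dp[1].begin(), dp[1].end()) << '\n';
    return 0;
}
```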
Bonus: Find a counterexample for $$$b_i\le 18$$$ when $$$n=300000$$$. (Pretest 2 is one case)
Formulate the problem on a Cartesian tree.
Find a clever bruteforce. Calculate the time complexity carefully.
Build the Cartesian tree of $$$a$$$. Let $$$lc_x$$$ and $$$rc_x$$$ be $$$x$$$'s left and right children, respectively.
Consider a vertex $$$x$$$. If we delete it, what happens to the tree? Vertices outside $$$x$$$'s subtree are not affected. Vertices inside the subtree of $$$x$$$ will "merge": in fact, only the right chain of $$$x$$$'s left child ($$$lc_x\to rc_{lc_x}\to rc_{rc_{lc_x}}\to \dots$$$) and the left chain of $$$x$$$'s right child merge, in the same way as the merge step of mergesort.
Now, if we merge the chains by bruteforce (with std::merge; sorting also works at the cost of an extra log factor), the total time complexity is $$$O(n)$$$! It is easy to see that each vertex is considered only $$$O(1)$$$ times over all choices of $$$x$$$.
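A sketch of just this chain-merging step on a min-Cartesian tree (the heap orientation, distinct values, and input format are assumptions; the bookkeeping that turns the merged chain into the actual answer is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;                                     // assumed input: n, then a_1..a_n (distinct values)
    vector<long long> a(n);
    for (auto& x : a) cin >> x;

    // Build a min-Cartesian tree (heap by value, in-order = index order) with a stack.
    vector<int> lc(n, -1), rc(n, -1), st;
    for (int i = 0; i < n; i++) {
        int last = -1;
        while (!st.empty() && a[st.back()] > a[i]) { last = st.back(); st.pop_back(); }
        lc[i] = last;
        if (!st.empty()) rc[st.back()] = i;
        st.push_back(i);
    }

    // Collect the chain starting at v that repeatedly follows the given child pointer.
    auto chain = [&](int v, const vector<int>& child) {
        vector<int> res;
        for (; v != -1; v = child[v]) res.push_back(v);
        return res;
    };

    long long total = 0;                          // total size of all merged chains
    for (int x = 0; x < n; x++) {
        vector<int> L = chain(lc[x], rc);         // right chain of the left child
        vector<int> R = chain(rc[x], lc);         // left chain of the right child
        vector<int> merged(L.size() + R.size());
        // Going down either chain the values increase, so this is exactly a mergesort merge.
        merge(L.begin(), L.end(), R.begin(), R.end(), merged.begin(),
              [&](int p, int q) { return a[p] < a[q]; });
        total += (long long)merged.size();
        // ... the real solution would now recompute contributions along the merged chain ...
    }
    cout << total << '\n';                        // each vertex appears O(1) times, so this is O(n)
    return 0;
}
```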
We can use dp to calculate $$$f(n,i,j)$$$, the number of permutations of $$$1\sim n$$$ with $$$i$$$ prefix maximums and $$$j$$$ ascents: considering where $$$1$$$ is inserted, we have $$$f(n,i,j)=f(n-1,i-1,j-1)+(j+1)f(n-1,i,j)+(n-1-j)f(n-1,i,j-1)$$$, corresponding to inserting $$$1$$$ at the front, into an ascent or at the very end, and into a descent, respectively.
For suffix maximums (the number of permutations of $$$1\sim n$$$ with $$$i$$$ suffix maximums and $$$j$$$ ascents, $$$g(n,i,j)$$$), we can just reverse a dimension of $$$f$$$: reversing a permutation swaps prefix and suffix maximums and turns ascents into descents, so $$$g(n,i,j)=f(n,i,n-1-j)$$$.
This runs in $$$O(n^3)$$$.
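A sketch of this table computation using the insertion recurrence above, with a small self-check that the counts for length $$$N$$$ sum to $$$N!$$$ (the modulus 998244353 is an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;                        // assumed modulus

int main() {
    int N = 10;                                  // small table, for illustration
    // f[n][i][j] = number of permutations of 1..n with i prefix maximums and j ascents
    vector<vector<vector<ll>>> f(N + 1, vector<vector<ll>>(N + 1, vector<ll>(N + 1, 0)));
    f[1][1][0] = 1;
    for (int n = 2; n <= N; n++)
        for (int i = 1; i <= n; i++)
            for (int j = 0; j <= n - 1; j++) {
                ll val = 0;
                if (j >= 1) val += f[n - 1][i - 1][j - 1];                      // 1 inserted at the front
                val += (j + 1) * f[n - 1][i][j] % MOD;                          // into an ascent, or at the end
                if (j >= 1) val += (n - 1 - j) * f[n - 1][i][j - 1] % MOD;      // into a descent
                f[n][i][j] = val % MOD;
            }
    // g(n,i,j) (i suffix maximums, j ascents) equals f[n][i][n-1-j], by reversing the permutation.
    // Sanity check: the table for length N must sum to N!.
    ll sum = 0, fact = 1;
    for (int i = 1; i <= N; i++)
        for (int j = 0; j < N; j++) sum = (sum + f[N][i][j]) % MOD;
    for (int i = 2; i <= N; i++) fact = fact * i % MOD;
    cout << sum << ' ' << fact << '\n';          // the two numbers should be equal
    return 0;
}
```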
To calculate the answer, consider the position of $$$n$$$. Suppose it's $$$p$$$. Then the answer is
Let $$$u(x,y)=\sum_{i}f(x,i,y)a_{i+1}$$$ and $$$v(x,y)=\sum_{i}g(x,i,y)b_{i+1}$$$ (both of these are calculated in $$$O(n^3)$$$); then the answer is
By seeing $$$u$$$ and $$$v$$$ as 2D polynomials, this can be calculated with 2D FFT in $$$O(n^2\log n)$$$.
Of course! This problem can also be solved modulo $$$10^9+7$$$, or over an algebraic structure where FFT is not easy but interpolation can still be carried out.
We still consider $$$u(p-1,*)$$$ and $$$v(n-p,*)$$$ as polynomials $$$\sum_{i}u(p-1,i)x^i$$$ and $$$\sum_{i}v(n-p,i)x^i$$$. Instead of doing FFT, substitute $$$x$$$ with $$$0,1,\dots,n+1$$$ and use interpolation to recover the final coefficients.
Fix a value of $$$x$$$. Then calculating $$$u(p-1)$$$ and $$$v(n-p)$$$ (over all $$$p$$$) takes $$$O(n^2)$$$ in total. For fixed $$$x$$$ and $$$n$$$, the answer for $$$n$$$ is (here, note how $$$x$$$ is multiplied)
This also takes $$$O(n^2)$$$ in total. So, for each $$$x$$$, the calculation is $$$O(n^2)$$$.
For each $$$n$$$, a naive implementation of interpolation runs in $$$O(n^2)$$$. After we recover the coefficients, we multiply them with $$$c$$$ and update the answer.
So, the time complexity is $$$O(n^3)$$$.
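For reference, a sketch of a naive $$$O(n^2)$$$ interpolation step (Lagrange interpolation recovering the coefficients of a degree-$$$\le d$$$ polynomial from its values at $$$x=0,1,\dots,d$$$; the modulus is again an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;                        // assumed modulus

ll pw(ll b, ll e) { ll r = 1; for (b %= MOD; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD; return r; }
ll inv(ll x) { return pw(x, MOD - 2); }

// Given y[i] = P(i) for i = 0..d, recover the coefficients of P (degree <= d) in O(d^2).
vector<ll> interpolate(const vector<ll>& y) {
    int d = (int)y.size() - 1;
    // M(x) = prod_{j=0..d} (x - j), built coefficient by coefficient
    vector<ll> M(d + 2, 0);
    M[0] = 1;
    for (int j = 0; j <= d; j++) {
        for (int k = d + 1; k >= 1; k--) M[k] = (M[k - 1] + M[k] * (MOD - j)) % MOD;
        M[0] = M[0] * (MOD - j) % MOD;
    }
    vector<ll> c(d + 1, 0), q(d + 1);
    for (int i = 0; i <= d; i++) {
        // q(x) = M(x) / (x - i), by synthetic division from the top coefficient down
        ll carry = 0;
        for (int k = d + 1; k >= 1; k--) { q[k - 1] = (M[k] + carry) % MOD; carry = q[k - 1] * i % MOD; }
        ll den = 1;                              // prod_{j != i} (i - j)
        for (int j = 0; j <= d; j++) if (j != i) den = den * ((i - j + MOD) % MOD) % MOD;
        ll coef = y[i] % MOD * inv(den) % MOD;
        for (int k = 0; k <= d; k++) c[k] = (c[k] + coef * q[k]) % MOD;
    }
    return c;
}

int main() {
    vector<ll> y = {1, 6, 17};                   // P(x) = 3x^2 + 2x + 1 sampled at x = 0, 1, 2
    vector<ll> c = interpolate(y);
    for (ll v : c) cout << v << ' ';             // expected output: 1 2 3
    cout << '\n';
    return 0;
}
```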