Thanks for the participation! We hope you enjoyed the contest.
The solution codes will be added shortly after the system tests are finished. UPD: they have been added.
Problem Credit: Proof_by_QED
First, let $$$m=\operatorname{mex}(A)$$$. We can ignore all elements that are greater than $$$m$$$: since $$$m$$$ is the mex, $$$m$$$ does not appear in $$$A$$$, and consequently it cannot appear in any of the multisets either. Thus, which multiset each such element goes to does not matter.
Now, let's consider the number $$$0$$$. Suppose there is at least one $$$0$$$ in $$$A$$$. Then, at least one multiset in the partition must contain $$$0$$$ (because each element in $$$A$$$ must be in one multiset in the partition). This also implies that all multisets must contain $$$0$$$, as otherwise the $$$\operatorname{mex}$$$ of all multisets cannot be the same (it is $$$0$$$ for some, but greater than $$$0$$$ for others).
Once we reach the conclusion that $$$0$$$ must be present in all multisets, we can make the same conclusion for $$$1,2,\ldots,m-1$$$. That is, $$$m$$$ is the only possible score.
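For concreteness, a minimal sketch of the only nontrivial computation; how the score is then read and reported depends on the exact statement, so that part is omitted here.

```cpp
// A minimal sketch: computing mex(A).
#include <bits/stdc++.h>
using namespace std;

int mexOf(const vector<int>& a) {
    int n = (int)a.size();
    vector<char> seen(n + 1, 0);
    for (int x : a)
        if (x <= n) seen[x] = 1;  // values above n can never be the mex
    int m = 0;
    while (seen[m]) m++;          // first value that does not appear
    return m;
}
```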
Problem Credit: Proof_by_QED
For simplicity, let's denote $$$f(a[l,r])$$$ as $$$f(l,r)$$$.
First, notice that $$$f(l,r)\leq f(l,r+1)$$$. This is trivially true because each distinct element that appears in $$$[l,r]$$$ must also appear in $$$[l,r+1]$$$. Now, $$$f(l,r+1)$$$ equals $$$f(l,r)$$$ if $$$a_{r+1}$$$ already appears in $$$[l,r]$$$, and $$$f(l,r)+1$$$ otherwise.
Now, expanding $$$b_i-b_{i-1}$$$ yields $$$f(i,i) + (f(i-1,i)-f(i-1,i-1))+(f(i-2,i)-f(i-2,i-1))+\ldots+(f(1,i)-f(1,i-1))$$$. From the above, we know that $$$f(l,i) \neq f(l,i-1)$$$ only when $$$a_i$$$ does not appear in $$$[l,i-1]$$$. Let $$$x$$$ be the largest integer such that $$$f(x,i-1)=f(x,i)$$$. Note that this implies $$$a_x=a_i$$$. For all $$$y \gt x$$$ we have $$$f(y, i) - f(y,i-1) = 1$$$, while for all $$$y \le x$$$ the difference is $$$0$$$. Thus $$$b_i-b_{i-1}=i-x$$$, because exactly the terms of the expansion corresponding to indices $$$x+1$$$ to $$$i$$$ equal $$$1$$$ (counting $$$f(i,i)=1$$$), and there are $$$i-x$$$ of them. Since $$$a_x = a_i$$$ and $$$x = i-(b_i-b_{i-1})$$$, we should set $$$a_i$$$ to be the same as $$$a_{i-(b_i-b_{i-1})}$$$. There is one more case where $$$x$$$ is undefined because $$$f(y,i) \neq f(y,i-1)$$$ for all $$$y \geq 1$$$; then $$$b_i-b_{i-1}=i$$$, meaning $$$a_i$$$ did not appear before in the array, and we can set $$$a_i$$$ to any unused number arbitrarily.
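A short sketch of the resulting reconstruction (the exact I/O shape is an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> b(n + 1, 0);
    for (int i = 1; i <= n; i++) scanf("%lld", &b[i]);
    vector<int> a(n + 1);
    int fresh = 0;
    for (int i = 1; i <= n; i++) {
        long long x = i - (b[i] - b[i - 1]);  // latest earlier index with a_x = a_i
        a[i] = (x == 0) ? ++fresh : a[x];     // x = 0 means a_i never appeared before
    }
    for (int i = 1; i <= n; i++) printf("%d ", a[i]);
    printf("\n");
}
```

Fresh values are handed out as $$$1, 2, 3, \ldots$$$, which keeps them distinct from every value already placed.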
Problem Credit: Proof_by_QED
Let's view $$$x$$$, $$$f(x)$$$, and $$$x \oplus f(x)$$$ each as $$$y$$$-bit numbers (where $$$y$$$ is the smallest integer with $$$2^y \gt x$$$). We can see that $$$x \oplus f(x)$$$ is necessarily a palindrome. Indeed, the last bit of $$$x \oplus f(x)$$$ is the XOR of the last bit of $$$x$$$ and the last bit of $$$f(x)$$$; since $$$f(x)$$$ is just $$$x$$$ reversed, this is the last bit of $$$x$$$ XOR'ed with the first bit of $$$x$$$. Now, look at the first bit: it is the XOR of the first bit of $$$x$$$ and the first bit of $$$f(x)$$$, which by the same reasoning is the first bit of $$$x$$$ XOR'ed with the last bit of $$$x$$$. So the first and last bits of $$$x \oplus f(x)$$$ are equal, and the same argument shows the second and second-to-last bits, the third and third-to-last bits, etc. are all equal, which is exactly the palindrome condition.
One caveat is that although $$$x$$$ is not allowed to have leading zeroes (by our definition of $$$y$$$), both $$$f(x)$$$ and $$$x \oplus f(x)$$$ are. Thus, if we let $$$z$$$ be the smallest integer with $$$2^z \gt n$$$, then $$$y$$$ is not necessarily equal to $$$z$$$. Fortunately, using our palindrome condition, we can find the number of leading zeroes $$$n$$$ must have (it is the same as the number of trailing zeroes in the binary representation of $$$n$$$). This is how we can find $$$y$$$. There is one more trick: suppose $$$y$$$ is odd, so a middle digit exists. If the middle digit is $$$1$$$, then we cannot find an integer $$$x$$$ with $$$x \oplus f(x)=n$$$, because the middle digit is the result of XORing a bit of $$$x$$$ with itself (since it is in the middle), and both $$$1 \oplus 1$$$ and $$$0 \oplus 0$$$ result in $$$0$$$. All other digits are allowed to be either $$$1$$$ or $$$0$$$: if a digit of the palindrome is $$$0$$$, we can place a $$$1$$$ in both the corresponding left bit and the corresponding right bit; if it is $$$1$$$, we can place a $$$1$$$ in the left bit and a $$$0$$$ in the right bit, making the two bits differ.
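A sketch of the feasibility check this yields (assuming $$$n \ge 1$$$ and one value per test; producing an explicit $$$x$$$ would follow the pairing rules above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n;
    scanf("%lld", &n);                         // assume n >= 1
    int z = 64 - __builtin_clzll(n);           // bit length of n
    int t = __builtin_ctzll(n);                // trailing zeroes of n
    int y = z + t;                             // bit length x must have
    vector<int> s(y, 0);
    for (int i = 0; i < z; i++)
        s[y - 1 - i] = (n >> i) & 1;           // n padded with t leading zeroes
    bool ok = true;
    for (int i = 0; i < y; i++)
        ok &= (s[i] == s[y - 1 - i]);          // x xor f(x) must be a palindrome
    if (y % 2 == 1 && s[y / 2] == 1) ok = false;  // the middle bit is forced to 0
    puts(ok ? "YES" : "NO");
}
```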
2159A - MAD Interactive Problem
Problem Credit: wuhudsm
It is not hard to see that the following property holds:
- For some subsequence $$$T$$$ and index $$$i$$$ such that $$$i \notin T$$$, let the query results be $$$0$$$ for $$$T$$$ and $$$x \neq 0$$$ for $$$T \cup \{i\}$$$. Then, we gain the information that $$$a_i=x$$$.
To use this property to our advantage, let's maintain a subsequence $$$S$$$ whose query result is known to be $$$0$$$. Consider the following subroutine:
- Query the subsequence $$$S \cup \{i\}$$$ and gain the result $$$x$$$.
- If the result is nonzero, then we know that $$$a_i=x$$$.
- Otherwise, let's assign $$$S \leftarrow S \cup \{i\}$$$.
If we perform this subroutine for $$$i=1,2,\ldots,2n$$$, we learn $$$a_i$$$ for every index whose value appears for the second time. We now need to find the other $$$n$$$ elements' values.
Notice that currently the subsequence $$$S$$$ contains each of $$$1,2,\ldots,n$$$ exactly once, and so does $$$[1,2n] \setminus S$$$. Therefore, we assign $$$S \leftarrow ([1,2n] \setminus S)$$$ and perform the subroutine over all undiscovered indices again. Each run of the subroutine will give you a nonzero result, and give you the values of the $$$n$$$ undiscovered elements in $$$n$$$ additional queries.
Thus, this solution uses exactly $$$3n$$$ queries (or $$$3n-1$$$ with a small optimization) to discover the values of all $$$2n$$$ elements.
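For reference, a sketch of the whole strategy. The interaction protocol here ("? k i_1 ... i_k" to ask, "! ..." to answer) is an assumption of this sketch, not the actual statement's format.

```cpp
#include <bits/stdc++.h>
using namespace std;

int ask(const vector<int>& idx) {
    printf("? %d", (int)idx.size());
    for (int i : idx) printf(" %d", i);
    printf("\n");
    fflush(stdout);
    int x;
    scanf("%d", &x);
    return x;
}

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(2 * n + 1, 0), S;
    for (int i = 1; i <= 2 * n; i++) {            // pass 1: 2n queries
        S.push_back(i);
        int x = ask(S);
        if (x != 0) { a[i] = x; S.pop_back(); }   // second occurrence found
    }
    vector<int> T;                                // complement of S: each value once
    for (int i = 1; i <= 2 * n; i++)
        if (a[i] != 0) T.push_back(i);
    for (int i : S) {                             // pass 2: n queries, all nonzero
        T.push_back(i);
        a[i] = ask(T);
        T.pop_back();
    }
    printf("!");
    for (int i = 1; i <= 2 * n; i++) printf(" %d", a[i]);
    printf("\n");
    fflush(stdout);
}
```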
Problem Credit: chromate00
UPD: I sincerely apologize for the weak pretests of this problem. I greatly underestimated the runtime and memory usage of the worst solutions while still wanting to be generous to slower solutions, which led to weak tests during the contest. I am deeply sorry to everyone whose contest experience was affected by this.
Let us define a minimal set of rectangles as a set of rectangles on the grid that satisfies the following condition:
- For all rectangles $$$(u,d,l,r)$$$ in the set, there is no other rectangle $$$(u,d,l',r')$$$ on the grid such that $$$l \le l' \le r' \le r$$$ and $$$r'-l' \lt r-l$$$.
Then, let us call the minimal set of rectangles with maximum cardinality the exhaustive minimal set $$$\mathbb{S}$$$.
This set $$$\mathbb{S}$$$ satisfies the following conditions:
The set $$$\mathbb{S}$$$ is unique.
$$$|\mathbb{S}| \le {{n}\choose{2}}\cdot m$$$
$$$\sum\limits_{(u,d,l,r) \in \mathbb{S}} {(r-l+1)} \le 2 \cdot {{n}\choose{2}} \cdot m$$$
For each covered cell $$$(x,y)$$$ on the grid, at least one rectangle of minimum area among those covering $$$(x,y)$$$ belongs to $$$\mathbb{S}$$$.
Elements of $$$\mathbb{S}$$$ can be enumerated in $$$\mathcal{O}(n^2m)$$$ time by the following algorithm:
For all pairs $$$u,d$$$ of rows, enumerate columns $$$i_1,i_2,\ldots,i_k$$$ such that $$$G_{u,i_j}=G_{d,i_j}$$$ for all $$$j$$$.
Then, $$$(u,d,i_j,i_{j+1})$$$ is an element of $$$\mathbb{S}$$$ for all $$$1 \le j \lt k$$$.
It can be shown that the algorithm finds all elements of $$$\mathbb{S}$$$.
Then, it suffices to propagate the area of each rectangle $$$(u,d,l,r) \in \mathbb{S}$$$ to all cells covered by it.
This can be done in multiple ways, but the simplest out of all such ways is keeping a range DP for each column and propagating the values based on $$$\text{dp}[u][d] \le \text{dp}[u][d+1]$$$ and $$$\text{dp}[u][d] \le \text{dp}[u-1][d]$$$. Then, the value of $$$\text{dp}[i][i]$$$ on column $$$j$$$ becomes the answer for cell $$$(i,j)$$$.
Finally, the conditions satisfied by $$$\mathbb{S}$$$ are also satisfied when the grid is transposed, so if $$$n \gt m$$$ you can simply compute the same way on the transposed grid. This gives us a solution with $$$\mathcal{O}(nm \min(n,m))=\mathcal{O}(nm\sqrt{nm})$$$ complexity.
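Putting the pieces together, here is a compact, hedged sketch of the heavy-memory variant. The grid format, the matching condition, and the output are assumptions based on the description above; details of the real statement may differ.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;                        // assume n <= m here; otherwise transpose first
    vector<string> G(n);
    for (auto& row : G) cin >> row;
    const long long INF = LLONG_MAX;
    // dp[u][d][j]: minimum area over enumerated rectangles whose row span
    // contains [u, d] and whose column span contains column j
    vector dp(n, vector(n, vector<long long>(m, INF)));
    for (int u = 0; u < n; u++)
        for (int d = u + 1; d < n; d++) {
            int prev = -1;
            for (int j = 0; j < m; j++) {
                if (G[u][j] != G[d][j]) continue;    // the editorial's matching condition
                if (prev != -1) {
                    long long area = 1LL * (d - u + 1) * (j - prev + 1);
                    for (int c = prev; c <= j; c++)  // O(m) total per row pair
                        dp[u][d][c] = min(dp[u][d][c], area);
                }
                prev = j;
            }
        }
    // a rectangle covering rows [u-1, d] or [u, d+1] also covers [u, d],
    // so push minima from longer row spans down to shorter ones
    for (int j = 0; j < m; j++)
        for (int len = n - 1; len >= 1; len--)
            for (int u = 0; u + len - 1 < n; u++) {
                int d = u + len - 1;
                if (u > 0) dp[u][d][j] = min(dp[u][d][j], dp[u - 1][d][j]);
                if (d + 1 < n) dp[u][d][j] = min(dp[u][d][j], dp[u][d + 1][j]);
            }
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            cout << dp[i][i][j] << " \n"[j + 1 == m];  // answer for cell (i, j)
}
```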
Enumerating all elements of $$$\mathbb{S}$$$ and saving them in memory might take $$$\mathcal{O}(nm \min(n,m))$$$ memory, but it may still pass if sufficiently optimized. There is a way to modify the solution to use only $$$\mathcal{O}(\min(n,m)(n+m))$$$ memory, which is left as an exercise for the reader.
Problem Credit: wuhudsm
Let's solve the easy version first: all $$$a_i = -1$$$.
We can convert the condition to: for $$$0 \leq i \leq n$$$, $$$a_i = \sum_{a_j=i} j$$$.
Since $$$a_0=-1$$$ is free, we only need to enforce the condition for $$$i \gt 0$$$, and $$$a_0 = \sum_{a_j=0}j$$$ is obtained automatically.
We claim that for a cool polynomial and all $$$i \gt 0$$$, there are only three cases:
Case $$$1$$$: $$$a_i = 0$$$.
Case $$$2$$$: $$$a_i = i$$$.
Case $$$3$$$: $$$a_i = j$$$ and $$$a_j = i$$$, where $$$i \neq j$$$.
We prove this by contradiction. Assume a cool polynomial does not meet the above conditions.
Let $$$m$$$ be the minimum positive index not meeting the above three conditions.
If there is no index $$$j \gt m$$$ satisfying $$$a_{j} = m$$$, we get $$$a_m=0$$$ or $$$a_m=m$$$, which meets condition $$$1$$$ or $$$2$$$.
If there is only one index $$$j \gt m$$$ satisfying $$$a_{j} = m$$$, we get $$$a_m=j$$$, which meets condition $$$3$$$.
If there are two indices $$$j,k \gt m$$$ such that $$$a_{j} = m$$$ and $$$a_{k} = m$$$, then there must be indices $$$x_1,x_2,\ldots$$$ satisfying $$$\sum x_i=m$$$ and $$$a_{x_i}=j$$$, and indices $$$y_1,y_2,\ldots$$$ satisfying $$$\sum y_i=m$$$ and $$$a_{y_i}=k$$$. This contradicts the fact that $$$m$$$ is the minimum positive index not meeting the above three conditions; in other words, you cannot find such indices $$$x_1,x_2,\ldots$$$ and $$$y_1,y_2,\ldots$$$
For more indices, the proof is the same as above.
Thus, we can write an easy DP:
$$$dp_i=2dp_{i-1}+(i-1)dp_{i-2}$$$
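A sketch of the easy-version computation; the modulus and I/O are assumptions here, and (as the comments below this editorial note) the extra $$$a_n \neq 0$$$ constraint may additionally turn the answer into $$$dp_n - dp_{n-1}$$$ depending on the exact statement.

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 998244353;   // assumed modulus

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> dp(n + 1);
    dp[0] = 1;
    for (int i = 1; i <= n; i++) {
        dp[i] = 2 * dp[i - 1] % MOD;               // a_i = 0, or a_i = i
        if (i >= 2)                                 // or pair i with one of i-1 smaller free indices
            dp[i] = (dp[i] + 1LL * (i - 1) * dp[i - 2]) % MOD;
    }
    printf("%lld\n", dp[n]);
}
```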
Now, let's return to the original problem. For all $$$a_i \neq -1$$$, we need to check whether they form "valid coefficients" (i.e., whether the fixed coefficients can be extended to at least one cool polynomial). After that, the problem reduces to the easy version.
There are several ways to check validity. Cases $$$1$$$ and $$$2$$$ are trivial. For Case $$$3$$$, we can use DSU to merge $$$i$$$ and $$$j$$$, and then check the size of each connected component.
2159D1 - Inverse Minimum Partition (Easy Version), 2159D2 - Inverse Minimum Partition (Hard Version)
Problem Credit: chromate00, wuhudsm; Special Thanks: satyam343
This problem's editorial will be divided into parts. You may need to read all of them to understand the full picture.
For this problem there exist two versions of a core observation required for solutions:
- When you discard all elements other than suffix minimums, the optimal total cost does not change. (1.a)
- Denoting optimal total cost for sequence $$$b_l,b_{l+1},\ldots,b_r$$$ as $$$\operatorname{opt}(l,r)$$$, it can be shown that $$$\operatorname{opt}(l-1,r) \ge \operatorname{opt}(l,r)$$$ for all $$$1 \lt l \le r \le n$$$. (1.b)
(1.a) can be shown by exchange arguments. Assume a partition consists mainly of two subarrays (and some irrelevant others), $$$[\dots,[x_1,\dots,x_p],\dots,[y_1,\dots,y_q,\dots,y_r],\dots]$$$. Then, if $$$x_p \ge y_q$$$, you can find a partition $$$[\dots,[x_1,\dots,y_q],[y_{q+1},\dots,y_r],\dots]$$$ which is at least as good as the original one. Likewise, if $$$x_p \ge y_r$$$, you can find a partition $$$[\dots,[x_1,\dots,y_r],\dots]$$$ which is at least as good as the original one. Repeating this exchange process on any "optimal" partition, you will end up with another "optimal" partition whose endpoints all belong to suffix minimums. And because the denominator of $$$\mathtt{cost}$$$ then always lies on a suffix minimum, you can discard all of the rest without changing the total cost.
(1.b) can be shown much more easily. Consider the $$$\mathcal{O}(n^2)$$$ algorithm to compute $$$\operatorname{opt}$$$. We know that $$$\operatorname{opt}(l,r)=\min\limits_{i=l}^{r}{(\operatorname{opt}(i+1,r)+c[l,i])}$$$ where $$$c[l,r]\ge 0$$$ for all $$$1 \le l \le r \le n$$$. Also, observe that $$$c[l,r] \ge c[l+1,r]$$$ for all $$$l \lt r$$$. As all terms that constitute the minimum possible value of $$$\operatorname{opt}(l,r)$$$ are no less than those of $$$\operatorname{opt}(l+1,r)$$$, $$$\operatorname{opt}(l,r) \ge \operatorname{opt}(l+1,r)$$$ indeed holds.
Both observations have similar implications, and all subsequent observations for subtask 1 can be shown from either. While (1.a) is more convenient for implementing the solution to subtask 1, we will prove all subsequent observations using (1.b), because it will be reused in subtask 2 as well.
We give you three solutions to solve subtask 1, two of which are helpful in solving subtask 2, while the other one is not.
For the first solution of subtask 1, we show the following.
- The optimal total cost is no greater than $$$2\log_2(a_n)$$$. (2.1.a)
We will show this by an approximate algorithm that constructs a partition that might be suboptimal.
Consider the following method to partition the sequence.
Starting from the last element of the sequence, we expand the left border of the leftmost subarray greedily:
- If it is possible to use a subarray of cost $$$2$$$, then do use it.
- Otherwise, use a subarray of cost $$$1$$$.
In either case, the rightmost element of the new leftmost subarray must be less than half of the rightmost element of the previous one: otherwise the cost-$$$2$$$ subarray could have been extended to include it. Hence this event cannot happen more than $$$\log_2(a_n)$$$ times. As we only used subarrays of cost no greater than $$$2$$$, this algorithm always constructs a partition with total cost no greater than $$$2 \log_2(a_n)$$$.
As we now know that the optimal total cost is no greater than $$$C = 128 \ge 2 \log_2(a_n)$$$, we can optimize the $$$\mathcal{O}(n^2)$$$ DP into a solution that resembles two pointers, except that we keep $$$C$$$ pointers on one side, one for each possible cost. This is possible due to the monotonicity that we proved either as a corollary of (1.a) or by (1.b).
This solution has a time complexity of $$$\mathcal{O}(nC)=\mathcal{O}(n \log a)$$$.
For the second solution of the first subtask, we show the following.
- There is always an optimal solution which only uses subarrays with cost at most $$$3$$$. (2.2.a)
We will show this by exchange arguments.
Observe that for any $$$X \ge 4$$$ there exists a pair of integers $$$(x,y)$$$ that satisfies the following conditions:
- $$$x \cdot y \ge X$$$
- $$$x + y \le X$$$
- $$$1 \le x,y \lt X$$$
The proof for the above is simple; $$$x=\left \lceil {\frac{X}{2}}\right \rceil$$$ and $$$y=2$$$ satisfies the conditions for all $$$X \ge 4$$$. Then, it is possible to divide a subarray $$$A$$$ into two subarrays $$$P$$$, $$$Q$$$ such that $$$\mathtt{cost}(P) \le x$$$ and $$$\mathtt{cost}(Q) \le y$$$. Formally, assume a subarray $$$A=[A_1,A_2,\ldots,A_k]$$$ where $$$\mathtt{cost}(A)=X$$$. Let $$$A_i$$$ be the unique element for which $$$y \cdot A_i \lt A_k$$$ and $$$y \cdot A_j \ge A_k$$$ for all $$$i \lt j \le k$$$. Here, $$$A_i$$$ may be $$$A_1$$$. This element $$$A_i$$$ always exists, and we can divide $$$A$$$ into $$$P=[A_1,A_2,\ldots,A_i]$$$ and $$$Q=[A_{i+1},A_{i+2},\ldots,A_k]$$$. Then, it is easy to show that $$$\mathtt{cost}(P) \le x$$$ and $$$\mathtt{cost}(Q) \le y$$$. This defines an "exchange scheme" which will never worsen the total cost while decreasing the maximum subarray cost used.
Repeating this process on an optimal partition, the new partition has a total cost not worse than the original, and will only use subarrays with cost at most $$$3$$$.
We can show that this bound is tight by construction. Namely, the sequence $$$[3,4,8,9]$$$ has total cost $$$3$$$, and splitting it into any $$$2$$$ or more subarrays will have total cost at least $$$4$$$.
Applying this to the solution outlined in 2.1, you get a solution with time complexity $$$\mathcal{O}(n)$$$. Alternatively, you can use data structures such as segment tree for a solution with time complexity $$$\mathcal{O}(n \log n)$$$.
The third solution of the first subtask strays away from using any observation and abuses the cost function directly.
Recall that the cost function is $$$\mathtt{cost}([a_{j+1},\ldots,a_i])=\left\lceil \frac{a_i}{\min(a_{j+1},\ldots,a_i)} \right\rceil$$$.
Considering $$$\min(a_{j+1},\ldots,a_{i})$$$ as a constant, $$$\mathtt{cost}([a_{j+1},\ldots,a_i])+\operatorname{opt}(1,j)$$$ can be regarded as a linear function of $$$a_i$$$, rounded up. Therefore, we can consider using the Convex Hull Trick to optimize our $$$\mathcal{O}(n^2)$$$ DP. The only caveat is how to use the Convex Hull Trick with changing values of $$$\min(a_{j+1},\ldots,a_{i})$$$.
This can be resolved by maintaining the values of $$$\min(a_{j+1},\ldots,a_{i})$$$ with a monotone stack. We can simply keep the minimum value of $$$\operatorname{opt}(1,j)$$$ for equal values of $$$\min(a_{j+1},\ldots,a_{i})$$$. The updates that might happen on the CHT structure are maintainable if you can roll it back (just like the monotone stack), and a Li Chao tree can be rolled back quite easily, as its inner behaviour is no different from a segment tree's.
This leads us to a solution with time complexity $$$\mathcal{O}(n \log (1/\epsilon))$$$. Certain solutions with too-big constants can get MLE under the given constraints, but there exist solutions with small enough constants that use double instead of whatever other type you might want to use. Why such solutions are still correct is left as an exercise for the reader (seriously, you should be able to take a guess if you understood the other "more elegant" solutions).
Before reading, please read the solutions to the first subtask (and hopefully identify the two solutions that are helpful in solving the second subtask).
Directly applying the most optimized DP solution from the first subtask, we have a solution with $$$\mathcal{O}(n^2)$$$ time complexity. However, it seems hard to improve further, as even our optimized DP has $$$\mathcal{O}(n)$$$ states, and it looks like that cannot be helped.
However, we somehow already have enough resources to improve it even further; there is a way to know all values $$$\operatorname{opt}(1,r),\operatorname{opt}(2,r),\ldots,\operatorname{opt}(r,r)$$$ without using $$$\mathcal{O}(n)$$$ states.
Recall the following:
- There will be only at most $$$2\log_2(a_r)$$$ different values of $$$\operatorname{opt}(i,r)$$$. (Direct consequence of 2.1.a)
- Our DP to compute $$$\operatorname{opt}(1,r)$$$ is monotonic, i.e. $$$\operatorname{opt}(i,r) \ge \operatorname{opt}(i+1,r)$$$. (= 1.b)
Using the above, we can infer the following.
- We can model another DP by swapping the places of cost and index, and this new DP will only have at most $$$C=128 \ge 2\log_2(a_r)$$$ states.
This "swapping cost and index" might sound like it's so unexpected, but surely you may have seen such a DP formulation if you have solved many DP problems. Namely, this technique is used in Knapsack problems when the cost is small and the capacity is huge.
So, how would we formulate this DP? Let us define a new function:
- $$$\operatorname{opt}'(c,r)$$$: minimum value of $$$l$$$ such that $$$\operatorname{opt}(l,r) \le c$$$ ($$$0 \le c \le C$$$, $$$1 \le r \le n$$$).
Then, we can see by 1.b that $$$\operatorname{opt}'(c+1,r) \le \operatorname{opt}'(c,r)$$$ for all $$$0 \le c$$$. Also, after we know $$$\operatorname{opt}'(c,r)$$$ and $$$\operatorname{opt}'(c+1,r)$$$, we can compute the range of $$$l$$$ such that $$$\operatorname{opt}(l,r) = c$$$, and it lets us find the sum of $$$\operatorname{opt}(l,r)$$$ over all $$$1 \le l \le r$$$ in $$$\mathcal{O}(C)$$$ time.
So let's think of how we will compute values of $$$\operatorname{opt}'(c,r)$$$ for one $$$r$$$ in $$$\mathcal{O}(C)$$$ time.
Let us first precompute the minimum $$$l$$$ such that $$$\mathtt{cost}([a_l,a_{l+1},\ldots,a_i]) \le c$$$ for all $$$i$$$ and $$$c$$$. By 2.2.a, it suffices to compute this for only $$$1 \le c \le 3$$$. We will denote this as $$$L(i,c)$$$. These values can be easily preprocessed using a walk on a segment tree in $$$\mathcal{O}(n \log n)$$$ time. This will come in handy for our new DP formulation.
In our original $$$\mathcal{O}(n)$$$ DP formulation for the first subtask, we kept a pointer for each cost $$$1 \le c \le 3$$$ on one side. In the new formulation, however, we need transitions directly on indices. This is where the values $$$L(i,c)$$$ come in handy.
We can find the following correct DP transition:
- $$$\operatorname{opt}'(c,r)=r+1$$$ (when $$$c \le 0$$$)
- $$$\operatorname{opt}'(c,r)= \min\limits_{1 \le k \le 3} \left ({\min\limits_{\max(\operatorname{opt}'(c-k,r)-1,0) \le i \le r}{L(i,k)}}\right )$$$ (when $$$c \gt 0$$$)
The correctness of this DP transition is shown by 1.b and 2.2.a.
Because we have already preprocessed the values of $$$L(i,1)$$$, $$$L(i,2)$$$, $$$L(i,3)$$$ in $$$\mathcal{O}(n \log n)$$$ time, we can now construct a RMQ data structure for these values in $$$\mathcal{O}(n \log n)$$$ time for $$$\mathcal{O}(1)$$$ query time. This allows us to process all values of $$$\operatorname{opt}'(c,r)$$$ for one $$$r$$$ in $$$\mathcal{O}(C)$$$ time according to the transition described above.
Computing this for each $$$r$$$ independently, we have now computed the sum of $$$\operatorname{opt}(l,r)$$$ over all $$$1 \le l \le r \le n$$$ in $$$\mathcal{O}(n \log n + n \log a)$$$ time. The problem is solved.
2159E - Super-Short-Polynomial-San
Problem Credit: chromate00; Special Thanks: Proof_by_QED, zeemanz
First, we will solve the easier problem of computing just single coefficients $$$[x^k] F(n)$$$. For this, observe the following facts.
It takes $$$\Theta((n+m) \log (n+m))$$$ time to multiply two polynomials of size $$$n$$$ and $$$m$$$ using FFT.
However, it takes $$$\Theta(\min(n,m))$$$ time to retrieve only one coefficient from their product.
Let's consider how we can make a tradeoff between these two time complexities.
To make a meaningful tradeoff in time complexity, we can use something that resembles the Baby-Step Giant-Step algorithm.
Formally, consider the following method:
Select a constant $$$B \gt 1$$$ arbitrarily.
For all $$$k$$$ with $$$kB \le N$$$, precompute the polynomial $$$\mathcal{F}(k)=F(kB)$$$. This takes $$$\mathcal{O}(N \log N \cdot \frac{N}{B}) = \mathcal{O}(\frac{N^2}{B} \log N)$$$ time.
For all $$$0 \le k \le B$$$, precompute $$$F(k)$$$. This takes $$$\mathcal{O}({B}^2)$$$ time.
For each query, find the respective coefficient of $$$F(n)=\mathcal{F}({\left \lfloor{n/B}\right \rfloor}) F(n \bmod B)$$$. This takes $$$\mathcal{O}(\min(N,B))=\mathcal{O}(B)$$$ time.
Then, we achieve an algorithm that runs in $$$\mathcal{O}(\frac{N^2}{B}\log N + B^2 + QB)$$$ time. By adjusting $$$B$$$ to $$$B \approx \sqrt{N \log N}$$$, we can achieve an algorithm that runs in $$$\mathcal{O}((N+Q)\sqrt{N \log N})$$$ time.
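The per-query step is just one window of a convolution; for concreteness, a sketch (the names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One coefficient of the product P * Q, touching only O(deg(Q)) terms;
// here Q is the smaller factor (F(n mod B) in the algorithm above).
long long coeffOfProduct(const vector<long long>& P,
                         const vector<long long>& Q, long long k) {
    const long long MOD = 1e9 + 7;
    long long res = 0;
    for (long long j = 0; j < (long long)Q.size(); j++)
        if (k - j >= 0 && k - j < (long long)P.size())
            res = (res + Q[j] * P[k - j]) % MOD;
    return res;
}
```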
We only have two issues left:
- We have to compute the prefix sum of $$$[x^i]F(n)$$$ instead of just $$$[x^k]F(n)$$$.
- Our algorithm is going to be slow, especially under the modulus $$$10^9+7$$$ which makes FFT quite inconvenient.
For the first issue, observe the following:
- Define a polynomial $$$G(n)$$$ of infinite degree such that $$$[x^k]G(n)=\sum\limits_{i=0}^{k}[x^i]F(n)$$$. Then, $$$G(n+m) = G(n) F(m)$$$ holds for all $$$n,m \ge 0$$$.
The proof is left as an exercise for the reader.
Then, we can transform $$$\mathcal{F}(k)=F(kB)$$$ to $$$\mathcal{G}(k)=G(kB)$$$ easily in $$$\mathcal{O}(N)$$$ time for each, and then the algorithm can compute the respective coefficient of $$$G(n)=\mathcal{G}({\left \lfloor{n/B}\right \rfloor})F(n \bmod B)$$$ in $$$\mathcal{O}(\min(\infty,B))=\mathcal{O}(B)$$$ time likewise. It is not required to save infinitely many coefficients of $$$G(n)$$$; the coefficients $$$[x^k]G(n)$$$ are constant for $$$k \gt 2n$$$. Thus we have resolved the first issue.
For the second issue, observe the following:
By expressing $$$(F(n+1))'$$$ in two ways, we can conclude that $$$n F(n) (F(1))' = (F(n))'F(1)$$$.
Then, comparing the coefficients of both sides, we can derive the coefficients of $$$F(n)$$$ in $$$\mathcal{O}(nk^2)$$$ time, where $$$k=\operatorname{deg}(F(1))=\mathcal{O}(1)$$$.
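A sketch of that coefficient recurrence; this is my reconstruction under the assumption that $$$[x^0]F(1)$$$ is invertible modulo $$$10^9+7$$$, with per-step modular inverses left unoptimized for brevity.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll pw(ll b, ll e) {
    ll r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// Coefficients p of P = C^n, recovered from n * P * C' = P' * C by comparing
// the coefficient of x^t on both sides and solving for p[t + 1].
vector<ll> polyPow(const vector<ll>& c, ll n) {
    int k = (int)c.size() - 1;                 // k = deg C
    vector<ll> p(n * k + 1, 0);
    p[0] = pw(c[0], n);                        // constant term of C^n
    ll inv0 = pw(c[0], MOD - 2);               // assumes c[0] != 0 mod MOD
    for (ll t = 0; t + 1 < (ll)p.size(); t++) {
        ll lhs = 0, rhs = 0;
        for (int j = 0; j < k && j <= t; j++)          // [x^t] of P * C'
            lhs = (lhs + (j + 1) * c[j + 1] % MOD * p[t - j]) % MOD;
        lhs = lhs * (n % MOD) % MOD;
        for (int j = 1; j <= k && j <= t + 1; j++)     // [x^t] of P' * C, terms with j >= 1
            rhs = (rhs + (t - j + 1) % MOD * p[t - j + 1] % MOD * c[j]) % MOD;
        // the remaining j = 0 term of the right side is (t + 1) * p[t + 1] * c[0]
        p[t + 1] = (lhs - rhs + MOD) % MOD * pw((t + 1) % MOD, MOD - 2) % MOD * inv0 % MOD;
    }
    return p;
}
```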
This gives us an alternative way to precompute polynomials $$$\mathcal{F}(k)=F(kB)$$$, and using this, the algorithm will run in $$$\mathcal{O}(\frac{N^2}{B} + B^2 + QB)$$$ time. By adjusting $$$B$$$ to $$$B \approx \sqrt{N}$$$, we achieve an algorithm that runs in $$$\mathcal{O}((N+Q)\sqrt{N})$$$ and does not use FFT. This fits nicely in the given time limit of $$$7$$$ seconds.
Depending on the implementation, some smaller values of $$$B$$$ might lead to an MLE verdict. However, if this happens, your submission should fail on the first test case, as the precomputation itself uses the largest amount of memory; it is fine to adjust $$$B$$$ upwards, as there is quite some margin in the time limit. Slower programming languages might also get a TLE verdict, but the intended solution's implementation should be simple enough to translate to C++ in a few minutes.
Problem Credit: wuhudsm
An important observation is that $$$f(l, p+i)$$$ is unimodal for $$$1 \le i \le l$$$.
Proof: If $$$f(l, x) \lt f(l, x+1)$$$, then the snake head at time $$$T = x+1$$$ is the maximum value, and this value will not leave the snake's body until at least $$$l$$$ seconds later.
Thus, for a snake of length $$$l$$$, we can divide it into $$$\lfloor \frac{n}{l} \rfloor$$$ segments and perform ternary search on each segment. The total number of segments is $$$O(n \log n)$$$, and the total number of queries is $$$O(n \log^2 n)$$$.
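For reference, a generic integer ternary search over one segment; query(l, t) is a hypothetical, memoized wrapper around asking $$$f(l, t)$$$ (memoization keeps repeated probes from wasting queries), and ties at the probes need the extra handling described next.

```cpp
long long query(int l, long long t);   // assumed interactor wrapper, memoized

long long peakPosition(int l, long long lo, long long hi) {
    while (hi - lo > 2) {
        long long m1 = lo + (hi - lo) / 3;
        long long m2 = hi - (hi - lo) / 3;
        if (query(l, m1) < query(l, m2))
            lo = m1 + 1;               // the maximum lies strictly right of m1
        else
            hi = m2 - 1;               // the maximum lies strictly left of m2
    }
    long long best = lo;               // finish off the tiny remaining range
    for (long long t = lo + 1; t <= hi; t++)
        if (query(l, t) > query(l, best)) best = t;
    return best;
}
```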
We also need to handle the case where $$$f(l, M_1) = f(l, M_2)$$$. Suppose $$$i \gt 1$$$ and $$$a_{i,j} = f(l, M_1) = f(l, M_2)$$$; the snake reaches $$$a_{i,j}$$$ at time $$$T = i + j - 1$$$. We determine whether $$$M_1$$$ is in the increasing or the decreasing half by querying $$$f(l, i + j - 2)$$$. A similar approach is used for $$$i = 1$$$.
After this, we obtain several monotonic contiguous segments. The classic priority queue method is used to obtain the top $$$m$$$ smallest values.
first.
Please give each problem's credits ;)
C was easier than B
For me c was more difficult
Yes, I could do C but not B. I think different people are wired differently.
Agree with you, but the odd length case, with the middle 1, was a small trap for me.
I also think C was more difficult.
What do you mean by also?
He says C was easier than B not harder
Sorry, I replied to the wrong person. I wanted to reply to the person named sayonaraboy123; his opinion is the same as mine.
Ah alright! I just wanted to clarify, no need to apologize.
But I think B is only easy if you have solved it previously.
I'm not sure why the heck I even got stuck on C for so long, probably some dumb mistakes.
But I genuinely solved a problem recently where the idea was about how much each thing contributes. Same with this really, you just had to see it
I see. Well, I've never done problem B before. It's different for everyone.
I got stuck on C for a long time because of a couple of dumb mistakes too. This problem involves many details
Well, I realized that I shouldn't spend so much time on B when I saw C. >_<
wow , thanks for superfast editorial..
thanks for cool problems ( I am biased, as I will get good +ve delta )
Div1B = Div2E was very similar to a problem that I made in a Korean contest (https://www.acmicpc.net/problem/34058). One of the problemsetters today had participated in that contest and has a record of getting accepted in BOJ.
I don't think this is plagiarism, since the problem situation is slightly different, and we often see coincidences in this field. However, I still think there is a high probability that this problem could be inspired by my problem, or affected by it.
Yeah, he even participated in this contest. I was struggling at B and suddenly remembered that exact problem, went to the solution files (https://static.ucpc.me/files/2025/ucpc25-prelim-solutions.pdf) and got the DP solution.
Yes, I admit that the problem was affected by your problem. Initially, the problem had max instead of min, and tight constraints that only allow $$$\mathcal{O}(n^2m)$$$ with very small constants (including usage of bitset). The intended solution was to sort the rectangles and then fill in the values using DSU. However, testers struggled on it and there were too many unintended cheese solutions, so we decided to use the current version instead.
I'm wondering if you told the coordinator about that BOJ problem and if he accepted it.
I didn't know.
Even if you had been informed, would you have accepted it? (I felt it would be advantageous for certain people, though I might be overthinking it)
Let A = div1B, B = the BOJ problem.
The thing in common between the two problems is "iterate over pairs of rows". I've seen this idea (including the swap if $$$n \gt m$$$) several times (in other problems C, D, ...), and it makes all these problems standard. I think there is a high chance that people who try to solve A have already solved one of B, C, D, ..., and I don't think knowing B in particular is much different from knowing C for example. So problem B would have not changed my opinion on problem A (which I accepted despite being standard).
Thanks for your opinion! I agree that it’s a standard technique when generalized.
I think it's not just "iterate over pairs of rows". If you check the editorial in leinad2's comment above, the DP idea there can also be directly applied to div1B today, and the only difference would be how you find the initial set of $$$O(n^2m)$$$ rectangles.
Oh, after noticing this maybe I would have rejected the problem (I'm not sure).
The change was made in the last few weeks as an attempt to fix the difficulty distribution, so I forgot to tell the coordinator about this. Instead, I tried very hard to come up with problems to replace it as a whole, but we could not find very good ones...
Does anyone have an explanation for why my Div2E solution got TLE on system tests even though it's practically the same as the editorial solution? 343394700
I have added my apology above the problem editorial. I thought most bad solutions should die due to most random tests (which we have a variety of).
For why your solution gets TLE: I think it is due to the use of many nested std::vectors. std::vectors are dynamically allocated, so nesting them makes the elements non-contiguous in memory. This leads to very bad constant factors.

Sadly, even after minimizing nesting and some other optimizations, I'm still getting TLE 343416181, do you have any other suggestions? Regardless, I really loved the contest, thank you for making it!
Making it an array of std::vectors doesn't help, as they are still dynamically allocated. Probably you can look for the way to optimize the space complexity to $$$\mathcal{O}(\min(n,m)(n+m))$$$, and that might help.

I knew changing some of the nested vectors to arrays probably wouldn't do much; I was mainly referring to changing my pairs vector to be only doubly nested compared to triply nested (and same with the curr_idx vector). With that being said, I'll give this a shot later, thank you so much for the help!
For Div1 D1/Div 2 G1, there's a much easier version of CHT compared to Part 2.3 of the editorial. The trick is to do DP on suffixes rather than prefixes.
Let $$$v_1, v_2, \dots, v_n$$$ be the array, and assume $$$v_1 \lt v_2 \lt \dots \lt v_n$$$ (using observation 1.a).
Let $$$dp(i)$$$ denote the min cost to partition the subarray $$$v[i..n]$$$. Then $$$dp$$$ satisfies the recurrence $$$dp(i) = \min\limits_{i \lt j \le n+1} \left( dp(j) + \left\lceil \frac{v_{j-1}}{v_i} \right\rceil \right)$$$, with $$$dp(n+1)=0$$$.
We could already use CHT here, since this is the same as querying for the min y-value (rounded up) at $$$x=\frac{1}{v_i}$$$ over all lines of the form $$$y_j(x) = dp(j) + v_{j-1}x$$$.
But to avoid floats, we can multiply and divide by $$$v_i$$$ to get $$$dp(i) = \left\lceil \frac{\min_{j} \left( v_i \cdot dp(j) + v_{j-1} \right)}{v_i} \right\rceil$$$.
Then, we can query for the min at $$$x=v_i$$$ over lines of the form $$$y_j(x) = xdp(j) + v_{j-1}$$$, then set $$$dp(i)$$$ to be the result divided by $$$v_i$$$ (rounded up).
343413853
There also exists an O(n log n) solution for D1/G1 that doesn’t use any data structure, and it can easily be optimized to O(n).
UPD: nvm — part 2.2 of the editorial already includes this approach.
I ran my solution to 2E (which mle'd on system tests) in gym with higher limits and it takes 3.5/4s and 1400/1024 MB.
I'm not sure if it's possible to optimize. :(
My solution idea was to use a bitset which denotes which elements of the frequency array are positive and use find first to find the best area.
Submission: 343415947
Update: I used short instead of int for areas that are small enough and my solution passed.
Submission: 343431788
The limits for Div 1 E seem rather tight (all the solutions take over half the time limit, and mine is the only solution that uses under half the memory limit).
I wonder how much time / memory the model solution / testers' solutions used.
The model uses 3296ms and 461MB, while the tester's AC uses 3624ms and 348MB. I believe this means there was quite some margin in the TL to adjust the value of $$$B$$$ and bring the memory usage lower, at the cost of a slightly slower solution.
I think some users had issues, but an overall good contest and nice questions.
Thank you for organizing!
B was really nice, missed C by like 5 minutes, shouldve started on time
L testcases for div1B
Found some approach for div2F (without fixed elements):
Suppose $$$a_0 = 0$$$.
Then all $$$a_1, \dots, a_n \gt 0$$$. This means that $$$f(x)$$$ has exactly $$$n$$$ terms, while $$$g(x)$$$ has at most $$$n$$$ terms.
For the identity $$$f(x) = g(x)$$$ to hold, it is necessary that all $$$a_i$$$ are distinct and form a permutation of $$$\{1, 2, \dots, n\}$$$.
Any such permutation gives a valid steep polynomial.
Suppose $$$a_0 \gt 0$$$.
Then there exist indices $$$i_1, \dots, i_k$$$ such that:
$$$a_{i_1} = a_{i_2} = \dots = a_{i_k} = 0$$$
$$$i_1 + i_2 + \dots + i_k = a_0$$$
$$$i_1, \dots, i_k \in \{1, \dots, n\}$$$
The remaining indices $$$j_1, \dots, j_m$$$ will have:
$$$a_{j_1}, \dots, a_{j_m} \gt 0$$$
Then $$$f(x)$$$ has exactly $$$m+1$$$ terms, and $$$g(x)$$$ has at most $$$m+1$$$ terms.
For $$$f(x) = g(x)$$$ to hold, it is necessary that all $$$a_{j_1}, \dots, a_{j_m}$$$ are distinct and form a permutation of the remaining elements of $$$\{1, \dots, n\} \setminus \{i_1, \dots, i_k\}$$$.
Any such permutation gives a valid steep polynomial.
Is it correct? Can it be implemented?
This approach is incorrect.
Usually, when I guess a conclusion, my first step is to check it against the sample cases for a sanity check.
Consider this test case from the samples:
The answer for it is $$$0$$$, which means that $$$\{0, 2, 3, 1\}$$$ is not a valid solution. However, your logic for the $$$a_0 = 0$$$ case would have considered this valid, since $$$\{2, 3, 1\}$$$ is a permutation of $$$\{1, 2, 3\}$$$.
In fact, the conditions you've proposed are too weak. The actual constraint is that for any index $$$i$$$ such that $$$i \neq 0 \land a_i \neq 0$$$, the condition $$$a_{a_i} = i$$$ must also be satisfied.
For more specifics, you can read the editorial to understand this. I won't elaborate further.
fast editorial)
Can anyone explain the DP part of the Rectangles problem mentioned in the editorial?
DP is unnecessary for this part of the problem. (Assume #rows > #columns.) Say we have already computed area[i][l][r], which stores the smallest area of a rectangle that overlaps row $$$i$$$, has left endpoint $$$l$$$, and has right endpoint $$$r$$$. Then the answers for row $$$i$$$ can be computed simply as follows.
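An illustrative reconstruction of that computation (not the commenter's exact code; `area` and `ans` are the arrays described above, initialized to LLONG_MAX where unknown):

```cpp
#include <bits/stdc++.h>
using namespace std;

// area[l][r]: smallest rectangle area overlapping row i with endpoints l, r;
// ans[c]: answer for cell (i, c), minimized in place.
void relaxRow(const vector<vector<long long>>& area, vector<long long>& ans, int m) {
    for (int l = 0; l < m; l++) {
        long long best = LLONG_MAX;
        for (int r = m - 1; r >= l; r--) {    // widest rectangles first
            best = min(best, area[l][r]);     // min area among (l, r') with r' >= r
            ans[r] = min(ans[r], best);       // all of those cover cell (i, r)
        }
    }
}
```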
Basically just fix a left endpoint, and iterate over rectangles from most wide to least wide. I think once you understand this it is easy to write a similar dp.
Thank you for simple explanation :)
My Div1C (Div2F) solution (might really be identical to the editorial's).
First, define $$$f(m) \; (m \ge 0)$$$ as the number of ways of partitioning a set of $$$m$$$ elements into sets of size 1 and 2. This is OEIS A000085, and can be calculated as follows: $$$f(0)=f(1)=1$$$, and $$$f(m) = f(m-1) + (m-1) \cdot f(m-2)$$$ for $$$m \ge 2$$$.
Then back to the problem. If we find a valid combination of $$$a_1, a_2, \ldots, a_n$$$, then $$$a_0$$$ is automatically determined, so let's ignore $$$a_0$$$ and pretend that $$$a$$$ is an $$$n$$$ element array $$$[a_1, a_2, \ldots, a_n]$$$. Make sure we exclude the impossible cases and update $$$a$$$ so that, for all $$$i$$$ satisfying $$$a_i \ge 1$$$, $$$a_i = j$$$ and $$$a_j = i$$$ (or $$$a_i = i$$$) holds.
Suppose that there are $$$c$$$ elements equal to $$$-1$$$ left in $$$a$$$. Let's ignore the requirement $$$a_n \ne 0$$$ for now and allow it to become $$$0$$$. Assume we allocate $$$0$$$ to $$$c-k$$$ elements and non-$$$0$$$ values to $$$k$$$ elements out of those $$$-1$$$s. There are $$$\displaystyle \binom{c}{k}$$$ ways to pick those $$$k$$$ elements. Furthermore, since we need to ensure $$$a_i = j$$$ and $$$a_j = i$$$, or $$$a_i = i$$$, for those $$$k$$$ elements, there are $$$f(k)$$$ ways to distribute non-$$$0$$$ values. Therefore, define $$$g(c) \; (c \ge 0)$$$ as follows: $$$g(c) = \sum_{k=0}^{c} \binom{c}{k} f(k)$$$.
If $$$a_n \ne -1$$$, there's no possibility that $$$a_n$$$ can be $$$0$$$, so $$$g(c)$$$ is the answer. If $$$a_n = -1$$$, we need to take account of the possibility of $$$a_n$$$ being $$$0$$$. Fortunately, it's not too hard to do that: there are exactly $$$g(c - 1)$$$ ways of possible arrangements such that $$$a_n = 0$$$, so the answer is $$$g(c) - g(c - 1)$$$.
To get rid of binomial coefficients you can change the formula of $$$f$$$ to this:
$$$f(m) = 2f(m - 1) + (m - 1) \cdot f(m - 2)$$$
In a combinatorial sense, this $$$2$$$ means that we either pair $$$a_i$$$ with itself (set $$$a_i = i$$$) or pair it with 0 (set $$$a_i = 0$$$). The answer is $$$f(c) - f(c - 1)$$$.
Nice, _Kee. How did you figure out what to search for on OEIS?
After I found out the requirement was $$$a_i = j, a_j = i$$$ or $$$a_i = i$$$ if $$$a_i \gt 0$$$, I tried to manually count the combinations of positive values of length $$$k$$$, for small $$$k$$$.
Then I went to OEIS and looked up $$$[1, 2, 4, 10, 26]$$$ and found this sequence. It indeed looked like the right one according to the EXAMPLES section, so I used the recursive formula on that page.
I don't know what to think about the F problem editorial. I submitted a solution after the contest (I was close during the contest but lacked time) and I have a different equation: dp[i] = 2*dp[i-1] + (i-2)*dp[i-2]. It's not easy to say whether the editorial is wrong, since "After that, the problem reduces to the easy version." makes it hard to tell whether they reduce it in a different way or the equation is wrong. I have (i-2) instead of (i-1) since we don't want to "swap" the i-th element with 0.

In problem D1, part 2.2, how do you prove that cost(P) <= x? I am really bad at equations with floor and ceil functions...
Just upsolved div1B, as my solution failed on the added systest. And now I am confused.
I did this (343430329) following the editorial, where I have ~516MB (twice as much won't fit right?). I have the input and $$$n \times n \times m$$$ memory (wlog $$$n \le m$$$).
When my submission failed after the retest (343345702), I thought that was fair enough, as I hadn't really proved my solution's memory consumption (I do rectangle search and then delayed range-minimum updates with a priority-queue trick for a better constant factor). But now I look at it and see that instead of keeping an $$$n\times n \times m$$$ dp I store the push-min range updates that I process afterwards. The number of updates is the sum of the lengths of the long-side borders of the rectangles from $$$\mathbb{S}$$$, so it is like $$$2 \times n \times n \times m$$$. I have up to a 4x factor on top of that, as I am not careful when borders are shared, but that is rather easy to handle (if you got MLE, to fix lol).
So it seems even if I do patches my solution doesn't fit. But it is also clear I have only 2 times more memory (well, there is std::vector overhead but I can manage). And the solution seems to me, well, careful enough, not abusing sets or segment trees.
So I don't understand two things, regardless of the missing maxtest:
upd: actually, when I say 1040Mb, I also assume I store 4-byte integers — but 3-byte integers can break this gap as well.
Initially I had no plans of letting $$$\mathcal{O}(nm \min(n,m))$$$ memory pass, but Proof_by_QED told me that probably I should allow a wider range of solutions. I was totally fine with that, and so I cranked up the ML to 1024MB which was the maximum ML possible. Looking back, I feel that this was a mistake, and I think I certainly should have tested the relatively worse solutions more thoroughly and I didn't. I was in a hurry for everything; both inside and outside this round.
I think you can regard that line as "it might pass if you're lucky, or you might have to shave a few constants off". Other than that, the model solution uses $$$\mathcal{O}(\min(n,m)(n+m))$$$ memory and should be regarded as the "safe" solution.
With every sqrt problem you should attempt to optimize down to linear memory. $$$n \sqrt n$$$ memory is always bad, so use it only if there is no other way.
Yeah thats true, but pretests passed so I didnt think that was an issue, cause ML was already high so didnt think of calculating the memory required by hand.
When I looked at the problem, it felt like $$$O(nm\sqrt{nm})$$$ memory was too much, so I was surprised to see that the editorial uses that amount of memory. I used a nice trick to solve this "offline setmin queries" problem with little memory and time, and I don't see it mentioned here, so let me share it.
There is a standard problem, "static RMQ": we have a fixed array and a bunch of range-minimum queries. Everybody knows we can use a sparse table to solve this problem in $$$O(N \log N + Q)$$$ time. Our problem feels very similar; we have just kind of swapped queries and updates. This idea of swapping queries and updates is standard for segment trees, but not so much for sparse tables. Yet we can actually use a sparse table to perform $$$Q$$$ setmin updates and restore the final array at the end in $$$O(Q + N \log N)$$$ time and $$$O(N \log N)$$$ memory. We initially set all values of our sparse table to $$$\infty$$$; then, whenever we have an update, we split it into two updates of power-of-two length and update the corresponding cells of the sparse table. At the end, we propagate the values DOWN (rather than up, as in the normal sparse table); after that, the lowest level of our sparse table stores the values of the final array.
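A sketch of the structure just described (the interface names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct SetminSparse {
    int n, LOG;
    vector<vector<long long>> t;   // t[j][i] covers the block [i, i + 2^j)
    SetminSparse(int n) : n(n) {
        LOG = 1;
        while ((1 << LOG) < n) LOG++;
        t.assign(LOG + 1, vector<long long>(n, LLONG_MAX));
    }
    void setmin(int l, int r, long long v) {   // apply min with v on [l, r)
        int j = __lg(max(r - l, 1));           // two overlapping power-of-two blocks
        t[j][l] = min(t[j][l], v);
        t[j][r - (1 << j)] = min(t[j][r - (1 << j)], v);
    }
    vector<long long> finish() {               // propagate DOWN, O(N log N)
        for (int j = LOG; j >= 1; j--)
            for (int i = 0; i + (1 << j) <= n; i++) {
                t[j - 1][i] = min(t[j - 1][i], t[j][i]);
                t[j - 1][i + (1 << (j - 1))] = min(t[j - 1][i + (1 << (j - 1))], t[j][i]);
            }
        return t[0];                           // final per-index values
    }
};
// usage: SetminSparse st(n); st.setmin(l, r + 1, area); auto res = st.finish();
```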
This is smart, will definitely remember this! thanks
In div1C/div2F, Can anyone explain the given DP relation?
I think you don't need to apologize because of div1B. In past contests it was normal that you could pass pretests but fail systems. It reminds me of the old Codeforces of 8 years ago :)
The solution for Div1D1 felt almost the same as https://qoj.ac/contest/1738/problem/9129 after coming up with the idea that only suffix minimums are important.
But the problem seems way different
can anyone explain the DP part of Div2E/Div1B ? It seems quite a few people didn't quite get it (myself included).
UPD: Oh, I think I've figured it out. We just need to define dp[i][j] as the minimum possible answer for the row range [i, j], then set dp[i][j] = min(dp[i][j], dp[i-1][j], dp[i][j+1]), and initialize the DP array using the identified rectangles.
When solving Div1C, I mistakenly interpreted "only if $$$a_i$$$ is a non-negative integer for all $$$0 \le i \le n$$$" as "$$$a_i$$$ lies in the range $$$[0, n]$$$". Unfortunately, the sample inputs didn't clarify this ambiguity (perhaps because very few people make such a mistake). As a result, I encountered enormous difficulties while working on this problem. I didn't realize my misinterpretation until the final few minutes, but by then it was already too late. lol
I would like to ask: how should we solve Div1C if we add the constraint that $$$a_i$$$ is in the range $$$[0, n]$$$? The best approach I can come up with has a time complexity of $$$O(n\sqrt{n})$$$; is there anyone who can find a more optimal solution?
How would you do it in the complexity that you're saying? Even that's interesting.
I made some mistakes, I'm sorry. That's my problem.
My solution is only effective when $$$a$$$ is all $$$-1$$$, but perhaps the answer can be computed directly with an equation (I'm not very sure).
Specifically, it is equivalent to selecting $$$x$$$ numbers from $$$1 \ldots n$$$ so that their sum is $$$\le n$$$; clearly $$$x \le \sqrt n$$$, and each dp pass takes $$$O(n)$$$, so it is $$$O(n \sqrt n)$$$.
I am still unable to understand what Div2B was asking and how to come up with the solution.
My solution for Rectangles uses $$$\mathcal{O}(nm)$$$ memory: 343421540
Problem 2160C, submission 343463506: can someone please tell me where my code is wrong? My idea was to consider the highest set bit of x and loop it from the highest set bit of n up to 29 (max int limit), checking whether n forms a palindrome in each case. The logic for starting from the highest set bit of n was that either x or f(x) will have a highest set bit at least that of n, and I can switch between my x and f(x).
n < 2^30, but x can be greater than 2^30. Take the simple case n = 2^15 + 2^16 (i.e., bits 15 and 16 set to 1): your code gives "NO" for this, which is wrong. You should store all bits in a vector, let z = the number of trailing zeroes in the binary representation of n, insert z zeroes at the beginning of the vector, and then check the palindrome. Here the size of the vector can be > 30, but you are only considering it to be <= 30.
thnx bro
For problem E, was it intended that solutions with large constants should fail? Otherwise why are the time limits so tight?
Problem D is IOI 2022 "Insects", easy version.
In 2160B, please note the correction: $$$b_i - b_{i-1} = i - x$$$ (except when $$$x$$$ is undefined), since $$$f(y, i) - f(y, i - 1) = 0$$$ for all $$$y \le x$$$ and $$$f(y, i) - f(y, i - 1) = 1$$$ for all $$$y \gt x$$$. Thanks.
Hi, I didn't understand div2E (=div1B) solution. Can someone explain it in more details? Or suggest a different solution?
It's just DP over columns with states as pairs of row indices. Say $$$n \le m$$$; then dp[c][i][j] represents the minimum area possible for rows $$$i$$$ to $$$j$$$ in column $$$c$$$. How do we calculate this dp? Initially set everything to INT_MAX. Then, for each rectangle $$$(i,j,c_1,c_2)$$$ found in $$$\mathbb{S}$$$, set dp[c][i][j] to its area for all $$$c_1 \le c \le c_2$$$. When done with this, calculate dp[c][i][j] in every column by looping over decreasing subarray lengths: dp[c][i][j] = min(dp[c][i][j], dp[c][i-1][j], dp[c][i][j+1]). This gives you dp[c][i][i], which is the answer for grid point $$$(i,c)$$$.
Propagation technique in ecnerwala's solution 343437238 to B is just mind blowing. Do we have similar problems that use the same technique?
ecnerwala I'm also intrigued...
This idea came from ksun. Pretty much, for each column, we have the best answer for each of $$$n^2$$$ intervals; then, combining them with DP is a pretty standard idea. Here, I just interleaved that dp with computing the value to save some memory.
chromate00, when will the codes arrive?
Certainly today!
In problem D2 editorial I think this
$$$\text{opt}'(c, r) = \min\limits_{1 \le k \le 3} \left( \min\limits_{\text{opt}'(c - k, r) \le i \le r} L(i, k) \right) \quad \text{(when } c \gt 0\text{)}$$$
should be
$$$\text{opt}'(c, r) = \min\limits_{1 \le k \le 3} \left( \min\limits_{\max(\text{opt}'(c - k, r) - 1, 0) \le i \le r} L(i, k) \right) \quad \text{(when } c \gt 0\text{)}$$$
$$$L(i,k)$$$ won't go negative by definition so neither will $$$\text{opt}'$$$
Yeah, in 1-indexing it won't go negative; my main point was that we need to subtract 1 from opt' in the searching range.
Oh, I see. Thanks!
Code is somehow not posted yet so here's my solution to 2E/1B: 343529349
Thanks for the code! It is much more readable than the one from editorial
Very elegant indeed. Thank you for sharing, I learnt so much from your code
I'm really disappointed because i thought for B for nealy 40 minutes and couldn't get the right solution. @#%@!%@!%!@!
Can someone please explain the solution of Div2E in detail? I am unable to comprehend the editorial.
How can I make my solution for E faster 343557726? It's basically what the editorial says...
I'm not 100% sure if this is the issue, but I think you just need to optimize how you iterate through the array. Generally you want to iterate through an array with the inner loop being the last "indexer", because otherwise it results in a large constant factor. For example:
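(An illustrative toy example, not code from the actual submission; `a` here is a hypothetical 2D vector.)

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 4000, m = 4000;
    vector<vector<int>> a(n, vector<int>(m, 0));
    // slow: strided access, roughly one cache miss per element
    for (int j = 0; j < m; j++)
        for (int i = 0; i < n; i++)
            a[i][j]++;
    // fast: contiguous access along the innermost index
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            a[i][j]++;
}
```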
What I personally did was: instead of directly reading from (or, in your case, writing to) the vector, I made another vector that stores the elements in the order they would be accessed in the loop, to make the accesses more cache-friendly.
Also pragmas might help (they cut my solution runtime by 400ms)
Can anyone help me with my solution to problem Rectangles (Div 1B)? I'm unable to optimize its memory usage any further. My submission: 343564145
You can modify the answer on the fly instead of storing it; here is a modified version of your code.
Thanks a lot
Could anyone solve Div2E/Div1B using DSU?
For the editorial of problem 2E/1B, I think it would be nice to mention that $$$dp[u][d]$$$ means the area of the smallest rectangle that covers the range $$$[u, d]$$$ of column $$$j$$$. The transition is easier to understand with this assertion.
A better solution for div1.F that uses $$$\le 40n+m$$$ queries:
Iterate the answer $$$d$$$ from small to large, and try to get all $$$f(i,T)=d$$$. Repeat this process until we have $$$m$$$ answers.
For every $$$i$$$, we have a range $$$[l,r]$$$ meaning that it's possible to have some $$$l\le T\le r$$$, and $$$f(i,T)=d$$$. We then ask for $$$f(i,mid)$$$. If $$$f(i,mid)\le d$$$, then we proceed to solving $$$[l,mid-1]$$$ and $$$[mid+1,r]$$$.
On the other hand, suppose $$$f(i,mid)=v$$$ and $$$v$$$ appears at time $$$t$$$ (this time is fixed at the beginning since both ways of moving will always make the $$$x+y$$$ value increase by $$$1$$$). Then we can say for sure that for all $$$T\in [t,t+i-1]$$$, $$$f(i,T)\ge v \gt d$$$. Which means that we can skip this range, and go on to solve $$$[l,t-1]$$$ and $$$[t+i,r]$$$.
We keep a table of all known $$$f(i,T)$$$ and don't have to ask for repeating ones. If a pair $$$f(i,T)$$$ is asked and it is included in the answer, it contributes to the $$$m$$$ queries. Otherwise, for any $$$f(i,T)$$$ asked and not included in the answer, it will eliminate at least $$$\min(\frac{r-l+1}2,\frac i2)$$$ values for which we will never ask.
This way, the total number of queries is $$$m+\sum_i 2\times\frac{2n}i \approx m+4n\ln n$$$, of which $$$40n+m$$$ is definitely a safe bound, considering there is some waste. And by testing, this solution passes all tests within $$$35n+m$$$ queries, which is much better than the official solution.
There is a small issue when implementing. Doing this directly has a time complexity of $$$O(n^4)$$$. However, notice that except $$$d=1$$$, we only have to consider pairs $$$(d,i)$$$ where $$$d$$$ appears as some answer received when asking about $$$i$$$ previously. This leaves us with only $$$O(n^2)$$$ pairs to solve, and the complexity is then $$$O(n^3)$$$.
I just realised Problem D1E was a reference to Super-Slow-Internet-san
The editorial for Twin Polynomials has a typo.
The claim "Since $$$a_0 = −1$$$, we can only consider $$$i \gt 0$$$ and get $$$a_i = \sum_{a_j = 0} j$$$ automatically."
should be corrected to "... and get $$$a_0 = \sum_{a_j = 0} j$$$ automatically.".
I believe the author's intent was to say that the coefficients $$$a_1, a_2, ..., a_n$$$ uniquely determine $$$a_0$$$, so it should really say $$$a_0$$$ instead of $$$a_i$$$. It is a small error, but I wanted to point it out because I got stuck in that part for some time.
wuhudsm can you confirm if my understanding is correct about the typo? Or did I completely misunderstand the editorial?
this indeed seems like a typo chromate00
I asked the author for this, but as it seems like a typo anyways, I fixed that myself. I hope I didn't typo that again...
The fact in d1E that we can find the $$$n$$$-th power of a polynomial of degree $$$k$$$ in $$$O(nk^2)$$$ feels like magic to me. How can we just differentiate in two different ways and equate what should be identical polynomials? And is this idea well-known? I just can't imagine that I'd ever come up with something like that; it's a long leap of faith.
Though it is not the most intuitive idea, I know for sure that it was used in multiple problems historically.
I think it would be better if the code solutions were written mostly in C++, or if you considered both Python and C++, because the majority of the participants write C++ code in the competition.
My submission for question D
Can anyone tell why it is giving TLE? According to me, the time complexity of my code is $$$O(n^2)$$$.
Please help.
I couldn’t understand the editorial for 2160B - Distinct Elements. Could someone please explain it to me?
Maybe I'm not right, but I think there is a typo in the editorial (problem C div 2). It says that if z is odd and the middle bit is 1, then we cannot find such x. Counterexample: n = 6 = 110, x = 1101, f(x) = 1011. We can see that z = 3 is odd and the middle digit is 1, but x exists. So I think this condition should apply to y. If I missed something, please prove me wrong.
"0110 is a palindrome."

Yes, but per the editorial we think of n as a z-bit number, and likewise x as a y-bit number. So for n = 6, z = 3 -> n is represented as 110; after we extend it to 0110, it's a palindrome and y is even, so the answer is YES. Another example is 8 = 1000, z = 4 (even), but when we want to find x we extend it to 0001000; here y is odd (not z) and the middle element is 1, so the answer is NO. So I just want to understand why you consider odd z and not y.
in odd $$$z$$$ the middle index will be always $$$0 = 0 \oplus 0 = 1 \oplus 1$$$
When you take a number and XOR it with its bitwise reverse, the result will always have an even number of 1s (set bits) — never odd.
I think it is true; using this, we can solve C with just an if/else.
The div2F/div1C tutorial is kinda incorrect? You can't validate using only DSU without a lot of extra work; you should just fill in the blanks directly according to the size-2 SCC rule and check that the final sequence has only -1s, 0s, and SCCs of size 2.
In Div1C, setting $$$a_i \le 10^9$$$ instead of $$$a_i \le n$$$ is evil.
Well people mistakenly thought $$$a_0 \le n$$$ when we had the latter
For part 2.3 of div2G1/div1D1: when you discard all the lines $$$j + 1, \ldots, i$$$ and insert the line with the minimum $$$opt(1,j)$$$, we know it will be the optimal line for some suffix (because if we remove all the lines that will never be part of the convex hull, $$$\min(a_j, \ldots, a_i)$$$ will be non-decreasing and thus $$$dp_j$$$ will have to be non-increasing), so we can binary search for the left endpoint of the suffix on which the replacement line is optimal.
Here's my own solution to 2159B - Rectangles, based on AksLolCoding's implementation.
If you are struggling reading my solution, read it along side with the implementation would help you visualizing the general idea.
For all pairs $$$i, j$$$ of rows, with $$$i=1,2,\ldots,n$$$ and $$$j=n,n-1,\ldots,i+2,i+1$$$, we enumerate the columns $$$k=1,2,\ldots,m$$$:
Let's define $$$mn_k$$$ as the minimum area of a valid rectangle covering column $$$k$$$ seen while iterating with the current row $$$i$$$, and $$$pos$$$ as the set of all columns $$$k$$$ such that $$$G_{i,k}=G_{j,k}=1$$$.
When enumerating column $$$k$$$, we insert $$$k$$$ into $$$pos$$$ if $$$G_{i,k}=G_{j,k}=1$$$.
After iterating over all columns, we have $$$pos=\{k_1,k_2,\ldots,k_u\}$$$. Then, for each pair $$$(k_t,k_{t+1})$$$ with $$$1 \le t \lt u$$$, we update $$$mn_v=\min(mn_v,wh)$$$ for each $$$v=k_t,k_t+1,\ldots,k_{t+1}$$$, where $$$w$$$ is the width between columns $$$k_t$$$ and $$$k_{t+1}$$$ and $$$h$$$ is the height between rows $$$i$$$ and $$$j$$$.
Then we just need to update the answer for the current row $$$j$$$ before switching to the next row $$$j$$$: $$$ans_{j,k}=\min(ans_{j,k},mn_k)$$$, for $$$k=1,2,\ldots,m$$$.
Do the same before switching to the next row $$$i$$$: $$$ans_{i,k}=\min(ans_{i,k},mn_k)$$$, for $$$k=1,2,\ldots,m$$$.
By enumerating all rows and columns this way, we always find the minimum area of a valid rectangle covering each cell.
The complexity here is $$$O(mn^2)$$$; if $$$n$$$ is large, $$$n^2$$$ can be very large and exceed the time limit, so we swap $$$m$$$ and $$$n$$$ if $$$n \gt m$$$, ensuring the squared factor falls on $$$\min(m,n)$$$.
The overall time complexity is $$$O(mn\min(m,n)) \le O(mn\sqrt{mn})$$$, and the memory complexity is $$$O(\min(m,n)(m+n))$$$.
343529349
351156832
If you find this helpful, please give me an upvote. Thank you a lot!
The solution for Problem B (Distinct elements) is wrong here:
$$$b_i - b_{i-1} = i - (i - x) = x$$$ // wrong here
It should be: $$$b_i - b_{i-1} = i - x$$$; then $$$x = i - (b_i - b_{i-1})$$$.
I think there's a mistake in the solution of B: instead of $$$b_i - b_{i-1} = i - (i - x) = x$$$ it should be $$$b_i - b_{i-1} = i - x$$$.
How to improve on solving problems like C and D?