1654A - Maximum Cake Tastiness
Author: TheScrasse
Preparation: TheScrasse
Suppose you want to choose pieces of cake $$$i$$$, $$$j$$$. Can you make them adjacent in $$$1$$$ move?
The answer is the sum of the $$$2$$$ maximum weights.
You can always pick the $$$2$$$ maximum weights: if they are $$$a_i$$$ and $$$a_j$$$ ($$$i < j$$$), you can flip the subsegment $$$[i, j-1]$$$ to make them adjacent.
The result can't be larger, because the sum of the weights of any $$$2$$$ pieces of cake is never greater than the sum of the $$$2$$$ maximum weights.
Iterating over all pairs of pieces of cake is enough to get AC, but you can solve the problem in $$$O(n \log n)$$$ by sorting the weights and printing the sum of the last $$$2$$$ values, or even in $$$O(n)$$$ if you calculate the maximum and the second maximum in linear time.
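For reference, a minimal sketch of the linear approach (the input format, $$$t$$$ test cases each consisting of $$$n$$$ followed by the $$$n$$$ weights, is assumed from the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        long long max1 = 0, max2 = 0;      // largest and second largest weight seen so far
        for (int i = 0; i < n; i++) {
            long long a;
            cin >> a;
            if (a > max1) { max2 = max1; max1 = a; }
            else if (a > max2) max2 = a;
        }
        cout << max1 + max2 << "\n";       // sum of the two maximum weights
    }
}
```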
Complexity: $$$O(t \cdot n^2)$$$, $$$O(t \cdot n \log n)$$$ or $$$O(t \cdot n)$$$
Official solution: 150288088
Author: emorgan
Preparation: TheScrasse
Are there any characters that you can never remove?
You can't remove the rightmost occurrence of each letter. Can you remove the other characters?
If you can remove $$$s_1, s_2, \dots, s_{i-1}$$$ and $$$s_i$$$ is not the rightmost occurrence of a letter, you can also remove $$$s_i$$$.
Let $$$a$$$ be the initial string. For a string $$$z$$$, let's define $$$z(l, r) = z_lz_{l+1} \dots z_r$$$, i.e., the substring of $$$z$$$ from $$$l$$$ to $$$r$$$. The final string is $$$a(k, n)$$$ for some $$$k$$$.
In the final string, $$$x = 0$$$, so the first character doesn't appear anywhere else in $$$s$$$. This means that $$$a_k\not=a_{k+1}, a_{k+2}, \dots, a_n$$$. In other words, $$$a_k$$$ is the rightmost occurrence of a letter in $$$s$$$.
Can you ever remove $$$a_i$$$, if $$$a_i\not=a_{i+1}, a_{i+2}, \dots, a_n$$$? Notice that you would need to remove $$$a(l, r)$$$ ($$$l \leq i \leq r$$$): this means that there must exist $$$a(l', r') = a(l, r)$$$ for some $$$l' > l$$$. So, $$$a_{i+l'-l} = a_i$$$, and this is a contradiction.
Therefore, $$$k$$$ is the smallest index such that $$$a_k\not=a_{k+1}, a_{k+2}, \dots, a_n$$$.
You can find $$$k$$$ by iterating over the string from right to left and updating the frequency of each letter. Indeed $$$a_i\not=a_{i+1}, a_{i+2}, \dots, a_n$$$ if and only if the frequency of the letter $$$a_i$$$ is $$$0$$$ up to now (in the iteration from right to left we are performing). The value of $$$k$$$ is the minimum such index $$$i$$$.
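A minimal sketch of this right-to-left scan (the input format, $$$t$$$ test cases each consisting of one lowercase string, is assumed from the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        string a;
        cin >> a;
        int n = a.size();
        vector<int> freq(26, 0);              // frequencies of the letters to the right of position i
        int k = n - 1;
        for (int i = n - 1; i >= 0; i--) {
            if (freq[a[i] - 'a'] == 0) k = i; // a[i] is the rightmost occurrence of its letter
            freq[a[i] - 'a']++;
        }
        cout << a.substr(k) << "\n";          // the final string a(k, n)
    }
}
```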
Complexity: $$$O(n)$$$
Official solution: 150288210
Author: emorgan
Preparation: TheScrasse
Can you find the initial weight which results in $$$a$$$?
The initial weight is the sum of the final weights. Let's simulate the division, starting from a new cake. How to choose fast which splits to do?
If the largest piece is not in $$$a$$$, you have to split it. How to find the largest piece efficiently?
Keep $$$a$$$ and the new cake in two multisets or priority queues.
First, let's find the initial weight. When a piece of cake is split, the sum of weights is $$$\lfloor\frac{w}{2}\rfloor + \lceil\frac{w}{2}\rceil$$$:
- if $$$w$$$ is even, $$$\lfloor\frac{w}{2}\rfloor + \lceil\frac{w}{2}\rceil = \frac{w}{2} + \frac{w}{2} = w$$$;
- if $$$w$$$ is odd, $$$\lfloor\frac{w}{2}\rfloor + \lceil\frac{w}{2}\rceil = \frac{w-1}{2} + \frac{w+1}{2} = w$$$.
Therefore, the sum of weights is constant, and the initial weight is the sum of the final weights.
Now let's start from a cake $$$b$$$ of weight $$$b_1 = \sum_{i=1}^n a_i$$$, split it (into pieces of weight $$$b_i$$$) and try to make it equal to $$$a$$$. At any moment, it's convenient to consider the largest $$$b_i$$$, because you can determine if you can split it or not.
More specifically,
- if $$$b_i$$$ is not in $$$a$$$, you have to split it;
- if $$$b_i = a_j$$$ for some $$$j$$$, you can only match $$$a_j$$$ with $$$b_i$$$ or with a $$$b_k$$$ such that $$$b_k = a_j = b_i$$$ (because there doesn't exist any larger $$$b_k$$$): that's equivalent to removing $$$a_j$$$ and $$$b_i$$$ from $$$a$$$, $$$b$$$, respectively.
Notice that, if at any moment the maximum element of $$$b$$$ is smaller than the maximum element of $$$a$$$, the answer is NO.
If you keep $$$a$$$ and $$$b$$$ in any data structure that supports inserting an integer, finding the maximum and removing the maximum (e.g., a multiset or a priority queue), the following algorithm works.
While either $$$a$$$ or $$$b$$$ is not empty,
- if the maximum element of $$$b$$$ is smaller than the maximum element of $$$a$$$, print NO and break;
- if the maximum element of $$$b$$$ is equal to the maximum element of $$$a$$$, remove it from both $$$a$$$ and $$$b$$$;
- if the maximum element $$$m$$$ of $$$b$$$ is larger than the maximum element of $$$a$$$, remove it from $$$b$$$ and split that piece of cake (i.e., insert $$$\lfloor\frac{m}{2}\rfloor$$$ and $$$\lceil\frac{m}{2}\rceil$$$ into $$$b$$$).
If $$$a$$$ and $$$b$$$ are both empty at the end, the answer is YES.
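A minimal sketch of this loop, keeping $$$a$$$ in a multiset and $$$b$$$ in a priority queue (the input format is assumed from the statement; this mirrors the algorithm above, not necessarily the official implementation):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        multiset<long long> a;
        long long sum = 0;
        for (int i = 0; i < n; i++) {
            long long x;
            cin >> x;
            a.insert(x);
            sum += x;
        }
        priority_queue<long long> b;
        b.push(sum);                       // the whole cake: its weight is the sum of the a_i
        bool ok = true;
        // the sums of a and b always stay equal, so they become empty at the same time
        while (!a.empty()) {
            long long ma = *a.rbegin();    // maximum of a
            long long mb = b.top();        // maximum of b
            if (mb < ma) { ok = false; break; }
            if (mb == ma) {                // match the two maxima and remove both
                a.erase(prev(a.end()));
                b.pop();
            } else {                       // mb > ma: split the largest piece of b
                b.pop();
                b.push(mb / 2);
                b.push(mb - mb / 2);
            }
        }
        cout << (ok ? "YES" : "NO") << "\n";
    }
}
```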
Complexity: $$$O(n \log n)$$$
Official solution: 150288232
Author: emorgan
Preparation: TheScrasse
The $$$n-1$$$ facts make a tree.
Assume that the amount of ingredient $$$1$$$ is $$$1$$$. For each $$$i$$$, the amount of ingredient $$$i$$$ would be $$$c_i/d_i$$$ for some integers $$$c_i, d_i > 0$$$. How to make an integer amount of each ingredient?
The optimal amount of ingredient $$$1$$$ is $$$\text{lcm}(d_1, d_2, \dots, d_n)$$$.
Can you find the exponent of each prime $$$p \leq n$$$ in $$$\text{lcm}(d_1, d_2, \dots, d_n)$$$?
If you visit the nodes in DFS order, each edge changes the exponent of $$$O(\log n)$$$ primes. At each moment, keep the factorization of $$$c_i/d_i$$$ of the current node, i.e., the exponents of each $$$p \leq n$$$ (they can be negative). For each $$$p$$$, also keep the minimum exponent so far.
Read the hints. The rest is just implementation.
Start a DFS from node $$$1$$$. Keep an array $$$f$$$ such that $$$f_p$$$ is the exponent of $$$p$$$ in the amount of ingredients of the current node. Keep also $$$v_i = c_i/d_i \,\, \text{mod} \,\, 998\,244\,353$$$. At the beginning, the amount of ingredients (of node $$$1$$$) is $$$1$$$, so $$$f_p = 0$$$ for each $$$p$$$. Whenever you move from node $$$i$$$ to node $$$j$$$, and $$$r_i/r_j = x/y$$$,
- for each $$$p^k$$$ such that $$$p^k \mid x$$$ and $$$p^{k+1} \nmid x$$$, decrease $$$f_p$$$ by $$$k$$$;
- for each $$$p^k$$$ such that $$$p^k \mid y$$$ and $$$p^{k+1} \nmid y$$$, increase $$$f_p$$$ by $$$k$$$.
Notice that there exist $$$O(\log n)$$$ such values of $$$p^k$$$ for each edge, and you can find them by precalculating either the smallest prime factor (with the sieve of Eratosthenes) or the whole factorization of every integer in $$$[2, n]$$$.
Let $$$g_p$$$ be the minimum value of $$$f_p$$$ during the DFS. Then, for every $$$p$$$, you have to multiply the amount of ingredients of node $$$1$$$ by $$$p^{-g_p}$$$.
The answer is the sum of $$$v_i$$$, multiplied by the amount of ingredients of node $$$1$$$.
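A possible sketch of this DFS (the input format, the bound MAXV on $$$n$$$ and on the ratio values, and a recursion depth of up to $$$n$$$ are assumptions; the official solution linked below remains the reference):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;
const int MAXV = 200000;                          // assumed bound on n and on the ratio values x, y

ll pw(ll b, ll e) {                               // modular exponentiation
    ll r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int spf[MAXV + 1];                                // smallest prime factor, from a sieve

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    for (int i = 2; i <= MAXV; i++)
        if (!spf[i])
            for (int j = i; j <= MAXV; j += i)
                if (!spf[j]) spf[j] = i;
    int t;
    cin >> t;
    vector<int> f(MAXV + 1, 0), g(MAXV + 1, 0);   // current exponents and their minimums
    while (t--) {
        int n;
        cin >> n;
        vector<vector<array<int, 3>>> adj(n + 1); // (j, x, y) with amount_i / amount_j = x / y
        for (int e = 0; e + 1 < n; e++) {
            int i, j, x, y;
            cin >> i >> j >> x >> y;
            adj[i].push_back({j, x, y});
            adj[j].push_back({i, y, x});
        }
        vector<ll> v(n + 1, 0);
        vector<int> touched;                      // primes whose exponent ever changed
        auto apply = [&](int val, int sign) {     // add sign*k to f[p] for every p^k dividing val exactly
            while (val > 1) {
                int p = spf[val], k = 0;
                while (val % p == 0) val /= p, k++;
                f[p] += sign * k;
                g[p] = min(g[p], f[p]);
                touched.push_back(p);
            }
        };
        v[1] = 1;
        function<void(int, int)> dfs = [&](int i, int parent) {
            for (auto [j, x, y] : adj[i]) {
                if (j == parent) continue;
                apply(x, -1);                     // amount_j = amount_i * y / x
                apply(y, +1);
                v[j] = v[i] * y % MOD * pw(x, MOD - 2) % MOD;
                dfs(j, i);
                apply(x, +1);                     // undo the changes on the way back
                apply(y, -1);
            }
        };
        dfs(1, 0);                                // recursion depth can reach n
        ll mult = 1, sum = 0;
        for (int p : touched) {                   // amount of node 1 = product over p of p^(-g_p)
            mult = mult * pw(p, -g[p]) % MOD;
            g[p] = 0;                             // reset; duplicates then contribute p^0 = 1
        }
        for (int i = 1; i <= n; i++) sum = (sum + v[i]) % MOD;
        cout << sum * mult % MOD << "\n";
    }
}
```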
Complexity: $$$O(n \log n)$$$
Official solution: 150288255
Author: emorgan
Preparation: TheScrasse
Consider each element of the array as being a point in the plane $$$(i, a_i)$$$. Then all of the elements that don't get affected by an operation lie on a single line in the plane. Find a line that maximizes the number of points on it.
Let $$$m$$$ be the upper bound on $$$a_i$$$. The intended complexity is $$$O(n \sqrt m)$$$.
Let $$$m$$$ be the upper bound on $$$a_i$$$. Let $$$d$$$ be the difference between adjacent elements in the final array. Create a piecewise algorithm for the cases where $$$|d| < \sqrt m$$$ and $$$|d| \geq \sqrt m$$$.
As explained in the hints, instead of computing the minimum number of operations, we will compute the largest number of elements that don't have an operation on them, and we will create a piecewise algorithm with final complexity $$$O(n\sqrt m)$$$, where $$$m$$$ is the upper bound on $$$a_i$$$.
Let $$$d$$$ be the common difference between elements in our final sequence. First of all, I will assume that $$$d \geq 0$$$, since solving the problem for negative $$$d$$$ is as simple as reversing the array and running the solution again. If $$$d$$$ is fixed beforehand, we can solve the problem in $$$O(n)$$$ by putting element $$$i$$$ into bucket $$$a_i-d\cdot i$$$ and returning $$$n$$$ minus the size of the biggest bucket.
For $$$d < \sqrt m$$$, we can use the above algorithm to handle all of these $$$d$$$ in $$$O(n \sqrt m)$$$ time. We can keep a hashmap from bucket index $$$\to$$$ number of elements in the bucket, or we can just keep an array since the bucket indices have a range of at most $$$O(n \sqrt m)$$$ which is not enough to exceed the memory limit.
For $$$d \geq \sqrt m$$$, we observe that if we have two indices $$$i, j$$$ such that $$$j > i+\sqrt m$$$, then at least one of them definitely has to have an operation performed on it, because the difference between them would have to be $$$a_j-a_i \geq \sqrt m \cdot d > m$$$ which is not possible. In other words, if we consider the set of elements which are not edited, that set will have gaps between pairs of elements of size at most $$$\sqrt m$$$.
So, we can build a graph between indices with an edge $$$i \to j$$$ with label $$$x$$$ if $$$i < j \leq i+\sqrt m$$$ and $$$\frac{a_j-a_i}{j-i} = x$$$. This graph has at most $$$n\sqrt m$$$ edges. Then we just have to find the longest path in the graph where all edges have the same label. You can do this with dynamic programming -- let $$$dp_{i,d}$$$ be the length of the longest path ending at index $$$i$$$, where all edges have label $$$d$$$. For each $$$i$$$, we only need to check edges to $$$j$$$ where $$$i - \sqrt m < j < i$$$. This means the time complexity is $$$O(n\sqrt m)$$$. To store the values $$$dp_{i,d}$$$ sparsely, we can use either a hash map or a rotating buffer (where we only store $$$dp_{i,d}$$$ for $$$i$$$ in a sliding window of width $$$\sqrt m$$$).
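As an illustration of the $$$d < \sqrt m$$$ half only, here is a sketch of the bucket counting (the function name and the use of an unordered_map are my own choices; handling negative $$$d$$$ by reversing the array and the $$$d \geq \sqrt m$$$ DP are omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For every 0 <= d < B (B ~ sqrt(m)), count the largest group of indices i with equal
// a[i] - d*i: those elements lie on a common line of slope d and can stay untouched.
int bestKeptSmallD(const vector<long long>& a, long long m) {
    int n = a.size();
    long long B = max(1LL, (long long)sqrtl((long double)m));
    int best = 0;
    for (long long d = 0; d < B; d++) {
        unordered_map<long long, int> bucket;          // key a[i] - d*i -> count
        for (int i = 0; i < n; i++)
            best = max(best, ++bucket[a[i] - d * i]);
    }
    return best;   // for this half, the number of edited elements is n - best
}
```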
Complexity: $$$O(n \sqrt m)$$$
Official solution: 150288285
1654F - Minimal String Xoration
Author: dario2994, emorgan
Preparation: dario2994, TheScrasse
Let $$$f(s, x)$$$ be the string $$$t$$$ such that $$$t_i = s_{i\oplus x}$$$. The intended solution not only finds the $$$x$$$ such that $$$f(s, x)$$$ is lexicographically minimized, but produces an array of all $$$0 \leq x < 2^n$$$ sorted according to the comparator $$$f(s, i) < f(s, j)$$$.
The solution is similar to the standard method to construct a suffix array.
Let $$$a_k$$$ be an array of the integers $$$0$$$ through $$$2^n-1$$$, sorted according to the lexicographic ordering of the first $$$2^k$$$ characters of $$$f(s, x)$$$. The answer to the problem is $$$f(s, a_{n,0})$$$. Given the array $$$a_{k-1}$$$, can you compute $$$a_k$$$?
Let $$$f(s, x)$$$ be the string $$$t$$$ such that $$$t_i = s_{i\oplus x}$$$.
The solution is inspired by the standard method to construct a suffix array. Let $$$a_k$$$ be an array containing the integers $$$0, 1, 2,\dots, 2^n-1$$$, sorted according to the lexicographic ordering of the prefixes of length $$$2^k$$$ of the strings $$$f(s, 0), f(s, 1), \dots, f(s, 2^n-1)$$$ (i.e., the prefix of length $$$2^k$$$ of $$$f(s, a_k[i])$$$ is lexicographically smaller than or equal to the prefix of length $$$2^k$$$ of $$$f(s, a_k[i+1])$$$). We can construct $$$a_k$$$ using $$$a_{k-1}$$$, and $$$a_0$$$ is easy to construct as a base case.
In order to construct $$$a_k$$$ from $$$a_{k-1}$$$, we will first construct an auxiliary array of integers $$$v$$$ using $$$a_{k-1}$$$, where $$$v_i < v_j$$$ iff $$$f(s, i)_{0..2^{k-1}} < f(s, j)_{0..2^{k-1}}$$$. Then, we will sort the array $$$a_k$$$ according to the comparison of tuples $$$(v_i, v_{i\oplus {2^{k-1}}}) < (v_j, v_{j\oplus {2^{k-1}}})$$$.
Once we have $$$a_n$$$, then we just print $$$f(s, a_{n,0})$$$. In total, the solution takes $$$O(2^n n^2)$$$ time, which can be optimized to $$$O(2^n n)$$$ using the fact that tuples of integers in the range $$$[0, m]$$$ can be sorted using radix sort in $$$O(m)$$$ time. This optimization was not required to get AC.
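A possible sketch of the doubling with a plain comparison sort, i.e. the $$$O(2^n n^2)$$$ version that is enough for AC (the input format, $$$n$$$ followed by a string $$$s$$$ of length $$$2^n$$$, is assumed from the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    string s;
    cin >> n >> s;                         // |s| = 2^n
    int N = 1 << n;
    vector<int> rk(N), ord(N), tmp(N);
    for (int x = 0; x < N; x++) { rk[x] = s[x]; ord[x] = x; }  // rank by the prefix of length 1, i.e. s[x]
    for (int h = 1; h <= n; h++) {
        int half = 1 << (h - 1);
        // the prefix of length 2^h of f(s,x) is the concatenation of the prefixes of length
        // 2^(h-1) of f(s,x) and of f(s, x xor 2^(h-1)), so the pair of old ranks is the sort key
        auto key = [&](int x) { return make_pair(rk[x], rk[x ^ half]); };
        sort(ord.begin(), ord.end(), [&](int x, int y) { return key(x) < key(y); });
        tmp[ord[0]] = 0;
        for (int i = 1; i < N; i++)
            tmp[ord[i]] = tmp[ord[i - 1]] + (key(ord[i]) != key(ord[i - 1]));
        rk = tmp;                          // new ranks by prefixes of length 2^h
    }
    int best = ord[0];                     // x minimizing f(s, x)
    string t(N, ' ');
    for (int i = 0; i < N; i++) t[i] = s[i ^ best];
    cout << t << "\n";
}
```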
Official solution: 150288326
Author: emorgan
Preparation: dario2994, emorgan, TheScrasse
Let's say that a vertex is "flippable" if it has at least one neighbor of the same height. The optimal strategy for a skier is to ski to a flippable vertex with lowest height, flip back and forth between that vertex and its neighbor until all energy is used, and then ski to a base lodge.
Assuming the skier starts at vertex $$$v$$$ and chooses vertex $$$u$$$ as the flippable vertex to flip back and forth at, this skier will "waste" a total of $$$h_u$$$ units of kinetic energy for a total path length of $$$2h_v-h_u$$$. Note that if no such $$$u$$$ exists, then the maximum amount of $$$h_v$$$ will be wasted as the skier goes straight to a base lodge. Let $$$w_v$$$ be this "wasted" amount. Instead of computing the maximum path length for the skier starting at vertex $$$v$$$, compute $$$w_v$$$. The answer for that vertex is $$$2h_v-w_v$$$.
Try to solve this seemingly unrelated problem: Let $$$S$$$ be the set of flippable vertices. What is the largest possible value of $$$\sum\limits_{v\in S}h_v$$$?
The largest possible value of $$$\sum\limits_{v\in S}h_v$$$ is $$$O(n)$$$. Therefore, there are at most $$$O(\sqrt n)$$$ distinct values of $$$w_v$$$ across all vertices. Solve for each value of $$$w_v$$$ separately.
Read the hints. The rest is just implementation.
For each set of flippable vertices of the same height, we can calculate the set of starting vertices which are able to reach at least one vertex in that flippable set. To do this, split the graph up into layers of equal height. Let $$$c_v$$$ be the minimum required energy to reach a vertex in the flippable set. $$$c_v$$$ can be computed via shortest paths, where edges in the same layer have weight $$$1$$$ and edges from layer $$$i$$$ to $$$i+1$$$ have weight $$$-1$$$. We can use BFS to relax the costs of vertices in a single layer, and then easily transition to the next layer. We do this for $$$O(\sqrt n)$$$ different starting heights, so the total complexity is $$$O(n\sqrt n)$$$.
Official solution: 150288345
Author: dario2994, TheScrasse
Preparation: dario2994, TheScrasse
If $$$p_1,\dots, p_n$$$ is good and $$$p_i=1$$$, what can you say about $$$p_1,p_2,\dots, p_i$$$ and $$$p_i,p_{i+1},\dots, p_n$$$?
Find a quadratic solution ignoring the constraints given by the string $$$s$$$.
Find an $$$O(n^2 + nm)$$$ solution taking care of the constraints given by the string $$$s$$$.
Optimize the $$$O(n^2+nm)$$$ solution using generating functions.
The solution to the ODE $$$y'(t) = a(t)y(t) + b(t)$$$ with $$$y(0)=1$$$ is given by $$$\exp(\smallint a)\Big(1+\int b\exp(-\smallint a)\Big)$$$
First of all, let us state the following lemma (which is sufficient to solve the problem in $$$O(n^2)$$$ if one ignores the constraints given by the string $$$s$$$). We omit the proof as it is rather easy compared to the difficulty of the problem as a whole.
Lemma: The following statements hold for a permutation $$$p_1,p_2,\dots, p_n$$$.
- $$$p$$$ is good if and only if $$$p[1:i]$$$ and $$$p[i:n]$$$ are good, where $$$p_i = 1$$$.
- If $$$p_1=1$$$, then $$$p$$$ is good if and only if $$$p[1:i]$$$ and $$$p[i:n]$$$ are good, where $$$p_i = 2$$$.
- If $$$p_1=1$$$ and $$$p_n=2$$$, then $$$p$$$ is good if and only if it is bitonic, i.e., $$$p_1<p_2<p_3<\cdots<p_i>p_{i+1}>\cdots>p_{n-1}>p_n$$$, where $$$p_i=n$$$.
Given $$$1\le l < r\le n$$$, we say that a permutation $$$q_1,q_2,\dots, q_{r-l+1}$$$ of $$$\{1,2,\dots, r-l+1\}$$$ is $$$[l,r]$$$-consistent if for any $$$l\le i \le \min(r, m-1)$$$:
- $$$q_{i-l+1} < q_{i-l+2}$$$ if $$$s_i = \texttt{<}$$$;
- $$$q_{i-l+1} > q_{i-l+2}$$$ if $$$s_i = \texttt{>}$$$.
Informally speaking, a permutation is $$$[l,r]$$$-consistent if it satisfies the constraints given by $$$s$$$ when it is written in the positions $$$[l, r]$$$.
For $$$1\le l<r\le n$$$, let $$$a_{\ast\ast}(l, r)$$$, $$$a_{1\ast}(l, r)$$$, $$$a_{\ast 1}(l, r)$$$, $$$a_{12}(l, r)$$$, $$$a_{21}(l, r)$$$ be the number of good permutations which are $$$[l,r]$$$-consistent and, respectively,
- No additional conditions;
- Start with $$$1$$$;
- End with $$$1$$$;
- Start with $$$1$$$ and end with $$$2$$$;
- Start with $$$2$$$ and end with $$$1$$$.
For $$$1\le i < n$$$ and $$$c\in\{\texttt{<}, \texttt{>}\}$$$, let $$$q(i, c) := [i>m\text{ or } s_i= c]$$$. Informally speaking, $$$q(i, \texttt{<})$$$ is $$$1$$$ if and only if $$$p_i<p_{i+1}$$$ is allowed, and $$$q(i, \texttt{>})$$$ is $$$1$$$ if and only if $$$p_i > p_{i+1}$$$ is allowed.
Thanks to the Lemma, one has the following relations:
- $$$a_{\ast\ast}(l, r) = \sum_{i=l}^r a_{\ast1}(l, i) a_{1\ast}(i, r) \binom{r-l}{i-l}$$$.
- $$$a_{1\ast}(l, l) = 1$$$. For $$$l < r$$$, $$$a_{1\ast}(l, r) = \sum_{i=l+1}^r a_{12}(l, i)a_{1\ast}(i, r)\binom{r-l-1}{i-l-1}$$$.
- Analogous formula for $$$a_{\ast 1}$$$.
- $$$a_{12}(l, l) = 0$$$ and $$$a_{12}(l, l+1)= q(l, \texttt{<})$$$ and $$$a_{12}(l, l+2)= q(l, \texttt{<})\cdot q(l+1, \texttt{>})$$$. For $$$l<l+1<r$$$, $$$a_{12}(l, r) = q(l, \texttt{<})a_{21}(l+1, r) + q(r-1, \texttt{>})a_{12}(l, r-1)$$$.
- Analogous formula for $$$a_{21}$$$.
The problem asks to compute $$$a_{\ast\ast}(1, n)$$$. Thanks to the formulas stated above, it is straightforward to implement an $$$O(n^2)$$$ solution. Now we will tackle the hard task of optimizing it to $$$O(nm + n\log(n))$$$.
In order to compute $$$a_{\ast\ast}(1, n)$$$, we will compute $$$a_{\ast1}(1, k)$$$ and $$$a_{1\ast}(k, n)$$$ for all $$$1\le k\le n$$$.
We have the recurrence relation (for $$$k\ge 2$$$)
Setting $$$x_{k-1} := a_{\ast1}(1, k) / (k-1)!$$$, (1) is equivalent to (for $$$k\ge 1$$$, and also for $$$k=0$$$!)
This looks very similar to an identity between generating functions (a derivative on the left, a product on the right), but for the fact that $$$a_{21}$$$ depends on two parameters. To overcome this issue let us proceed as follows.
Notice that, if we set $$$b$$$ to any of the functions $$$a_{\ast\ast}$$$, $$$a_{\ast1}$$$, $$$a_{1\ast}$$$, $$$a_{12}$$$, $$$a_{21}$$$, it holds $$$b(l, r) = b(l+1, r+1)$$$ whenever $$$l > m$$$. Hence, let us define $$$b_{\ast\ast}(k) = a_{\ast\ast}(n+1, n+k)$$$ and analogously $$$b_{1\ast}(k)$$$, $$$b_{\ast 1}(k)$$$, $$$b_{12}(k)$$$, $$$b_{21}(k)$$$.
With these new definitions, (2) becomes (for $$$k\ge 0$$$)
Let $$$u_i:= \frac{b_{21}(i+2)}{i!}$$$ and $$$v_{k-1}:= \sum_{i=0}^{\min(k-1, m-1)} x_i \frac{b_{21}(k+1-i) - a_{21}(i+1, k+1)}{(k-1-i)!}$$$. So, (3) simplifies to
We precompute in $$$O(nm)$$$ the values of $$$a_{12}(l, r)$$$ and $$$a_{21}(l, r)$$$ for $$$1\le l\le m$$$, $$$l < r\le n$$$. We can also precompute in $$$O(n)$$$ the values of $$$b_{12}(k), b_{21}(k)$$$ for $$$1\le k\le n$$$. In $$$O(m^2)$$$ we compute also $$$x_i$$$ for all $$$0\le i\le m-1$$$. Thus, in $$$O(nm)$$$ we can compute, for all $$$0\le k < n$$$, the values $$$u_k$$$.
It is now time to start working with generating functions. Let $$$X(t):= \sum_{k\ge 0} x_k t^k$$$, $$$U(t):=\sum_{k\ge0} u_kt^k$$$, $$$V(t):=\sum_{k\ge0} v_kt^k$$$. We know $$$U(t)$$$ and $$$V(t)$$$ (at least the first $$$n$$$ coefficients) and we want to compute $$$X(t)$$$. Since $$$x_0=1$$$, we know $$$X(0)=1$$$. Moreover (4) is equivalent to the ordinary differential equation $$$X' = V + U\cdot X$$$. This ODE is standard and its (unique) solution is given by $$$X = \exp(\smallint U)\Big(1+\int V\exp(-\smallint U)\Big)$$$ (see Hint 5).
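As a quick sanity check: writing $$$A(t) = \int_0^t U$$$, the claimed solution $$$X = e^{A}\big(1 + \int_0^t V e^{-A}\big)$$$ satisfies, by the product rule, $$$X' = U e^{A}\big(1 + \int_0^t V e^{-A}\big) + e^{A}\cdot V e^{-A} = U\cdot X + V$$$, and $$$X(0) = e^{0}(1+0) = 1$$$, matching the initial condition.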
Since the product of generating functions and the exponential of a generating function can be computed in $$$O(n\log(n))$$$, we are able to obtain the values of $$$x_k$$$ for all $$$0\le k <n$$$ and thus the values of $$$a_{\ast 1}(1,k)$$$.
Now, let us see how to compute $$$a_{1\ast}(k, n)$$$. Since $$$a_{1\ast}(k,n) = b_{1\ast}(n-k+1)$$$ for all $$$m<k\le n$$$, let us first compute $$$b_{1\ast}(k)$$$ for all $$$1\le k\le n$$$. By repeating verbatim the reasoning above, we get that the generating function $$$Y(t):=\sum_{k\ge 0} y_k t^k$$$, where $$$y_{k-1}:=b_{1\ast}(k) / (k-1)!$$$, is given by ($$$V=0$$$ in this case) $$$Y=\exp(\int U)$$$. So, it remains only to compute $$$a_{1\ast}(k, n)$$$ for $$$1\le k\le m$$$. This can be done naïvely in $$$O(nm)$$$.
The overall complexity is $$$O(nm + n\log(n))$$$.
Official solution: 150306974
didn't solve C :(
However, I really like this kind of easy problem that high-rated users might get stuck on!
Same here... Was second in the contest to solve F, but smh couldn't do C at all. Locked myself into trying to recover stuff in bottom-up manner rather than top-down...
Aha, I was just the opposite. I solved C rather quickly, then locked myself in the top-down manner for problem F for a long time.
Can I know why bottom-up doesn't work? I thought of the same approach too, but it gives WA on TC 2: 150395732
Try this Testcase on your code: 1 1 1 2 2 2
The answer should be Yes but your code will give No.
Got the reason why it fails. Thanks
Yes, good contest. Loved solving.
really? it's hard to believe I was able to solve the problem that red_id ignores :v
How to prove the complexity of C?
Here's a rough sketch of why at most $$$O(n)$$$ iterations can happen:
If we exit with an answer of YES, it's $$$O(n)$$$ iterations.
Let $$$p$$$ and $$$q$$$ be the multisets in the official solution. At any point in time, all the elements of $$$p$$$ lie in an interval $$$[c, 2c+2]$$$ for some constant $$$c$$$.
If we exit with an answer of NO, one step before we exited, there was an element of $$$p$$$ larger than the largest element of $$$q$$$, and $$$\sum p = \sum q$$$. Combining with fact 1, we get that $$$|p| \leq 2|q|+O(1)$$$ (this is the hardest step).
So, we insert into $$$p$$$ about $$$2n$$$ times, and remove from $$$q$$$ at most $$$n$$$ times. So it's $$$O(n)$$$ in total.
We can simply exit with an answer of NO as soon as we make N splits, because that'd generate N+1 numbers, so there's at most N splits and at most N "consume up both equal numbers" operations.
If you have solved it now, can you provide a proof (in a less mathematical way, one that someone could come up with in contest) of why it takes $$$O(n)$$$ operations (so the time complexity is $$$O(n \log n)$$$) in the official solution of problem C?
It's a balanced contest in terms of difficulty.
F is really awesome!
Video editorial, if anyone prefers that
Victory to Hanuman, ocean of wisdom and virtue; victory to the lord of the monkeys, illuminator of the three worlds.
Nice problem set. Thanks!
I solved F using hashing and xor segment tree. We can maintain a segment tree to answer queries to get the rolling hash of $$$s_{l \oplus x} s_{(l+1) \oplus x} \ldots s_{r \oplus x}$$$ so we can compare $$$2$$$ strings in $$$O(n^2)$$$ by finding the longest common prefix. I think this can be optimized to $$$O(n)$$$ if you really wanted.
Code $$$O(n^2 2^n)$$$: 150271155
Code $$$O(n 2^n)$$$: 150276264
Me too. The intended solution looks so much simpler and easier to implement tho :(
Problem E asks for the same condition as 1616C - Representative Edges, though the constraints make the two problems quite different. I don't think there's anything problematic here, just thought it was interesting.
Edit: There's actually an important distinction, because problem E requires the sequence to consist of integers. I feel quite dumb because I failed to solve E in contest due to trying to map fractional slopes and getting TLE due to costly hashing...
Great round! Thank you.
Great problem!
Problem E: Was this simple $$$O(n \sqrt{n} \log n)$$$ (with maybe a good constant) allowed?
150252654
The TL for problem G is suspicious.
I implemented another algorithm with $$$O(n\sqrt n)$$$ runtime: note that either the resulting height will be $$$\le W$$$, or at most $$$W$$$ energy could accumulate and be used during the process, for $$$W=\sqrt n$$$. One might find the submission here: 150268402. However, it got TLE even when setting $$$W$$$ to 105, which takes precisely $$$O(nW)$$$ time. (I passed during the contest with $$$W=25$$$.)
I suspected that either the std constant was very small, or the test cases were weak such that std takes far less than $$$O(n\sqrt n)$$$ time on them.
I don't quite understand, so I will ask some questions:
Thanks a lot for the explanation, now I see the point.
Regarding your solution getting AC instead of WA, that's unfortunate (but not for you :p). I have seen (thanks to your attempted hacks) that you seem to be the only contestant affected, so that's not too bad (for sure not for you :p).
Regarding the TL being too small for your solution. I would guess that you have a bad constant factor, but I have not checked so I don't know (and I will not investigate further).
Regarding the TL being too small in general and the authors not noticing because of weak tests. I don't think this is the case. The official solution's slowest behavior (i.e., when $$$z\approx 100$$$, where the letter $$$z$$$ counts the number of interesting heights) is triggered by the tests (just checked adding an assert). Moreover there is also an alternative solution (not mentioned in the editorial) with complexity $$$O(n\log(n))$$$ which fits easily in the TL and whose running time is almost independent of the structure of the tree.
Implementing E in $$$O(n\sqrt{n} \log{n})$$$:
*remembers -is-this-fft- said it is evil complexity in his last post and should be avoided at all costs
-> alright i will just use array of size 70 million
150284056
Can you, please, clarify your approach?
When $$$d \le \sqrt{M}$$$ it is the same as in the editorial, and when $$$d >\sqrt{M}$$$ you can look at my approach as follows:
Consider each starting point $$$i$$$ (the first element that will be "saved") and check the next $$$\sqrt{M}$$$ elements and their possible $$$d$$$. The reason why it is enough to check only $$$\sqrt{M}$$$ elements is that beyond them, even when $$$a_i=0$$$, any $$$a_j$$$ with $$$j>i+\sqrt{M}$$$ would have to be $$$> M$$$, which isn't possible (because $$$a_j=a_i+d\cdot(j-i)$$$).
I got MLE in C by popping the largest $$$b_i$$$, as in the editorial, and had to switch to a min heap to pop the smallest $$$b_i$$$ instead. Am I missing something? In this case the solution is $$$O(n\log^2n)$$$ I think.
Your submission is $$$O(\text{sum} \cdot \log(\text{sum}))$$$.
1 536870911
In general, $$$1$$$ and $$$2^n-1$$$.
Exit if the max in the priority queue is less than the max in the multiset, or exit when the size of the priority queue becomes more than $$$n$$$.
Anything should work.
Problem C was good :)
--Not me, That guy's (that is not me) hypocritical arguments
Yup, I agree :D
In problem D, the same procedure can be applied starting from node $$$2$$$, $$$3$$$, and so on. So why should we not take the max over all of them? In other words, why does taking the amount of node $$$1$$$ as the lcm of the denominators and setting the others accordingly work?
If you start DFS from any node $$$s$$$, the result is already the minimum possible, so you would get the same result for any node.
Ah, I thought of the exact same logic, but thought I would have to check all vertices. F
My Approach for Problem C:
Perform BFS starting from the total sum of the given weights (the root node). There can be a maximum of two child nodes of a parent node. The child nodes are $$$\lceil p/2 \rceil$$$ and $$$\lfloor p/2 \rfloor$$$, where $$$p$$$ is the parent node.
Only explore those child nodes which are not present in the given weight array. The reason for this is that if we have already found a weight (the value of the child node) which is present in the input weight array, we should not divide it further; instead, we will count this weight in our answer.
(Exploring means cutting a piece of cake, or dividing the child node into 2 halves.)
For example, if both child nodes of a particular parent node are $$$64$$$ and $$$64$$$ is present in the input array ($$$1$$$ time), then we will explore only one child node. If it's present more than $$$1$$$ time, we will not explore these $$$2$$$ child nodes, because we will be counting these $$$2$$$ child nodes in our answer.
Note: you can draw the tree on paper, it will be clearer. Start from the sum of all given weights and divide it further; it will look like a binary tree.
If we have explored $$$n-1$$$ nodes, that means we have performed $$$n-1$$$ cuts to the cake. Hence we have divided the cake into $$$n$$$ weights.
The answer will be YES if we found all the input weights in our binary tree. Otherwise, it will be NO.
Time complexity: $$$O(n)$$$, where $$$n$$$ is the number of input weights (the size of the array). The reason is that the BFS stops as soon as we have explored $$$n-1$$$ nodes.
My AC solution: Click Here
Wow, F is just awesome, I have never seen use of suffix array in this manner.
[Deleted]
https://mirror.codeforces.com/contest/1654/submission/150245226
Could someone explain why this solution works? Doesn't it have complexity $$$O(n^2)$$$, so it should be TLE?
It's supposed to get both WA and TLE: 150245226, 150307656.
I'm wondering why it works so well on official tests.
Lesson learnt: rest well before the contest!
C is really a good problem. I was confused at first, but when I finally solved it, the solution seems to be obvious.
The editorial is amazing! I hope all editorials have "Hints".
In problem G, there is
I don't know why, can anyone help me?
Thanks in advance.
I don't have a rigorous proof of this fact, but I believe this can be obtained by giving a lower bound for the number of vertices with height $$$i$$$ given the number of flippable vertices with greater or equal height (for example, lower bounded by half of this quantity).
I am not able to prove strict linear dependency, but it is very close. Let $$$\{X_1, X_2, \dots, X_n\}$$$ denote the frequency of flippable vertices for each height, and let $$$S = X_1 \cdot 1 + X_2 \cdot 2 + \dots + X_n \cdot n$$$. Then $$$S$$$ is the number of nodes needed to generate all flippable vertices; for example, a flippable vertex with $$$H=3$$$ requires vertices with $$$H=0, 1$$$ and $$$2$$$. There will be overcounting: specifically, a single node can contribute to multiple flippable vertices. Let $$$O$$$ be that amount. We then have $$$S - O \leq n$$$. We can use small-to-large to prove that there are at most $$$\log n$$$ flippable nodes that a single node can contribute to, which gives $$$O \leq n \log n$$$. Thus we have $$$S \leq n(\log n + 1)$$$.
Here are my thoughts. qwq
First root the tree at a base lodge. Consider a BFS cover process below.
We BFS from the root and find some nearest base lodges.
A pair of "flippable points" must exist at the midpoint of the path from each of these lodges to the root. We pair the lower endpoint of each such pair with a certain nearest lodge and cover the path between them.
In addition, there are no other flippable point pairs on the merge of the paths from these lodges to the root (the blue area in the figure).
So we delete the blue area, and the original tree is divided into several subtrees with roots. We can do a similar process in each subtree.
Finally, we can see the coverage areas do not intersect so half of $$$\sum\limits_{v\in S}h_v$$$ is $$$O(n)$$$.
I figured out another proof of this. First, we can group the vertices in a way that each vertex and its nearest base lodge are in the same group. Let $$$k$$$ denote the number of base lodges. Then we get $$$k$$$ groups. Each group forms a tree; let $$$T_i$$$ denote the $$$i$$$-th tree and $$$|T_i|$$$ the number of vertices in the $$$i$$$-th tree.
Besides the edges inside every tree, there are $$$k-1$$$ cross-tree edges, each of which connects two vertices in different trees. The difference of the heights of the two vertices of a cross-tree edge is no more than 1. If the heights are equal, then the two vertices form a pair of flippable vertices. Since we want to make the sum of these heights as large as possible, we can assume that all $$$k-1$$$ cross-tree edges connect equal-height vertices. In other words, we have $$$k-1$$$ pairs of flippable vertices.
For a pair of flippable vertices with height $$$h$$$, if they belong to $$$T_A$$$ and $$$T_B$$$ respectively, then we can get $$$h \leq |T_A|, h \leq|T_B|$$$.
If we compress each tree $$$T_i$$$ into a single vertex, then all the $$$T_i$$$ and the $$$k-1$$$ cross-tree edges form an abstract tree. We can let $$$T_1$$$ be the root of this tree, so the edges of the tree can be listed as $$$(T_2, p(T_2)), (T_3, p(T_3)), \dots, (T_k, p(T_k))$$$, where $$$p(x)$$$ is the parent of $$$x$$$.
Thus, $$$\sum h \leq 2(|T_2| + |T_3| + ... + |T_k|) \leq 2n$$$.
Q.E.D.
I tried explaining my solutions for problems A, B and C. Link to the video
Hope this can be of some help.
The editorial of D claims that to recover the LCM it takes the minimum of all powers for each prime $$$p$$$ over the DFS. This claim does not make sense to me, and in the code itself it seems like it takes the maximum:
for (int p : factors[x]) f[p]++, cmax(wf[p], f[p]);
It would be good to know where I have misunderstood
Edit — Ah I see, taking the max and multiplying by 1 is the same as taking the min and multiplying by -1.
I didn't solve C because I kept writing bottom-up solutions during the contest... :(
I can 100% relate to this when testing the round D:
Sometimes thinking too hard can screw you over :/
Problem C, my solution is based on something that I can't prove...
I guessed that if the initial value is $$$k$$$, then the numbers we can get by splitting $$$k$$$ are like $$$\frac{k}{2^w}$$$ and $$$\frac{k}{2^w}+1$$$.
According to this conclusion, I passed this problem. 150259381
I don't know how to prove or hack this submission. If you can help me I will be very thankful :D
In my submission, $$$x$$$ is the number of $$$s$$$, and $$$y$$$ is the number of $$$s+1$$$
Your claim is true. In fact, if you define $$$f(x) = \lfloor \frac{x}{2} \rfloor$$$, $$$g(x) = \lceil \frac{x}{2} \rceil$$$, you get $$$f^w(x) = \lfloor \frac{x}{2^w} \rfloor$$$, $$$g^w(x) = \lceil \frac{x}{2^w} \rceil$$$.
Sketch of proof:
Let $$$x = 2^w \cdot \lfloor \frac{x}{2^w} \rfloor + b_0$$$ ($$$0 \leq b_0 < 2^w$$$). Then, $$$f^v(x) = 2^{w-v} \cdot \lfloor \frac{x}{2^w} \rfloor + b_v$$$.
$$$b_v = \lfloor \frac{b_{v-1}}{2} \rfloor$$$, so $$$b_w = 0$$$ and $$$f^w(x) = \lfloor \frac{x}{2^w} \rfloor$$$.
Same proof for $$$g$$$.
I literally got stuck on C, couldn't solve it, and did not try to move on.
In the end I lost 171 rating.
I think in Hint 5 of the editorial of H, the ODE should be:
In the current version, the ODE has 2 "+", without any "=".
G can also be solved in $$$O(n \log^2 n)$$$ with centroid decomposition.
It's also possible to improve this to $$$\mathcal{O}(n \log n)$$$, since the positions of the segment tree you need to update and query are bounded by the current centroid's subtree size, so you can simply use an array with prefix minimums. Code: 150307419
I realized that I don't even need a segment tree, since I am only checking the maximum over the complete segment tree and not a subrange. I can just maintain the maximum in one variable. Code
I solved C using BFS with a multiset and a queue and got AC; you can see my simple code here: https://mirror.codeforces.com/contest/1654/submission/150350334
Can someone help me with this code: Solution? I guess there is some issue due to the modular operations.
modular gcd is wrong
I really love this kind of editorial. It helps so much!
I am trying to solve D by running DFS twice: First Bottom-Up and then Top-Down.
The solution seems to be working fine on smaller test-cases (5-6 edges) but its output slightly differs on the pretest's 3rd test-case, which is difficult to analyze.
Submission.
The editorial with several hints was a great help to me! It always helps me understand how to come up with the correct ideas. And the solutions are also very comfortable to read and easy to understand.
The editorial solution of problem C will get TLE for the below case:
1
200000
1000000000 1000000000 1000000000 1000000000 1000000000 ... (all $$$200000$$$ elements of $$$a$$$ are $$$10^9$$$)
I did some wrong maths, and finally understood the solution. thanks.
Can anyone please tell me my mistake in problem C? 156969000
It's a really silly error, but in line 52, you wrote bb.top() when you actually meant bb.pop(). This led to time/memory limit exceeded because, if you enter that case even once, it would perpetually add new elements forever while the top remains unchanged.
For future reference, to better identify the mistakes you made, my suggestion is to:
(if the sample input passes, then you'd want to construct edge cases or tricky cases that do not seem to be covered in the sample input, and follow the same steps)
By the way, I only tested your code on the sample input, and the correction was sufficient. However, I don't know whether the rest of your code is actually correct.
Yes, I know this is like, two months late, but I understand the frustration of not knowing why your code is failing, so hopefully this may be a little comforting and can help you avoid similar errors in the future, with proper debugging.