Proof_by_QED's blog

By Proof_by_QED, 3 weeks ago, In English
Rating Predictions

2211A - Antimedian Deletion

Problem Credits: nifeshe, Proof_by_QED

Hint 1
Solution
Code
Rate The Problem!

2211B - Mickey Mouse Constructive

Problem Credits: Proof_by_QED

Hint 1
Solution
Code
Rate The Problem!

2211C1 - Equal Multisets (Easy Version)

Problem Credits: Proof_by_QED

Special thanks to __baozii__ for suggesting this subtask.

Hint 1
Solution
Rate The Problem!

2211C2 - Equal Multisets (Hard Version)

Problem credits: Proof_by_QED

Hint 1
Solution
Code
Rate The Problem!

2211D - AND-array

Problem credits: nifeshe

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Hint 6
Solution
Code
Rate The Problem!

2211E - Minimum Path Cover

Problem credits: Proof_by_QED

Hint 1
Hint 2
Solution
Rate The Problem!
Code

2211F - Learning Binary Search

Problem credits: Proof_by_QED

Unfortunately, it seems there are a lot of cheaters on this problem. We will do our best to clear them out, but keep in mind this is partly out of our control.

Hint 1
Hint 2
Solution 1

Thanks to Dragos for writing up the following solution.

Solution 2
Code (Solution 1)
Code (Solution 2)
Bonus
Rate The Problem!

2211G - Rational Bubble Sort

Problem credits: chromate00. Special thanks: BurnedChicken

Hint 1
Solution
Code
Rate The Problem!

2211H - Median Deletion

Problem credits: nifeshe

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Hint 6
Hint 7
Hint 8
Hint 9
Hint 10
Hint 11
Hint 12
Hint 13
Solution
Code
Rate The Problem!

»
3 weeks ago, #

Thanks for the fast Editorial

»
3 weeks ago, #

fast

»
3 weeks ago, #

fast editorial

»
3 weeks ago, #

Nice contest. Problems A-D felt more or less based on PnC.

»
3 weeks ago, #

The formula for the summation in the solution for D has the upper bound spilling into the argument for the summation.

»
3 weeks ago, #

Problem D's editorial has some formatting inconsistencies ("82" is written; the 8 should be at the top of the summation).

»
3 weeks ago, #

Auto comment: topic has been updated by Proof_by_QED (previous revision, new revision, compare).

»
3 weeks ago, #

Alternate Solution to E:

Solve at 11PM running on 5 hours of sleep, make a solution that has a trivial countercase but still passes, have author hack your solution, then go back and forth 5 times (including figuring out the intended solution in the process) until it is decided that nobody is going to do this, voila.

»
3 weeks ago, #

I don't understand why you decided to push the killing test in E to position 25+. With this, the queue became longer at the end of the contest, and people who received TLE/MLE on those cases did not have enough time to fix their solutions.

  • »
    »
    3 weeks ago, #

    Did you not expect your code to get TLE/MLE? Do you not check what your solution's runtime is before submitting?

    Even still, it's not in any rules of problemsetting that the test that cuts your solution specifically must be early. Ideally, you would get a verdict immediately, but that's not how CF or any online judge works, and that is something you have to adapt to, not blame us for putting the test late.

    • »
      »
      »
      3 weeks ago, #

      At first, I thought to use prime factorization, but when I saw $$$10^{18}$$$ in the constraints, I realized that was the wrong approach and prayed that there might be only $$$O(\log A)$$$ different values. That guess fell apart in about 5 minutes. However, after the contest, my friend, who had the same idea and hit the same MLE, said the same thing, and I instantly got the idea of taking the LCM of the set.

      Honestly, it really sucks not getting AC when you could have gotten it.

    • »
      »
      »
      3 weeks ago, #

      I think putting more important tests earlier improves the experience for everyone: the queue becomes shorter, and you don't have people thinking they "are close" just because the actual strong test is at the end.

      Maybe in the future, Codeforces could automatically reorder tests based on contest preparation, moving earlier the tests on which many tester and author solutions failed. I know such a thing is implemented in BAPCtools, the tooling used in our ICPC regional.

  • »
    »
    3 weeks ago, #

    Have you considered having a correct solution?

    • »
      »
      »
      3 weeks ago, #

      The only thing needed to get a correct solution at that point was to take the LCM of the set. I believe that if I had had 3-4 more minutes after seeing the MLE, I could have gotten it. Yeah, I understand it was not correct, but it was so close :(

»
3 weeks ago, #

I got baited by 676767677 in problem B; I thought the answer would never reach that high, so I assumed my approach was wrong, because why else would they mention it? Never thought I'd get rekt by 67.

»
3 weeks ago, #

The proof for B is brilliant!

I wasn't able to make my intuition rigorous in-contest (so ended up waiting a long time before submitting), but the proof is just genius.

  • »
    »
    3 weeks ago, #

    I tried to understand the proof of B, but it has a lot of maths/theorems and I am not able to follow it.

    Edit:

    Now I understand: basically the +1's and -1's cancel out, only the extra 1's (or -1's) remain, and then we just need to count the ways of splitting these extras into equal parts.

    • »
      »
      »
      3 weeks ago, #

      Try working through it with GPT first -- it's usually pretty great at simplifying.

      And if you're still confused, come back and I can help out.

    • »
      »
      »
      3 weeks ago, #

      Let $$$x > y$$$, and let $$$d$$$ be a divisor of $$$x-y$$$, so $$$x-y = qd$$$. We just need to prove that any such array can be partitioned into parts, each with total sum $$$d$$$. For any array, we show there exists a prefix whose sum is $$$d$$$: after removing it, $$$x' \le x$$$ and $$$y' \le y$$$ elements remain with $$$x' - y' = (q-1)d$$$, so we can repeat the process. If the first element equals $$$d$$$, we are done. Otherwise, keep extending the prefix; at each step the sum changes by $$$+1$$$ or $$$-1$$$, so the prefix sums must pass through $$$1, 2, \ldots, d$$$ in turn. If a prefix with one of these sums did not exist, that would contradict the fact that the whole array (itself a prefix) sums to $$$qd \ge d$$$.

»
3 weeks ago, #

Here is an almost brain-dead solution to F:

Fix the node $$$v$$$ in the binary search tree as the node where we find value $$$k$$$. Assume there are $$$L$$$ elements less than $$$k$$$ and $$$R$$$ elements greater than $$$k$$$; then the number of such $$$a$$$ is $$$\binom{k - 2 + L}{L}\binom{m - k - 1 + R}{R}$$$.

Summing over $$$k$$$ gives $$$\sum\limits_{k = 2}^{m - 1}\binom{k - 2 + L}{L}\binom{m - k - 1 + R}{R} = \binom{m - 3 + L + R + 1}{L + R + 1}$$$ (by the identity $$$\sum\limits_{x \in \mathbb{Z}}\binom{a + x}{a}\binom{b + c - x}{b} = \binom{a + b + c + 1}{a + b + 1}$$$). Note that $$$k = 1$$$ and $$$k = m$$$ can be handled in $$$O(1)$$$.

So for each vertex $$$v$$$, just enumerate $$$L+R$$$; then we can calculate everything in $$$O(n\log n)$$$, since the sum of subtree sizes in a binary search tree is $$$O(n\log n)$$$.

  • »
    »
    3 weeks ago, #

    I solved it similarly without using Vandermonde (I didn't know it).

    Let's say we're at fixed node $$$v$$$ in the binary search tree, and this node covers range $$$[l, r]$$$ (subrange of $$$[1, n]$$$).

    We want to count the number of non-decreasing arrays of length $$$n$$$ (with all values in $$$[1, m]$$$) which have the value $$$k$$$ in some segment $$$[x, y]$$$ such that the following properties are satisfied:

    1. $$$l \leq x \leq v \leq y \leq r$$$.
    2. $$$1 \lt k \lt m$$$ (for convenience, we will handle $$$k \in \lbrace 1, m \rbrace$$$ separately).
    3. $$$a_i \neq k$$$ for all $$$i \notin [x, y]$$$.
    4. $$$a_i = k$$$ for all $$$i \in [x, y]$$$.

    We would ideally like to compute this in $$$O(r - l + 1)$$$ time.

    How do we usually do stars-and-bars to count non-decreasing arrays of length $$$n$$$ where each value is in $$$[1, m]$$$?

    We have two ordered sequences of objects:

    1. $$$n$$$ labelled red balls (labelled from $$$1$$$ to $$$n$$$), representing "elements"
    2. $$$m$$$ labelled blue balls (labelled from $$$1$$$ to $$$m$$$), representing "value markers"

    Then the number of non-decreasing arrays is equal to the number of ways to interleave these two sequences while preserving their order, such that the first element is a blue ball. The number of ways here is just the binomial coefficient $$$\binom{n + (m - 1)}{n}$$$.

    In this interpretation, the set of interleavings (whose first element is a blue ball) bijectively maps to the set of non-decreasing arrays through the following conversion rule: set the value of the $$$i$$$-th element in the array to the number of blue balls behind the $$$i$$$-th red ball in the interleaving.

    Now, consider a weaker version of our subproblem:

    Given $$$n, m, x, y$$$, find $$$g(n, m, x, y)$$$ = the number of non-decreasing arrays of length $$$n$$$ such that $$$a_{x - 1} \neq k = a_x = a_{x + 1} = \dots = a_y \neq a_{y + 1}$$$ for some $$$1 \lt k \lt m$$$ (assume $$$a_0 = a_{n + 1} = 0$$$ are sentinel elements).

    How can we view this problem through our red/blue ball lens?

    To enforce all the constraints mentioned above, we need to count the number of ball interleavings that satisfy the following property:

    1. The first ball is blue.
    2. There is a blue ball directly behind the $$$x$$$-th red ball (to enforce $$$a_{x - 1} \neq a_x$$$).
    3. There is no blue ball between the $$$x$$$-th and $$$y$$$-th red ball (to enforce $$$a_{x} = a_{x + 1} = \dots = a_y$$$).
    4. There is a blue ball directly after the $$$y$$$-th red ball (to enforce $$$a_y \neq a_{y + 1})$$$.

    How can we count the number of such ball interleavings? Simply compress the $$$x$$$-th to $$$y$$$-th red ball into one red ball, and also merge a blue ball into either side of this new "super red ball".

    So we are now just left with $$$n - (y - x + 1) + 1$$$ red balls, and $$$m - 2$$$ blue balls. The number of ways to interleave them (while ensuring that the first ball is blue) is $$$g(n, m, x, y) = \binom{m - 3 + (n - (y - x + 1) + 1)}{m - 3}$$$.

    Now, the original subproblem reduces to finding $$$\sum_{l \leq x \leq v \leq y \leq r}{g(n, m, x, y)}$$$

    $$$= \sum_{l \leq x \leq v} \sum_{v \leq y \leq r} g(n, m, x, y)$$$
    $$$= \sum_{l \leq x \leq v} \sum_{v \leq y \leq r} \binom{m - 3 + n - (y - x + 1) + 1}{m - 3}$$$
    $$$= \sum_{l \leq x \leq v} \sum_{(m - 3 + n - r + x) \leq s \leq (m - 3 + n - v + x)} \binom{s}{m - 3}$$$
    $$$= \sum_{l \leq x \leq v} \binom{m - 3 + n - v + x + 1}{m - 3 + 1} - \binom{m - 3 + n - r + x}{m - 3 + 1} \qquad \text{(hockey-stick identity)} $$$

    So we can simply iterate over $$$x$$$ and compute these binomial coefficients, solving the original subproblem. We perform $$$v - l + 1 \leq r - l + 1$$$ iterations and compute one binomial coefficient in $$$O(1)$$$ at each, so this takes $$$O(r - l + 1)$$$ time, which is what we had set out to do.

    Note that $$$k = 1$$$ and $$$k = m$$$ are handled separately in a similar way.

»
3 weeks ago, #

B is really good. I knew from the start that the monotone sequence must be the correct answer, but couldn't prove it quickly enough. I guess sometimes you gotta go with intuition.

»
3 weeks ago, #

//

»
3 weeks ago, #

Hi Proof_by_QED, for the problem F editorial I think the term $$$\binom{y+n-r}{y-1}$$$ should be $$$\binom{y+n-r-1}{y-1}$$$. Also, I'm not sure that identity can be called Vandermonde's identity (the definition on Wikipedia is simply $$$\sum_{k=0}^{n} \binom{r}{k}\binom{s}{n-k}=\binom{r+s}{n}$$$, but in your formula the top numbers depend on $$$k$$$), unless they are equivalent in some trivial way that I cannot see :)

  • »
    »
    3 weeks ago, #

    Nice catch on the part about Vandermonde's identity, I didn't realise that the $$$s$$$ and $$$r$$$ terms were not constants when reading.

    On another note, it's rather disappointing that the editorial for F has so many gaps that the reader has to fill in on their own. In addition to those you pointed out, the edge cases for $$$x=1$$$ and $$$x=m$$$ were "ignored for now" but never actually addressed.

»
3 weeks ago, #

@Proof_by_QED Here is my personal feedback on problem B:

The problem is fine, but the time limit is not: it only lets solutions with fast I/O pass (even with a bad algorithm). Even fast factorization without modulo and division, but with standard printf/scanf I/O, got TLE:

  • Fast Factorization with printf/scanf: 368617316 TLE
  • Slow factorization (a lot of modulo and division) with complex manually buffered fast I/O: 368617206 AC

I hope you take note of this and set constraints more wisely for the next problem/contest.

I don't know why no one else is complaining about this, but in my view (and given the evidence above) problem B's constraints are very bad :(

»
3 weeks ago, #

Problem H editorial: some of the markup is broken. Also, shouldn't "at most 3" be "at least 3"?

»
3 weeks ago, #

Why is C2 so hard? I'm dead again!

»
3 weeks ago, #

This is the first time I've seen the last problem be a variant of the first problem

»
3 weeks ago, #

Problem D can be solved with an NTT-based convolution.

Let $$$c_i$$$ $$$(0 \le i \le 28)$$$ be the number of indices $$$j$$$ $$$(1 \le j \le n)$$$ such that $$$a_j \ \text{bitwise and} \ 2^i \neq 0$$$. That is, we need to solve the equations

$$$ \sum\limits_{i=0}^{28} 2^i \binom{c_i}{j} \equiv b_j \pmod{10^9+7} $$$

where $$$1 \le j \le n$$$. If we multiply both sides by $$$z^j$$$ and sum $$$j$$$ from $$$1$$$ to $$$n$$$:

$$$ \begin{align*} \sum\limits_{j=1}^{n}b_jz^j &= \sum\limits_{j=1}^{n}\sum\limits_{i=0}^{28} 2^iz^j \binom{c_i}{j} \\ &= \sum\limits_{i=0}^{28}2^i \Bigg (\sum\limits_{j=1}^{n}z^j \binom{c_i}{j} \Bigg ) \\ &= \sum\limits_{i=0}^{28}2^i \big ((z + 1) ^ {c_i} -1\big ) \\ \end{align*} $$$

Let $$$x=z+1$$$, and $$$F(x) =\sum\limits_{j=1}^{n}b_jz^j +\sum\limits_{i=0}^{28}2^i = 2^{29} - 1 +\sum\limits_{j=1}^{n}b_j(x-1)^j = \sum\limits_{j=0}^{n}A_jx^j$$$; then:

$$$ F(x) = \sum\limits_{j=0}^{n}A_jx^j = \sum\limits_{i=0}^{28}2^ix^{c_i} $$$

So we enumerate the $$$A_j$$$: if $$$A_j \ \text{bitwise and} \ 2^i \neq 0$$$, then we set $$$c_i = j$$$.

So the answer is $$$a_k = \sum\limits_{i=0}^{28} 2^i[k \le c_i]$$$.

As for finding $$$F(x)$$$, let $$$b_0 = 2^{29} - 1$$$; then:

$$$ \sum\limits_{j=0}^{n}A_jx^j = \sum\limits_{j=0}^{n}b_j\sum\limits_{k=0}^{j}\binom{j}{k}x^k(-1)^{j-k} $$$
$$$ \begin{align*} A_k &= \sum\limits_{j=k}^nb_j\binom{j}{k}(-1)^{j-k} \\ &= \frac{1}{k!} \sum\limits_{j=k}^n(b_j \times j!) \frac{(-1)^{j-k}}{(j-k)!} \\ \end{align*} $$$

Let $$$f(j)=b_j \times j!$$$ and $$$g(m)=\frac{(-1)^m}{m!}$$$; then:

$$$ A_k = \frac{1}{k!}\sum\limits_{j=k}^{n}f(j) \cdot g(j - k) $$$

This is a convolution, so it can be computed with NTT.

Since the modulus is not $$$998244353$$$, we may use MTT to handle it.

  • »
    »
    3 weeks ago, #

    Let's find $$$A$$$ such that $$$A_{i+1}$$$ is a submask of $$$A_i$$$ for all $$$i$$$.

    $$$A_k=b_k-\sum_{j=k+1}^{n}A_j\binom{j}{k}$$$

    The maximum number of unique values in $$$A$$$ is 30. Grouping equal values in $$$A$$$, the contribution of a value $$$x \in A$$$ looks like

    $$$-\sum_{j=l}^{r}x\binom{j}{k}$$$,

    where $$$l$$$ is the smallest number such that $$$A_l = x$$$, $$$r$$$ is the greatest number such that $$$A_r = x$$$.

    $$$-\sum_{j=l}^{r}x\binom{j}{k} = -x\left(\binom{r+1}{k+1}-\binom{l}{k+1}\right)$$$.

    Calculate $$$A_k$$$ in decreasing order of $$$k$$$ and maintain groups of equal numbers.

»
3 weeks ago, #

For problem B, if the testcase is 2 2, then according to me [1,-1], [1,-1] and [1,1,-1,-1] give two answers, but actually it is 1. Can anyone explain?

»
3 weeks ago, #

Can someone help me figure out what's wrong with my solution for C2?

First, I look at the first k elements with a set, then at the other n-k. If at some point I can't add another element, I output NO; otherwise YES.

368629138

»
3 weeks ago, #

I have no idea why E's editorial says there are $$$d(a_{node})$$$ states of the dp. I define $$$dp(node, x)$$$ as the answer for the subtree of $$$node$$$ when the current gcd value carried into $$$node$$$ is $$$x$$$.

The possible values of $$$x$$$ are not all divisors of $$$a[node]$$$: $$$x$$$ can only be a value of the form $$$\gcd(a[v_1], a[v_2], \ldots, a[node])$$$, where $$$v_1 \to v_2 \to \ldots \to node$$$ is some path descending to $$$node$$$.

There are only $$$O(\log A)$$$ such $$$x$$$. So, we just iterate over the children and try to merge the root to each of them:

Code
  • »
    »
    3 weeks ago, #

    Because your solution is different from the one that takes $$$d(a_{node})$$$ time per node. The solution I am referring to does something like the following:

    Pseudocode

    This, and variants trying to optimize this, should get the TLE verdict.

  • »
    »
    3 weeks ago, #

    Why are there only $$$O(\log A)$$$ such values, and not all the divisors?

»
3 weeks ago, #

Actually, problem F can be solved in $$$O(T\log n)$$$ time. This is because there are only $$$O(\log n)$$$ distinct types of intervals in Solution 2, and identical intervals can be processed simultaneously. Each interval can be represented by a 4-tuple (total length, whether a node exists at the left bound, whether a node exists at the right bound, depth) $$$\to (len, l, r, dep)$$$. There are at most $$$4\log n$$$ distinct intervals in total, and each is handled in $$$O(1)$$$, so the total time complexity is $$$O(T\log n)$$$ or $$$O(T\log^2 n)$$$. The constant factor is a little large.

Here is my submission: 368630243

»
3 weeks ago, #

at least solved 2 problems

»
3 weeks ago, #

Thanks for the fast and clean editorial, it was really easy to follow

»
3 weeks ago, #

Hello. I didn't really understand the editorial. Can anyone help me figure out what I am doing wrong in C2?

solution

My Solution: Check whether the first k elements are rearrangements of each other. I don't immediately replace the -1s, but keep track of which elements should replace them: $$$ex_i$$$ is the count of value $$$i$$$ that should replace some -1.

If the condition holds for $$$[l,l+k-1]$$$, then check whether it also holds for $$$[l+1,l+k]$$$. Letting $$$r = l+k$$$, check two conditions:

  1. If $$$a_l = a_r$$$, then $$$b_l = b_r$$$ must hold; if not, the answer is NO.
  2. If $$$a_l \neq a_r$$$, then $$$a_l = b_l$$$ and $$$a_r = b_r$$$ must hold.
»
3 weeks ago, #

[turns out what I wrote was written in the second solution, which I didn't see]

»
3 weeks ago, #
// In the case d.size() >= 2, we can directly return NO.
// Since a[j] is constant over the chain, having ≥2 distinct values in b (excluding -1)
// implies at least one b[j] != a[j], which already violates the condition.
// Hence, the additional loop check is not required and is confusing.
if (d.size() >= 2) {
    for (int j = i; j < n; j += k)
        if (b[j] != -1 && b[j] != a[j]) {
            cout << "NO" << "\n";
            return;
        }
}
»
3 weeks ago, #

For F, solution 1, shouldn't the number of ways to fill in numbers be

Spoiler
»
3 weeks ago, #

It's hard for me to understand the precise meaning of the question but it's attractive^^.

»
3 weeks ago, #

Problem E

this will easily get the Time Limit Exceeded verdict.

I had the same dp solution and it passed with 1300ms and 70000KB.

  • »
    »
    3 weeks ago, #

    Your solution is different from the one I was referring to. I am referring to the following solution:

    Pseudocode

    Your solution should work in $$$O(n \log^2 A)$$$ time.

»
3 weeks ago, #

PnC?forces

»
3 weeks ago, #

A solution for C2 which minimises the number of primes required to at most 2 (debugging hell): 368804443

»
3 weeks ago, #

Proof_by_QED, in the editorial for C1, the last paragraph says:

"Consider the union of all size k intervals: starting from [1,k], then [2,k+1] ,and so on. The interval that is inside all of these intervals is the union of [1,k] and [n-k+1,n] "

Consider the intersection of all size k intervals: starting from [1,k], then [2,k+1] ,and so on. The interval that is inside all of these intervals is the intersection of [1,k] and [n-k+1,n].

I think intersection is the right word here... or did I misunderstand something?

»
3 weeks ago, #

Problem C2 is also tagged with "DSU". Can anyone please elaborate on the DSU solution?

»
3 weeks ago, #

I have an alternate brainless solution for E that requires no number-theory knowledge (except for the fact that GCD is idempotent and can be computed in logarithmic time), and instead uses data structures.

As we process queries, we're essentially building a rooted forest. At every step, for every component in our rooted forest, we're going to maintain some information.

Consider some component $$$T$$$, rooted at node $$$r$$$. We maintain the following information:

  • $$$f(r)$$$ = the answer for subtree of $$$r$$$. Since we process the tree in a bottom up manner, $$$f(u) \forall u \in \text{subtree}_r$$$ will also have been computed by now.
  • For all $$$u \in \text{subtree}_r$$$, we maintain $$$g_r(u)$$$ = the gcd of vertices on the simple path from $$$r$$$ to $$$u$$$.
  • For all $$$u \in \text{subtree}_r$$$, we will maintain $$$h_r(u)$$$ = the optimal cost of the entire subtree, if we have a path from $$$r$$$ to $$$u$$$. If this path would be invalid ($$$g_r(u) = 1$$$), define $$$h_r(u) = \infty$$$. Otherwise, we observe that $$$h_r(u) = 1 + \sum_{v \in N_u} {f(v)}$$$. Here, $$$N_u$$$ is the set of nodes that are adjacent to some node on the path from $$$r$$$ to $$$u$$$, but don't themselves lie on the path.

Now as we process a query for some node $$$u$$$, if we were allowed to solve the problem naively (in quadratic time), having this information for the components rooted at all children of $$$u$$$ would be enough for us to compute analogous information for the new component rooted at $$$u$$$.

Naive Way

How do we do this efficiently? HLD + (small-to-large merging of lazy segtrees) XD

For every node $$$r$$$, we maintain a lazy segment tree that represents an array of pairs $$$(h_r(u), g_r(u))$$$ for all $$$u$$$ in the subtree of $$$r$$$.

It will allow us to do the following (assume segtree is an array of pairs $$$(x, y)$$$):

  • For arbitrary $$$z$$$, perform $$$x \leftarrow x + z$$$ for all pairs in the segtree.
  • For arbitrary $$$g$$$, perform $$$y \leftarrow \operatorname{gcd}(y, g)$$$ for all pairs in the segtree.
  • Query the minimum pair in the segment tree.

Note: null elements will be $$$( \infty, 0)$$$.

At every node, we simply push an update on the segment tree of every child individually. Then, we merge the segment trees of all the light children into that of the heavy child. More details in the spoiler.

Efficient Way:

If implemented naively, this takes $$$O(n \log^2{n} \cdot \log{A})$$$ time. However, it's not difficult to optimize it to $$$O(n \cdot \log{n} \cdot (\log{n} + \log{A}))$$$ time.

My code (with a slightly different implementation of this idea) can be found here: 368829460.

  • »
    »
    3 weeks ago, #
    Actually brainless solution
    • »
      »
      »
      3 weeks ago, #

      This is not as brainless as my solution because it requires one to observe that it's sufficient to maintain only the pairs $$$(h_r(u), g_r(u))$$$ with $$$h_r(u) = f(u)$$$. I do not make this observation, and instead maintain all $$$(h_r(u), g_r(u))$$$.

»
3 weeks ago, #

What is the time complexity of bitwise operations? Does the system convert decimal to binary and then do the bitwise operation, so is it O(1), O(log n), or something else?

»
2 weeks ago, #

Despite the rather lengthy proof for G, it helps a lot to simply stress-test the fact that a non-increasing sequence never has a prefix average smaller than the average of the remaining suffix. Then it's a matter of finding a construction when some prefix average is smaller than the remaining suffix average, and of determining which non-increasing sequences can be made from a constant sequence.

»
2 weeks ago, #

In the editorial for C1,

Consider the union of all size $$$k$$$ intervals: starting from $$$[1,k]$$$, then $$$[2,k+1]$$$, and so on. The interval that is inside all of these intervals is the union of $$$[1,k]$$$ and $$$[n−k+1,n]$$$

union should be replaced with intersection

»
12 days ago, #

I submitted a solution for E but I can't prove its time complexity, and I'm 85% sure it shouldn't pass. Please hack it and send me the case.

The idea: the cost of any node is either the sum of its children's costs or that sum plus 1, so we want to save that extra 1. For every node, store as gcd[u] the gcd of a path that guarantees minimum cost. Then, for every node, start a BFS to its children, keeping the gcd of the current path taken, until gcd(current path, gcd[u]) != 1; at that point we can connect the two paths to save the extra 1 and stop the search. If gcd(current path, arr[u]) = 1, we can't benefit from that node, and we treat it as the start of a different path which doesn't save the extra 1. Otherwise (gcd(current path, arr[u]) != 1), we continue the search.

Here's the code:

C++ code