intrusiv's blog

By intrusiv, 12 months ago, In English

A — Constructive Problems

Author: valeriu

Solution
Code (valeriu)
Rate Problem

B — Beginner's Zelda

Author: valeriu

Solution
Code (valeriu)
Rate Problem

C — Largest Subsequence

Author: tibinyte

Solution
Code (tibinyte)
Rate Problem

D — Cyclic MEX

Author: tibinyte

Solution
Code (tibinyte)
Rate Problem

E — One-X

Author: tibinyte

Solution
Code (tibinyte)
Rate Problem

F — Field Should Not Be Empty

Author: tibinyte

Solution
Code (tibinyte)
Rate Problem

»
12 months ago, # |
Rev. 3   Vote: I like it +33 Vote: I do not like it

Why is the name of problem C "Adam lying face" when it is "Largest Subsequence"? Or am I stupid?

UPD: fixed, thanks.

»
12 months ago, # |
Rev. 4   Vote: I like it +42 Vote: I do not like it

The problem names of A, B and C are wrong:

upd: Thanks for fixing it :)

  • »
    »
    12 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    In problem C my method has O(n) complexity, yet it is giving TLE on the last test (9). Please look into this: 237829405

    • »
      »
      »
      12 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      v.insert(...) runs in O(n) time. I can offer deeper feedback if you rewrite your code in a more readable style.
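      To make the cost concrete, here is a tiny standalone illustration (a hypothetical example, unrelated to the linked submission): repeatedly inserting at the front of a vector shifts every existing element on each call, so building a sequence that way is quadratic overall, while push_back plus one final reverse gives the same result in linear time.

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          const int n = 50000;                 // kept small: the slow variant is O(n^2)
          vector<int> slow, fast;
          for (int i = 0; i < n; i++)
              slow.insert(slow.begin(), i);    // every insert at the front shifts all existing elements: O(n) per call
          for (int i = 0; i < n; i++)
              fast.push_back(i);               // amortized O(1) per call
          reverse(fast.begin(), fast.end());   // one linear pass to obtain the same order
          cout << (slow == fast ? "same result" : "mismatch") << '\n';
      }
      ```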

»
12 months ago, # |
  Vote: I like it -25 Vote: I do not like it

Problem C did not mention whether the string had to be sorted in ascending or descending order. I got a wrong answer on test case 2 (21st test case, dcbbaa) as I considered sorting in descending order too. Please look into this issue.

»
12 months ago, # |
  Vote: I like it -38 Vote: I do not like it

gray round, that says it all...

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone please explain C? "For finding the number of operations we will subtract the length of the largest prefix of equal values of the subset from its length."

I didn't understand this.

  • »
    »
    12 months ago, # ^ |
      Vote: I like it +7 Vote: I do not like it

    If the largest subsequence is zzzba, for example, the number of operations would be 2.

    The total length is 5 and the largest prefix of equal values would be zzz.

    This is because after two operations, the largest subsequence will be zzz and further operations will be no-ops.
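    A minimal sketch of this counting idea, assuming the interpretation above (the chosen subsequence is the set of suffix maxima, after all operations those positions hold the same characters in non-decreasing order, and the answer is the subsequence length minus the run of its maximal character, with a final sortedness check). Sketch code, not the author's solution:

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        ios::sync_with_stdio(false);
        cin.tie(nullptr);
        int t; cin >> t;
        while (t--) {
            int n; string s;
            cin >> n >> s;
            vector<int> pos;                 // positions of the lexicographically largest subsequence = suffix maxima
            char mx = 0;
            for (int i = n - 1; i >= 0; i--)
                if (s[i] >= mx) { mx = s[i]; pos.push_back(i); }
            reverse(pos.begin(), pos.end()); // left-to-right; the characters on pos are non-increasing
            int m = pos.size();
            string res = s;                  // after all operations those positions hold the reversed subsequence
            for (int i = 0; i < m; i++) res[pos[i]] = s[pos[m - 1 - i]];
            if (!is_sorted(res.begin(), res.end())) { cout << -1 << '\n'; continue; }
            int run = 0;                     // length of the prefix of equal (maximal) characters
            while (run < m && s[pos[run]] == s[pos[0]]) run++;
            cout << m - run << '\n';         // e.g. for "zzzba": 5 - 3 = 2 operations
        }
    }
    ```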

»
12 months ago, # |
  Vote: I like it +50 Vote: I do not like it

In E you can also notice that for a subtree of fixed size the sum of LCAs over all the vertices in this subtree is some linear function of $$$v$$$: $$$f(v) = kv + b$$$, where $$$v$$$ is the root of the subtree. Then it is easy to solve the problem for arbitrary $$$v$$$, thus getting a linear function. Since there are at most $$$O(\log (n)^2)$$$ different subtrees, you can easily solve the problem using recursion with memoization.

This is my implementation: 237520087

  • »
    »
    12 months ago, # ^ |
      Vote: I like it +30 Vote: I do not like it

    Used the same idea, except the only candidates for subtree sizes are $$$ \left\lfloor \frac{n}{2^x} \right\rfloor$$$ and $$$ \left\lceil \frac{n}{2^x} \right\rceil$$$.

    So this actually gives $$$O(\log n)$$$ per test case (assuming you merge in constant time).

    • »
      »
      »
      12 months ago, # ^ |
      Rev. 2   Vote: I like it -16 Vote: I do not like it

      $$$O(\log n^2)$$$ = $$$O(2\log n)$$$ = $$$O(\log n)$$$

      • »
        »
        »
        »
        12 months ago, # ^ |
          Vote: I like it +14 Vote: I do not like it

        What I've written was $$$O(\log (n)^2) = O((\log (n))^2)$$$, and you can't simplify the way you did. Maybe I should've written $$$O((\log (n))^2)$$$ in the first place, but nevertheless.

        • »
          »
          »
          »
          »
          12 months ago, # ^ |
          Rev. 3   Vote: I like it +14 Vote: I do not like it

          Oh, I see. But isn't the number of different subtrees actually $$$O(2\log n)$$$?

          • »
            »
            »
            »
            »
            »
            12 months ago, # ^ |
            Rev. 2   Vote: I like it +9 Vote: I do not like it

            According to ToniB that is true, but I don't have the proof yet, so I'm not too keen on believing this. Actually, I might've found counterexamples, but I'm not too sure.

            UPD. Tried to test out different values of $$$n$$$, and found that it is, indeed, true, and my counterexamples were wrong.

    • »
      »
      »
      12 months ago, # ^ |
        Vote: I like it +10 Vote: I do not like it

      Actually, if it really is the bound, then my solution also checks at most $$$O(\log (n))$$$ different subtrees per test case.

      However, I don't really know how to formally prove that it is really the case. Can you share your proof?

      • »
        »
        »
        »
        12 months ago, # ^ |
        Rev. 4   Vote: I like it +11 Vote: I do not like it

        For any $$$n$$$: $$$\left\lfloor\frac{n}{2}\right\rfloor$$$, $$$\left\lceil\frac{n}{2}\right\rceil$$$, $$$\left\lfloor\frac{n + 1}{2}\right\rfloor$$$ and $$$\left\lceil\frac{n+1}{2}\right\rceil$$$ have at most 2 unique values, and those values are some $$$x$$$ and $$$x + 1$$$.

        So for every depth of the tree we have at most 2 unique ranges.

        So the maximum number of different subtrees is $$$O(2\log n)$$$.

      • »
        »
        »
        »
        12 months ago, # ^ |
        Rev. 2   Vote: I like it +19 Vote: I do not like it

        For any $$$L \leq n \leq R$$$ you can see $$$\left\lfloor \frac{L}{2} \right\rfloor \leq \left\lfloor \frac{n}{2} \right\rfloor \leq \left\lceil \frac{n}{2} \right\rceil \leq \left\lceil \frac{R}{2} \right\rceil$$$.

        Now you prove by induction. Suppose $$$\left\lfloor \frac{n}{2^x} \right\rfloor \leq k \leq \left\lceil \frac{n}{2^x} \right\rceil$$$ where $$$k$$$ is a subtree size on $$$x$$$-th layer from the top. Using the inequality above, you get

        $$$\left\lfloor \frac{\left\lfloor \frac{n}{2^x} \right\rfloor}{2} \right\rfloor \leq \left\lfloor \frac{k}{2} \right\rfloor \leq \left\lceil \frac{k}{2} \right\rceil \leq \left\lceil \frac{\left\lceil \frac{n}{2^x} \right\rceil}{2} \right\rceil$$$

        which simplifies to

        $$$\left\lfloor \frac{n}{2^{x+1}} \right\rfloor \leq \left\lfloor \frac{k}{2} \right\rfloor \leq \left\lceil \frac{k}{2} \right\rceil \leq \left\lceil \frac{n}{2^{x+1}} \right\rceil $$$

        Since these are the only possible sizes of subtrees in the next layer, the next step will also hold. Base case is trivial.

        UPD: or just what ibrahimwq said, looks simpler.

  • »
    »
    12 months ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Can you explain more clearly how to construct this formula: $$$f(v) = kv + b$$$? Thank you!

    • »
      »
      »
      12 months ago, # ^ |
      Rev. 3   Vote: I like it +13 Vote: I do not like it

      Well, firstly, suppose we have some subtree of size $$$n$$$, and by induction, for trees of size smaller than $$$n$$$ we already know their linear functions. Then what is the answer for this subtree (the LCA is some vertex of this subtree)? LCA(S) = $$$v$$$ only for subsets for which there is at least 1 leaf from the left son, at least 1 leaf from the right son, and there are no leaves outside this subtree (otherwise the LCA would be some ancestor of $$$v$$$). For a tree of size $$$n$$$ the left son has size $$$\lceil \frac{n}{2} \rceil$$$ and the right son has size $$$\lfloor \frac{n}{2} \rfloor$$$. Then there are $$$(2^{\lceil \frac{n}{2} \rceil} - 1)(2^{\lfloor \frac{n}{2} \rfloor} - 1)$$$ subsets for which the LCA is $$$v$$$.

      Now we need to calculate the answer for the case when the LCA is equal to some vertex in the left or the right son. Because by induction we already know the formula for the answer of the left and right sons, we have $$$f_l(v)=k_l v + b_l$$$, $$$f_r(v) = k_r v + b_r$$$. Then the answer for the left son is $$$f_l(2v)$$$, and $$$f_r(2v+1)$$$ for the right son. A composition of linear functions is also a linear function (given $$$f(v)=av+b, g(v)=pv+q,$$$ we have $$$f(g(v)) = a(pv+q)+b = (ap)v + (aq+b)$$$).

      Combining everything together, the answer for the subtree is $$$f(v) = (2^{\lceil \frac{n}{2} \rceil} - 1)(2^{\lfloor \frac{n}{2} \rfloor} - 1) v + f_l(2v) + f_r(2v+1).$$$

      UPD: the base case for this induction is the linear function for a subtree of size 1: $$$f(v) = v$$$.
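      A compact sketch of this recursion (assuming the modulus is 998244353 and that the input is a number of test cases followed by one $$$n$$$ per test; a sketch rather than a reference implementation). The memo is keyed on subtree size, and only a logarithmic number of distinct sizes appear.

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      const long long MOD = 998244353;

      long long pw(long long b, long long e) {                // fast modular exponentiation
          long long r = 1; b %= MOD;
          for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
          return r;
      }

      map<long long, pair<long long, long long>> memo;        // size -> (k, b) with f(v) = k*v + b

      // f describes the sum of LCA labels over all non-empty leaf subsets of a subtree
      // with n leaves, as a linear function of the root's label v.
      pair<long long, long long> solve(long long n) {
          if (n == 1) return {1, 0};                          // single leaf: only subset {leaf}, LCA label = v
          auto it = memo.find(n);
          if (it != memo.end()) return it->second;
          long long L = (n + 1) / 2, R = n / 2;               // leaf counts of the left and right sons
          auto [kl, bl] = solve(L);
          auto [kr, br] = solve(R);
          long long both = (pw(2, L) - 1) * (pw(2, R) - 1) % MOD;  // subsets hitting both sons: LCA is v itself
          // f(v) = both * v + f_l(2v) + f_r(2v + 1)
          long long k = (both + 2 * kl + 2 * kr) % MOD;
          long long b = (bl + br + kr) % MOD;
          return memo[n] = {k, b};
      }

      int main() {
          int t; cin >> t;
          while (t--) {
              long long n; cin >> n;
              auto [k, b] = solve(n);                         // the root has label 1
              cout << (k + b) % MOD << '\n';
          }
      }
      ```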

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone help me check which test case my code is failing on for problem C :( 237528783

  • »
    »
    12 months ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    In the last for loop, you have to iterate over lar.size(), not n.

»
12 months ago, # |
  Vote: I like it +4 Vote: I do not like it

The author's solution for F is too long; it seems it could have been written more simply. https://mirror.codeforces.com/contest/1905/submission/237540164

»
12 months ago, # |
  Vote: I like it +2 Vote: I do not like it

Problem B is from graphs :p, though not related to graph algorithms :)

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone tell me the intuition for D?

  • »
    »
    12 months ago, # ^ |
    Rev. 2   Vote: I like it +42 Vote: I do not like it

    I might have had a different intuition than the editorial, but I can try. For simplicity, let $$$pm(i)$$$ denote $$$\text{mex}(\{p_1, p_2, ..., p_i\})$$$, that is, the mex of the prefix of $$$p$$$ of length $$$i$$$.

    Let's assume $$$p_n=0$$$. What's the answer? $$$n$$$, because every prefix mex will be equal to $$$0$$$ except for the prefix containing the entire permutation.
    Now, what happens when $$$p_{n-1}=0$$$?
    If you try a few cases you will see that the answer in this case is $$$p_n + n$$$. If $$$p_n=1$$$ then the answer must be $$$1 + n$$$: because $$$pm(i)=0$$$ for all $$$i<n-1$$$, and because $$$1$$$ is excluded from $$$p_1, ..., p_{n-1}$$$, we get $$$pm(n-1)=1$$$, and $$$pm(n)=n$$$ as usual.
    We can see that the mex is $$$0$$$ until we "hit" the index $$$n-1$$$ (where $$$0$$$ sits). When that happens, the mex becomes equal to $$$p_n$$$, because that is the only remaining number that is excluded.

    Using this information, we can build a solution where we consider the positions where $$$0$$$ can be, in decreasing order. In other words, we consider the cyclic shifts in this order (I assume WLOG that $$$p_n=0$$$):
    $$$[p_1, ..., 0]$$$
    $$$[p_2, ..., 0, p_1]$$$
    $$$[p_3, ..., 0, p_1, p_2]$$$
    And so on. The nice thing about doing it this way is that we know that the mex is $$$0$$$ for all indices before the index of the number $$$0$$$ as explained above. This way we can think of the operation not as a cyclic shift, but as appending numbers to the end of our list.
    So our operations really look like this:
    $$$[0]$$$
    $$$[0, p_1]$$$
    $$$[0, p_1, p_2]$$$
    $$$[0, p_1, p_2, p_3]$$$
    When appending a new number, we need to consider how it can change our answer. It can be helpful to think about some cases.
    If we ever append $$$1$$$ to the end, we know that all mexes will be $$$1$$$ except the last one, which will be $$$n$$$ (assuming our simplified view of 'appending to the list'). If we ever append a $$$2$$$ after this, we know we won't change any of the mexes before the index of $$$1$$$, but between the index of $$$1$$$ and the index of $$$2$$$ the mex will be $$$2$$$.
    If you try this for some different cases on paper, it shouldn't be hard to see that what we need to do when appending a new number, say $$$p_i$$$, is find the index $$$j$$$ of the last number that is smaller than $$$p_i$$$. I.e. we must find the largest $$$j$$$ in our list so far such that $$$p_j<p_i$$$, because we will not affect any mexes up to $$$j$$$. All mexes between $$$j$$$ and $$$i$$$ will then be equal to our new value $$$p_i$$$, because by construction $$$p_i$$$ is smaller than all numbers in this range. Thus, we will update our answer with $$$(i - j) \cdot p_i + \text{whatever answer we got when appending }p_j$$$. Our final answer will be the maximum of all of the $$$n$$$ answers from each 'shift'.
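    Here is a short illustrative sketch of that idea (assuming the usual input format of several test cases, each with $$$n$$$ and a permutation of $$$0..n-1$$$; a sketch, not a reference implementation): rotate the permutation so that $$$0$$$ is last, then 'append' the remaining elements one by one while a monotonic stack of (value, width) pairs maintains the suffix minima and the running sum of mexes.

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        ios::sync_with_stdio(false);
        cin.tie(nullptr);
        int t; cin >> t;
        while (t--) {
            int n; cin >> n;
            vector<int> p(n);
            int zero = 0;
            for (int i = 0; i < n; i++) { cin >> p[i]; if (p[i] == 0) zero = i; }
            rotate(p.begin(), p.begin() + (zero + 1) % n, p.end());  // now p[n-1] == 0; this shift costs exactly n
            long long sum = 0, best = n;                   // sum of mexes excluding the final +n
            vector<pair<long long, long long>> st;         // (mex value, number of prefixes currently having it)
            for (int i = 0; i + 1 < n; i++) {              // conceptually append p[i] after the 0
                long long width = 1;
                while (!st.empty() && st.back().first > p[i]) {   // these prefixes' mex drops to p[i]
                    sum -= st.back().first * st.back().second;
                    width += st.back().second;
                    st.pop_back();
                }
                st.push_back({p[i], width});
                sum += (long long)p[i] * width;
                best = max(best, sum + n);                 // +n for the mex of the whole array
            }
            cout << best << '\n';
        }
    }
    ```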

    • »
      »
      »
      12 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      How do you find the last element smaller than $$$p_i$$$? Edit: Oh, that's why VectorViking has a monotonic stack. We read the array of $$$p_i$$$ from left to right, and at every $$$p_i$$$, we pop all elements bigger than $$$p_i$$$ before pushing $$$p_i$$$. Thank you very much!

      • »
        »
        »
        »
        12 months ago, # ^ |
          Vote: I like it +1 Vote: I do not like it

        And in general, if you want to keep track of nextLarger, nextSmaller, prevLarger, or prevSmaller, a monotonic stack is the tool to use.

    • »
      »
      »
      12 months ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      It's actually the same as the editorial's solution, just with a different order of calculating the answer. But your way is easier to understand, and then we can clearly see that each number is pushed onto and popped from the stack at most once, so the time complexity is O(n).

    • »
      »
      »
      12 months ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      I guess there are some typos after the sentence

      will then be equal to our new value

      Those $$$p_j$$$ here should be $$$p_i$$$

    • »
      »
      »
      12 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      I tried implementing this solution but I don't know where my error is. I get the following verdict: 45478th numbers differ — expected: '31', found: '0', even though at the end of my code I output something like $$$max+n$$$, where $$$max \geq 0$$$ always. Here is my code

    • »
      »
      »
      12 months ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      Pretty good explanation, and the code was clean and easy to understand. I got the idea clearly. Thanks bro!!

    • »
      »
      »
      4 months ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      Do you have any intuition about starting with the zero at the head? (0, p1, p2, p3, ...)

      My intuition was to put the zero at the head, and I thought about it for a day. But I could not find a solution, like the monotonic stack, that can update the sum in O(1).

      EDIT: Or how did you come up with the intuition to put the zero at the end and not at the front?

      EDIT2: I probably found out why the zero can only be put on the right.

      Consider the zero moving right, like 0 1 2 3 -> 3 0 1 2. I can use two pointers from left to right to find out where the integers start to add. However, the contribution of the numbers on the right-hand side is not fixed; the numbers in the sequence may still become bigger. To detect this, I need O(N). For example:

      number : 0 1 4 5 2 3 6 7 -> 7 6 3 2 0 1 4 5

      sum : ___1 2 2 2 3 6 7 8 -> 0 0 0 0 1 4 5 8

      On the other hand, letting the zero start from the right avoids this problem, because when I move some number from the left to the right, we can use a monotonic stack to keep track of how the numbers change.

      The difference is whether you need to traverse the array to check the current numbers or not.

  • »
    »
    12 months ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    Let's solve the following example: 2 3 6 7 0 1 4 5

    We first place 0 at the end: 1 4 5 2 3 6 7 0

    What's the cost? It's 8 ( =n ), because everything before 0 adds +0, and at the end there is always +n (we used all numbers from 0 to n-1).

    Now we rotate the sequence to the left.

    • 1st iteration: 4 5 2 3 6 7 0 1 (cost is 1 + 8)
    • 2nd iteration: 5 2 3 6 7 0 1 4 (cost is 1 + 4 + 8)
    • 3rd iteration: 2 3 6 7 0 1 4 5 (cost is 1 + 4 + 5 + 8)
    • 4th iteration: 3 6 7 0 1 4 5 2 (cost is 1 + 3 * 2 + 8)

    Why is that? Because now 2 is at the end and we can use it for places where we used 4 and 5 (they are bigger). It's enough to maintain a monotonic stack while iterating, so the solution is linear.

    You can see my solution at: 237547878. Hope this helps!

    • »
      »
      »
      12 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Why is the cost of the 4th iteration not $$$1 + 2 \times 2 + 8$$$?

      • »
        »
        »
        »
        12 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Because the cost is computed as the sum of the prefix mexes; for each iteration:

        • 0th: 0 0 0 0 0 0 0 8
        • 1st: 0 0 0 0 0 0 1 8
        • 2nd: 0 0 0 0 0 1 4 8
        • 3rd: 0 0 0 0 1 4 5 8
        • 4th: 0 0 0 1 2 2 2 8

        As you can see, when we "rotated" 2 from the start of the sequence to the end, it became the new mex (smallest excluded non-negative integer) in place of 4 and 5. The third 2 comes from the rotation itself.

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone please explain D? I'm not able to understand the tutorial. Thanks!

  • »
    »
    12 months ago, # ^ |
    Rev. 3   Vote: I like it +12 Vote: I do not like it

    These were my ideas during the contest. I hope it helps!

    Idea 1
    Idea 2
    Idea 3
    Idea 4

    Source code: https://mirror.codeforces.com/contest/1905/submission/237526381

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In the problem B solution, I think there should be ceil(K/2) instead of ceil((K+1)/2).

  • »
    »
    12 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    does it matter?

  • »
    »
    12 months ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    I've written about floor((X+1)/2), which is practically ceil(X/2) :)
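    To spell the identity out: if $$$X = 2m$$$ then $$$\left\lfloor\frac{X+1}{2}\right\rfloor = m = \left\lceil\frac{X}{2}\right\rceil$$$, and if $$$X = 2m+1$$$ then $$$\left\lfloor\frac{X+1}{2}\right\rfloor = m+1 = \left\lceil\frac{X}{2}\right\rceil$$$, so the two expressions agree for every non-negative integer $$$X$$$.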

    • »
      »
      »
      12 months ago, # ^ |
        Vote: I like it +17 Vote: I do not like it

      If you want to write the floor function properly, you can use $$$\left\lfloor\frac{K+1}{2}\right\rfloor$$$ (\left\lfloor\frac{K+1}{2}\right\rfloor) instead of $$$[\frac{K+1}{2}]$$$.

»
12 months ago, # |
Rev. 3   Vote: I like it +3 Vote: I do not like it

This is a proof that the worst-case running time for D is O(n).

I will use the accounting method (the second method of amortized analysis).

The operations of the algorithm to solve problem D:

  • merge frequencies into one => cost = n (by removing all frequencies of elements greater than v[i], then incrementing the frequency of v[i] by them)
  • increment the frequency of v[i] => cost = 1
  • decrement the frequency of v[i] => cost = 1

So, the upper bound of the algorithm is O(n^2). It is correct, but not tight.

Note two important things:

1. We have only n operations.

2. We cannot remove the frequency of any element unless it was incremented before.

Can't we just make the cost of an increment 2 instead of 1? One unit pays for the increment itself, and one is stored as credit to pay for the future remove operation. Then we can take the cost of the merge operation to be 0 and the cost of the increment operation to be 2.

Hence, the total cost is $$$\leq \sum_{i=1}^{n} 2 = 2n$$$, which is O(n).

»
12 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can someone please see why my code for problem C fails on the 1043rd number of test case 2? 237554366

  • »
    »
    12 months ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    I think the following line is causing an issue

    Spoiler
  • »
    »
    12 months ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Found a test case that your solution fails on

    Test case
»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In C, I don't know what s = '$' + s is for, and what is the name of the topic I should study for this?

  • »
    »
    12 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It's just so the loop goes from 1 to n; if you don't have that, it starts from 0 (prepending a dummy character makes the string 1-indexed).

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Did you know there is a good problem about MEX on Luogu, too?
Link

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Hi! Friends :)

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I think problem F is easier than D for someone like me :(

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Video editorial for problems A, B, C, D.

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain the editorial solution of E?

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

"Thus, we can easily check if the string is sortable". How that's literally the point of an editorial plus for number of operations you're not explaining why which kind of defeats the whole purpose of an editorial.

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there a simpler O(n) solution for F?

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Here is an $$$O(\log N)$$$ solution for problem E.

We still focus on the fact that for each depth there are at most 2 different interval lengths; let's assume they are $$$(2k,2k+1)$$$, which become $$$(k, k+1)$$$ one layer deeper. The case $$$(2k-1, 2k)$$$ is handled in a similar way.

We can count the number and the sum of ids of both kinds of intervals, e.g. there are $$$cnt_0$$$ intervals of length $$$2k$$$, and the sum of their ids is $$$sum_0$$$. The same for $$$cnt_1, sum_1$$$ with intervals of length $$$2k+1$$$.
If we already know these values of $$$(2k, 2k+1)$$$, we can calculate those for $$$(k, k+1)$$$.

Then, we count the number of leaves of a segment tree whose root interval has length $$$k$$$ or $$$k+1$$$, e.g. there are $$$lf_0$$$ leaves in a segment tree with a length-$$$k$$$ root interval, and $$$lf_1$$$ for length $$$k+1$$$.
We can calculate these values for $$$(2k, 2k+1)$$$ from those for $$$(k, k+1)$$$.

At last, the contribution of this depth is $$$sum_0 \cdot (2^{lf_0}-1)(2^{lf_0}-1) + sum_1 \cdot (2^{lf_0}-1)(2^{lf_1}-1)$$$.

The time complexity is $$$O(\log N)$$$. You can see my submission for more details.

»
12 months ago, # |
  Vote: I like it +8 Vote: I do not like it

I solved D with a treap.

»
12 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can someone please help me figure out the error for C in my submission 237655544? It fails on the 2124th test case, test 3.

  • »
    »
    12 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    This is so weird. I rewrote your code in Python 3 and it passed. 237667769

    Update:

    Found the issue
»
12 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can anyone tell me the intuition for E?

I'm not able to understand the editorial.

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Will using a set instead of a deque in D exceed the time limit? I used a set to maintain which numbers present in the current prefix mex array are greater than p1, and with a time complexity of O(n log n) my code is giving TLE on submission:

https://mirror.codeforces.com/contest/1905/submission/237714597

»
12 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

I solved F without a segment tree; here is my solution:

1) There are only 2 * n possible good swaps, the same as in the normal solution: n swaps of type (i, p_i), and n swaps that can actually make some other indices good.

2) We calculate the answer for the first type in O(n) time by precomputing, for every i, a boolean array saying whether all elements before the i-th index are smaller than i. So when we swap a[i] and a[a[i]], we "only" need to check whether they become good now (for p_i = i we just check whether it is good in our boolean array, because if it is, it will be added to the answer after the swap), but we also need to check whether p_i = p_{p_i} (if we have the situation 1 2 3 5 4, after swapping 5 and 4 we also put 5 in a good spot) and similarly check whether that increases the answer. Yes, after doing this operation some more indices might become good, but that is checked in the second case :)

3) Second case: which indices can become good after swapping some 2 elements? Only those that have one inverted pair to the left and right of them. We can find, for every element, whether it can become good, and those 2 positions, in O(n log n). After doing that, we just sort those pairs of indices, and for every pair (i, j) that makes x indices good we add x to the answer computed without swaps. Note: in this part we also check whether (i, j) also puts i or j in its good position, and do the same checks as in the previous part.

Solution: 237847430

»
12 months ago, # |
Rev. 3   Vote: I like it +8 Vote: I do not like it

Very late to this, but I also found another solution for problem F that avoids segment trees or sets: 238072983

If we think of the permutation $$$p$$$ as a function $$$(i\mapsto p_i)$$$, then an index $$$x$$$ is good if and only if $$$p_x=x$$$ and no arc $$$i\rightarrow p_i$$$ jumps across it (where having the arc jump over $$$x$$$ means $$$i<x<p_i$$$ or $$$p_i<x<i$$$). Then the only helpful swaps are those that split the bounding range of a cycle into two cycles with disjoint bounding ranges (all other swaps merge two cycles into a bigger cycle, which we do not want). The set of new good indices that will result from splitting the cycle are the set of indices such that they are between the new resulting bounding ranges, they satisfy $$$p_x=x$$$, and they have no more arcs jumping over them after doing the swap (i.e. they had exactly 2 arcs jumping over them before doing the swap).

We can create an array where each index $$$i$$$ stores the number of arcs jumping over it, by simulating many range addition updates (this can be done efficiently by first constructing the array of partial differences, then calculating prefix sums). Then we create another array that keeps track of which indices currently satisfy $$$p_x=x$$$ and also have exactly 2 arcs jumping over them, which we perform range sum queries on (and can also be computed efficiently by using prefix sums). Then iterate over each cycle, over each adjacent pair of the cycle's elements in left-to-right order.

The only special case is when $$$p_x=x$$$ for all $$$x$$$, in which case the answer to the original problem is $$$n-2$$$.

Runtime is $$$O(n\log n)$$$ because we have to sort indices of each cycle, although this could easily be modified to run in $$$O(n)$$$ time by marking each index $$$1\dots n$$$ with a number identifying which cycle it is in, then scanning left-to-right.
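As a small illustration of the first step, here is a self-contained sketch (hypothetical helper names, not the linked submission) of counting, with a difference array and prefix sums, how many arcs $$$i \rightarrow p_i$$$ jump over each index:

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each index x, count the arcs i -> p[i] with i < x < p[i] or p[i] < x < i (0-indexed here).
vector<int> arcsOver(const vector<int>& p) {
    int n = p.size();
    vector<int> diff(n + 1, 0);
    for (int i = 0; i < n; i++) {
        int l = min(i, p[i]), r = max(i, p[i]);
        if (l + 1 <= r - 1) {              // the arc strictly covers indices l+1 .. r-1
            diff[l + 1]++;
            diff[r]--;
        }
    }
    vector<int> cnt(n, 0);
    for (int i = 0, cur = 0; i < n; i++) { cur += diff[i]; cnt[i] = cur; }
    return cnt;
}

int main() {
    vector<int> p = {2, 1, 0, 4, 3};       // sample permutation, 0-indexed
    for (int c : arcsOver(p)) cout << c << ' ';   // prints: 0 2 0 0 0
    cout << '\n';
}
```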

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Hey guys, can someone please help me out here? In problem D, why are we running the last for loop in the code n-1 times and not n times? If we run it n times, we just end up back at the original configuration, so the answer should not change. Just curious to know why.

  • »
    »
    12 months ago, # ^ |
    Rev. 4   Vote: I like it 0 Vote: I do not like it

    If you have determined that the solution has stabilized, meaning that further iterations do not significantly change the result, there is no need to continue the loop for the full n iterations. This phenomenon is often referred to as convergence, and it is common in iterative algorithms. Stopping the iterations early can lead to faster execution times and more efficient resource utilization.

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Why am I getting TLE on this submission 238664593 but not on this one 238664527?

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

1905E

E can be considered in another way: first observe that in $$$O(\log N)$$$ time we can keep reducing the segment tree size pair (the size is that of the range) $$$(k, k + 1)$$$ to the size pair $$$(k / 2, k / 2 + 1)$$$, then define $$$F_k(x)$$$ as the sum for a segment tree whose range has size $$$k$$$ and whose root node is $$$x$$$.

Then a transition can be: $$$F_k(x) = (2^{lsz} - 1) (2^{rsz} - 1) \cdot x + F_{lsz}(2x) + F_{rsz}(2x + 1)$$$, where $$$lsz$$$ and $$$rsz$$$ are the range sizes of the two subtrees. Then, notice that $$$F_k(x)$$$ will be an affine transformation in $$$\mathbb Z \to \mathbb Z$$$ (or $$$\mathbb R \to \mathbb R$$$). Considering that $$$2x$$$ and $$$2x+1$$$ are both affine, and that affine transformations are closed under addition, multiplication by a scalar (they form a vector space), and also under composition ($$$C(Ax + b) + d = ACx + Cb + d$$$), the result $$$F_k(x)$$$ is still an affine transformation. So by induction $$$F_k(x)$$$ is affine for all $$$k \in \mathbb N$$$.

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I think the answer for problem A for n, m >= 2 will always be 2 for the minimum number of cities.

Then why is our answer the maximum of n and m?

»
12 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem A reminds me of the folklore problem grid infection (try it if you haven't seen it before!) which is very nice.

Given $$$n \times n$$$ grid, some squares are initially infected. In each subsequent step in time, any square which has two or more infected neighbors also becomes infected. Squares that are infected stay infected forever. What's the minimum number of squares that need to be infected at the start for the infection to eventually spread to every square in the $$$n \times n$$$ grid?

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I got TLE on problem D using a lazy segment tree in O(n log n), maybe due to the big constant factor. Then I used a monotonic stack to solve problem D in O(n) time and finally got AC. It was such a tough experience, but I enjoyed it hahahaha! :(

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I do not understand the test case in problem C: n = 15, czddeneeeemigec, output: 6. Please help me.

»
5 months ago, # |
  Vote: I like it 0 Vote: I do not like it

An easy implementation of problem C is given here: https://mirror.codeforces.com/contest/1905/submission/268933428 ; it can also be improved.

»
4 months ago, # |
Rev. 8   Vote: I like it 0 Vote: I do not like it

Solution: https://mirror.codeforces.com/contest/1905/submission/275880393

F can be solved quite simply and without a segment tree. The condition for an index being good is equivalent to

  1. p[1...X] is a permutation of elements 1...X
  2. p[x]=x.

Therefore we can iterate through all prefixes of p, noticing that for each prefix there is either zero or one swap that improves the answer.

These can be found through casework:

  • Case 1: the index x is already good (zero swaps will improve the answer).
  • Case 2: p[x]=x and p[1...x] is missing exactly one element -> in this case notice there must be exactly one element greater than x in the prefix; the swap is then to exchange it for the missing element.
  • Case 3: p[x]>x and we are missing only x -> swap p[x] and x.
  • Case 4: otherwise -> no swap will improve the answer.

Our answer is the most improving swap (makes the most elements good) + the number of indices already good. If all indices are good, it can be shown that the optimal answer is to swap the two last elements.

Wait! What if a swap makes an index no longer good? We can show that this will never happen. Notice that all our swaps exchange a larger element that occurs earlier in the array with a smaller one that occurs later. Using this fact, let us do casework (the fact alone is sufficient, but this is easier to understand):

  • Case 1: ...a...b...x -> ...b...a...x : obviously in this swap x will remain good, because the order of the elements before x has no effect on whether they form a permutation of 1...x.

  • Case 2: ...x...a...b -> ...x...b...a : our condition does not care about the elements after x nor their order.

  • Case 3: ...a...x...b -> ...b...x...a : b<a, thus x cannot have been good: if x were good, then a (before x) would satisfy a<=x and b (after x) would satisfy b>x, contradicting b<a.

  • Case 4: ...a...x... -> ...x...a... : x<a, therefore x could not have been good in the first place (an element larger than x appears before position x), a contradiction.

  • Case 5: ...x...a... -> ...a...x... : a<x, thus x cannot have been good (an element smaller than x appears after position x).

Thus we don't need to worry about invalidating good indices when making swaps that improve other indices. (Of course, in the case of 1, 2, 3, ..., N-1, N there are no improving swaps, so our best answer is N-2, because each x can only be good in one location; swapping the last two elements gives us that best case.)

This can be implemented by sweeping left to right with three sets (missing elements, all elements in the prefix, a multiset of all good swaps)

Incidentally, this also proves there are at most N swaps.
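A tiny sketch of just the 'already good' count used above (illustrative code, not the linked solution): since p is a permutation, p[1...x] is a permutation of 1...x exactly when the prefix maximum equals x.

```cpp
#include <bits/stdc++.h>
using namespace std;

// p[1..n] is a 1-indexed permutation; index x is good iff p[x] == x and max(p[1..x]) == x.
int countGood(const vector<int>& p) {
    int n = (int)p.size() - 1, prefMax = 0, good = 0;
    for (int x = 1; x <= n; x++) {
        prefMax = max(prefMax, p[x]);
        if (p[x] == x && prefMax == x) good++;   // the prefix is exactly {1, ..., x} and x is a fixed point
    }
    return good;
}

int main() {
    vector<int> p = {0, 2, 1, 3, 5, 4};          // p[0] unused; permutation 2 1 3 5 4
    cout << countGood(p) << '\n';                // prints 1 (only x = 3 is good)
}
```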

»
4 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem C was good; it required a great depth of thought to avoid wrong submissions.

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I solved it with the solution given by Apachee, but I don't understand the editorial solution. Can someone explain?

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem C, is there any method to do it without finding the lexicographically largest subsequence?