
ko_osaga's blog

By ko_osaga, history, 5 years ago, In English

Update 2019/11/07. The contest is now in Gym. Thanks to MikeMirzayanov for uploading it. Have fun!

Hello! I'm happy to announce XX Open Cup. Grand Prix of Korea.

List of relevant previous contests:

Enjoy!

  • Vote: I like it
  • +161
  • Vote: I do not like it

»
5 years ago, # |
  Vote: I like it -8 Vote: I do not like it

I can't wait to solve them

»
5 years ago, # |
  Vote: I like it +46 Vote: I do not like it

Best sentence of the problem-set: If you know about “cardinals,” please assume that “infinite” refers only to “countably infinite.” If you don’t know about it, then you don’t have to worry.

»
5 years ago, # |
  Vote: I like it +16 Vote: I do not like it

How to solve I?

  • »
    »
    5 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it
    • »
      »
      »
      5 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      We almost removed I due to that issue (apparently it's well known in China), but I just decided not to. Hope you enjoyed our contest anyway.

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Yes, other problems were pretty nice.

      • »
        »
        »
        »
        5 years ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        That's a beautiful problem, but I've seen it so many times already... It's well known in Poland as well, and I guess everywhere else too.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          I initially made an easier problem, but .o. later discovered that it was solvable for a general graph, hence the current form.

          I don't think it was known in Korea (at least no testers knew), so it was nice to use for local contests. For OpenCups, I agree that it's not enough to please top-level contestants since the topic is pretty general.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
            Vote: I like it +8 Vote: I do not like it

          It was also in the Jagiellonian U Contest at the winter Petrozavodsk camp 2013 (Problem H, Hunt).

  • »
    »
    5 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I thought it would be something like this: find the shortest distances between all pairs of vertices. Let the maximum distance $$$d$$$ be between $$$u$$$ and $$$v$$$; these will be the two ends of the diameter. After that, we find a path of length $$$d$$$ between $$$u$$$ and $$$v$$$, and then add the rest of the vertices greedily so that the diameter remains unchanged. Can we approach the solution like this?

»
5 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

So, is it possible to solve J faster than $$$O(n \log n)$$$ with a treap? Our solution is still getting TL on test 13.

  • »
    »
    5 years ago, # ^ |
      Vote: I like it +23 Vote: I do not like it

    You can maintain $$$dp_i - dp_{i-1}$$$ in a heap.

    • »
      »
      »
      5 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you explain, please? I don't think we can replace the treap in our solution without getting $$$O(n \log^2 n)$$$.

  • »
    »
    5 years ago, # ^ |
    Rev. 3   Vote: I like it +34 Vote: I do not like it

    You can use std::priority_queue instead of a treap. I claim that the complexity would still be $$$O(n \log n)$$$. Indeed, unlike the more common situation, where we merge things that have size $$$O(\text{subtree size})$$$, here we merge things that have size $$$O(\text{subtree depth})$$$. It is quite well known that we will do only a linear number of operations in that case (and each of them takes $$$O(\log n)$$$ time).

    The solution is not particularly quick even with a heap (the implementation below took ~0.5s on Yandex.Contest), so it is unsurprising (though unfortunate) that using a treap may give TL.

    Code
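In case it helps to see the merging pattern concretely, here is a minimal, hypothetical C++ sketch of the smaller-into-larger merge of two std::priority_queue objects (this is not the hidden code above; the values in main() are made up):

```cpp
// Smaller-into-larger merge of two max-heaps. When the merged heaps have
// size O(subtree depth), the total number of pushes over a whole tree is
// linear, as claimed above.
#include <bits/stdc++.h>
using namespace std;

using Heap = priority_queue<long long>;

// Empties `from` into `to`, always moving the smaller heap into the larger one.
void merge_heaps(Heap& to, Heap& from) {
    if (to.size() < from.size()) swap(to, from);  // O(1): swaps the underlying containers
    while (!from.empty()) {
        to.push(from.top());
        from.pop();
    }
}

int main() {
    Heap a, b;
    for (long long x : {5, 1, 9}) a.push(x);
    for (long long x : {3, 7}) b.push(x);
    merge_heaps(a, b);
    while (!a.empty()) { cout << a.top() << ' '; a.pop(); }  // prints: 9 7 5 3 1
    cout << '\n';
}
```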

    • »
      »
      »
      5 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Oh, std::priority_queue is probably much faster than the nonsense I was doing with std::multiset.

      Our solution ran in 1.161s during the contest, but strangely runs twice as fast when I resubmit the same code now at ~0.5s.

»
5 years ago, # |
  Vote: I like it +8 Vote: I do not like it

How to solve I and J? We wrote a solution for J using heavy-light decomposition, but I think it has a simpler solution :( Can anyone explain?

  • »
    »
    5 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    How to solve J using heavy-light?

    • »
      »
      »
      5 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      I convinced myself that if we take some set of vertices for $$$k$$$, then for $$$k + 1$$$ we only need to add some vertices to that set. So we can solve the problem just for $$$k = 1$$$: after finding the set of vertices for $$$k = 1$$$, we delete the vertices of that set and repeat the procedure on the new tree.

      For a vertex $$$v$$$ we take the difference $$$dif[v] = val[v] - \sum_{to}{dp[to]}$$$, where $$$to$$$ ranges over the children of $$$v$$$, $$$dp[to]$$$ is the answer for the subtree of $$$to$$$ with $$$k = 1$$$, and $$$val[v]$$$ is the value given in the statement.

      Now, for $$$k = 1$$$, how do we find the vertices we need? A vertex $$$v$$$ is in the set if and only if $$$dp[v] = val[v]$$$, so we need $$$dif[v] \ge 0$$$ (we can find such vertices with a simple segment tree). After taking a vertex $$$v$$$ we delete it (set $$$dif[v]$$$ to $$$-\infty$$$ in the segment tree) and add $$$dif[v]$$$ to $$$dif[x]$$$ for every vertex $$$x$$$ on the path from the $$$root$$$ to $$$parent[v]$$$ (this update is done with heavy-light decomposition).

      P.S. We got WA on test 7 during the contest.
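For illustration only, a generic C++ sketch of the machinery described above (not the actual contest code): heavy-light decomposition with a segment tree that supports adding a value to $$$dif[x]$$$ for every $$$x$$$ on the root-to-$$$v$$$ path and finding the maximum $$$dif$$$ value. The tree and the $$$dif$$$ values in main() are made up.

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll NEG_INF = LLONG_MIN / 4;

int n, cur_pos = 0;
vector<vector<int>> g;
vector<int> par, heavy, head_, pos_, sz;

// Segment tree over HLD positions: lazy range add, maximum with its position.
vector<pair<ll,int>> mx;
vector<ll> lz;

void build(int node, int l, int r, const vector<ll>& base) {
    if (l == r) { mx[node] = {base[l], l}; return; }
    int m = (l + r) / 2;
    build(2*node, l, m, base); build(2*node+1, m+1, r, base);
    mx[node] = max(mx[2*node], mx[2*node+1]);
}
void push(int node) {
    for (int c : {2*node, 2*node+1}) { mx[c].first += lz[node]; lz[c] += lz[node]; }
    lz[node] = 0;
}
void add(int node, int l, int r, int ql, int qr, ll v) {
    if (qr < l || r < ql) return;
    if (ql <= l && r <= qr) { mx[node].first += v; lz[node] += v; return; }
    push(node);
    int m = (l + r) / 2;
    add(2*node, l, m, ql, qr, v); add(2*node+1, m+1, r, ql, qr, v);
    mx[node] = max(mx[2*node], mx[2*node+1]);
}

int dfs_size(int v) {
    sz[v] = 1;
    for (int to : g[v]) if (to != par[v]) {
        par[to] = v;
        sz[v] += dfs_size(to);
        if (heavy[v] == -1 || sz[to] > sz[heavy[v]]) heavy[v] = to;
    }
    return sz[v];
}
void dfs_hld(int v, int h) {
    head_[v] = h; pos_[v] = cur_pos++;
    if (heavy[v] != -1) dfs_hld(heavy[v], h);          // heavy chain gets contiguous positions
    for (int to : g[v]) if (to != par[v] && to != heavy[v]) dfs_hld(to, to);
}

// Add `delta` to every vertex on the path from the root to v.
void path_to_root_add(int v, ll delta) {
    for (; v != -1; v = par[head_[v]])
        add(1, 0, n - 1, pos_[head_[v]], pos_[v], delta);
}

int main() {
    // Made-up tree: 0 is the root, edges 0-1, 0-2, 1-3.
    n = 4;
    g.assign(n, {}); par.assign(n, -1); heavy.assign(n, -1);
    head_.assign(n, 0); pos_.assign(n, 0); sz.assign(n, 0);
    vector<pair<int,int>> edges = {{0,1},{0,2},{1,3}};
    for (auto [a,b] : edges) { g[a].push_back(b); g[b].push_back(a); }
    dfs_size(0); dfs_hld(0, 0);

    vector<ll> dif = {1, -2, 4, 3};                    // made-up dif[v] values
    vector<ll> base(n);
    for (int v = 0; v < n; v++) base[pos_[v]] = dif[v];
    mx.assign(4 * n, {NEG_INF, -1}); lz.assign(4 * n, 0);
    build(1, 0, n - 1, base);

    path_to_root_add(3, 5);                            // dif[x] += 5 on the path 0-1-3
    cout << mx[1].first << '\n';                       // global maximum dif, now 8 (vertex 3)
}
```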

»
5 years ago, # |
  Vote: I like it +97 Vote: I do not like it

Congratulations to Polish Mafia (mnbvmar, Radewoosh, Marcin_smu) on taking first place by a huge margin, and on coming very close to solving everything!

Here is the solution.

I hope you enjoyed the contest!

  • »
    »
    5 years ago, # ^ |
      Vote: I like it -12 Vote: I do not like it

    I think the time limit for problem F was very tight; we did the segment-tree variant, and it was impossible for us to squeeze it in under the limit.

    • »
      »
      »
      5 years ago, # ^ |
        Vote: I like it +57 Vote: I do not like it

      I got TL on test 46, but it wasn't related to segment-tree speed. Try checking a max test with $$$O(n)$$$ add queries with $$$k = 0$$$ and $$$O(n)$$$ queries of the third type with $$$x = 0$$$.

  • »
    »
    5 years ago, # ^ |
      Vote: I like it +41 Vote: I do not like it

    I really enjoyed the problemset, thanks for it!

    • »
      »
      »
      5 years ago, # ^ |
        Vote: I like it +10 Vote: I do not like it

      What is your solution for K? I'm curious because of your submission time.

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Centroid decomposition on the first tree and subtree compression and two closest ones on the second tree.

  • »
    »
    5 years ago, # ^ |
      Vote: I like it +38 Vote: I do not like it

    In K I also built a centroid decomposition of one tree and reduced the problem to $$$\min_{u \neq v}(a_v + d(v, u) + a_u)$$$ on the compressed second tree. But for some reason I thought that the fastest way to solve that was also centroid decomposition, so I wrote a 2-dimensional centroid decomposition. It's $$$O(n \log^2 n)$$$ with a huge constant, but it passed in 9.5 seconds.
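As a side note, the reduced quantity $$$\min_{u \neq v}(a_v + d(v, u) + a_u)$$$ on a weighted tree can also be computed with a single DFS instead of another centroid decomposition: for each vertex keep the best $$$a_u + d(u, \cdot)$$$ over each child subtree and combine two different subtrees at their meeting vertex. A hedged sketch (everything in main() is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll INF = LLONG_MAX / 4;

int n;
vector<vector<pair<int, ll>>> adj;   // (neighbor, edge weight)
vector<ll> a;                        // vertex values a[v]
ll answer = INF;

// Returns min over u in the subtree of v of (a_u + dist(u, v)).
ll dfs(int v, int parent) {
    ll best = a[v];                  // the pair may use v itself as one endpoint
    for (auto [to, w] : adj[v]) {
        if (to == parent) continue;
        ll cand = dfs(to, v) + w;    // best endpoint inside this child subtree
        // Combine with the best endpoint seen so far (always a different vertex).
        answer = min(answer, best + cand);
        best = min(best, cand);
    }
    return best;
}

int main() {
    // Tiny made-up example: a path 0 - 1 - 2 with edge weights 2 and 3.
    n = 3;
    adj.assign(n, {});
    a = {4, 1, 0};
    auto add_edge = [&](int u, int v, ll w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    add_edge(0, 1, 2);
    add_edge(1, 2, 3);
    dfs(0, -1);
    cout << answer << '\n';          // best pair is (1, 2): 1 + 3 + 0 = 4
}
```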

    • »
      »
      »
      5 years ago, # ^ |
      Rev. 2   Vote: I like it +10 Vote: I do not like it

      We wrote an $$$O(n \log^2 n)$$$ solution with centroids, but wasted a lot of time fitting it into the TL. Could anybody who didn't have such issues post their code?

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: I like it +20 Vote: I do not like it

    An alternative solution for D:

    Consider only the positions of the 2s. If you move all 2s into their correct positions, then the 1s end up in their correct positions automatically.

    Let's assume the assignment of the 2s is fixed (a matching of starting positions to ending positions), and let $$$d_i = |\text{start position of the } i\text{-th } 2 - \text{end position of the } i\text{-th } 2|$$$. Then a lower bound on the cost is $$$\sum_i \left( \lfloor d_i / 2 \rfloor \cdot (C+1+1+2) + (d_i \bmod 2) \cdot (C+1+2) \right) + (\text{number of inversions in the assignment})$$$.

    And, surprisingly, this lower bound is always achievable (you can find at least one optimal move in each situation).

    So we can use a 2D DP for each parity. The time complexity is $$$O(N^2)$$$, and you can recover the actual order of operations by topological sort.
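For concreteness, here is a small C++ sketch of the lower-bound formula above for a fixed assignment. I'm assuming 'inversions in the assignment' means discordant start/end pairs of the matching; the values in main() are made up.

```cpp
// Lower bound for a fixed matching: sum_i ( floor(d_i/2)*(C+1+1+2)
// + (d_i mod 2)*(C+1+2) ) + inversions, with d_i = |start[i] - end[i]|.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

ll lower_bound_cost(ll C, const vector<ll>& start, const vector<ll>& end_) {
    int m = start.size();
    ll cost = 0;
    for (int i = 0; i < m; i++) {
        ll d = llabs(start[i] - end_[i]);
        cost += (d / 2) * (C + 1 + 1 + 2) + (d % 2) * (C + 1 + 2);
    }
    // Count inversions of the matching: discordant pairs (i, j), O(m^2) here for clarity.
    for (int i = 0; i < m; i++)
        for (int j = i + 1; j < m; j++)
            if ((start[i] < start[j]) != (end_[i] < end_[j])) cost++;
    return cost;
}

int main() {
    // Tiny made-up example: C = 2, two 2s, one inversion.
    cout << lower_bound_cost(2, {1, 4}, {5, 2}) << '\n';   // prints 19
}
```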

»
5 years ago, # |
  Vote: I like it +49 Vote: I do not like it

The contest was good, big thanks to the authors!

»
5 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by ko_osaga (previous revision, new revision, compare).

»
5 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by ko_osaga (previous revision, new revision, compare).