
By cuom1999, history, 5 years ago, In English

Today I read the problem statements of ICPC Asia Nakhon Pathom 2017. Then I found a very interesting (to me) problem but still haven't been able to solve it :(

You are given a graph with $$$n$$$ vertices and $$$m$$$ weighted edges. You have to handle $$$q$$$ queries online, each of the form: given two integers $$$L$$$ and $$$R$$$, what is the size of the largest connected component if we only consider edges with weights in the range $$$[L, R]$$$?

Here $$$n,m,q \leq 100000$$$.

I have never encountered a problem of this type, so any ideas, insights, or similar problems are very welcome. Thank you very much!


»
5 years ago, # |

Can be solved in $$$O(n^\frac{5}{3})$$$ using online Mo's algorithm.

  • »
    »
    5 years ago, # ^ |

    Can you elaborate? What is online Mo's algorithm? I thought Mo's needed all queries/updates beforehand (offline) so that we can sort them and process them in a special order.

    • »
      »
      »
      5 years ago, # ^ |

      I'll explain it with reference to this problem.

      Let's first assume that edge weights lie in $$$[1, M]$$$ (otherwise we can compress them). Then we can map each query $$$[L, R]$$$ to $$$[l, r]$$$, $$$l, r \in [1, M]$$$, using simple binary search. (Note that even if two edges have the same weight, they will be assigned different indices; binary search still works :-) )

      Now, let's divide the range $$$[1, M]$$$ into $$$\frac{M}{X}$$$ blocks of length $$$X$$$, and compute the answer for each range that starts at one block boundary and ends at another, i.e., for the ranges $$$[AX, BX - 1]$$$ for all $$$B > A$$$. For each such range we store two things: the connected components in compressed form (for each vertex, the component to which it belongs), and the maximum size of a connected component.

      So far, we have used $$$O((N + M) \cdot (\frac{M}{X})^2)$$$ memory and time.

      Now, to answer a query $$$[L, R]$$$, we choose the largest subsegment whose answer we already know ($$$[\lceil \frac{L}{X}\rceil \cdot X, \lfloor \frac{R + 1}{X} \rfloor \cdot X - 1]$$$), and then we only have to add the remaining edges, which lie within a range of at most $$$X$$$ on each side. Keeping track of the edges we added, we can do a BFS/DFS in just $$$O(X)$$$ instead of $$$O(N)$$$, and the answer is the maximum of the current answer and the answer of the chosen subsegment (to cover those components that we didn't visit in the BFS/DFS). The total time complexity is (simplifying with $$$M = Q = O(N)$$$)

      $$$O(\frac{N^3}{X^2} + N \cdot X)$$$

      whose minimum is $$$O(N^\frac{5}{3})$$$, achieved at $$$X = N^\frac{2}{3}$$$.
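
      A minimal C++ sketch of this scheme, assuming the queries have already been mapped to ranges of edge indices as described above; all names (`edges`, `comp`, `sizeOf`, `best`) are illustrative. Where the explanation does a BFS/DFS over the added edges, the sketch merges them with a small DSU keyed by component id, which gives the same $$$O(X)$$$ bound per query.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative globals: `edges` is sorted by weight, so a query [L, R]
// has already been mapped to a range [l, r] of edge indices (0-based).
struct Edge { int u, v; };
int N, M, X;                                   // X ~ N^(2/3)
vector<Edge> edges;
vector<vector<vector<int>>> comp;              // comp[a][b][v]: v's component on [aX, bX-1]
vector<vector<unordered_map<int,int>>> sizeOf; // sizeOf[a][b]: component id -> size
vector<vector<int>> best;                      // best[a][b]: largest component on [aX, bX-1]

// Storing comp for every block pair costs O(N * (M/X)^2) memory, as stated above.
void precompute() {
    int B = (M + X - 1) / X;
    comp.assign(B + 1, vector<vector<int>>(B + 1));
    sizeOf.assign(B + 1, vector<unordered_map<int,int>>(B + 1));
    best.assign(B + 1, vector<int>(B + 1, 1));
    for (int a = 0; a <= B; a++) {             // one DSU per left boundary,
        vector<int> p(N), s(N, 1);             // extended block by block
        iota(p.begin(), p.end(), 0);
        function<int(int)> find = [&](int x) { return p[x] == x ? x : p[x] = find(p[x]); };
        int cur = 1;
        for (int b = a + 1; b <= B; b++) {
            for (int i = (b - 1) * X; i < min(b * X, M); i++) {
                int ru = find(edges[i].u), rv = find(edges[i].v);
                if (ru != rv) { p[ru] = rv; s[rv] += s[ru]; cur = max(cur, s[rv]); }
            }
            comp[a][b].resize(N);
            for (int v = 0; v < N; v++) comp[a][b][v] = find(v);
            for (int v = 0; v < N; v++) if (find(v) == v) sizeOf[a][b][v] = s[v];
            best[a][b] = cur;
        }
    }
}

// Query over edge indices [l, r]: start from the precomputed core
// [aX, bX-1] and merge the O(X) leftover edges on both sides.
int query(int l, int r) {
    int a = (l + X - 1) / X, b = (r + 1) / X;
    bool hasCore = (a < b);                    // otherwise the range is short: brute force
    unordered_map<int,int> p, s;               // tiny DSU over component ids
    function<int(int)> find = [&](int x) -> int {
        if (!p.count(x)) { p[x] = x; s[x] = hasCore ? sizeOf[a][b].at(x) : 1; return x; }
        if (p[x] == x) return x;
        int root = find(p[x]);
        p[x] = root;                           // path compression, no rehash hazard here
        return root;
    };
    int ans = hasCore ? best[a][b] : 1;
    auto add = [&](int i) {
        int u = hasCore ? comp[a][b][edges[i].u] : edges[i].u;
        int v = hasCore ? comp[a][b][edges[i].v] : edges[i].v;
        int ru = find(u), rv = find(v);
        if (ru != rv) { p[ru] = rv; s[rv] += s[ru]; ans = max(ans, s[rv]); }
    };
    for (int i = l; i < (hasCore ? a * X : r + 1); i++) add(i);
    if (hasCore) for (int i = b * X; i <= r; i++) add(i);
    return ans;
}
```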

  • »
    »
    5 years ago, # ^ |

    Can be solved in O(n^(3/2) * log n) using a mix of Mo's algorithm + sqrt decomposition + some kind of persistent DSU (it doesn't need to be fully persistent). For each "start" of a bucket, we maintain some way of knowing the DSU from that start to any point to the right. Then, when we get a query, we can simply use the DSU starting from the second "bucket" until the end and add O(size of bucket) edges.

    I've never seen O(n^(5/3)) Mo's being used for this kind of problem; can you explain it?

    • »
      »
      »
      5 years ago, # ^ |

      But how do you make the persistent DSU run in O(log n)?

      The only way I know that would work is to keep the DSU's links as a persistent segment tree, but then each union/find operation is log^2 n (log n steps, and each step costs log n because of the persistent segment tree).

      How can this be reduced from O(n^(3/2) * log^2 n) to O(n^(3/2) * log n)?

      • »
        »
        »
        »
        5 years ago, # ^ |

        Store the changes to the DSU arrays in a vector/stack after each union. Then you can undo operations.
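
        A minimal sketch of such a DSU with rollback (union by size, no path compression), where every unite() records exactly what it changed so it can be undone in reverse order; the struct name is illustrative.

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU with rollback: union by size, no path compression, so every
// unite() changes at most two array entries, recorded on a stack.
struct RollbackDSU {
    vector<int> p, sz;
    vector<pair<int,int>> hist;            // (attached root, its new parent), or (-1,-1) for a no-op
    RollbackDSU(int n) : p(n), sz(n, 1) { iota(p.begin(), p.end(), 0); }
    int find(int x) const { while (p[x] != x) x = p[x]; return x; }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) { hist.push_back({-1, -1}); return false; }
        if (sz[a] > sz[b]) swap(a, b);     // attach the smaller tree to the larger
        p[a] = b; sz[b] += sz[a];
        hist.push_back({a, b});
        return true;
    }
    void undo() {                          // revert the most recent unite()
        auto [a, b] = hist.back(); hist.pop_back();
        if (a == -1) return;
        sz[b] -= sz[a]; p[a] = a;
    }
};
```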

      • »
        »
        »
        »
        5 years ago, # ^ |

        If you use small-to-large merging, each vertex will have its parent changed at most once, so you can store the time at which the change happens and go up only if it happened before the queried point. For the size of a component, you can just keep the (time, size) pairs in vectors and use binary search. This also uses only O(N * sqrt(M)) memory.
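
        A minimal sketch of this partially persistent DSU, assuming unions happen at increasing times t = 1, 2, ...; all names are illustrative. With union by size and no path compression, a vertex's parent is set at most once, so storing the time of that single change is enough, and every tree stays O(log N) shallow.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct PersistentDSU {
    vector<int> par, when;                    // parent, and the time the link was made
    vector<vector<pair<int,int>>> szHist;     // per root: (time, component size) history
    int now = 0;
    PersistentDSU(int n)
        : par(n, -1), when(n, INT_MAX),
          szHist(n, vector<pair<int,int>>{{0, 1}}) {}
    int find(int v, int t) const {            // root of v at time t
        while (par[v] != -1 && when[v] <= t) v = par[v];
        return v;
    }
    int size(int v, int t) const {            // size of v's component at time t
        int r = find(v, t);
        auto &h = szHist[r];
        // last history entry with time <= t (h always starts with {0, 1})
        int i = upper_bound(h.begin(), h.end(), make_pair(t, INT_MAX)) - h.begin() - 1;
        return h[i].second;
    }
    void unite(int a, int b) {                // performed at time ++now
        now++;
        a = find(a, now); b = find(b, now);
        if (a == b) return;
        if (szHist[a].back().second > szHist[b].back().second) swap(a, b);
        par[a] = b; when[a] = now;            // a's parent is set exactly once, here
        szHist[b].push_back({now, szHist[b].back().second + szHist[a].back().second});
    }
};
```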

»
5 years ago, # |

First, give the edges some unique ordering such that the weights are non-decreasing. Then, split them into $$$M/K$$$ blocks of size $$$K$$$. For the starting edge of each block, run DSU by adding all edges that follow this edge one by one, and build the DSU tree in $$$O(N \alpha)$$$ time. In total, this preprocessing takes $$$O(MN\alpha/K)$$$ time and $$$O(MN/K)$$$ memory.

Then, when you're processing a query, you only need to take one of these structures (a DSU tree without edges $$$\gt R$$$, i.e. a forest) and add $$$O(K)$$$ edges to it. When building the DSU tree, you can remember the number of vertices in each component after each edge is added. Then, you need to find the components connected by each added edge in this forest; since the DSU tree is shallow, this can be done in $$$O(K \log N)$$$. Finally, you have a simplified graph (each component contracted into a vertex), where you can run a BFS that only visits the $$$O(K)$$$ added edges and the vertices adjacent to them, and find out how the components changed from the precomputed structure after adding these edges. One query can be processed in $$$O(K\log K)$$$ time. With $$$K = \sqrt{NM/Q}$$$, that's $$$O(\sqrt{NMQ}(\alpha + \log))$$$ time in total, and $$$O(\sqrt{NMQ})$$$ memory.

It seems complicated, but answering a query is very simple to implement: it's just one BFS plus finding components for vertices in a DSU forest, i.e. walking from a starting vertex up towards the root until reaching an edge with weight $$$\gt R$$$.
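
A minimal sketch of that walk, assuming each vertex of a precomputed forest stores the weight of the edge that attached it to its parent; `Node` and the field names are illustrative.

```cpp
#include <vector>

// One precomputed DSU forest: a parent pointer plus the weight of the
// edge whose union created that link (roots point to themselves).
struct Node { int parent; long long attachWeight; };

// Component of v when edges with weight > R are ignored: climb while
// the link to the parent was created by an edge of weight <= R.
int findComponent(const std::vector<Node>& forest, int v, long long R) {
    while (forest[v].parent != v && forest[v].attachWeight <= R)
        v = forest[v].parent;
    return v;
}
```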

Preprocessing bridges could shave off a constant from the running time if it's too slow.

  • »
    »
    5 years ago, # ^ |

    If a query $$$[L, R]$$$ includes several blocks, how can you "merge" them?

    • »
      »
      »
      5 years ago, # ^ |

      It can't include several blocks. There is some step in the precomputation that starts (roughly) with the smallest multiple of $$$K$$$ which is $$$\ge L$$$ and runs DSU for ALL edges with greater or equal weights.

»
5 years ago, # |

Where can I submit the problem?

  • »
    »
    5 years ago, # ^ |

    Sorry, I only have the statements. I'm not sure whether they posted the problems online or not.