rishichowdhury711's blog

By rishichowdhury711, history, 3 months ago, In English

Hello Codeforces, I have a problem which I am unable to solve. You are given an array of length n (1 <= n <= 1e5), along with q (1 <= q <= 1e5) queries of the form (l, r), where 1 <= l <= r <= n. For each query you have to find the sum of the distinct elements in the subarray [l, r].

For example :

if arr[]={1,2,3,4,4} and q=2, and the queries are:

1 3

2 5

then the answers to the above queries would be:

6

9

I am unable to solve this question; any help regarding it would be appreciated. Thank you!


»
3 months ago, # |

You can solve it offline.

  1. First sort the queries in increasing order of $$$r$$$.
  2. Maintain an array or map (depending on the value range of $$$a_i$$$) named $$$previousOccurence$$$, where $$$previousOccurence[v]$$$ is the latest position seen so far at which the value $$$v$$$ appeared.
  3. Traverse the indices in ascending order while maintaining a Binary Indexed Tree. At position $$$i$$$, if $$$a_i$$$ has appeared before, add $$$-a_i$$$ to the BIT at index $$$previousOccurence[a_i]$$$; then add $$$a_i$$$ at index $$$i$$$. Now process all queries that end at $$$i$$$, i.e. the queries with $$$r=i$$$: the answer to such a query is the BIT sum over the range $$$[l,r]$$$.
  4. Update $$$previousOccurence[a_i]=i$$$ while traversing in ascending order, so the map always reflects the prefix processed so far.

You can also use a segment tree if you want; a rough sketch of the BIT version follows below.
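A minimal C++ sketch of this offline BIT approach, for reference. The `Fenwick` struct, the name `prevPos`, and the I/O format (first line `n q`, then the array, then one query per line) are illustrative assumptions, not from the comment above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick tree with point update and prefix sum (1-based indices).
struct Fenwick {
    int n;
    vector<long long> t;
    Fenwick(int n) : n(n), t(n + 1, 0) {}
    void add(int i, long long v) {
        for (; i <= n; i += i & -i) t[i] += v;
    }
    long long sum(int i) {                    // prefix sum over [1, i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    long long sum(int l, int r) { return sum(r) - sum(l - 1); }
};

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%lld", &a[i]);

    // Group the queries by their right endpoint (this is the "sort by r" step).
    vector<vector<pair<int,int>>> byR(n + 1);     // byR[r] = {(l, query id)}
    for (int id = 0; id < q; id++) {
        int l, r;
        scanf("%d %d", &l, &r);
        byR[r].push_back({l, id});
    }

    Fenwick bit(n);
    map<long long,int> prevPos;                   // value -> latest index seen so far
    vector<long long> ans(q);
    for (int i = 1; i <= n; i++) {
        auto it = prevPos.find(a[i]);
        if (it != prevPos.end()) bit.add(it->second, -a[i]);  // cancel the older copy
        bit.add(i, a[i]);
        prevPos[a[i]] = i;
        for (auto [l, id] : byR[i]) ans[id] = bit.sum(l, i);  // answer queries with r = i
    }
    for (int id = 0; id < q; id++) printf("%lld\n", ans[id]);
    return 0;
}
```

Each index is added and cancelled at most once, so the total cost is $$$O((N+Q)\log N)$$$.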

»
3 months ago, # |

I am wondering if we can use Mo's here? The transition from f(l, r) to f(l, r + 1) or f(l, r - 1) is log(n) if a set is used, but then the overall time complexity is O(n * sqrt(n) * log(n)), which I think could be too much.

  • »
    »
3 months ago, # ^ |

    Mo's can be optimized to $$$O(Q\sqrt N)$$$ using discretization + SQRT Decomposition.

    • »
      »
      »
3 months ago, # ^ |

      What is discretization?

      • »
        »
        »
        »
3 months ago, # ^ |

        There is this new invention that data scientists have been working on for decades, and it's finally made available to the public. It's called Google. You can access it by clicking this. When you're at the search page, type in "discretization" and click the first link that appears, which is most likely this. Then, please go read the article.

        • »
          »
          »
          »
          »
3 months ago, # ^ |

Could have explained more quickly what it actually is, but had to spend the effort on award-winning sarcasm, of course. This dude is fun.

»
3 months ago, # |

(Edited because I misread the question)

Discretize (remember to keep track of the original values because we want to find the sum) and use Mo's algorithm. The transitions are $$$O(1)$$$, so the total time complexity is $$$O(Q\log Q+Q\sqrt N)$$$, or amortized $$$O(\sqrt N)$$$ per query.
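A minimal C++ sketch of this discretized Mo's idea. The block size, the odd-even sorting trick, the names `comp`/`cnt`, and the I/O format are my own illustrative assumptions, not from the comment; the only essential point is that the add/remove transitions are $$$O(1)$$$ thanks to the compressed count array.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);

    // Discretize: map each value to an index in [0, m) so counts fit in a plain array.
    vector<long long> vals(a.begin(), a.end());
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    vector<int> comp(n);
    for (int i = 0; i < n; i++)
        comp[i] = lower_bound(vals.begin(), vals.end(), a[i]) - vals.begin();

    struct Query { int l, r, id; };
    vector<Query> qs(q);
    for (int i = 0; i < q; i++) {
        scanf("%d %d", &qs[i].l, &qs[i].r);
        qs[i].l--; qs[i].r--;                       // 0-based, inclusive
        qs[i].id = i;
    }
    int block = max(1, (int)(n / sqrt((double)q) + 1));
    sort(qs.begin(), qs.end(), [&](const Query &x, const Query &y) {
        if (x.l / block != y.l / block) return x.l / block < y.l / block;
        return (x.l / block) % 2 ? x.r > y.r : x.r < y.r;   // odd-even optimization
    });

    vector<int> cnt(vals.size(), 0);
    long long cur = 0;
    auto add = [&](int i) { if (cnt[comp[i]]++ == 0) cur += a[i]; };  // new distinct value
    auto del = [&](int i) { if (--cnt[comp[i]] == 0) cur -= a[i]; };  // last copy removed

    vector<long long> ans(q);
    int L = 0, R = -1;                               // current window [L, R]
    for (auto &qu : qs) {
        while (R < qu.r) add(++R);
        while (L > qu.l) add(--L);
        while (R > qu.r) del(R--);
        while (L < qu.l) del(L++);
        ans[qu.id] = cur;
    }
    for (int i = 0; i < q; i++) printf("%lld\n", ans[i]);
    return 0;
}
```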

»
3 months ago, # |

You could make sqrt(n) buckets of length sqrt(n) and keep a frequency map for each.

Then, for a query, merge the maps covering the range from l to r, so in the worst case it will be

(sqrt(n) (left excess) + sqrt(n) (whole blocks) + sqrt(n) (right excess)) * (map size) per query.

The problem here is that the map size can be the number of distinct elements in the array in the worst case; do you have a constraint on a[i]?
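For reference, a rough C++ sketch of this bucket-merging idea. The bucket length, the per-bucket `std::map`, and the I/O format are illustrative assumptions, and as noted above the merge can cost up to O(n) per query in the worst case, so this is a sketch of the idea rather than a fast solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);

    int B = max(1, (int)sqrt((double)n));           // bucket length ~ sqrt(n)
    int nb = (n + B - 1) / B;
    vector<map<long long,int>> freq(nb);            // per-bucket frequency map
    for (int i = 0; i < n; i++) freq[i / B][a[i]]++;

    while (q--) {
        int l, r;
        scanf("%d %d", &l, &r);
        l--; r--;                                   // 0-based, inclusive
        map<long long,int> merged;
        int bl = l / B, br = r / B;
        if (bl == br) {
            for (int i = l; i <= r; i++) merged[a[i]]++;
        } else {
            for (int i = l; i < (bl + 1) * B; i++) merged[a[i]]++;    // left excess
            for (int b = bl + 1; b < br; b++)                         // whole blocks
                for (auto &[v, c] : freq[b]) merged[v] += c;
            for (int i = br * B; i <= r; i++) merged[a[i]]++;         // right excess
        }
        long long sum = 0;
        for (auto &kv : merged) sum += kv.first;    // each distinct value counted once
        printf("%lld\n", sum);
    }
    return 0;
}
```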

If you're given all the queries at the start, then you can even do a sweep-line and difference-array type of thing, I think.