How to solve this problem?
Given an array of N integers, we have to perform two types of queries:
1: L, R, K --> find the sum of the values in the range L to R which are less than K.
2: A, M --> update the value at index A to M, i.e. arr[A] = M.
N, Q <= 1e5.
Nvm I'm wrong
I don't think it works, or I don't get you.
This won't work, as the value of k is not the same for each query.
Auto comment: topic has been updated by agrawal117 (previous revision, new revision, compare).
There are many ways of solving this problem with different time complexities:
First, with SQRT decomposition: let's partition the array into buckets of size $$$k$$$ (we will see soon which $$$k$$$-value to choose). Let's keep on each bucket a data structure that allows us to get the sum of some prefix of values and also update single points. This data structure can be a Binary Indexed Tree (also called Fenwick Tree). This consumes $$$O(n\cdot \frac n k)$$$ space. Queries of the first type take $$$O(\frac n k \cdot \log n + k)$$$ time each, and queries of the second type take $$$O(\log n)$$$ time each. We must choose the $$$k$$$-value that minimizes $$$\frac n k \cdot \log n + k$$$, which is $$$k=\sqrt{n\cdot \log n}$$$. Time complexity: $$$O(Q\cdot \sqrt{n\cdot \log n})$$$. Memory complexity: less than $$$O(n\cdot \sqrt{n})$$$.
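A minimal sketch of this first approach (my own code, not from the comment; the names `Fenwick`, `SqrtSum`, `MAXV` are made up, and it assumes values are positive integers already coordinate-compressed into $$$[1, MAXV]$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXV = 100;  // assumed value range after compression

// Fenwick tree indexed by value, storing sums of the elements present.
struct Fenwick {
    vector<long long> t;
    Fenwick() : t(MAXV + 1, 0) {}
    void add(int v, long long d) { for (; v <= MAXV; v += v & -v) t[v] += d; }
    long long sum(int v) const {               // sum of stored elements <= v
        v = min(v, MAXV);
        long long s = 0;
        for (; v > 0; v -= v & -v) s += t[v];
        return s;
    }
};

struct SqrtSum {
    int n, B;
    vector<int> a;
    vector<Fenwick> bucket;                    // one Fenwick tree per bucket
    SqrtSum(const vector<int>& init) : n(init.size()), a(init) {
        B = max(1, (int)sqrt(n * log2(n + 1)));  // bucket size ~ sqrt(n log n)
        bucket.assign((n + B - 1) / B, Fenwick());
        for (int i = 0; i < n; i++) bucket[i / B].add(a[i], a[i]);
    }
    void update(int pos, int val) {            // arr[pos] = val (0-indexed)
        bucket[pos / B].add(a[pos], -a[pos]);
        a[pos] = val;
        bucket[pos / B].add(val, val);
    }
    long long query(int l, int r, int k) {     // sum of a[l..r] strictly < k
        long long res = 0;
        int bl = l / B, br = r / B;
        if (bl == br) {
            for (int i = l; i <= r; i++) if (a[i] < k) res += a[i];
        } else {
            for (int i = l; i < (bl + 1) * B; i++) if (a[i] < k) res += a[i];
            for (int i = br * B; i <= r; i++) if (a[i] < k) res += a[i];
            for (int b = bl + 1; b < br; b++) res += bucket[b].sum(k - 1);
        }
        return res;
    }
};
```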
Second, replacing the Binary Indexed Tree by a Balanced Binary Search Tree (BBST): we can easily extend a Red-Black Tree, an AVL, or a Treap to support the sum of the first $$$z$$$ elements. This gives us the same time complexity, but let's look at the space: each of the $$$\frac n k$$$ buckets holds $$$O(k)$$$ elements, so it consumes $$$O(n)$$$ space in total. Time complexity: $$$O(Q\cdot \sqrt{n\cdot \log n})$$$. Space complexity: $$$O(n)$$$.
Third, replacing the SQRT decomposition by a Segment Tree: now on each node of the segment tree we keep a BBST with the elements of the corresponding subarray. Since each leaf has $$$O(\log n)$$$ ancestors in the Segment Tree, this consumes $$$O(n\cdot\log n)$$$ space, and it now takes only $$$O(\log^2 n)$$$ operations to perform either type of query. Time complexity: $$$O(Q\cdot \log^2 n)$$$. Space complexity: $$$O(n\cdot \log n)$$$.
In all these approaches I ignored the build time, since it can be seen as $$$N$$$ updates, although in many cases we can build faster than that.
Also, there are some approaches with SQRT decomposition on queries.
Can you elaborate more on the 3rd approach? How are you calculating the sum for a given range from the BBST?
First, do you know how to keep a BBST that supports asking for the sum of elements $$$\le x$$$? To do this, we keep on each node of the BBST the sum of the elements below it. This is easy to maintain because $$$sum(u) = sum(left(u)) + sum(right(u)) + val(u)$$$. Now, to query the sum of elements $$$\le x$$$, we do a binary-search traversal over the BBST: each time we move to the right child, we add to the answer the current node's value plus the sum stored in its left child. It's something like this:
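A minimal sketch of that traversal (the node layout is an assumption; any balancing scheme with subtree sums works, and the `insert` here is deliberately unbalanced, for demonstration only):

```cpp
#include <bits/stdc++.h>

// Hypothetical node layout; the balancing scheme (Treap/AVL/RB) is up to you.
struct Node {
    int val;
    long long sum;            // sum of all values in this node's subtree
    Node *left, *right;
};

// Sum of all elements <= x stored in the BST rooted at u.
long long sumLeq(Node* u, int x) {
    long long res = 0;
    while (u) {
        if (u->val <= x) {    // u and its whole left subtree qualify
            res += u->val + (u->left ? u->left->sum : 0);
            u = u->right;     // keep looking for more on the right
        } else {
            u = u->left;
        }
    }
    return res;
}

// Naive (unbalanced) insert, just to make the sketch self-contained.
Node* insert(Node* u, int v) {
    if (!u) return new Node{v, v, nullptr, nullptr};
    if (v < u->val) u->left = insert(u->left, v);
    else u->right = insert(u->right, v);
    u->sum += v;              // maintain sum(u) = sum(left)+sum(right)+val(u)
    return u;
}
```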
The remaining part of the code is the actual implementation of your BBST.
Now if you know how to do that, then just keep a BBST on each node of the segment tree.
Okay, I understood your approach, but I still have some doubts about how things work when descending down the tree for a range [l, r]. If you have a link to any problem which requires this technique, please share it; that would be helpful.
thanks :)
Well, this problem can be solved with Divide and Conquer on trees + Segment Tree + BBST.
I'll try it. Thanks bruh :)
Can you please share link to the problem? I want to test my solution.
You can solve this problem here: https://www.spoj.com/problems/RACETIME/
The fastest (and easiest to implement) approach I know uses a 2D Fenwick tree. You have to know the points at which you do your updates in advance, so it's an offline solution.
Sir, will you please share your approach in detail?
Consider the matrix $$$M(x, y)$$$ where $$$M(x, y)$$$ is $$$y$$$ if $$$v_x = y$$$ and $$$0$$$ otherwise.
Then, you can simulate update $$$pos$$$ $$$val$$$ by doing $$$M(pos, v_{pos}) -= v_{pos}$$$, $$$v_{pos} = val$$$, $$$M(pos, v_{pos}) += v_{pos}$$$.
Similarly, a query $$$l$$$ $$$r$$$ $$$k$$$ is just $$$S(r, k) - S(l-1, k)$$$, where $$$S$$$ denotes (2D) prefix sum.
By 2D Fenwick tree I mean this
Both these operations are supported by a 2D fenwick tree if you know all update points in advance (basically, all possible values that $$$v_i$$$ would take during the process, for each $$$i$$$).
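To illustrate the update/query mapping above, here is a sketch with a plain (uncompressed) 2D Fenwick tree; this is only feasible when the value range is small, whereas the offline trick described in the comment compresses each row to the update points known in advance. All names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Plain 2D Fenwick tree over (position, value), both 1-indexed.
// Memory is O(n*m), so this is a toy version of the compressed offline one.
struct Fenwick2D {
    int n, m;
    vector<vector<long long>> t;
    Fenwick2D(int n, int m) : n(n), m(m), t(n + 1, vector<long long>(m + 1, 0)) {}
    void add(int x, int y, long long d) {
        for (int i = x; i <= n; i += i & -i)
            for (int j = y; j <= m; j += j & -j) t[i][j] += d;
    }
    long long sum(int x, int y) {   // sum over positions <= x with values <= y
        long long s = 0;
        for (int i = x; i > 0; i -= i & -i)
            for (int j = y; j > 0; j -= j & -j) s += t[i][j];
        return s;
    }
};

// Query (l, r, k): sum of a[l..r] strictly less than k, i.e. S(r,k-1)-S(l-1,k-1).
long long rangeSumBelow(Fenwick2D& f, int l, int r, int k) {
    return f.sum(r, k - 1) - f.sum(l - 1, k - 1);
}
```

An update `pos val` is then exactly the three steps from the comment: `f.add(pos, v[pos], -v[pos]); v[pos] = val; f.add(pos, v[pos], v[pos]);`.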
Well yes, but this isn't fully online. However, I can replace the Segment Tree in my approach by a BIT, and this would be better. We can also replace the BBST by a BIT implemented with map<int,int>, but this adds an extra $$$\log n$$$ factor to the time complexity.
I said it's not fully online in the beginning. Although, to be fair, in 90% of programming tasks offline solutions work just as well (if not better).
You are right. My bad.
You can also solve it in $$$O(n\log n)$$$ with a persistent segment tree and a similar approach: just preprocess all the updates; then the problem reduces to getting the sum of numbers smaller than $$$x$$$ in a given interval, with simple updates that can be managed with the PST.
How do you manage the updates with PST?
Let's do the following. First, how to preprocess the updates: maintain a set $$$S$$$, which initially contains the values $$$(i, 0, A_i)$$$, and an array $$$cnt[x]$$$, which stores the number of updates done so far to position $$$x$$$. When we see a new update to position $$$u$$$, we insert $$$(u, ++cnt[u], newA_u)$$$ into the set. Finally, map all the values of the set so they become $$$1, 2, 3, \dots, n+q$$$. Then we build a persistent segment tree such that each version of the PST stores the information of a prefix of the mapped set. A query becomes $$$sum(r, x) - sum(l-1, x)$$$, where $$$r$$$ and $$$l-1$$$ stand for the versions of the PST storing the last updates done to positions $$$r$$$ and $$$l-1$$$ respectively, and the sum within a version is the sum of some prefix of the segment tree. An update $$$(u, A_u)$$$ subtracts the old value from the version at $$$u + (\text{number of updates done to } u)$$$ and adds the new $$$A_u$$$ to the version at $$$u + (\text{number of updates done to } u) + 1$$$; these versions actually exist in our PST, so it's a simple point modification.
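The core primitive this builds on (one PST version per prefix, queries by version subtraction) can be sketched as follows; the update remapping above then adds point modifications to specific versions, which is omitted here. Values are assumed already compressed to $$$1..m$$$, and all names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Persistent segment tree over values, one version per array prefix.
// Query (l, r, k) on the static array: sumLess(root[r], k) - sumLess(root[l-1], k).
struct PST {
    struct Node { int lc, rc; long long sum; };
    vector<Node> t;
    int m;
    PST(int m) : m(m) { t.push_back({0, 0, 0}); }  // node 0 = empty tree

    // Returns the root of a new version = prev with one copy of `val` added.
    int insert(int prev, int lo, int hi, int val) {
        t.push_back(t[prev]);
        int cur = (int)t.size() - 1;
        t[cur].sum += val;
        if (lo == hi) return cur;
        int mid = (lo + hi) / 2;
        if (val <= mid) { int c = insert(t[cur].lc, lo, mid, val); t[cur].lc = c; }
        else            { int c = insert(t[cur].rc, mid + 1, hi, val); t[cur].rc = c; }
        return cur;
    }
    int insert(int prevRoot, int val) { return insert(prevRoot, 1, m, val); }

    long long sumLess(int node, int lo, int hi, int k) {  // sum of values < k
        if (!node || k <= lo) return 0;
        if (hi < k) return t[node].sum;
        int mid = (lo + hi) / 2;
        return sumLess(t[node].lc, lo, mid, k) + sumLess(t[node].rc, mid + 1, hi, k);
    }
    long long sumLess(int root, int k) { return sumLess(root, 1, m, k); }
};
```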
Edit: I think my solution will not work due to the changing value of k.
Given the constraints, I think that you might be able to nuke this problem using Mo's algorithm with updates. I might be wrong as I didn't give this solution much thought.
Here is my solution using sqrt decomposition, with time complexity 1) $$$O(\sqrt{n} \cdot \log_2{\sqrt{n}})$$$ per count query and 2) $$$O(\sqrt{N})$$$ per update.
The idea is simple. Break the array into blocks of size $$$\sqrt{N}$$$ and sort each block individually. We store the elements as pairs so that each value's index is also attached. To improve the runtime, I didn't use the modulo/divide operators for finding blocks and instead precomputed them in an array in linear time (again without using modulo/divide).
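A rough sketch of this sorted-blocks idea (names are mine; it rebuilds a whole block on update in $$$O(B)$$$, and it keeps prefix sums of each sorted block instead of the pair-with-index trick, so the precomputed block-index array is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Each block keeps its values sorted plus prefix sums of that sorted order.
struct SortedBlocks {
    int n, B;
    vector<int> a;
    vector<vector<int>> srt;              // sorted values of each block
    vector<vector<long long>> pre;        // pre[b][i] = sum of first i sorted values

    SortedBlocks(const vector<int>& init) : n(init.size()), a(init) {
        B = max(1, (int)sqrt(n));
        int nb = (n + B - 1) / B;
        srt.assign(nb, {}); pre.assign(nb, {});
        for (int b = 0; b < nb; b++) rebuild(b);
    }
    void rebuild(int b) {                 // O(B log B) here; O(B) with insertion
        int lo = b * B, hi = min(n, lo + B);
        srt[b].assign(a.begin() + lo, a.begin() + hi);
        sort(srt[b].begin(), srt[b].end());
        pre[b].assign(srt[b].size() + 1, 0);
        for (size_t i = 0; i < srt[b].size(); i++)
            pre[b][i + 1] = pre[b][i] + srt[b][i];
    }
    void update(int pos, int val) {       // arr[pos] = val (0-indexed)
        a[pos] = val;
        rebuild(pos / B);
    }
    long long query(int l, int r, int k) {  // sum of a[l..r] strictly < k
        long long res = 0;
        for (int b = l / B; b <= r / B; b++) {
            int lo = b * B, hi = min(n - 1, lo + B - 1);
            if (l <= lo && hi <= r) {     // full block: binary search
                int cnt = lower_bound(srt[b].begin(), srt[b].end(), k) - srt[b].begin();
                res += pre[b][cnt];
            } else {                      // partial block: plain scan
                for (int i = max(l, lo); i <= min(r, hi); i++)
                    if (a[i] < k) res += a[i];
            }
        }
        return res;
    }
};
```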
Look at the first approach in this comment. You'll see that it is better to choose bucket size $$$= \sqrt{n\cdot\log n}$$$.
Any simpler way to do this if there are no updates?
Persistent Segment Tree: $$$O(\log n)$$$ per query.
Merge Sort Tree: $$$O(\log^2 n)$$$ per query, with a good constant.
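For the no-updates case, the merge sort tree variant can be sketched like this (my own code; it assumes answers fit in 64 bits, and queries are over 0-indexed inclusive ranges):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Merge sort tree: each segment-tree node stores its subarray sorted,
// plus prefix sums. Static queries only: sum of a[l..r] < k in O(log^2 n).
struct MergeSortTree {
    int n;
    vector<vector<int>> srt;
    vector<vector<long long>> pre;

    MergeSortTree(const vector<int>& a) : n(a.size()), srt(2 * n), pre(2 * n) {
        for (int i = 0; i < n; i++) srt[n + i] = {a[i]};
        for (int i = n - 1; i >= 1; i--)            // bottom-up merge of children
            merge(srt[2 * i].begin(), srt[2 * i].end(),
                  srt[2 * i + 1].begin(), srt[2 * i + 1].end(),
                  back_inserter(srt[i]));
        for (int i = 1; i < 2 * n; i++) {
            pre[i].assign(srt[i].size() + 1, 0);
            for (size_t j = 0; j < srt[i].size(); j++)
                pre[i][j + 1] = pre[i][j] + srt[i][j];
        }
    }
    long long node(int i, int k) {                  // sum of values < k in node i
        int c = lower_bound(srt[i].begin(), srt[i].end(), k) - srt[i].begin();
        return pre[i][c];
    }
    long long query(int l, int r, int k) {          // sum of a[l..r] < k
        long long res = 0;
        for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res += node(l++, k);
            if (r & 1) res += node(--r, k);
        }
        return res;
    }
};
```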