A tree rooted at node 1 is given, and we have to support two types of queries: - type 1 (u, x): update the value of node u to x. - type 2 (u, x): count the number of nodes with value less than x on the path from u to 1. Constraints: n <= 100000, q <= 100000.
It's an HLD problem: build a segment tree over every chain, where every segment-tree node stores a vector of the values in its interval in increasing order (a merge sort tree). But that gives O(log^3) per query.
Can we try this using centroid decomposition?
I also thought of the same solution but proposed a different one.
Hi god_zoned,
The following is the only solution I am able to come up with right now.
Let's decompose the tree using Heavy-Light Decomposition. If you are not familiar with the technique, check out this blog first.
Let's decompose the base array formed by the heavy-light decomposition of the tree into parts of size O(√n) each, and also keep a linear table for each of these parts, say Ti, where Ti,j denotes the number of elements in the i-th part with value ≤ j. (Assuming elements are ≤ 10^5, otherwise we can apply coordinate compression.) Now decompose all the tables Ti into parts of size O(√V) as well, where V is the maximum value. Note that this is done to improve the complexity of the query operation by making the update operation a little costlier.
I meant to say, we can now update any element with O(√V) complexity and query any table with O(1) complexity.
Update: The update operation is very simple in this case, as we just need to go to the desired block of size O(√n) and update the element. Changing one element turns the point update into a range update on the table Ti (every entry Ti,j with j at least the changed value shifts by ±1), and with the second level of decomposition this can be done with O(√V) complexity.
Query: We know that we can traverse the path from the root to any node through at most O(log n) chains (due to heavy-light decomposition). Traverse all chains on the path from 1 to u and query the needed segment of each chain for the values less than x. This can be done in O(√n) per chain (as there can be at most O(√n) blocks in a chain, and a full block can be queried in O(1) time), and therefore the overall complexity of the query part will be O(√n · log n).
The above solution might be slower than what is expected. Any solution with better complexity, and comments on the proposed solution, are most welcome.
Auto comment: topic has been updated by Arunnsit (previous revision, new revision, compare).