I am stuck on the following problem:
Statement:
Given an array of size n, perform q queries.
The queries are:
Type-1) Update index k with value v.
Type-2) Print the number of distinct values in the range [L..R] that are less than v.
Constraints:
1 <= n, q <= 100000
1 <= v <= 1000000000
1 <= k <= n
1 <= L <= R <= n
Thanks for reading the problem. It would be really helpful if you could suggest a solution.

I'm also struggling with questions that have to do with subarrays/segments [L..R]. Any solution would really be appreciated. Thanks!
An idea with O(n^2 log n) preprocessing: for each block, store a set of the values in [i*BS, (i+1)*BS], and for each earlier block store the set difference against it. To answer a query, take the first block's result, then for each following block add its result minus what is already covered by the previous blocks, and so on. I guess this can be optimized to O(n sqrt(n) log n) by only storing power-of-two sizes somehow, but I don't really see it. We can do an update and a query in O(sqrt(n)), but we're forced to build the structure in O(n^2).
Edit 2: I realized this is unnecessary even without updates: you can just answer each query in O(n log n) by building a set over the range, and apply each update in O(1), which works in O(nq log n) overall. Sorry for the noise.
Thanks for your time.
I think the solution is related to a combination of a merge sort tree and an ordered set, but I couldn't come up with an optimized solution.
Build a segment tree over the array, and in every node of that tree hold a splay tree (or any balanced BST with order statistics) of the elements belonging to that segment. The updates as well as the queries are then straightforward and run in $$$O(n \log^2 n)$$$. The memory usage of this solution is $$$O(n \log n)$$$. With a clever implementation it should AC.
If I build a segment tree, the update part becomes easy, but during a query I can't just return (distinct count from left child + distinct count from right child), right?
Because there can be values that are counted as distinct in both the left and the right child. How do I deduplicate when I merge the two children and return the result from the parent node?
Appreciate your valuable time.
I think that I have an $$$O(n^{5/3})$$$ solution. First, compress the numbers. Then run Mo's algorithm with updates, keeping track of the count of each number and whether it exists in the current segment. Update the exists-in-segment array using block decomposition (see technique 2 of https://mirror.codeforces.com/blog/entry/100910). Each query of type 2 then costs $$$O(\sqrt{n})$$$ for getting the distinct count, for a total of $$$O(n\sqrt{n})$$$ plus $$$O(n^{5/3})$$$ for the main Mo's phase. (Assuming $$$q=O(n)$$$.)
If there's no unique-elements constraint, then you can use a segment tree with an ordered-set data structure in each node to get $$$O(n\log^2 n)$$$ complexity.
Edit 2: There's an asymptotically better solution that runs in $$$O(n\log^3 n)$$$ time. Consider the points $$$(i, nxt_i, a_i)$$$ and use Point Add Cuboid Sum.
Thank you for your time, I learned a new use of Mo's with updates.
You can use the standard trick of employing a wavelet tree (or a merge sort tree; I personally prefer wavelet trees). In this approach, you sort the elements based on the index of the next occurrence of an element with the same value (which I will refer to as the “wavelet value” of that element). If there is no such next occurrence, the wavelet value is set to infinity.
The key idea is to count the number of elements in a range [L, R] whose wavelet value is greater than the right boundary R of that range. This way you count only the last occurrence of each value within the given range.
During a type-1 query (an update operation), only a few wavelet values need to change in the wavelet tree: the one at the updated index, plus those of the previous occurrences of the old and of the new value. To efficiently find the indices of these elements (and update them in the wavelet tree), you can use a map<int, set<int>> from each value to the set of its positions. However, there are certainly other ways to achieve this as well.
Let me know if there are any mistakes or if something is unclear. I'll be happy to help!
Wow.
That's an even faster approach.
Thanks for your time and help.