I have a vector of pairs of length N. I have Q queries of the type: given a range, say L to R, find the minimum value of the second element over all pairs in that range whose first element equals a given number K.
For example: for the vector of 6 elements [{2,5},{8,7},{2,3},{8,6},{2,1},{8,4}] and the query L=3, R=6 (1-based indexing), K=8,
the answer would be 4, which corresponds to the pair {8,4}.
There are $$$10^5$$$ queries and N can be up to $$$10^5$$$.
Can someone give me an idea of how to approach this problem more efficiently?
Is k fixed?
No, in each query we are given different L, R, and K.
We can try something like this:
Make a vector for each value of K. The first value of each pair is the second element of the original pair, and the second value is its position in the original array. For the example above it looks like:
2 -> {5,1} {3,3} {1,5}
8 -> {7,2} {6,4} {4,6} (each vector is sorted by position, the 2nd value of the pair)
Then build a min-segment tree over each vector[K]. For each query, use lower_bound/upper_bound on the positions to find the index range to query in that K's segment tree.
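Roughly like this (a minimal, untested sketch; the struct name MinSegTree and the layout of the maps are mine, just for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the idea above: group the pairs by their first element K, keep the
// positions and second elements of each group in position order, and build a
// small min-segment tree over the second elements of every group.
struct MinSegTree {
    int n; vector<int> t;
    MinSegTree(const vector<int>& a) : n(a.size()), t(2 * a.size(), INT_MAX) {
        for (int i = 0; i < n; ++i) t[n + i] = a[i];
        for (int i = n - 1; i > 0; --i) t[i] = min(t[2 * i], t[2 * i + 1]);
    }
    int query(int l, int r) {                 // minimum on [l, r)
        int res = INT_MAX;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = min(res, t[l++]);
            if (r & 1) res = min(res, t[--r]);
        }
        return res;
    }
};

int main() {
    vector<pair<int,int>> a = {{2,5},{8,7},{2,3},{8,6},{2,1},{8,4}};
    map<int, vector<int>> pos, val;           // K -> positions (1-based) / seconds
    for (int i = 0; i < (int)a.size(); ++i) {
        pos[a[i].first].push_back(i + 1);
        val[a[i].first].push_back(a[i].second);
    }
    map<int, MinSegTree> tree;
    for (auto& [k, v] : val) tree.emplace(k, MinSegTree(v));

    // Query L=3, R=6, K=8 from the example; expected answer is 4.
    int L = 3, R = 6, K = 8;
    auto& p = pos[K];
    int lo = lower_bound(p.begin(), p.end(), L) - p.begin();
    int hi = upper_bound(p.begin(), p.end(), R) - p.begin();
    cout << (lo < hi ? tree.at(K).query(lo, hi) : -1) << '\n';  // prints 4
}
```

The binary search runs on the positions vector, so each K's segment tree only ever sees indices inside its own group.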
I was thinking of the same approach, but is there any other approach that is cleaner or more efficient in terms of space and time complexity?
I have also posted this on Stack Overflow: link
I don't think you can do better than $$$O(N \log N)$$$ in this case, since you need $$$O(N \log N)$$$ time for $$$N$$$ online lower_bound queries on an array of $$$N$$$ elements. A better approach is to use a sparse table instead of a segment tree to perform those range minimum queries.
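Roughly, a per-K sparse table could look like this (a sketch under the same grouping as above; the class and variable names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sparse table for static range-minimum queries: O(n log n) memory and
// preprocessing, O(1) per query. Build one per vector[K] instead of a segment tree.
struct SparseTable {
    vector<vector<int>> st;   // st[j][i] = min of a[i .. i + 2^j - 1]
    vector<int> lg;           // lg[x] = floor(log2(x))
    SparseTable(const vector<int>& a) {
        int n = a.size();
        lg.assign(n + 1, 0);
        for (int i = 2; i <= n; ++i) lg[i] = lg[i / 2] + 1;
        st.assign(lg[n] + 1, vector<int>(n));
        st[0] = a;
        for (int j = 1; j <= lg[n]; ++j)
            for (int i = 0; i + (1 << j) <= n; ++i)
                st[j][i] = min(st[j - 1][i], st[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) const {   // minimum on [l, r], inclusive
        int j = lg[r - l + 1];
        return min(st[j][l], st[j][r - (1 << j) + 1]);
    }
};

int main() {
    SparseTable sp({7, 6, 4});        // second elements of the K = 8 group
    cout << sp.query(1, 2) << '\n';   // positions 4 and 6 in the example -> 4
}
```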
But we cannot build a segment tree for each value due to memory.
If you can solve it offline you could order the queries by increasing k. Then, for every different k, add to the segment tree all pairs that have the first element equal to k. Now you could simply query the segment tree for the minimum element of the range [l, r], as we know the condition will always be met. As soon as we are done processing all queries of the same k, remove the pairs we previously added.
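As a rough sketch of that offline processing (all names and the query format here are illustrative, not from the thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Offline idea: one point-update min-segment tree over the original positions,
// queries grouped by K; add a K's pairs, answer its queries, remove them again.
const int INF = INT_MAX;
int n; vector<int> t;
void update(int pos, int value) {            // set a[pos] = value, 0-based
    for (t[pos += n] = value, pos >>= 1; pos >= 1; pos >>= 1)
        t[pos] = min(t[2 * pos], t[2 * pos + 1]);
}
int query(int l, int r) {                    // minimum on [l, r)
    int res = INF;
    for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
        if (l & 1) res = min(res, t[l++]);
        if (r & 1) res = min(res, t[--r]);
    }
    return res;
}

int main() {
    vector<pair<int,int>> a = {{2,5},{8,7},{2,3},{8,6},{2,1},{8,4}};
    n = a.size(); t.assign(2 * n, INF);

    // Queries as {K, L, R, original index}; sorting groups equal K together.
    vector<array<int,4>> qs = {{8, 3, 6, 0}, {2, 1, 4, 1}};
    sort(qs.begin(), qs.end());
    vector<int> ans(qs.size());

    map<int, vector<int>> byK;               // K -> 0-based positions
    for (int i = 0; i < n; ++i) byK[a[i].first].push_back(i);

    for (int i = 0; i < (int)qs.size(); ) {
        int k = qs[i][0];
        for (int p : byK[k]) update(p, a[p].second);         // add this K's pairs
        for (; i < (int)qs.size() && qs[i][0] == k; ++i)
            ans[qs[i][3]] = query(qs[i][1] - 1, qs[i][2]);    // [L, R] is 1-based
        for (int p : byK[k]) update(p, INF);                  // remove them again
    }
    for (int x : ans) cout << x << ' ';      // expected output: 4 3
}
```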
No, we have to solve it online.
You can probably use square root decomposition. Partition the vector of pairs into $$$\sqrt{N}$$$ blocks.
For each block, maintain a map that stores, for each distinct first element, the minimum second element in that block.
For each query you'll go over $$$O(\sqrt{N})$$$ complete blocks and $$$O(\sqrt{N})$$$ elements that don't belong to a complete block and have to be handled one by one.
The complexity per query will be $$$O(\sqrt{N} \log N)$$$, which can be reduced to $$$O(\sqrt{N})$$$ with coordinate compression (costing $$$O(N \log N)$$$ of pre-processing) if the elements of the pairs are large, or with a lookup array if they are small.
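A sketch of that block structure and query loop, with 0-based indices and names of my own choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Square-root decomposition: each block keeps a map first -> minimum second
// among its pairs; a query mixes whole-block lookups with stray edge elements.
int main() {
    vector<pair<int,int>> a = {{2,5},{8,7},{2,3},{8,6},{2,1},{8,4}};
    int n = a.size(), B = max(1, (int)sqrt(n));
    int nb = (n + B - 1) / B;
    vector<map<int,int>> blk(nb);
    for (int i = 0; i < n; ++i) {
        auto& m = blk[i / B];
        auto it = m.find(a[i].first);
        if (it == m.end() || a[i].second < it->second) m[a[i].first] = a[i].second;
    }

    auto query = [&](int l, int r, int k) {      // inclusive 0-based [l, r]
        int res = INT_MAX;
        for (int i = l; i <= r; ) {
            if (i % B == 0 && i + B - 1 <= r) {  // whole block inside the range
                auto it = blk[i / B].find(k);
                if (it != blk[i / B].end()) res = min(res, it->second);
                i += B;
            } else {                             // stray element at the edges
                if (a[i].first == k) res = min(res, a[i].second);
                ++i;
            }
        }
        return res == INT_MAX ? -1 : res;
    };

    // L=3, R=6, K=8 from the example, converted to 0-based: expect 4.
    cout << query(2, 5, 8) << '\n';
}
```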
There exists a better solution for XORMIN, which doesn't use this and finds both answers at the same time.
You can just use a segment tree where every node has a treap containing all values in that subtree, and every treap node stores the minimum of the second elements in its subtree; then you can answer all queries in $$$O(\log^2 n)$$$ time.
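For reference, here is a sketch of a closely related structure that is simpler to code than a treap in every node: a merge-sort-tree-style segment tree where each node keeps its pairs sorted by (first, second), so a single lower_bound in each visited node yields the minimum second element for that K, which gives the same $$$O(\log^2 n)$$$ per query. This is my own variant, not necessarily the exact structure described above:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Merge-sort-tree variant: every segment-tree node stores its pairs sorted by
// (first, second). Inside a node, all pairs with first == K are adjacent and
// the one found by lower_bound({K, INT_MIN}) has the smallest second element.
int n;
vector<vector<pair<int,int>>> node;

void build(int v, int l, int r, const vector<pair<int,int>>& a) {
    if (r - l == 1) { node[v] = {a[l]}; return; }
    int m = (l + r) / 2;
    build(2 * v, l, m, a);
    build(2 * v + 1, m, r, a);
    merge(node[2 * v].begin(), node[2 * v].end(),
          node[2 * v + 1].begin(), node[2 * v + 1].end(),
          back_inserter(node[v]));
}

// Minimum second element with first == k on [ql, qr), 0-based half-open.
int query(int v, int l, int r, int ql, int qr, int k) {
    if (qr <= l || r <= ql) return INT_MAX;
    if (ql <= l && r <= qr) {
        auto it = lower_bound(node[v].begin(), node[v].end(), make_pair(k, INT_MIN));
        return (it != node[v].end() && it->first == k) ? it->second : INT_MAX;
    }
    int m = (l + r) / 2;
    return min(query(2 * v, l, m, ql, qr, k), query(2 * v + 1, m, r, ql, qr, k));
}

int main() {
    vector<pair<int,int>> a = {{2,5},{8,7},{2,3},{8,6},{2,1},{8,4}};
    n = a.size();
    node.assign(4 * n, {});
    build(1, 0, n, a);
    int res = query(1, 0, n, 2, 6, 8);      // L=3, R=6, K=8 -> expect 4
    cout << (res == INT_MAX ? -1 : res) << '\n';
}
```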