I have been stuck on the following problem for quite a long time.
**Statement**
You are given an array of $$$N$$$ integers. All values are distinct and lie in $$$[1, N]$$$, so the array is a permutation. You will be given two integers $$$L$$$ and $$$R$$$ ($$$1 \le L \le R \le N$$$), and you need to find the length of the largest contiguous subarray of the permutation in which every value is between $$$L$$$ and $$$R$$$ inclusive. You will be asked many such queries on the given array.
**Input**
The first line contains two integers $$$N$$$ and $$$Q$$$ ($$$1 \le N, Q \le 10^5$$$): the size of the array and the number of queries. The next line contains a permutation of the integers $$$1 \ldots N$$$. Each of the next $$$Q$$$ lines contains a pair of integers $$$L$$$ and $$$R$$$ ($$$1 \le L \le R \le N$$$).
**Output**
Print $$$Q$$$ lines; each should contain the length of the largest contiguous subarray whose values all lie in $$$[L, R]$$$ (inclusive).
**Sample input**
6 3
1 5 2 3 6 4
1 5
1 4
3 4
**Sample output**
4
2
1
**Time limit**
2 seconds
It will be really helpful if you can provide me with a solution.
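To make the task concrete, here is a naive per-query scan. It is far too slow for $$$N, Q = 10^5$$$ in the worst case, but it is handy as a reference for checking a faster solution. (It assumes the sample permutation is `1 5 2 3 6 4`; the sample in the statement appears to be missing one element, and the expected outputs force the missing value $$$4$$$ to be at the end.)

```cpp
#include <algorithm>
#include <vector>

// Naive reference: for a single query (l, r), scan the array once and track
// the length of the current run of values inside [l, r]. O(N) per query.
int longestRunInRange(const std::vector<int>& a, int l, int r) {
    int best = 0, cur = 0;
    for (int v : a) {
        if (l <= v && v <= r) best = std::max(best, ++cur);
        else cur = 0;
    }
    return best;
}
```

On the sample array this yields 4, 2, and 1 for the three queries.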
Name |
---|
Can we process queries offline? I think we can make Mo's algorithm work here.
YES, we can process queries offline.
Let's call the given permutation $$$A$$$. For every number from $$$1$$$ to $$$N$$$, write down its position in $$$A$$$ to get a permutation $$$B$$$. Now we can solve the problem using Mo's algorithm and DSU. Consider a query $$$[L_i,\,R_i]$$$, and let $$$B_j$$$ denote the $$$j$$$-th number in $$$B$$$. Take an array $$$C$$$ of size $$$N$$$, and for each $$$j$$$ with $$$L_i \leq j \leq R_i$$$, mark the $$$B_j$$$-th cell of $$$C$$$. The answer to the query is the length of the longest contiguous subarray of $$$C$$$ whose cells are all marked. You can keep track of the sizes of the components (maximal contiguous runs of marked cells) using a DSU.
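A sketch of this single-query subroutine (names are illustrative; $$$B$$$ is 1-indexed by value, so `B[v]` is the position of value $$$v$$$ in $$$A$$$):

```cpp
#include <algorithm>
#include <numeric>
#include <utility>
#include <vector>

// Mark positions B[L..R] one by one; a DSU over positions maintains the
// maximal runs of marked cells, and the largest component size is the answer.
struct Dsu {
    std::vector<int> parent, sz;
    explicit Dsu(int n) : parent(n), sz(n, 1) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    int unite(int a, int b) {  // returns the size of the merged component
        a = find(a); b = find(b);
        if (a != b) {
            if (sz[a] < sz[b]) std::swap(a, b);
            parent[b] = a;
            sz[a] += sz[b];
        }
        return sz[a];
    }
};

int answerOneQuery(const std::vector<int>& B, int L, int R) {
    int n = (int)B.size() - 1;
    std::vector<char> marked(n + 2, 0);  // sentinels at 0 and n+1 stay unmarked
    Dsu dsu(n + 2);
    int best = 0;
    for (int v = L; v <= R; ++v) {
        int p = B[v];
        marked[p] = 1;
        int s = 1;
        if (marked[p - 1]) s = dsu.unite(p, p - 1);
        if (marked[p + 1]) s = dsu.unite(p, p + 1);
        best = std::max(best, s);
    }
    return best;
}
```

Each query then costs $$$O((R - L + 1)\,\alpha(N))$$$ on its own; the difficulty, discussed below, is undoing these merges between queries.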
Edit: By "DSU", I don't mean anything complicated. It's enough to keep the endpoints of the components as pairs in a suitable data structure (for example, a `std::set`).

So we need a persistent DSU for this. AFAIK, persistent DSU works in $$$O(\log N)$$$, as we need a stack to take snapshots and do rollbacks, merging smaller components into larger ones.
So we get a solution in $$$O(N \sqrt{N} \log N)$$$, which is still too costly for us.
No, we don't need these complicated data structures. I edited my comment to explain it further.
But the set still costs an extra $$$\log N$$$ factor. So what are you suggesting?
I didn't mean that we can reduce the order of complexity using a `std::set`. :D I just meant that there's an easier way to implement the solution.

Provide the problem link and I will post my solution.
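For concreteness, the interval-endpoint bookkeeping mentioned above might look like this (an illustrative sketch, not the promised solution):

```cpp
#include <algorithm>
#include <set>
#include <utility>

// Keep every maximal run of marked cells as a (left, right) pair, ordered by
// left endpoint. Marking cell p (assumed not yet marked) merges it with a run
// ending at p-1 and/or starting at p+1, so each mark costs O(log N).
struct IntervalRuns {
    std::set<std::pair<int, int>> runs;
    int best = 0;
    void mark(int p) {
        int l = p, r = p;
        auto it = runs.lower_bound({p + 1, 0});  // run starting right after p?
        if (it != runs.end() && it->first == p + 1) {
            r = it->second;
            it = runs.erase(it);
        }
        if (it != runs.begin()) {                // run ending right before p?
            auto prev = std::prev(it);
            if (prev->second == p - 1) {
                l = prev->first;
                runs.erase(prev);
            }
        }
        runs.insert({l, r});
        best = std::max(best, r - l + 1);
    }
};
```

This avoids an explicit DSU entirely; merges are just erase-and-reinsert on the set.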
This is the problem, but unfortunately there is no dataset to judge it, so you will always get a runtime error.
TL;DR: There exists a variation of Mo's algorithm with only adds and rollbacks.
I assume that you understand the solution with Mo and DSU — the ideas of how to use Mo's algorithm and how to use a DSU to solve one query in $$$O(N \alpha(N))$$$.

This is not Mo's algorithm, but it is similar. Queries of length less than $$$\sqrt{N}$$$ can be answered in $$$O(len)$$$ time, similarly to what I will do next; let's leave that as an exercise. All longer queries we group by the bucket (of size $$$\sqrt{N}$$$) in which their left border lies. Within one bucket we sort the queries by right border. Say the borders of the bucket are $$$L$$$ and $$$R$$$ (meaning the left borders of all its queries lie in $$$[L, R]$$$). We start with the segment $$$[R, R]$$$ and move the right border to the right (adding the element on the right). When we reach the right border of some query, we move the left border to the left until it coincides with the query's left border, answer the query, and then roll back all the left-border movements. The complexity is the same as in Mo's algorithm; the difference is that we only ever add and roll back.
The underlying data structure is not a DSU: for each marked run we just remember its left border at its right border and vice versa. Every change takes strict $$$O(1)$$$ time, so the total complexity is $$$O((N + Q)\sqrt{N})$$$.
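An illustrative end-to-end implementation of this add-and-rollback scheme (a sketch under the assumptions above, not the commenter's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Run lengths are stored only at run endpoints, so adding a marked cell is
// O(1); rollback replays a change log back to a snapshot.
struct Runs {
    std::vector<int> len;      // len[p] is valid only at run endpoints
    std::vector<char> marked;
    int best = 0;
    struct Change { int pos, oldLen; char oldMarked; };
    std::vector<Change> hist;
    explicit Runs(int n) : len(n + 2, 0), marked(n + 2, 0) {}
    void save(int p) { hist.push_back({p, len[p], marked[p]}); }
    void add(int p) {          // mark position p, merging with neighbours
        int l = marked[p - 1] ? p - len[p - 1] : p;
        int r = marked[p + 1] ? p + len[p + 1] : p;
        save(p); save(l); save(r);
        marked[p] = 1;
        len[l] = len[r] = r - l + 1;
        best = std::max(best, r - l + 1);
    }
    void rollback(size_t snap, int snapBest) {
        while (hist.size() > snap) {
            Change c = hist.back(); hist.pop_back();
            len[c.pos] = c.oldLen;
            marked[c.pos] = c.oldMarked;
        }
        best = snapBest;
    }
};

// A is 1-indexed (A[0] unused); queries are (L, R) pairs over values.
std::vector<int> solveAll(const std::vector<int>& A,
                          const std::vector<std::pair<int, int>>& qs) {
    int n = (int)A.size() - 1;
    std::vector<int> B(n + 1);
    for (int i = 1; i <= n; ++i) B[A[i]] = i;  // B[v] = position of value v
    int block = std::max(1, (int)std::sqrt((double)n));
    std::vector<int> ans(qs.size());
    std::vector<std::vector<int>> byBucket(n / block + 2);
    for (int i = 0; i < (int)qs.size(); ++i) {
        int L = qs[i].first, R = qs[i].second;
        if (R - L + 1 <= block) {              // short query: rebuild directly
            Runs runs(n);
            for (int v = L; v <= R; ++v) runs.add(B[v]);
            ans[i] = runs.best;
        } else {
            byBucket[(L - 1) / block].push_back(i);
        }
    }
    for (auto& bucket : byBucket) {
        if (bucket.empty()) continue;
        std::sort(bucket.begin(), bucket.end(), [&](int a, int b) {
            return qs[a].second < qs[b].second;
        });
        // bR = last value index belonging to this bucket
        int bR = ((qs[bucket[0]].first - 1) / block + 1) * block;
        Runs runs(n);
        int cur = bR;                          // values bR+1..cur added so far
        for (int qi : bucket) {
            int L = qs[qi].first, R = qs[qi].second;
            while (cur < R) runs.add(B[++cur]);
            size_t snap = runs.hist.size();
            int snapBest = runs.best;
            for (int v = L; v <= bR; ++v) runs.add(B[v]);  // temporary adds
            ans[qi] = runs.best;
            runs.rollback(snap, snapBest);     // undo the left extension
        }
    }
    return ans;
}
```

Within a bucket, each right-border add is permanent and each temporary left extension touches at most $$$\sqrt{N}$$$ values before being rolled back, which gives the $$$O((N + Q)\sqrt{N})$$$ total.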
Thank you. This will work.