Hello! I'm having trouble solving problem 150E — Freezing with Style.
It can be solved by Binary Search & Centroid Decomposition & a bit data structure in $$$\mathcal{O}(n\log^2 n)$$$.
But I'm wondering: is there an $$$\mathcal{O}(n\log n)$$$ solution?
I suppose the binary search should be preserved, so that accounts for one $$$\log n$$$. The remaining problem is: given a tree whose edge weights are either $$$1$$$ or $$$-1$$$, determine whether there is a simple path with non-negative total weight whose length is in $$$[l,r]$$$.
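To make the decision problem concrete, here is a minimal brute-force checker (my own sketch; the name `existsPath` is made up). Every simple path in a tree is a walk away from one of its endpoints, so a DFS from every vertex enumerates all paths in $$$\mathcal{O}(n^2)$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Decision problem: does the tree contain a simple path with
// between l and r edges and non-negative total weight?
// edges[i] = {u, v, w} with w in {+1, -1}. O(n^2), for testing only.
// Note: a length-0 path (a single vertex) counts when l == 0.
bool existsPath(int n, const vector<array<int,3>>& edges, int l, int r) {
    vector<vector<pair<int,int>>> adj(n);
    for (auto [u, v, w] : edges) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    }
    bool ok = false;
    // Walk away from the start vertex, never returning to the parent:
    // this visits every simple path starting at s exactly once.
    function<void(int,int,int,int)> dfs = [&](int v, int parent, int len, int sum) {
        if (len >= l && len <= r && sum >= 0) ok = true;
        for (auto [to, w] : adj[v])
            if (to != parent) dfs(to, v, len + 1, sum + w);
    };
    for (int s = 0; s < n; ++s) dfs(s, -1, 0, 0);
    return ok;
}
```

Slow, but handy as a reference to stress-test a faster solution against.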
My idea is to use high-low decomposition; you can learn it here (solution of 1009F — Dominant Indices). It looks like DSU on tree, but focuses on the depth of the subtree instead of its size. Because this problem also has something to do with depth, I think this method may help.
I can't get any further this way, so I'm writing this blog in the hope that someone can help me with this problem. Thanks!
I'll just write centroid decomposition then...
Yeah, I think a single logarithm is possible.
First, you indeed don't need centroid decomposition to get $$$\mathcal{O} (n \cdot \log (10^9) \cdot \log n)$$$. Binary search the answer, then use a bottom-up dp where you merge smaller subtrees into bigger ones (in terms of depth). When you iterate over depth $$$d$$$ in the smaller subtree, you know which depths you need in the other subtree (between $$$l-d$$$ and $$$r-d$$$), and you query a segment tree for the maximum in this interval.
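A hedged sketch of that check for one fixed binary-search threshold (my own code; the name `Checker` is made up). The naive inner loop stands in for the segment-tree query, so this sketch is quadratic in the worst case — it only illustrates the merge-by-depth structure:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Decision check: given a tree with +1/-1 edge weights, is there a
// simple path with between L and R edges and non-negative sum?
// best[d] = max sum of a downward path of exactly d edges; child
// arrays are merged smaller-into-bigger by depth.
struct Checker {
    static constexpr long long NEG = LLONG_MIN / 4;
    vector<vector<pair<int,int>>> adj; // (neighbor, weight +1 or -1)
    int L, R;
    bool found;

    Checker(int n) : adj(n) {}
    void addEdge(int u, int v, int w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    }
    bool feasible(int l, int r) {
        L = l; R = r; found = false;
        dfs(0, -1);
        return found;
    }
    vector<long long> dfs(int v, int parent) {
        vector<long long> best = {0}; // depth 0: the vertex v itself
        for (auto [c, w] : adj[v]) {
            if (c == parent) continue;
            vector<long long> sub = dfs(c, v);
            vector<long long> cur(sub.size() + 1, NEG);
            for (size_t d = 0; d < sub.size(); ++d)
                if (sub[d] != NEG) cur[d + 1] = sub[d] + w; // extend by edge (v, c)
            if (cur.size() > best.size()) swap(cur, best); // smaller-to-bigger
            // Pair a leg of d edges on one side of v with a leg of e edges
            // on the other, l <= d + e <= r. The real solution asks a
            // segment tree for the max over e in [l - d, r - d]; the plain
            // loop below stands in for that query.
            for (size_t d = 0; d < cur.size(); ++d) {
                if (cur[d] == NEG) continue;
                long long lo = max<long long>(0, L - (long long)d);
                long long hi = min<long long>((long long)best.size() - 1, R - (long long)d);
                for (long long e = lo; e <= hi; ++e)
                    if (best[e] != NEG && cur[d] + best[e] >= 0) found = true;
            }
            for (size_t d = 0; d < cur.size(); ++d)
                best[d] = max(best[d], cur[d]); // merge into the bigger array
        }
        return best;
    }
};
```

Combining before merging guarantees the two legs come from different children (or one leg is empty, covering single downward paths).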
My first idea for getting rid of one logarithm was that the values are $$$-1$$$ and $$$+1$$$, which means the max prefix sums are very special (each next one is greater by at most $$$1$$$). It didn't seem to work.
Instead, let's notice that we always query an interval of size $$$r-l$$$, or a prefix, or a suffix (when the interval $$$[l-d, r-d]$$$ doesn't fully fit in the bigger subtree). This can be maintained without a segment tree. We need a data structure that handles the following operations:
And we used the fact that the sum over depth of non-largest children is linear.
Crap, I have already implemented centroid decomposition and got it accepted...
I'll check this solution later, thanks a lot!
Well, your goal should be to learn something new rather than just get AC. Then it isn't bad at all that you solved it in one way and will now know (or even implement) another version.
Can you please explain how to solve such a problem using a data structure with $$$O(1)$$$ time per operation?
There are modifications, and the queries of type 4 are not always going in one direction, because the current node may have multiple children.
Well, I came up with a $$$O(n\log n)$$$ solution to this problem, although it seems not very useful.
Oops, it seems that my solution has a fatal error...
I don't understand what you're talking about. What are queries of type 4?
Ask for a maximum in an interval of size $$$r-l$$$.
Given a sequence of length $$$N$$$ and a constant $$$K$$$ ($$$K \leq N$$$), we can find the maximum in every interval of size $$$K$$$ in $$$O(N)$$$ total. As you move the sliding window to the right, maintain a decreasing queue of candidates for future maxima. For more detail, google: sliding window maximum.
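A minimal implementation of that trick (my own sketch), to make the amortized argument concrete:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sliding window maximum: the max of every window of size k, in O(n)
// total. The deque holds indices whose values form a decreasing
// sequence of candidates; each index is pushed and popped at most
// once, hence amortized O(1) per step.
vector<int> slidingMax(const vector<int>& a, int k) {
    vector<int> res;
    deque<int> q; // indices; a[q.front()] is the current window maximum
    for (int i = 0; i < (int)a.size(); ++i) {
        // a[i] dominates every smaller element behind it
        while (!q.empty() && a[q.back()] <= a[i]) q.pop_back();
        q.push_back(i);
        if (q.front() <= i - k) q.pop_front(); // fell out of the window
        if (i >= k - 1) res.push_back(a[q.front()]);
    }
    return res;
}
```

For example, `slidingMax({1, 3, -1, -3, 5, 3, 6, 7}, 3)` yields `{3, 3, 5, 5, 6, 7}`.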
The above still works if you keep appending new elements to one side of the sequence. In some sequence/vector, we keep all maxima (i.e., the maximum for every interval of size $$$r-l$$$). When a query of type 2 happens (prefix max update), we need to maximize some of the most recent maxima, and maximize some of the most recent elements in the current monotonic queue.
But I don't think the sliding window trick works here: it is amortized, and your range maximum queries don't move in one direction:
If the current node has many children, then while you process one child, your queries on this data structure form a sliding window; but with multiple children you need to "query -> modify -> query", and the later queries may go back.
Well, I mean it doesn't work because you may have queries like $$$[3,6], [2,5], [1,4]$$$, then some modifications, and when you go to another child you query $$$[4,7], [3,6], [2,5]$$$... so the queries don't move in one direction. Also, don't the amortized sliding window and the modifications make it difficult to reuse the calculated results?
I don't remember the initial problem, but I claim that I can solve the 4 queries described in my previous post. I'm aware that updates and queries are interleaved.
The amortization still works: every element is removed from the monotonic queue/deque at most once.
For max query, I just read from some array of answers.
Sorry for interrupting 18 months later...
Of course, ODT contacted me first, but I had almost forgotten the problem and the $$$\mathcal O (n \log n)$$$ solution, so I suggested they contact you.
I think the main issue is that we cannot easily process queries of types 2 and 3 (mentioned by you).
More precisely:
ODT came up with a solution using amortized $$$\mathcal O (1)$$$ DSU. But they didn't fully understand your solution.
I got this solution too, but I don't really understand why the complexity is $$$O(N \log^2 N)$$$.
In fact, I think it is $$$O(N \log^3 N)$$$: first we have one $$$\log N$$$ factor for the binary search; then, in the centroid decomposition, for each of the $$$O(N \log N)$$$ paths we query the bit data structure in $$$O(\log N)$$$.
Combining these gives $$$O(N \log^3 N)$$$ total time complexity. Can someone tell me where I'm wrong and why the complexity is $$$O(N \log^2 N)$$$? Thanks in advance.
Sorry, by "a bit data structure" I truly meant a small data structure, not a BIT (Fenwick tree). It's actually a non-decreasing queue, used to get the minimum value in a sliding window.