Let's say a bag is expensive if $$$a_i \gt c$$$ and free if $$$a_i \leq c$$$.
First, observe that as long as there is at least one free bag remaining, we should destroy a free bag. We will perform all expensive destructions at the end. The goal is to perform as many free destructions as possible.
When there are multiple free bags, which one to choose? The bigger a bag is, the less time you have to destroy it before it becomes expensive. Hence, at each second, you should destroy the biggest remaining free bag.
This process can be naively simulated in $$$\mathcal{O}(n^2)$$$, which was sufficient under given constraints. If you sort the array beforehand and keep the current multiplier in a separate variable (instead of multiplying all weights), it can be done in $$$\mathcal{O}(n \log n)$$$ time.
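The sorted greedy can be sketched as follows. This is only a sketch: the excerpt does not restate the growth rule, so the code assumes the remaining weights are multiplied by a constant factor (default $$$2$$$) each second, and the hypothetical helper only counts how many bags can be destroyed for free.

```python
def count_free_destructions(a, c, factor=2):
    # Sort descending: always destroy the biggest still-free bag,
    # since bigger bags become expensive sooner.
    # `mult` is the current weight multiplier, kept in a separate
    # variable instead of multiplying every remaining weight.
    free = 0
    mult = 1
    for w in sorted(a, reverse=True):
        if w * mult <= c:   # this bag is still free right now
            free += 1
            mult *= factor  # one second passes; remaining weights grow
        # bags that are already expensive are skipped and destroyed at the end
    return free
```

Since weights only grow, a bag that is expensive now stays expensive, so a single descending scan suffices and the whole routine runs in $$$\mathcal{O}(n \log n)$$$ because of the sort.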
Instead of taking either the leftmost or rightmost element, think about taking either the minimum or maximum element.
It's possible to ensure that $$$q_1 \lt q_2 \gt q_3 \lt q_4 \gt q_5 \ldots$$$
Let's number turns from $$$1$$$ to $$$n$$$. On odd turns, we take the minimum element and on even turns, we take the maximum element. We claim that $$$q_1 \lt q_2 \gt q_3 \lt q_4 \gt q_5 \ldots$$$ (which is actually stronger than what the statement asked).
Proof: consider taking the minimum on an odd turn: $$$q_i = \min(p_l, p_r)$$$, where $$$l, r$$$ were the endpoints of the remaining array at turn $$$i$$$. Suppose without loss of generality that the minimum was on the left, i.e. $$$q_i = p_l \lt p_r$$$. Then at the next turn, $$$p_r$$$ is still available:
For example, if the remaining array at an odd turn is $$$[5, ?, \ldots, 10]$$$, we take the minimum $$$5$$$, and then no matter what the $$$?$$$ is, we take at least $$$10$$$ which is greater than $$$5$$$. This proves $$$q_i \lt q_{i+1}$$$ when $$$i$$$ is odd. Similarly, we can prove $$$q_i \gt q_{i+1}$$$ when $$$i$$$ is even.
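The construction from the proof can be sketched as follows (a hypothetical `build_q` helper: take the minimum of the two ends on odd turns, the maximum on even turns):

```python
from collections import deque

def build_q(p):
    # Repeatedly remove an end of the remaining array:
    # odd turns take the smaller end, even turns take the larger end.
    d = deque(p)
    q = []
    for turn in range(1, len(p) + 1):
        if turn % 2 == 1:
            q.append(d.popleft() if d[0] < d[-1] else d.pop())  # min end
        else:
            q.append(d.popleft() if d[0] > d[-1] else d.pop())  # max end
    return q
```

On a small example the output alternates as claimed: $$$q_1 \lt q_2 \gt q_3 \lt q_4 \gt q_5$$$.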
Try to come up with a sufficient condition first, then prove it's necessary. Doing some examples with $$$n = 3$$$ might be useful.
Let $$$m_i = \min(b_1, \ldots, b_{i-1})$$$ for every $$$2 \leq i \leq n$$$, which can be computed in linear time. We claim we can achieve $$$a = b$$$ if and only if $$$b_i - m_i \lt m_i$$$ for every $$$2 \leq i \leq n$$$.
Proof that it's sufficient: we can build the array from left to right. For every $$$i$$$, if $$$b_i \lt m_i$$$ then we directly add $$$b_i$$$. Otherwise, we first add $$$b_i - m_i$$$, then $$$m_i$$$.
Proof that it's necessary: suppose we managed to reach $$$b$$$. At every step, $$$a_i \leq b_i$$$ for every $$$i$$$. Consider the last operation $$$x$$$ that incremented $$$a_i$$$. Then, by the definition of the operation, we must have $$$a_i \lt x$$$ and $$$x \leq m_i$$$.
Hence, $$$b_i = a_i + x \leq 2x-1 \leq 2m_i - 1$$$.
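The criterion can be checked in linear time. A sketch (the surrounding problem statement is not restated here; we only test the condition $$$b_i \lt 2m_i$$$ for every $$$i \geq 2$$$):

```python
def achievable(b):
    # m tracks m_i = min(b_1, ..., b_{i-1}); the criterion is
    # b_i - m_i < m_i, i.e. b_i < 2 * m_i, for every i >= 2.
    m = b[0]
    for x in b[1:]:
        if x >= 2 * m:
            return False
        m = min(m, x)
    return True
```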
Let $$$b_i = \max(a_i, a_{i+1})$$$, show that $$$b_1 \geq b_2 \geq \ldots \geq b_{n-1}$$$.
If $$$a_i \lt a_{i+1}$$$, show that $$$a_{i+1} \gt \max(a_{i+2}, a_{i+3}, \ldots a_{n})$$$.
Let $$$\{i_1, \ldots, i_m\}$$$ be the set of $$$i \in [1, n-1]$$$ such that $$$a_i \gt a_{i+1}$$$. Let us show that the sequence $$$(i_1, \ldots, i_m)$$$ is an LDS:
- It is a decreasing subsequence: let $$$k \in [1, m-1]$$$. If $$$i_{k+1} = i_k + 1$$$, then $$$a_{i_k} \gt a_{i_k + 1} = a_{i_{k+1}}$$$. Otherwise, we have $$$a_{i_k + 2} \lt \max(a_{i_k}, a_{i_k + 1})$$$, and since $$$i_k + 1$$$ is not in the set, $$$a_{i_k + 1} \leq a_{i_k + 2}$$$, so $$$a_{i_k + 2} \lt a_{i_k}$$$. We also have $$$a_{i_k + 1} \lt a_{i_k}$$$ by definition, so an easy induction on $$$j \gt 0$$$ shows that $$$a_{i_k + j} \lt a_{i_k}$$$ wherever it makes sense; in particular $$$a_{i_{k+1}} \lt a_{i_k}$$$.
- It is optimal: let $$$E$$$ be the set of indices taken by an LDS. For each $$$i$$$ such that $$$a_i \leq a_{i+1}$$$, at least one of the elements of $$$\{i, i+1\}$$$ does not belong to $$$E$$$. Moreover, these sets are disjoint: if $$$a_i \leq a_{i+1}$$$, we cannot also have $$$a_{i+1} \leq a_{i+2}$$$ (otherwise the condition $$$\max(a_i, a_{i+1}) \gt a_{i+2}$$$ is not met). So $$$|E| \leq m$$$.
To calculate the sum of the LDS over all subarrays, it therefore suffices to count, for each $$$i$$$ such that $$$a_i \gt a_{i+1}$$$, the number of subarrays $$$a[l, r]$$$ that contain both $$$i$$$ and $$$i + 1$$$. With $$$0$$$-indexing, this is $$$(i+1) \cdot (n - i - 1)$$$: the necessary and sufficient condition is $$$l \leq i$$$ and $$$r \geq i + 1$$$.
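The final count can be sketched like this, measuring each subarray's contribution by the number of descents $$$a_i \gt a_{i+1}$$$ it contains (the set from the proof above); adapt if your LDS definition counts elements rather than descents:

```python
def sum_of_descent_counts(a):
    # Each descent a[i] > a[i+1] (0-indexed) contributes one unit to every
    # subarray a[l..r] with l <= i and r >= i + 1, of which there are
    # (i + 1) * (n - i - 1).
    n = len(a)
    return sum((i + 1) * (n - i - 1)
               for i in range(n - 1) if a[i] > a[i + 1])
```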
2128E1 - Submedians (Easy Version)
You can use binary search on the answer. Why?
Let's start with a perhaps more natural task: determining whether or not $$$v$$$ is a submedian. To do so, we create an array $$$\text{big}$$$ such that $$$\text{big}[i] = 1 \text{ if } a[i] \geq v \text{ else } -1$$$.
We also create an array $$$\text{small}$$$ such that $$$\text{small}[i] = 1 \text{ if } a[i] \leq v \text{ else } -1$$$.
Knowing whether $$$v$$$ is a submedian requires using both $$$\text{big}$$$ and $$$\text{small}$$$; but when you want to know whether the largest submedian is at least $$$v$$$, do you actually need both arrays?
Solve the relaxed problem where we only consider the first three constraints (and neglect the constraint on $$$\text{small}$$$), i.e. finding the largest $$$v$$$ such that:
- $$$1 \leq l \leq r \leq n$$$,
- $$$r - l + 1 \geq k$$$,
- $$$\text{big}[l] + \dots + \text{big}[r] \geq 0$$$
Use binary search and prefix sums to rewrite the constraint on $$$\text{big}$$$
Is the resulting value a submedian?
First, note that the predicate ``there exists a submedian $$$\geq v$$$'' is monotonic. That is, if there exists a submedian $$$\geq v$$$, then for all $$$v' \lt v$$$, there exists a submedian $$$\geq v'$$$. Thus, we can binary search to find the largest $$$v$$$ such that there exists a submedian $$$\geq v$$$. Such a $$$v$$$ will be the largest submedian.
Now, what is left is to check, for a fixed $$$v$$$, whether or not there exists a submedian $$$\geq v$$$.
Let's start with a perhaps more natural task: determining whether or not $$$v$$$ is a submedian. To do so, we create an array $$$\text{big}$$$ such that $$$\text{big}[i] = 1 \text{ if } a[i] \geq v \text{ else } -1$$$.
We also create an array $$$\text{small}$$$ such that $$$\text{small}[i] = 1 \text{ if } a[i] \leq v \text{ else } -1$$$.
Using these notations, $$$v$$$ is a submedian iff there exists $$$(l, r)$$$ such that:
- $$$1 \leq l \leq r \leq n$$$,
- $$$r - l + 1 \geq k$$$,
- $$$\text{big}[l] + \dots + \text{big}[r] \geq 0$$$
- $$$\text{small}[l] + \dots + \text{small}[r] \geq 0$$$
Let's try to solve a relaxed problem where we only consider the first three constraints (and neglect the constraint on $$$\text{small}$$$), i.e. finding the largest $$$v$$$ such that:
- $$$1 \leq l \leq r \leq n$$$,
- $$$r - l + 1 \geq k$$$,
- $$$\text{big}[l] + \dots + \text{big}[r] \geq 0$$$
Such a problem is standard and can easily be solved in $$$\mathcal{O}(n \log n)$$$ with binary search and prefix sums. Indeed, for a fixed $$$v$$$, we can iterate over the value of $$$r$$$. Now, $$$\text{big}[l] + \dots + \text{big}[r] \geq 0$$$ iff $$$\text{pref}[r] - \text{pref}[l-1] \geq 0$$$, i.e. $$$\text{pref}[r] \geq \text{pref}[l-1]$$$, so taking the $$$l$$$ that minimizes $$$\text{pref}[l-1]$$$ is optimal. Thus, it is enough to maintain such an $$$l$$$ in constant time (when you increase $$$r$$$, there is only one new candidate for $$$l$$$).
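Putting the binary search and the prefix-sum scan together, the whole of E1 might be sketched as follows (a hypothetical `largest_submedian` helper that checks only the $$$\text{big}$$$ condition, as described above):

```python
def largest_submedian(a, k):
    # Binary search the largest v such that some window of length >= k
    # has big-sum >= 0, where big[i] = 1 if a[i] >= v else -1.
    def ok(v):
        pref = [0]
        for x in a:
            pref.append(pref[-1] + (1 if x >= v else -1))
        best = float('inf')           # min of pref[l-1] over valid l
        for r in range(k, len(a) + 1):
            best = min(best, pref[r - k])  # new candidate l-1 = r-k
            if pref[r] - best >= 0:
                return True
        return False

    lo, hi = min(a), max(a)
    while lo < hi:                    # find largest v with ok(v)
        mid = (lo + hi + 1) // 2
        if ok(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```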
Now, I claim that the value of $$$v$$$ we get happens to be the largest submedian. Indeed, if it doesn't satisfy $$$\text{small}[l] + \dots + \text{small}[r] \geq 0$$$, then strictly more than half of the elements of $$$a[l, r]$$$ are $$$\gt v$$$, hence $$$\geq v + 1$$$; so $$$v + 1$$$ would still satisfy $$$\text{big}[l] + \dots + \text{big}[r] \geq 0$$$ on the same range, which contradicts the maximality of $$$v$$$.
2128E2 - Submedians (Hard Version)
The set of submedians is a range. Try to prove it.
Consider two subarrays $$$[l_1, r_1]$$$ and $$$[l_2, r_2]$$$ such that $$$|l_1 - l_2| + |r_1 - r_2| \leq 1$$$: they share at least one median. How can we prove it? Knowing this fact, how can we prove Hint 1?
Find a discretely ``continuous'' path from the subarray of the minimum submedian to the subarray of the maximum submedian.
Read the editorial of E1 first.
In such problems, it can be useful to try an easier version, for example, finding maximal/minimal values (which is why the subtask is helpful).
Now, one can observe that if $$$x \leq y$$$ are submedians, then any $$$z$$$ such that $$$x \leq z \leq y$$$ is also a submedian! One can prove this using a discrete analogue of the intermediate value theorem. Fix $$$x \leq z \leq y$$$, and let $$$[l_1, r_1]$$$ (resp. $$$[l_2, r_2]$$$) be a range witnessing that $$$x$$$ (resp. $$$y$$$) is a submedian. WLOG, assume $$$l_1 \lt l_2$$$. Start by sweeping $$$r_1$$$ to $$$l_2$$$, then $$$r_1$$$ to $$$r_2$$$, and finally $$$l_1$$$ to $$$l_2$$$. Then one of the ranges that we covered has submedian $$$z$$$. Intuitively, when incrementing one of the ends of a range, the $$$+1$$$/$$$-1$$$ values in the arrays $$$\text{big}$$$ and $$$\text{small}$$$ do not change much (the $$$1$$$-norm distance is bounded by $$$2$$$), so we can think of using the intermediate value theorem (see the proof below). Also, note that at any time, the covered range has length at least $$$k$$$.
Thus, it is enough to find the smallest and largest submedians (using the solution of E1) and maintain the set of submedians while sweeping. One way to do that is to use two Fenwick trees to keep track of the $$$\text{big}$$$ and $$$\text{small}$$$ arrays. Alternatively, you can use two sets (one with the lower half of the values of the current range, and one with the upper half).
For each $$$[l, r]$$$ that our algorithm covers, we consider the point $$$(|\{i \in [l,r] \mid a[i] \geq z\}| - |\{i \in [l,r] \mid a[i] \lt z\}|, |\{i \in [l,r] \mid a[i] \leq z\}| - |\{i \in [l,r] \mid a[i] \gt z\}|)$$$, where $$$|A|$$$ is the cardinality of the set $$$A$$$. Essentially, you can think of this point as an encoding of both conditions on $$$\text{small}$$$ and $$$\text{big}$$$ for $$$z$$$ to be a submedian.
Note that, when moving one end of the range, each coordinate of the point changes by at most $$$1$$$ (in absolute value). Also note that, by a simple computation, the sum of both coordinates is non-negative (it equals twice the number of $$$i$$$ such that $$$a[i] = z$$$).
I claim that one of these points will have both coordinates non-negative (thus, $$$z$$$ will be a submedian). Indeed, our path starts at a point with a negative first coordinate and a non-negative second coordinate, and ends at a point with a non-negative first coordinate and a negative second coordinate. Each move goes to a point at distance at most $$$2$$$. However, you can see on the drawing below that one must then pass through a point with both coordinates non-negative (otherwise, you would be stuck in the region with a negative first coordinate and a non-negative second coordinate).

In red, the area of points such that $$$x + y \geq 0$$$; points below the green line are forbidden. The blue points are our starting and ending points. Finally, the arrows are the possible moves after incrementing/decrementing one end of the range.
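To build intuition (and to stress-test a fast solution), the range structure of the submedian set can be checked by brute force. A hypothetical helper that directly encodes the $$$\text{big}$$$/$$$\text{small}$$$ conditions:

```python
def submedians_brute(a, k):
    # v is a submedian iff some subarray of length >= k has
    # both big-sum >= 0 and small-sum >= 0.
    n = len(a)
    subs = set()
    for v in set(a):
        for l in range(n):
            for r in range(l + k - 1, n):
                seg = a[l:r + 1]
                big = sum(1 if x >= v else -1 for x in seg)
                small = sum(1 if x <= v else -1 for x in seg)
                if big >= 0 and small >= 0:
                    subs.add(v)
    return sorted(subs)
```

On small inputs, the returned values always form a contiguous range of the distinct values of $$$a$$$, as claimed in Hint 1.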
We abbreviate $$$\text{dist}_w$$$ by $$$d$$$. We denote $$$d_L$$$ as the distance when every edge has weight $$$l_i$$$, and $$$d_R$$$ as the distance when every edge has weight $$$r_i$$$.
Say an edge is green if its weight is $$$l_i$$$ and red if its weight is $$$r_i$$$. If the answer is YES, prove there exists a solution that colors a path from $$$1$$$ to $$$n$$$ in green and the rest of the edges in red.
Consider an arbitrary solution $$$w$$$ and let $$$P$$$ be the shortest path from $$$1$$$ to $$$n$$$ with respect to $$$w$$$. Decreasing the weight of an edge in $$$P$$$ will decrease $$$d(1, n)$$$ by exactly one and will increase $$$d(1, k) + d(k, n)$$$ by at most one. Increasing the weight of an edge outside $$$P$$$ will not increase $$$d(1, n)$$$ and will not decrease $$$d(1, k) + d(k, n)$$$. By repeating these operations, we can make $$$P$$$ green and the rest red.
Let $$$P$$$ be the chosen green path. Find a necessary condition for the corresponding weight assignment to be correct. Then, prove if it's sufficient.
The condition is $$$d_L(u, v) \lt d_R(u, k) + d_R(k, v)$$$ for all $$$u, v \in P$$$.
It is clearly necessary: otherwise, the walk $$$1 \rightarrow_L u \rightarrow_R k \rightarrow_R v \rightarrow_L n$$$ would have length at most $$$d(1, n)$$$. Conversely, suppose there exists a $$$1$$$-$$$n$$$ shortest path going through $$$k$$$; taking the last node of $$$P$$$ visited before reaching $$$k$$$ and the first node of $$$P$$$ visited after reaching $$$k$$$ gives a pair of nodes violating the condition.
Try to build a correct path using a Dijkstra-like algorithm. It might be helpful to have an intuitive understanding of the condition found in the previous hint, in order to **quantify with a single integer how good the current path is**. Imagine you're a robber going from $$$1$$$ to $$$n$$$ and you don't want to be caught by cops.
Imagine nodes are cities, $$$k$$$ is a police station, and you're some robber. Traversing the $$$i$$$-th edge takes $$$l_i$$$ seconds for the robber, and $$$r_i$$$ seconds for the citizens and the cops.
Each time you go through a city, the citizens of this city will run to the police station to warn the cops. Once the cops are warned, they start dispatching in all directions as fast as they can to catch up to you.
The current danger can be represented by an integer timer $$$x$$$. If $$$x \lt 0$$$, it means the cops will be warned in $$$-x$$$ seconds. If $$$x \geq 0$$$, it means the cops were warned $$$x$$$ seconds ago. It's the only information you need to remember.
Try to find for each node $$$u$$$ what is the corresponding minimum timer $$$t_u$$$. As the robber runs through the graph, the timer always increases, so you can do this with Dijkstra.
Please read all the hints first.
We first make a classical Dijkstra from $$$k$$$ with $$$r$$$-weighted edges. Let $$$d_u$$$ be the distance between $$$u$$$ and $$$k$$$ for citizens/cops.
We then make a modified Dijkstra, starting from the situation $$$(1, t_1)$$$ with $$$t_1 = -d_1$$$ (meaning the cops will be warned in $$$d_1$$$ seconds).
When you traverse the $$$i$$$-th edge to go from $$$(u, t)$$$ to $$$v$$$, the new timer will be $$$t' = \max(t + l_i, -d_v)$$$.
We have $$$t' \geq t + l_i$$$ because the previous timer continued to advance while the robber was traversing the edge, and $$$t' \geq -d_v$$$ because a new citizen started running from $$$v$$$ to the police station.
Each time, we take the situation with the minimum timer from the priority queue. If the node has already been processed, or if $$$d_u \leq t_u$$$ (meaning the cops can catch up to the robber at $$$u$$$), the situation is skipped. The answer is YES if and only if we processed the node $$$n$$$.
The time complexity is $$$\mathcal{O}((n + m) \log (n+m))$$$.
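The two Dijkstras above can be sketched compactly as follows. The edge-list representation `(u, v, l, r)` and the helper name are hypothetical; the function returns whether the answer is YES.

```python
import heapq

def robber_escapes(n, k, edges):
    # edges: list of (u, v, l, r), nodes 1-indexed, undirected
    INF = float('inf')
    adj = [[] for _ in range(n + 1)]
    for u, v, l, r in edges:
        adj[u].append((v, l, r))
        adj[v].append((u, l, r))

    # Classical Dijkstra from k with r-weights: d[u] = cop/citizen distance.
    d = [INF] * (n + 1)
    d[k] = 0
    pq = [(0, k)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue
        for v, _, r in adj[u]:
            if du + r < d[v]:
                d[v] = du + r
                heapq.heappush(pq, (d[v], v))

    # Modified Dijkstra on timers: t' = max(t + l_i, -d[v]).
    t = [INF] * (n + 1)
    t[1] = -d[1]
    pq = [(t[1], 1)]
    while pq:
        tu, u = heapq.heappop(pq)
        if tu > t[u] or d[u] <= tu:   # stale, or the cops catch up at u
            continue
        if u == n:
            return True               # the robber safely reaches n
        for v, l, _ in adj[u]:
            nt = max(tu + l, -d[v])
            if nt < t[v]:
                t[v] = nt
                heapq.heappush(pq, (nt, v))
    return False
```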