Idea: i_love_penguins

Preparation: i_love_penguins

Editorial: i_love_penguins

**Hints**

**Hint 1**

The answer will always have either $$$s$$$ or the reversed string $$$s$$$ as a prefix.

**Hint 2**

Adding the string to the end is required no more than once.

**Solution**

Let $$$t$$$ be the reversed string $$$s$$$. Notice that it is advantageous for us to use operation 1 (adding the reversed string at the end) no more than once. Indeed, having obtained some string, we will simply spend the remaining operations on flipping the string; thus, we will get either that string or the reversed one, depending on the parity of the number of remaining operations.

It is easy to see that the answer will always have either $$$s$$$ or $$$t$$$ as a prefix. Then, we find the two lexicographically minimal strings with prefix $$$s$$$ and with prefix $$$t$$$. These will be the strings:

- $$$s$$$ — flip the string $$$n$$$ times (since $$$n$$$ is even, every $$$2$$$ operations return the string to its original form);
- $$$t + s$$$ — first flip the string, add the reversed string to the end, then flip the string $$$n - 2$$$ times.

The answer will be the lexicographically minimal string out of $$$s$$$ and $$$t + s$$$.

**Implementation on C++**

```
#include <bits/stdc++.h>

int main() {
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
    int tt;
    std::cin >> tt;
    while (tt--) {
        int n;
        std::cin >> n;
        std::string s;
        std::cin >> s;
        std::string t = s;
        std::reverse(t.begin(), t.end());
        std::cout << std::min(s, t + s) << "\n";
    }
    return 0;
}
```

**Implementation on Python**

```
t = int(input())
for _ in range(t):
    n = int(input())
    s = input()
    t = s[::-1]
    print(min(s, t + s))
```

Idea: AndreyPavlov

Preparation: AndreyPavlov, i_love_penguins

Editorial: AndreyPavlov

**Hints**

**Hint 1**

What is the minimum $$$k$$$ that can be in a division?

**Hint 2**

Suppose $$$\operatorname{MEX}(x, y) = \operatorname{MEX}(y + 1, z)$$$, what can be said about $$$\operatorname{MEX}(x, z)$$$?

**Tutorial**

Suppose we correctly divided the array into $$$k > 2$$$ segments — $$$(1, r_1), (l_2, r_2), \ldots, (l_k, r_k)$$$. Note that we can merge the first two segments: the numbers from $$$0$$$ to $$$mex - 1$$$ all occur in these two segments, and the number $$$mex$$$ does not occur in them. Therefore, if there is a division into $$$k > 2$$$ segments, then there is also one into $$$k - 1$$$ segments.

Therefore, it is sufficient to check whether there is a division of the array into $$$k = 2$$$ segments, which can be done in $$$O(n)$$$ by precomputing $$$\operatorname{MEX}$$$ on prefixes and suffixes; then we need to find some $$$i$$$ for which $$$\operatorname{MEX}(1, i) = \operatorname{MEX}(i + 1, n)$$$.
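As a hedged illustration of this check (the function name and structure are mine, not taken from the reference implementations), the prefix/suffix MEX idea can be sketched in Python:

```python
def mex_split(a):
    # Returns a 1-based prefix length i with MEX(a[:i]) == MEX(a[i:]),
    # or -1 if no such split exists. O(n) overall.
    n = len(a)
    # suffix_mex[i] = MEX of a[i:]
    suffix_mex = [0] * (n + 1)
    cnt = [0] * (n + 2)
    mex = 0
    for i in range(n - 1, -1, -1):
        if a[i] <= n:                 # values above n cannot affect the MEX
            cnt[a[i]] += 1
        while cnt[mex]:
            mex += 1
        suffix_mex[i] = mex
    # sweep prefixes, maintaining the prefix MEX incrementally
    pcnt = [0] * (n + 2)
    pmex = 0
    for i in range(n - 1):            # both parts must be non-empty
        if a[i] <= n:
            pcnt[a[i]] += 1
        while pcnt[pmex]:
            pmex += 1
        if pmex == suffix_mex[i + 1]:
            return i + 1
    return -1

assert mex_split([0, 1, 2, 0, 1, 2]) == 3   # MEX([0,1,2]) == MEX([0,1,2]) == 3
assert mex_split([0, 0, 1, 1]) == -1
```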

**Implementation on C++**

```
/* Includes */
#include <bits/stdc++.h>
/* Using libraries */
using namespace std;
/* Defines */
#define fast ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0)
#define vc vector
#define pii pair <int, int>
#define int long long

void solve () {
    int n;
    cin >> n;
    vc <int> a(n);
    vc <int> cnt1(n + 1), cnt2(n + 1);
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
        cnt2[a[i]]++;
    }
    int mex1 = 0, mex2 = 0;
    while (cnt2[mex2])
        ++mex2;
    for (int i = 0; i < n; ++i) {
        cnt1[a[i]]++;
        if (--cnt2[a[i]] == 0 && mex2 > a[i]) {
            mex2 = a[i];
        }
        while (mex2 && !cnt2[mex2 - 1])
            --mex2;
        while (cnt1[mex1])
            ++mex1;
        if (mex1 == mex2) {
            cout << "2\n";
            cout << 1 << " " << i + 1 << "\n";
            cout << i + 2 << " " << n << "\n";
            return;
        }
    }
    cout << "-1\n";
}

signed main() {
    fast;
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
    return 0;
}
```

**Implementation on Python**

```
def solve():
    n = int(input())
    a = list(map(int, input().split()))
    cur_mex = 0
    cur_have = [0] * (n + 1)
    for el in a:
        cur_have[el] += 1
        while cur_have[cur_mex]:
            cur_mex += 1
    another_mex = 0
    another_have = [0] * (n + 1)
    for i in range(n):
        cur_have[a[i]] -= 1
        if cur_have[a[i]] == 0 and cur_mex > a[i]:
            cur_mex = a[i]
        another_have[a[i]] += 1
        while another_have[another_mex]:
            another_mex += 1
        if cur_mex == another_mex:
            print(2)
            print("1 " + str(i + 1))
            print(str(i + 2) + " " + str(n))
            return
    print(-1)

t = int(input())
for _ in range(t):
    solve()
```

Idea: i_love_penguins

Preparation: i_love_penguins

Editorial: i_love_penguins

**Hints**

**Hint 1**

Try to find the answer to the problem by hand. How can changing the order of the messages reduce time?

**Hint 2**

Let the order in the response be $$$p_1, p_2, \ldots, p_k$$$. It is always advantageous for the response to satisfy $$$b_{p_1} \leq b_{p_2} \leq \ldots \leq b_{p_k}$$$.

**Hint 3**

Greedy?

**Solution**

Let the order in the response be $$$p_1, p_2, \ldots, p_k$$$. Note that in the optimal response, it will hold that $$$b_{p_1} \leq b_{p_2} \leq \ldots \leq b_{p_k}$$$. It's not hard to prove that such an order minimizes the sum of the absolute differences of adjacent elements in the array $$$b$$$. Then, this sum will turn into $$$b_{p_k} - b_{p_1}$$$, that is, the difference between the highest and lowest value of $$$b$$$ in the set of messages.
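This claim can be sanity-checked by brute force over all orderings (a throwaway sketch, not part of the reference solutions):

```python
import itertools

# Over all orderings of the chosen b-values, the minimum possible sum of
# adjacent absolute differences equals max(b) - min(b), attained by sorted order.
b = [5, 1, 9, 3]
best = min(sum(abs(p[i + 1] - p[i]) for i in range(len(p) - 1))
           for p in itertools.permutations(b))
assert best == max(b) - min(b) == 8
```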

Let's sort the pairs $$$(a_i, b_i)$$$ in ascending order of $$$b_i$$$. Fix the minimum value $$$b_l$$$ and the maximum value $$$b_r$$$ in the set of messages. Note that the sum of the absolute differences of $$$b$$$ in the response will not change if we additionally take messages with values $$$b_l \leq b_i \leq b_r$$$. Thus, the task reduces to finding the maximum number of messages such that $$$\sum a \leq L - (b_r - b_l)$$$.

Iterate over $$$b_l$$$ and $$$b_r$$$. Apply a greedy algorithm: sort all values $$$a_i$$$ ($$$l \leq i \leq r$$$) and keep adding values until the time exceeds $$$L$$$. This solution works in $$$O(n^3\log{}n)$$$, which is too slow.

To speed up the solution, while iterating over $$$b_r$$$, maintain a data structure that allows adding an element, removing the maximum, and getting the maximum (in C++ there are $$$\texttt{multiset}$$$ and $$$\texttt{priority\_queue}$$$). In this data structure, maintain the minimal values of $$$a$$$ so that the sum of times does not exceed $$$L$$$. If the current time exceeds $$$L$$$, remove elements from the structure until the current time becomes at most $$$L$$$; for a fixed $$$b_l$$$ there will be no more than $$$n$$$ such removals in total. We have obtained a solution in $$$O(n^2\log{}n)$$$, which is fast enough.

**Implementation on C++**

```
#include <bits/stdc++.h>

signed main() {
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
    int t = 1;
    std::cin >> t;
    while (t--) {
        int n, L;
        std::cin >> n >> L;
        std::vector<std::pair<int, int>> v(n);
        for (int i = 0; i < n; i++) {
            std::cin >> v[i].second >> v[i].first;
        }
        std::sort(v.begin(), v.end());
        int ans = 0;
        for (int l = 0; l < n; l++) {
            std::multiset<int> s;
            int cur = 0;
            for (int r = l; r < n; r++) {
                s.insert(v[r].second);
                cur += v[r].second;
                while (!s.empty() && v[r].first - v[l].first + cur > L) {
                    int max_value = *s.rbegin();
                    cur -= max_value;
                    s.extract(max_value);
                }
                ans = std::max(ans, (int) s.size());
            }
        }
        std::cout << ans << "\n";
    }
    return 0;
}
```

**Implementation on Python**

```
import heapq
import sys
input = sys.stdin.readline

t = int(input())
for _ in range(t):
    n, L = map(int, input().split())
    v = []
    for i in range(n):
        a, b = map(int, input().split())
        v.append((b, a))
    v.sort()
    ans = 0
    for l in range(n):
        pq = []
        heapq.heapify(pq)
        cur = 0
        size = 0
        for r in range(l, n):
            heapq.heappush(pq, -v[r][1])
            size += 1
            cur += v[r][1]
            while size and v[r][0] - v[l][0] + cur > L:
                maxx = -heapq.heappop(pq)
                cur -= maxx
                size -= 1
            ans = max(ans, size)
    print(ans)
```

Idea: IzhtskiyTimofey

Preparation: IzhtskiyTimofey

Editorial: IzhtskiyTimofey

**Hints**

**Hint 1**

The principle of inclusion-exclusion.

**Hint 2**

The system of equations $$$x+y = s_i$$$, $$$y-x=s_j$$$ has $$$0$$$ or $$$1$$$ solutions in integers $$$x, y$$$.

**Tutorial**

Applying the formula of inclusion-exclusion, the answer to the problem will be: $$$\mathrm{cnt}(x, y) - \mathrm{cnt}(x, y: x + y \in s) - \mathrm{cnt}(x, y: y - x \in s) + \mathrm{cnt}(x, y: x + y, y - x \in s)$$$. Let's calculate each value separately.

The number of possible pairs $$$x, y$$$ is $$$\frac{(c+1)\cdot(c+2)}{2}$$$.

The number of pairs $$$x, y: x + y \in s$$$. We iterate over the sum value $$$s_i$$$; let $$$x + y = s_i$$$. Then to each $$$0 \leq x \leq \lfloor \frac{s_i}{2} \rfloor$$$ there corresponds exactly one $$$y$$$, i.e., the number of pairs with such a sum is $$$\lfloor \frac{s_i}{2} \rfloor + 1$$$.

The number of pairs $$$x, y: y - x \in s$$$. We iterate over the difference value $$$s_i$$$; let $$$y - x = s_i$$$. Then to each $$$s_i \leq y \leq c$$$ there corresponds exactly one $$$x$$$, i.e., the number of pairs with such a difference is $$$c - s_i + 1$$$.

The number of pairs $$$x, y: x + y, y - x \in s$$$. Let $$$x+y=s_i$$$, $$$y-x=s_j$$$; then $$$x = \frac{s_i - s_j}{2}$$$ and $$$y = \frac{s_i+s_j}{2}$$$. If $$$s_i, s_j$$$ have different parities, then such $$$x, y$$$ are not suitable because they are non-integers; otherwise, such $$$x, y$$$ are suitable, and we need to count this pair. So let's calculate the number of even and odd numbers in $$$s$$$ — $$$even$$$ and $$$odd$$$ respectively. Thus, the number of such pairs is $$$\frac{even\cdot(even+1)}{2}+\frac{odd\cdot(odd+1)}{2}$$$.

With all these quantities, we can calculate the answer. The final complexity is $$$O(n)$$$. The problem can also be solved in $$$O(n \cdot \log n)$$$ or $$$O(n)$$$ using other methods.
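The four counts above can be cross-checked against brute force on a small case (a verification sketch; `count_pairs` is my name for the closed-form count, not taken from the reference code):

```python
def count_pairs(c, s):
    # Closed-form count from the editorial: pairs 0 <= x <= y <= c
    # with x + y not in s and y - x not in s.
    total = (c + 1) * (c + 2) // 2
    even = sum(1 for v in s if v % 2 == 0)
    odd = len(s) - even
    for v in s:
        total -= v // 2 + 1        # pairs with x + y = v
        total -= c - v + 1         # pairs with y - x = v
    # pairs excluded twice: same-parity (sum, difference) combinations
    total += even * (even + 1) // 2 + odd * (odd + 1) // 2
    return total

# brute-force check on a small case
c, s = 7, [1, 4, 6]
brute = sum(1 for x in range(c + 1) for y in range(x, c + 1)
            if x + y not in s and y - x not in s)
assert count_pairs(c, s) == brute
```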

**Implementation on C++**

```
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

void solve_case() {
    int n, c;
    cin >> n >> c;
    vector<int> s(n);
    for (int i = 0; i < n; i++) cin >> s[i];
    int odd = 0, even = 0;
    ll ans = 1ll * (c + 1) * (c + 2) / 2;
    for (int i = 0; i < n; i++) {
        ans -= s[i] / 2 + 1;
        ans -= c - s[i] + 1;
        if (s[i] % 2 == 0) {
            even++;
        } else {
            odd++;
        }
    }
    ans += 1ll * even * (even + 1) / 2;
    ans += 1ll * odd * (odd + 1) / 2;
    cout << ans << "\n";
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    int T = 1;
    cin >> T;
    while (T--) {
        solve_case();
    }
}
```

**Implementation on Python**

```
def solve_case():
    n, c = map(int, input().split())
    s = list(map(int, input().split()))
    ans = (c + 1) * (c + 2) // 2
    even, odd = 0, 0
    for i in range(n):
        ans -= s[i] // 2 + 1
        ans -= c - s[i] + 1
        if s[i] % 2 == 0:
            even += 1
        else:
            odd += 1
    ans += even * (even + 1) // 2
    ans += odd * (odd + 1) // 2
    print(ans)

t = int(input())
for test in range(t):
    solve_case()
```

1935E - Distance Learning Courses in MAC

Idea: IzhtskiyTimofey

Preparation: AndreyPavlov

Editorial: AndreyPavlov

**Hints**

**Hint 1**

Try to solve the problem if $$$x_i = 0$$$ for each $$$i$$$.

**Hint 2**

What will be the answer for two segments $$$(0, 2^i), (0, 2^i)$$$?

**Hint 3**

Which bits will definitely be included in the answer?

**Tutorial**

Let's solve the problem when $$$x = 0$$$. We will iterate over the bits from the most significant to the least significant and try to include them in the answer. Suppose we are considering bit $$$i$$$; if this bit occurs $$$c$$$ times among the numbers $$$y$$$, there are several cases:

- $$$c = 0$$$ — this bit cannot be included in the answer
- $$$c = 1$$$ — this bit will be included in the answer, we add it
- $$$c > 1$$$ — a special case, because for one number where bit $$$i$$$ is present we can keep it set, and in another such number we can instead set all bits $$$j < i$$$.

Therefore, if we encounter some bit $$$i$$$ that occurs more than once, then all bits $$$j \le i$$$ will also be included in the answer. Now let's solve the original problem. Take a pair $$$(x_i, y_i)$$$ and find the largest common bit prefix of the two numbers — let it be the number $$$w_i$$$. Clearly, all bits of $$$w_i$$$ will be included in the answer, so we form a new pair $$$(x'_i, y'_i) = (x_i - w_i, y_i - w_i)$$$ and remember the number $$$w_i$$$. Now note that $$$y'_i - 1 \ge x'_i$$$. Why do we need this fact? Remember, in the case when some bit occurred more than once, we added it and all smaller bits to the answer. For this, we would pick from one of the segments a number equal to $$$2^i - 1$$$ (possibly together with higher bits), and since $$$y'_i - 1 \ge x'_i$$$, such a number is always available in the segment.
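The common-prefix splitting step can be sketched as follows (a Python illustration of the `fix` step from the C++ implementation; the helper name `split_pair` is mine):

```python
def split_pair(x, y):
    # Split (x, y) into the common leading-bit prefix w and the remainders
    # (x', y') = (x - w, y - w); for x != y they satisfy y' - 1 >= x'.
    if x == y:
        return x, 0, 0                      # the whole value is the common prefix
    pref = (1 << (x ^ y).bit_length()) - 1  # mask of the top differing bit and below
    w = y & ~pref                           # shared high bits, guaranteed in the answer
    return w, x & pref, y & pref

# example: x = 5 (101) and y = 7 (111) share the leading bit 100
w, xp, yp = split_pair(5, 7)
assert (w, xp, yp) == (4, 1, 3)
assert yp - 1 >= xp   # the inequality used in the tutorial
```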

Therefore, the solution algorithm for request $$$j$$$ is as follows:

- Take the bitwise OR of all $$$w_i$$$ for $$$l_j \le i \le r_j$$$; let this be the number $$$W$$$.
- Iterate over bits $$$i$$$ and, similarly to the case $$$x = 0$$$, consider the same cases, but for the array $$$y'$$$. Also take into account whether the bit occurs in $$$W$$$.

This solution can be implemented in $$$O(n \cdot \log n)$$$ using prefix sums for each bit. There is also a solution using a segment tree, so the problem can be solved with modification requests as well.

**Implementation on C++**

```
/* Includes */
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
/* Using libraries */
using namespace std;
/* Defines */
template <class T>
using vc = vector <T>;
using ll = long long;
using ld = long double;
using pii = pair <int, int>;

template <class T>
void output(T &a) {
    for (auto i : a)
        cout << i << ' ';
    cout << '\n';
}

template <class T>
bool chmin(T &a, T b) {
    if (a > b) {
        a = b;
        return true;
    }
    return false;
}

template <class T>
bool chmax(T &a, T b) {
    if (a < b) {
        a = b;
        return true;
    }
    return false;
}

const int N = 2e5;
const int bit = 30;

struct segtree {
    vc <int> t; int n;
    segtree (int n) : n(n) {
        t.resize(n * 2);
    }
    void upd (int i, int x) {
        for (t[i += n] = x; i > 1; i >>= 1) {
            t[i >> 1] = t[i] | t[i ^ 1];
        }
    }
    int get (int l, int r) {
        int res = 0;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1)
                res |= t[l++];
            if (r & 1)
                res |= t[--r];
        }
        return res;
    }
};

int n;
int l[N], r[N];
int w[N];

void fix () {
    for (int i = 0; i < n; ++i) {
        if (l[i] == r[i]) {
            w[i] = l[i];
            l[i] = r[i] = 0;
            continue;
        }
        int pref = (1 << (__lg(l[i] ^ r[i]) + 1)) - 1;
        w[i] = r[i] - (r[i] & pref);
        r[i] &= pref;
        l[i] &= pref;
    }
}

void solve() {
    cin >> n;
    for (int i = 0; i < n; ++i) {
        cin >> l[i] >> r[i];
    }
    fix();
    segtree t(n);
    vc <vc <int>> bits(bit, vc <int> (n + 1));
    for (int i = 0; i < n; ++i) {
        t.upd(i, w[i]);
        for (int j = 0; j < bit; ++j) {
            bits[j][i + 1] = bits[j][i];
            if (r[i] >> j & 1) {
                bits[j][i + 1]++;
            }
        }
    }
    int q;
    cin >> q;
    while (q--) {
        int x, y;
        cin >> x >> y; --x;
        int ans = t.get(x, y);
        for (int j = bit - 1; j >= 0; --j) {
            int cnt = bits[j][y] - bits[j][x] + (ans >> j & 1);
            if (cnt > 1) {
                ans |= (2 << j) - 1;
                break;
            } else if (cnt == 1) {
                ans |= 1 << j;
            }
        }
        cout << ans << ' ';
    }
    cout << "\n";
}

signed main() {
    ios::sync_with_stdio(false); cin.tie(0); cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
```

Idea: AndreyPavlov

Preparation: IzhtskiyTimofey

Editorial: IzhtskiyTimofey

**Hints**

**Hint 1**

What different values can the answer for a vertex take?

**Hint 2**

Edges with a weight of 1 can connect all vertices $$$[1, v - 1]$$$ and all vertices $$$[v + 1, n]$$$.

**Hint 3**

Consider the edges of the form $$$(mn_u, mn_u - 1)$$$, $$$(mx_u, mx_u + 1)$$$, where $$$mn_u$$$, $$$mx_u$$$ are the minimum and maximum in the subtree of vertex u with respect to $$$v$$$.

**Tutorial**

Let's fix some vertex $$$v$$$ for which the answer is being calculated. Suppose the degree of the vertex $$$v$$$ in the tree is $$$deg_v$$$, then it's clear that it's necessary to add $$$deg_v - 1$$$ edges. Consider the components that appear after removing $$$v$$$. Then, the goal is to use the new edges to unite all the components into one, using the minimum total cost. This is the same as finding the minimum spanning tree in a graph, where the vertices are the components that resulted from removing $$$v$$$, and for every $$$a, b$$$, an edge with a weight of $$$|a - b|$$$ is drawn between the components containing $$$a$$$ and $$$b$$$.

Let's simulate Kruskal's algorithm for this graph. Consider all the single-weight edges in this graph: $$$(1, 2), (2, 3), ..., (v - 2, v - 1), (v + 1, v + 2), ..., (n - 1, n)$$$. It's clear that using the single-weight edges, the vertices with numbers $$$[1, v - 1]$$$ will definitely end up in one component, and the vertices with numbers $$$[v + 1, n]$$$ will also end up in one component. To unite these two components, it would be optimal to add an edge $$$(v - 1, v + 1)$$$.

It turns out that it's sufficient to consider only all the single-weight edges and the edge $$$(v - 1, v + 1)$$$. Let's limit the number of single-weight edges to $$$O(deg_v)$$$. For this, in each component $$$u$$$, calculate $$$mn_u$$$ and $$$mx_u$$$ — the minimum and maximum in the component, respectively. Claim: among the single-weight edges, it's sufficient to consider edges of the form $$$(mn_u, mn_u - 1)$$$, $$$(mx_u, mx_u + 1)$$$.

**Proof**

First, understand when it's necessary to add the edge $$$(v - 1, v + 1)$$$. Note that if there's at least one component $$$u$$$ such that $$$mn_u < v < mx_u$$$, then the edge $$$(v - 1, v + 1)$$$ won't be needed; otherwise, it will be. This is quite easy to show by simulating Kruskal's algorithm.

Let $$$v = 1$$$. We'll show that using edges $$$(mn_u, mn_u - 1)$$$, all components will unite. Go through the vertices $$$x$$$ from $$$2$$$ to $$$n$$$ and maintain the invariant that all vertices from $$$2$$$ to $$$x$$$ are in one component. At $$$x = 2$$$, this holds. When $$$x$$$ is the minimum in some component, then the edge $$$(x - 1, x)$$$ will be added, and since $$$x - 1$$$ is in one component with $$$2$$$, $$$x$$$ will now also be. When $$$x$$$ is not the minimum in some component, then $$$mn$$$ — the minimum in the component $$$x$$$ will be in one component with $$$2$$$ ($$$mn < x$$$, the invariant holds), meaning $$$x$$$ will also be in one component with $$$2$$$. Thus, it turns out that all will be in one component.

Now consider an arbitrary $$$v$$$. Separately consider the prefix of vertices $$$[1, v - 1]$$$ and the suffix $$$[v + 1, n]$$$. Then, similarly to $$$v = 1$$$, it can be shown that for the prefix of vertices $$$[1, v - 1]$$$, using edges of the form $$$(mn_u, mn_u - 1)$$$, you can unite $$$[1, v - 1]$$$. Similarly, for the suffix of vertices $$$[v + 1, n]$$$, using edges of the form $$$(mx_u, mx_u + 1)$$$, you can unite $$$[v + 1, n]$$$.

Now, if the edge $$$(v - 1, v + 1)$$$ is necessary, then add it to the answer. Otherwise, there's at least one component $$$u$$$ such that $$$mn_u < v < mx_u$$$, meaning the prefix of vertices $$$[1, v - 1]$$$ and the suffix $$$[v + 1, n]$$$ will unite into one component.

Finding $$$mn_u$$$, $$$mx_u$$$ for each component is straightforward; what remains is to determine which components are connected by the edges $$$(mn_u, mn_u - 1)$$$, $$$(mx_u, mx_u + 1)$$$. This can be done with binary search through the Euler tour of the tree. After that, Kruskal's algorithm can be initiated to calculate the answer.

Let's estimate the time complexity. For a specific vertex $$$v$$$, the time spent is $$$O(deg_v \cdot \log n)$$$, and since the degrees sum to $$$O(n)$$$, the total time complexity is $$$O(n \cdot \log n)$$$.

Depending on the implementation of the last step, the problem can also be solved in $$$O(n \cdot \alpha(n))$$$ or even $$$O(n)$$$, where $$$\alpha(n)$$$ is the inverse Ackermann function.

**Implementation on C++**

```
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> p, r;
    int comp;
    DSU(int n) : p(n), r(n) {
        iota(p.begin(), p.end(), 0);
        comp = n;
    }
    int find(int v) {
        return (p[v] == v) ? v : p[v] = find(p[v]);
    }
    bool join(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return false;
        comp--;
        if (r[a] < r[b]) swap(a, b);
        p[b] = a;
        if (r[a] == r[b]) r[a]++;
        return true;
    }
};

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    int T;
    cin >> T;
    while (T--) {
        int n;
        cin >> n;
        vector<vector<int>> g(n);
        for (int i = 0; i < n - 1; i++) {
            int x, y;
            cin >> x >> y;
            x--, y--;
            g[x].push_back(y);
            g[y].push_back(x);
        }
        vector<int> tin(n), tout(n), l0(n), r0(n);
        vector<int> lup(n), rup(n);
        int timer = 0;
        vector<int> ord;
        function<void(int, int)> dfs = [&](int v, int p) {
            l0[v] = r0[v] = v;
            tin[v] = timer++;
            ord.push_back(v);
            for (int u : g[v]) {
                if (u == p) continue;
                dfs(u, v);
                l0[v] = min(l0[v], l0[u]);
                r0[v] = max(r0[v], r0[u]);
            }
            tout[v] = timer - 1;
        };
        dfs(0, 0);
        function<bool(int, int)> ancestor = [&](int v, int u) {
            return tin[v] <= tin[u] && tin[u] <= tout[v];
        };
        vector<int> pref_min_ord(n + 1, n), suf_min_ord(n + 1, n);
        vector<int> pref_max_ord(n + 1, -1), suf_max_ord(n + 1, -1);
        for (int i = 0; i < n; i++) {
            pref_min_ord[i + 1] = min(pref_min_ord[i], ord[i]);
            pref_max_ord[i + 1] = max(pref_max_ord[i], ord[i]);
        }
        for (int i = n - 1; i >= 0; i--) {
            suf_min_ord[i] = min(suf_min_ord[i + 1], ord[i]);
            suf_max_ord[i] = max(suf_max_ord[i + 1], ord[i]);
        }
        for (int id = (int)ord.size() - 1; id >= 0; id--) {
            int v = ord[id];
            lup[v] = min(pref_min_ord[tin[v]], suf_min_ord[tout[v] + 1]);
            rup[v] = max(pref_max_ord[tin[v]], suf_max_ord[tout[v] + 1]);
        }
        vector<vector<pair<int, int>>> ans(n);
        vector<int> weight(n);
        function<void(int, int)> dfs2 = [&](int v, int p) {
            vector<int> all_id, all_tin;
            int idpar = -1;
            for (int id = 0; id < g[v].size(); id++) {
                int u = g[v][id];
                if (u != p) {
                    dfs2(u, v);
                    all_id.push_back(id);
                    all_tin.push_back(tin[u]);
                } else {
                    idpar = id;
                }
            }
            function<int(int)> get_subtree = [&](int u) {
                if (!ancestor(v, u)) return idpar;
                return all_id[upper_bound(all_tin.begin(), all_tin.end(), tin[u]) - all_tin.begin() - 1];
            };
            DSU dsu(g[v].size());
            function<void(int, int)> try_add = [&](int x, int y) {
                if (x == -1 || y == n || x == v || y == v) return;
                if (dsu.join(get_subtree(x), get_subtree(y))) {
                    ans[v].push_back({x, y});
                    weight[v] += abs(x - y);
                }
            };
            for (int u : g[v]) {
                if (u != p) {
                    try_add(l0[u] - 1, l0[u]);
                    try_add(r0[u], r0[u] + 1);
                } else {
                    try_add(lup[v] - 1, lup[v]);
                    try_add(rup[v], rup[v] + 1);
                }
            }
            if (dsu.comp != 1) {
                try_add(v - 1, v + 1);
            }
            assert(dsu.comp == 1);
        };
        dfs2(0, 0);
        for (int i = 0; i < n; i++) {
            cout << weight[i] << " " << ans[i].size() << "\n";
            for (auto [a, b] : ans[i]) {
                cout << a + 1 << " " << b + 1 << "\n";
            }
            cout << "\n";
        }
    }
}
```

We hope you liked our problems!

Very fast editorial

I think D is easier than C lol. It takes me nearly 50 mins to come up with C, but only 25 mins on D.

YES

D was a lot harder for me but that may be just cause I'm bad at math.

I feel for you

Man you are Pro!!

True, my stupid ass got stuck on C but D took me like 10 mins.

Me too, I only used 10 mins actually lol

If you're bad, just practice more.

If you're bad, just practice more.

I have a doubt in the editorial of problem C. When I fix L and R, i.e. the border values, why is it sufficient to check only that the summation of the chosen a_i does not exceed l - (b[R] - b[L])? Shouldn't we also ensure that we have taken a[L] and a[R] in that summation?

Technically speaking, you are right, but it doesn't matter here whether you take it or not.

Say a[L] and a[R] are not in `cur` (from the editorial's implementation), and say a[L+d] and a[R-e] (R-e >= L+d) are the leftmost and rightmost a's included in `cur`. Then, when the loops are at i = L+d and j = R-e, this case is going to be counted, or maybe even a bigger subset of a's (since the difference of b's is smaller there than when i = L and j = R, as the b's are sorted). Hence the `multiset s` doesn't exactly contain the a's which are considered in the current loop, but the answer comes out right (a clever implementation!).

PS: My submission in contest also works exactly on your line of thinking, you can check that out here: 249816460. Although the code is very messy :(

can you please elaborate using an example , I am really struggling to get this point !

thanks, you solved my doubts

Can you please explain how your solution is not O(n^3 log n), given that you have used while loops (which can run for n iterations each time) inside the n^2 loop?

What is the mistake in this code for problem C? It would be kind of you if you could figure it out.

```
#include <bits/stdc++.h>
using namespace std;
#define fast ios_base::sync_with_stdio(false), cin.tie(nullptr), cout.tie(nullptr);
#define ll long long
#define pb push_back
int main() {
    fast
    int t;
    ll int n, x, a, b, a1, y, j;
    cin >> t;
    while (t--) {
        cin >> n >> x;
        a = 0;
        map<ll int, int> m;
        vector<pair<ll int, ll int>> v(n);
        for (int i = 0; i < n; ++i) {
            cin >> v[i].first >> v[i].second;
            m[v[i].second]++;
            a += v[i].first;
        }
        b = ((*(--m.end())).first - (*m.begin()).first);
        b += a;
        while (b > x) {
            if (v.size() == 1) {
                v.clear();
                break;
            }
            j = 0;
            m[v[0].second]--;
            if (m[v[0].second] < 1) m.erase(v[0].second);
            a1 = a - v[0].first + ((*(--m.end())).first - (*m.begin()).first);
            m[v[0].second]++;
            for (int i = 1; i < v.size(); ++i) {
                m[v[i].second]--;
                if (m[v[i].second] < 1) m.erase(v[i].second);
                y = a - v[i].first + ((*(--m.end())).first - (*m.begin()).first);
                if (y < a1) {
                    a1 = y;
                    j = i;
                }
                m[v[i].second]++;
            }
            a -= v[j].first;
            m[v[j].second]--;
            if (m[v[j].second] < 1) m.erase(m[v[j].second]);
            b = a1;
            v.erase(v.begin() + j);
        }
        cout << v.size() << endl;
    }
}
```

Python implementation, love to see it :)

C is much harder than D,I even used a segment tree to solve it.

can you share the solution using segment tree

I solved C in contest using an O(n^3) algorithm, but someone hacked me. However, the next day I solved the problem with an O(n^3) algorithm again, using 2.1s on the 3s time limit.

Problem C can be solved in $$$O(n \log^2{n})$$$ if we use binary search to find the answer.

UPD: Sorry, I was wrong. Yesterday I had the idea of using binary search and I was sure it would work. I even saw some solutions using binary search, so I did not code it. But today I found out they had different time complexities. My bad. My code with such a complexity received a TLE.

Can you explain how?

how

Binary search $$$k$$$ — the size of the subset and check if there is a subarray of length $$$k$$$ with cost no larger than $$$l$$$. The rest is the same with the editorial.

why to check only subarray when we can choose subsequence also ???

I did think of a binary search approach but then couldn't prove why subarray selection can be considered optimal instead of subsequence selection. Like, if I am starting at $$$b_{5}$$$ as the first element and want to select $$$3$$$ elements, the optimal selection may be choosing $$$(a_5, a_8, a_9)$$$ instead of $$$(a_5, a_6, a_7)$$$.

This can happen because $$$a_6, a_7$$$ can be insanely high while $$$b_9$$$ may be only a bit higher than $$$b_7$$$.

i also couldn't figure out why selecting only the subarray works, why not the subsequence??

We are not checking the whole subarray here; we are checking some elements in the subarray. We can use a priority_queue or multiset in order to get the maximum number of elements whose sum is less than l - dif, where dif is the difference between the b values of the first and last elements of the subarray :)

It seems that your own submission is not using binary search. Can you show us your code?

it also can be solved in O(n^2)

249837146

Binary Search can be applied: 249807781

Disclaimer: This solution is very stupid, the prefix minima dp solution is much easier to think of and much easier to code.

Main Idea:

If we can select a subsequence of size k, so can we select a subsequence of size k' < k. (monotonicity is obvious.)

Now it remains to check if a subsequence of length k exists that doesn't exceed the limit.

Let's assume we fixed the left and right borders. As pointed out, only the sum of the remaining a-values matters because the b-values are only important if they are on the borders.

So we need to compute for all subarrays the sum of the smallest k-2 a-values not included in the borders.

To achieve this, let's iterate over R from 2 to n, set L = R-1 and maintain a multiset that stores the smallest k-2 values currently included in our window. To expand our window, we just need to update the multiset with the a-value of index L.

Total complexity is O(n^2 log n). If you have any further questions, please don't refrain from asking.

IMO C was too hard

Problem E is a fairly interesting and educational problem, I like it!!!

If only I had read D before C... xD

Really liked the problem B.

Why? Because if the mex of every subarray is the same, say `7`, then it means all subarrays have the numbers `{0, 1, 2, 3, 4, 5, 6}` but `7` is missing in all of them. Hence if we combine all sub-arrays, the original array will still not have `7` but will contain the rest of the numbers from `0 to 6`. In `O(N)` we can find the mex of the entire array, `[0, mx)`.

Pseudo Code

Nice! Same here... for the second one though, I just maintained a suffix array of mexes going from n-1 -> 0. Then, I compute the mex for each $$$i$$$ from $$$0$$$ to $$$n$$$ and compare if `leftmex == suffixmex[i+1]`, which is mostly the same as what you're saying, I think! I think the second observation that you only need 2 subarrays was the most useful for me.

bro really called python pseudocode damn

Lol. There are subtle differences which make it more valid to call this pseudo-code. If it were Python:

1. It would be `def solve` and not `function solve`.
2. `set = {}` would just create a dictionary object, making `set.insert(i)` invalid.
3. I am assuming `set = {}` is a set that only stores distinct integers, which helps me in checking whether I have seen all numbers from `0 to mx-1`.
4. `getMex` is assumed to be well understood and implemented by the user.

Thank you for breaking this down! I was having a hard time understanding the editorial but this makes so much sense.

Bro, if 0 appears two or more times in the array, and 1 is the same, but 2 appears just once, then the answer should be no?

Yeah, but that approach is not scalable or simplify-able to write code IMO.

It has lots of corner cases too. I also started by thinking that if there's any non-negative number which occurs only once, then the answer will be no.

"the mex of every subarray will actually be the mex of the entire array if an answer exists."

If the mex is k and k-1 appears just one time in the whole array, then the answer must be no?

Yes, that is correct, but it's not useful. Even if $$$k$$$ is the mex and $$$k-1$$$ appears twice, or even if all values in $$$[0, k-1]$$$ appear twice, there might still be no solution, for example $$$a = [0, 0, 1, 1]$$$.

Yeah, after that I just build my two segments: 0 to k-1 first, then 0 to k-1 again for the second time. If I can build them, the answer is yes; otherwise no. I have already gotten AC that way. But thank you for your analysis.

I did it exactly like this and thought authors really messed up with it. Turns out there's a much simpler solution.

why does 0 1 1 return -1?

The mex of the array is 2

You can't divide it into 2 segments

mex([0, 1]) = 2, mex([1]) = 0

Not okay. So -1.

Wow u back to cp?

A typo? `set.insert(i)` should be `set.insert(arr[i])`

thanks

Unbelievably fast editorial PLUS 2 language implementations?! Wow -- what an effort by the organizers, thank you!!!

What is wrong with this solution?

You are only checking for the first character of the string. You should check the whole string. Your code will fail on this string "adba"

Thanx

You are comparing the first and the last character of the string instead of comparing the complete string with its reverse.

Consider the string "acba". Take n to be any even number. Now, your code will output "acba", even though "abcaacba" is lexicographically smaller.

To correct this, check for reverseStr<str instead.

Yeah, the problem was that if the last and first characters are equal, I have to keep checking the condition (s[s.size()-i-1] < s[i]).

Thank you for answering!

That's the same solution as mine.

You can try the example "acba".

Problem C was a little too difficult for me to understand. Great round anyways.

C was quite easy for me, but unfortunately I forgot the case where both x+y and y-x are in s in D :(

I am a bit confused: while excluding pairs with x + y in S, won't there be some pairs where y - x is also in S, and aren't those doubly excluded when we calculate the answer?

You can see that when subtracting the set containing (x + y), its common part with the set (y - x) is subtracted once; when subtracting the set containing (y - x), its common part with (x + y) is also subtracted once, meaning we have subtracted the common part twice (once in excess), so we need to add their common part back once more.

Got it, thanks! :)
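The correction described above is plain inclusion-exclusion; here is a tiny self-contained sketch (the sets and numbers are invented for illustration, not taken from the problem):

```cpp
#include <set>
#include <vector>

// Count elements of `all` lying in neither "bad" set A nor B.
// Subtracting |in A| and |in B| removes the common part twice,
// so it has to be added back exactly once.
int countGood(const std::vector<int>& all, const std::set<int>& A,
              const std::set<int>& B) {
    int inA = 0, inB = 0, inBoth = 0;
    for (int v : all) {
        inA += A.count(v);
        inB += B.count(v);
        inBoth += (A.count(v) && B.count(v));
    }
    return (int)all.size() - inA - inB + inBoth;
}
```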

B was very nice.

C has a DP solution in $$$O(n^2)$$$. 249790851

Not offending you, but what do you expect you are contributing with these comments? Most people don't understand the code, because only you know what logic you wrote, and it doesn't have comments either. Some understand wrong solutions and concepts get unclear. Please, if you comment, then at least give a small explanation of your code.

We can solve $$$C$$$ with an easy $$$O(n^2)$$$ dp solution. First, sort all pairs by $$$b_i$$$ value.

Now, the sum of $$$|b_i - b_{i-1}|$$$ is equal to $$$b_{last} - b_{first}$$$. Let's say $$$dp_{len}$$$ is the minimum value of the sum of the chosen $$$a_i$$$ (minus $$$b_{first}$$$, see below). The answer is the maximum $$$len$$$ such that $$$dp_{len - 1} + a_i + b_i \le L$$$ for some $$$i$$$.

The base is $$$dp_1 = \min_i(a_i - b_i)$$$.

The transition is $$$dp_{len} = \min(\textit{self}, dp_{len - 1} + a_i)$$$.

please explain your solution clearly by writing the states then the transitions so that it helps us...these types of comments create in us confusion related to the topics...please answer

I think I explained it well; maybe some pseudo-code will help you. Ask if it is not clear.

We must choose a maximum-length subsequence such that the sum $$$\sum a_i + max(b) - min(b)$$$ is minimum (we want the minimum sum, because we want this sum to be not greater than $$$L$$$; if the minimum sum is greater than $$$L$$$, then every sum will be greater than $$$L$$$).

So, let's iterate in non-decreasing order of the values $$$b_i$$$. For $$$len = 1$$$ we store the base $$$dp_1 = \min(dp_1, a_i - b_i)$$$ (because $$$b_i$$$ is the minimal value, that is, $$$min(b)$$$; in the formula we have $$$\sum a_i + max(b) - min(b)$$$, so the current value $$$b_i$$$ will be $$$min(b)$$$ and we must subtract it).

Now the transition is simple: take a previous length and try to add the new value to it, increasing the length by $$$1$$$.

Upd: my submission during the contest [here]
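A rough, runnable sketch of the DP described above (the naming and implementation details are my own, not the commenter's actual submission; messages are (a, b) pairs):

```cpp
#include <algorithm>
#include <climits>
#include <utility>
#include <vector>

// O(n^2) DP for problem C, following the comment above.
// Returns the maximum number of messages readable within budget L.
long long solveC(long long L, std::vector<std::pair<long long, long long>> msgs) {
    int n = (int)msgs.size();
    std::sort(msgs.begin(), msgs.end(),
              [](const auto& x, const auto& y) { return x.second < y.second; });
    const long long INF = LLONG_MAX / 4;
    // dp[len] = min over length-`len` subsequences seen so far of
    //           (sum of chosen a) - (b of the first chosen message)
    std::vector<long long> dp(n + 1, INF);
    long long ans = 0;
    for (auto [a, b] : msgs) {
        if (a <= L) ans = std::max(ans, 1LL);     // a single message costs just a
        for (int len = n; len >= 2; --len)        // try (a, b) as the last message
            if (dp[len - 1] < INF && dp[len - 1] + a + b <= L) {
                ans = std::max(ans, (long long)len);
                break;                            // largest len found for this i
            }
        for (int len = n; len >= 2; --len)        // update dp AFTER the check,
            dp[len] = std::min(dp[len], dp[len - 1] + a); // so i is used once
        dp[1] = std::min(dp[1], a - b);
    }
    return ans;
}
```

On the first sample of the problem (L = 8, messages (4,3), (1,5), (2,4), (4,3), (2,3)) this returns 3, matching the expected answer.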

Thanks...I got it !! sorry if my comment sounded rude

Bro doesn't even try to understand. You just want fast food.

Did someone manage to pass a solution with complexity $$$O(n \log A \log n)$$$, where $$$A$$$ is $$$\max(x_i, y_i)$$$, for problem E? My first two solutions with that complexity gave TLE on test 30, but with the TL and the constraints I believe it should be able to pass. Here is my last submission with that complexity, in case anyone is interested: 249817071. I ended up solving it in $$$O(n \log A)$$$ doing the same but with sweepline.

Maybe you can try to use scanf to read and printf to print the answer. I think your code is correct in time complexity.

Yes, my $$$O(n \log A( \log n+ \log A))$$$ passed with the help of pragmas.

249812906 is $$$n \log n \log A$$$, it passes in 1300ms without any extra optimizations. Feel free to hack if you think you can make it TLE (I might try hacking it myself later)

My $$$O(n \log A \log n)$$$ submission passed in 0.2s with no optimizations.

Was n^2log^2 not intended to pass for C? Was going to CM then FST lol.

My n^2log^2 solution passed, but it most likely would not have passed, had I used multiset or some other heavy O(logn) data structure, instead of priority queue.

Yeah, I used a PST so it's kinda my fault, but I think the TLE case should have been in pretests at least.

feels bad :I

I used a multiset for solving in $$$n^2 (log(n))^2$$$ time and it passed, I think the constant of the time complexity would make a huge difference in this case.

Can someone help me with problem C? I am trying to upsolve C, and looking at the editorial would be my last option. So far I have understood the problem and have thought of taking the input message set as an array of pairs, sorting it first on the basis of $$$a_{p_i}$$$ and then on the basis of $$$b_{p_i}$$$, and then forming sets using dp to find the number of messages. Now, should I move forward or should I change my approach? Any hints?

Maybe you should try to sort the pairs by $$$b_i$$$, because if you select some pairs, the optimal way to arrange them is sorted by $$$b_i$$$.

Thanks for the lightning fast editorial

C can also be solved using dp: dp[i][j] -> minimum time needed to read exactly j messages with i being the last one. This solution is O(n^3) (TLE), but we can optimize it using a prefix dp. AC

I really liked problem E, but I misread C so I didn't have the time to implement it :(.

ngl idgaf fr

:(

My problem E solution (AC after contest) should be $$$O(n \log^3 A)$$$ where $$$A$$$ is the max grade value (maybe smaller than that; the constant should be really small though). I assumed that for intervals that end at $$$r$$$, the number of $$$l$$$ that change the max value is at most $$$\log_2(A)$$$, and for each interval $$$[l, r]$$$, the number of grades that we must consider is also at most $$$\log_2(A)$$$ when transitioning to $$$[l - 1, r]$$$.

For those who are interested: 249836964. Feel free to hack!

Meh, I had an $$$O(n \log^3)$$$ solution but decided not to implement because I thought it would TLE.

I have a quite different solution for C.

First, sort the array in the non-increasing order of $$$b_i$$$ ($$$b_i \ge b_{i+1}$$$). If we can get a set of size $$$k$$$, we can get one of size $$$k-1$$$, so we can binary search on the final answer.

Let the final answer be $$$k$$$ and the set be $$$p_1, p_2, \ldots, p_k$$$ => the cost is $$$a_{p_1} + b_{p_1} - b_{p_2} + a_{p_2} + b_{p_2} - b_{p_3} + \ldots + a_{p_{k-1}} + b_{p_{k-1}} - b_{p_k} + a_{p_k}$$$, which telescopes.

The final sum is $$$a_{p_1} + a_{p_k} + b_{p_1} - b_{p_k} +$$$ the sum of the minimum $$$k-2$$$ values of $$$a$$$ in the selected segment (which gives $$$a_{p_2}, a_{p_3}, \ldots, a_{p_{k-1}}$$$), and we can use a priority_queue or multiset to handle it.

Same as mine. We are goofs, the O(n^2) solution was so much easier :skull:

Agreed. We got lucky with the constraints of $$$n$$$. I wasn't able to come up with the O(n^2) solution :'(

why does 0 1 1 array in B return -1

Because if you select 0 and 1 1, the mex for the first segment is 1 and for the second segment is 0; or if you select 0 1 and 1, the mex of the first segment is 2 and of the second is 0, so again the mexes are different.

Thank you, I didn't get that part at first.

Thanks for fast editorial

In problem C, when I use a vector I get TLE, while when I use an array it gets accepted. Can anyone explain the reason?

https://mirror.codeforces.com/contest/1935/submission/249844469

https://mirror.codeforces.com/contest/1935/submission/249843979

Maybe you encountered this bug?

Can anyone explain what's wrong with this idea.

Code

Here $$$dp_{i,1}$$$ denotes the maximum number of messages that can be read if the $$$i$$$-th message is the last one read. $$$dp_{i,0}$$$ denotes the minimum time needed to read $$$dp_{i,1}$$$ messages if the $$$i$$$-th message is the last one read.

So the answer will be $$$\max_i dp_{i,1}$$$.

Please someone tell me what's wrong with the above idea/code.

C has a simpler solution in O(n^2), where the solution before i can be maintained in just a vector storing the best answer up to i-1.

Array a is sorted by b. best stores the min value of (a_i + a_j + a_k + ...) - b_i for cnt elements; to make an answer, a new term a_z + b_z needs to be added.

For C, why does the check

`v[r][0] - v[l][0] + cur > L`

work? For the case when we pop a message from the heap that belongs to the l or r index, shouldn't we change

`v[r][0] - v[l][0]`

to `v[r - 1][0] - v[l][0]` or `v[r][0] - v[l + 1][0]`?

Exactly my doubt, please someone clarify: even if we delete some a_i in [l, r], why are we still considering V[r] - V[l]?

See This Thread

I messed up C, made it $$$O(n^3)$$$. I didn't realise that for a given start value of $$$b$$$, if an element has to be discarded, it just has to be discarded!

I initially thought of a 2 pointer like thing, but discarded it later, and kind of brute forced without considering this fact.

Aside from that: Seems like the demographics have changed!

I remember getting severe negative deltas at such performances (such as mine today, in terms of rank).

time travel editorial

Can someone help with Problem C ?

The tutorial's `O(N^3 log N)` solution seems correct to me, but I have some hiccups in accepting the `O(N^2 log N)` solution.

My question is about transitioning from `(l, r)` to `(l, r+1)`. For `(l, r)` we would be picking as many smaller `a`'s as possible with sum `<= L - (a[l] + a[r] + (b[r] - b[l]))`; let's say you maintain the set `S` of the best `a`'s that fit this equation. For the `(l, r+1)` pair, using `a[r]` is optional. What if `a[r]` was so high that it caused us to remove, in a previous iteration, some good `a`'s from `S` that could have been used for the solution of `(l, r+1)`?

I know the answer lies in the fact that if `a[r]` was too high, the multiset would have removed it first. But then, let's be honest, we are not really calculating the `exact` answer for the `(l, r)` pair in the first place. Could someone give a precise explanation here?

In the solution, for the `[l, r]` interval, we are not maintaining the number of smaller `a`'s with `sum <= L - (a[l] + a[r] + (b[r] - b[l]))`.

Rather, for every interval `[l, r]`, our priority_queue / multiset maintains the list of `a`'s which lie in `[l, r]`, with `sum <= L - (b[r] - b[l])` only (which means we do not fix `a[l]` and `a[r]` to always be included in the sum).

Now, 4 cases arise for interval `[l, r]`:

1. `a[l]` and `a[r]` are included in the set of the smallest sum.
2. `a[l]` is included but `a[r]` is not included.
3. `a[r]` is included but `a[l]` is not included.
4. `a[l]` and `a[r]` are not included.

For case 1, we know that this is the best set of `a`'s maintained for `[l, r]`, so we can do `mx = max(mx, count)` here. For every other case, we have some other interval `[l_inner, r_inner]` where `l_inner` and `r_inner` are the indices of the leftmost and the rightmost included `a`'s.

For every `[l, r]` interval where case 1 is not applicable, the current count is less than or equal to the count of `[l_inner, r_inner]` (this is because `b[r_inner] - b[l_inner] <= b[r] - b[l]`, increasing the possible number of items we can pick for the sum within the limit). So, for any `[l, r]` where cases 2, 3 and 4 exist, the answer will always be <= the answer of `[l_inner, r_inner]`.

Also, we can see that for the interval `[l_inner, r_inner]`, the elements at `l_inner` and `r_inner` will always be chosen in the sum. Since we are looping through all the possible values of `l` and `r`, it is ensured that we will calculate `[l_inner, r_inner]`.

So, this is why we do not need to calculate the exact sum for `[l, r]`, but rather keep `b[r] - b[l] + (sum of a's) <= L`. Hope this helps and is not too confusing.

In other words $$$\sum a <= L - (b_r - b_l) <= L - (b_j - b_i)$$$ where $$$l <= i <= j <= r$$$. Since we are iterating all possible $$$(l, r)$$$ we are also going over all $$$(i, j)$$$

Yes, correct

Great explanation.

Great Explanation!!!
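For reference, here is a compact sketch of the multiset greedy this thread discusses (variable names are my own; `v` holds (a, b) pairs and is sorted by b):

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// O(n^2 log n) greedy as discussed above: fix l, sweep r, keep the chosen
// a-values in a multiset, and evict the largest a while
// b[r] - b[l] + (sum of chosen a) exceeds L.
int maxMessages(long long L, std::vector<std::pair<long long, long long>> v) {
    int n = (int)v.size(), ans = 0;
    std::sort(v.begin(), v.end(),
              [](const auto& x, const auto& y) { return x.second < y.second; });
    for (int l = 0; l < n; ++l) {
        std::multiset<long long> s;
        long long cur = 0;
        for (int r = l; r < n; ++r) {
            s.insert(v[r].first);
            cur += v[r].first;
            while (!s.empty() && v[r].second - v[l].second + cur > L) {
                cur -= *std::prev(s.end());   // evict the largest chosen a
                s.erase(std::prev(s.end()));  // erase by iterator: one copy only
            }
            ans = std::max(ans, (int)s.size());
        }
    }
    return ans;
}
```

Note that the eviction may drop `a[l]` or `a[r]` themselves; as explained above, that case is covered by some inner pair `(l_inner, r_inner)` visited later.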

So our first priority is to maintain all possible gaps/distances `[l, r]`, and for that particular distance we calculate the maximum possible count. (Makes sense.) I was thinking greedily, wondering why we do not update `[l, r]` in the case where we remove the first or last element of the subset we are choosing. (We don't require it; we can simply calculate the max solution for all possible distances.)

Why are there no implementations of E and F in Python?

Can someone tell me what I am missing in problem C? I sorted in increasing order of b and found the required time for n messages, then I removed the messages one by one, checking which removal would reduce the time the most; as soon as this sum got less than L, I printed the current number of messages in the set...

249827661

Same issue, unsure what 591st tc in 2nd test case is? 249827779

For problem F, it can be proved that for any node $$$u$$$, there always exists an optimal set of edges such that there is at most one edge of the form $$$(mx_v, mx_v + 1)$$$, and all other edges are of the form $$$(mn_v - 1, mn_v)$$$ (and possibly one $$$(u - 1, u + 1))$$$.

In fact, we only require an edge of the form $$$(mx_v, mx_v + 1)$$$ when there is some neighbor $$$v$$$ of $$$u$$$ with $$$mn_v = u + 1$$$. In all other cases, it is sufficient to add all the valid $$$(mn_v - 1, mn_v)$$$ edges (and possibly one edge of the form $$$(u - 1, u + 1)$$$) to unite the entire tree.

The phase of the coding journey where you are done with the first div 2 question in maximum 5-10 mins and then spend the next two hours coming up with O(n^5) solutions

C n^2 DP: https://mirror.codeforces.com/contest/1935/submission/249854472

Same Solution

Can you explain what you did?

first of all we will store values in a vector<pair<int,int>> v and sort according to bi. (if you don't know why, ask)

now, my dp[i][j] has following arguments i and j.

i: chosen subset will end at i. j: chosen subset will have j values

and dp[i][j] is the minimum cost of choosing optimal subset such that it ends at i and have j values in it.

answer will be maximum value of all js such that dp[i][j]<=l.

now transition: well, I lied above. dp[i][j] is not the minimum cost of choosing the optimal subset; rather it is dp[i][j] = minimum cost - v[x].second, where x is the index of the first value in the subset.

and it's not only for subsets ending at i; it's the minimum over all ending indexes <= i with value count j.

transition works as follows: dp[i][j] = dp[i-1][j-1] + v[i].first + v[i].second.

a lot of things happened here: dp[i-1][j-1] = minimum cost - v[x].second (explained above) is the minimum amongst all the ending indexes < i. So dp[i][j] = minimum cost + v[i].first + (v[i].second - v[x].second); since the b values are sorted, the only contribution we get from b is (v[i].second - v[x].second).

now we will do dp[i][j] = min(dp[i-1][j], dp[i][j] - v[i].second) (because we want to make it (minimum cost - v[x].second)).

what made you choose to sort in accordance with bi

got it :<)

Wow... your solution was great. I mean, thinking of such states and transitions is great. Please share some more problems like this if you have any, where the state definition is not trivial and the transitions require critical thinking. I especially liked the part where you use your dp array to check if it is <= l, and at the next step adjust it so that it helps in calculating the next dp states.

please share more problems like this

this is a common technique where we have to separate f(i) and f(j) if we are dealing with pairs. (here essentially we are dealing with every thing before index j and j) let me give you a trivial example.

suppose we have been given a array "arr" with some values and we have to calculate no of pairs such that arr[i]*arr[j] = 1 (mod 1e9+7)

we can just loop over the array, store 1/arr[i] (mod 1e9+7) in a hashmap, and do ans += mp[arr[i]]; code:

`map<int,int> mp; int ans = 0;`

`for (int i = 0; i < n; i++) { ans += mp[arr[i]]; mp[inverseMod(arr[i])]++; }` (where `int inverseMod(int val)` returns the modular inverse of val)

the main idea being used in that dp solution is that only.
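That pair-separation trick can be made concrete; a small self-contained sketch (function names `power` and `countInversePairs` are my own):

```cpp
#include <map>
#include <vector>

const long long MOD = 1000000007LL;

// Binary exponentiation; a^(MOD-2) is the modular inverse since MOD is prime.
long long power(long long a, long long e) {
    long long r = 1;
    a %= MOD;
    while (e > 0) {
        if (e & 1) r = r * a % MOD;
        a = a * a % MOD;
        e >>= 1;
    }
    return r;
}

// Count pairs i < j with arr[i] * arr[j] = 1 (mod MOD): for each element,
// add how many previous elements match it in the map, then record this
// element's inverse (a future partner must equal that inverse).
long long countInversePairs(const std::vector<long long>& arr) {
    std::map<long long, long long> mp;
    long long ans = 0;
    for (long long x : arr) {
        auto it = mp.find(x % MOD);
        if (it != mp.end()) ans += it->second;
        mp[power(x, MOD - 2)]++;
    }
    return ans;
}
```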

Upsolving three months later; your explanation is helpful, thank you brother.

i am glad.

Well, I did not explain it very clearly; if you have any doubt, just ask me.

"now we will do dp[i][j] = min(dp[i-1][j], dp[i][j] - v[i].second) (because we want to make it (minimum cost - v[x].second))."

I don't get it: when are you choosing to have v[x].second in dp[i][j]? Because only if the ith element is chosen as optimal should we add v[i].second to dp[i][j]; so if it's not optimal, it shouldn't be added, and shouldn't be subtracted in the future.

i don't clearly understand your question.

dp[i][j] before this code "dp[i][j] = min(dp[i-1][j], dp[i][j] - v[i].second)"

is taking the following form -> sum of all ai's in the subset + (v[i].second - v[x].second)

now I am making it dp[i][j] = min(dp[i-1][j], sum of all ai's in the subset - v[x].second)

Another solution for C using DP and binary search on the solution. https://mirror.codeforces.com/contest/1935/submission/249784754

Can someone explain why, in the editorial of problem C, when extracting elements from the multiset, the value of `v[r].first - v[l].first` doesn't get updated? According to the logic, the value of `v[r].first - v[l].first` will decrease as we remove the greatest element from the multiset.

If you think deeply, it still covers all the cases. If we're removing an element's value but still considering its b value, this case has already been covered earlier at an index with a smaller b value, which will be more optimal than the current one.

I agree. Posted the same query here.

I solved B using two pointers: 249886727. It works because mex is monotone by inclusion. Imo your editorial code is quite confusing for beginners. I would scan the array forward and backward instead of inventing how to update mex while removing a number.
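A sketch of that forward/backward prefix-mex idea (the helper name and return convention are my own; it returns the prefix length of a valid cut, or -1):

```cpp
#include <vector>

// Prefix/suffix MEX check for problem B: find a cut so that both parts
// have the same MEX.  Returns the prefix length, or -1 if impossible.
int splitPoint(const std::vector<int>& a) {
    int n = (int)a.size();
    std::vector<int> pre(n), suf(n);
    std::vector<char> seen(n + 2, 0);
    int mex = 0;
    for (int i = 0; i < n; ++i) {          // forward scan: prefix MEX
        if (a[i] <= n) seen[a[i]] = 1;
        while (seen[mex]) ++mex;
        pre[i] = mex;
    }
    seen.assign(n + 2, 0);
    mex = 0;
    for (int i = n - 1; i >= 0; --i) {     // backward scan: suffix MEX
        if (a[i] <= n) seen[a[i]] = 1;
        while (seen[mex]) ++mex;
        suf[i] = mex;
    }
    for (int i = 0; i + 1 < n; ++i)        // MEX is monotone by inclusion,
        if (pre[i] == suf[i + 1]) return i + 1; // so one pass suffices
    return -1;
}
```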

Problem C

Can anyone tell me what's wrong in my code ? 249887053

logic -> for any range of l and r, we must add a[l] + a[r] + b[r] - b[l] to the sum, and for the remaining we can use a priority queue.

In problem C, Can someone tell me whether the approach below works somehow?

I thought if I made this array a complete graph, and tried to find the minimum spanning tree starting at each vertex, then the maximum diameter amongst all the MST's should be the answer.

For the solution of problem 1935C, when iterating over all $$$(l, r)$$$, the messages $$$l$$$ and $$$r$$$ are certainly read. However, when extracting elements from set `s` (which contains the $$$a$$$ values of the messages), no care is taken that messages $$$l$$$ and $$$r$$$ are not extracted. Can someone explain this?

If messages $$$l$$$ or $$$r$$$ are removed from the set, we're now calculating the cost incorrectly: the cost we calculate is larger than the true cost, and this might mean that our answer is too small. But that is actually never a problem. If $$$l$$$ or $$$r$$$ gets removed when considering the range $$$[l, r]$$$, the range $$$[l, r]$$$ is definitely not optimal ($$$[l, r-1]$$$ or $$$[l+1, r]$$$, depending on which element got removed, would be at least as good as $$$[l, r]$$$). Since we consider all ranges, including $$$[l, r-1]$$$ and $$$[l+1, r]$$$, we won't miss the optimal solution.

Yes, but it doesn't seem satisfying IMO; I have posted a similar doubt in a comment above.

Also, if element `l` gets removed, note that though `l+1` may have a smaller value in `a`, it definitely also has a bigger `b` value, which helps when it is the smallest `b`. Now the answer to that is: we will include `l+1` in our for-loop iteration, so that case gets covered.

The point here is, this seems to be working only because of many other reasons behind the scenes which are skipped in the editorial.

Would like a cleaner general approach to handle such situations with more clarity without worrying over the behind-the-scenes forces making this work.

Just to avoid all this, I didn't include the `l`th element and the `r`th element in the multiset. Instead, I added them explicitly. And guess what, I got WA this way, and I ended up not being able to solve the problem in the contest.

Here is my code: Submission. It would be very helpful if you could point out where this code is failing.

I tried the same way and my submission fails at the same test case. This needs rectification. I was not able to do anything today just to figure out why it is going wrong.

Take a look at Ticket 17407 from CF Stress for a counterexample.

Thanks buddy

I just printed the test case that it was failing for.

The thing is that when we are fixing the `l`th and `r`th elements, we might remove a smaller element that might be useful afterwards, just to keep the `r`th element.

Thanks for the clarification

Why in C haven't we taken the 1st and 2nd minimum values of b? Why have we taken the max and min values?

I tried to use dp for the C, passed the sample cases but got wrong answer. Need some help(´･_･`) My code

I am still confused about the formula $$$\sum_{i = 1}^{n - 1} |B_i - B_{i+1}|$$$. The formula always attains its minimum when the sequence is ordered. How to prove the formula clearly?

What proof?

When the sequence of B is ordered, why does the formula always attain the minimum? I want to know the reason. Please teach me, I am willing to learn!

I think you misunderstood this. Let's observe the formula:

$$$\sum_{i=1}^{k} a_{p_i} + \sum_{i=1}^{k-1} (b_{p_{i+1}} - b_{p_i})$$$

The sum of the values $$$a_i$$$ does not require any order; we can just sum them.

What about the sum of differences of $$$b_i$$$? Of course, we want to minimize this sum (so that the total is not greater than $$$L$$$).

Now, we can forget about the values $$$a_i$$$ (their sum does not depend on the order in which we choose values). But for the values $$$b_i$$$, it's always optimal to sort them. For example, $$$b = [1,6,1,5]$$$ gives the sum $$$6-1 + 6-1 + 5-1 = 14$$$, but $$$b=[1,1,5,6]$$$ gives the sum $$$1-1 + 5-1 + 6-5 = 5$$$. Why? You can see the abs function $$$|x-y|$$$ as the distance between two points $$$x,y$$$ on a straight line. It's easy to see that sorting gives the minimum sum.

Now, we can sort the values $$$(a_i, b_i)$$$ by $$$b_i$$$ and add the chosen $$$b$$$ differences to our answer. Let's look closer at $$$\sum (b_j - b_{j-1})$$$: it equals $$$b_2 - b_1 + b_3 - b_2 + b_4 - b_3 + \ldots + b_{k} - b_{k-1}$$$; there are cancelling pairs $$$-b_2 + b_2$$$, $$$-b_3 + b_3$$$, etc. for all $$$i$$$ from $$$2$$$ to $$$k-1$$$, so this sum is equal to $$$max(b) - min(b)$$$.

I appreciate your reply! Your words resolved my question!

"For example, $$$b=[1,6,1,5]$$$ gives sum $$$6-1+6-1+5-1=14$$$, but $$$b=[1,1,5,6]$$$ gives sum $$$1-1+5-1+6-5=5$$$. Why? You can see the abs function $$$|x-y|$$$ as the distance between two points $$$x,y$$$ on a straight line. It's easy to see that sorting gets the minimum sum."

Also, you helped me deepen my insight into the problem. Thank you!

NP! (Not like NP-problems by these authors, but "No problem")

"NP" means "No problem" ? if my guess is right , it is fun and straightforward！

I have one more doubt. While we are selecting l and r and checking whether the sum of the elements between them is at most L - (b[r] - b[l]), we take the sum of some a[i]'s; but as soon as we fix l and r, shouldn't we also make sure that a[l] and a[r] are always in our sum? Yet proceeding greedily without taking care of them gets accepted. How is this correct? Please help me with the reason why we don't need to make sure that the sum always includes a[l] and a[r].

Thank You

Thank You

I must say, I don't know why greedy works. My solution is DP

Can you explain your dp solution please.

Yes, I explained it in comments, here https://mirror.codeforces.com/blog/entry/126662?#comment-1125442

You can solve A without n at all. You just need to find the lexicographically minimal string among the regular string, the reversed string + the regular string, and the regular string + the reversed string.

You have to use $$$n$$$ in your proof; the author's C++ solution doesn't use $$$n$$$.

In B, how to prove that if we cannot separate the array into two segments, then it is also not possible when the number of segments is increased?

If there is an answer with such $$$k$$$ segments, we can always reduce it to $$$2$$$ segments. So, if it is not possible to find an answer with $$$2$$$ segments, we won't find one with any bigger number of segments (if there were an answer with a bigger number of segments, we could reduce it to $$$2$$$ segments and find it there).

Let us assume for array $$$a$$$, we have a valid partition into $$$k$$$ subsegments $$$S_1, S_2, ... S_k$$$ each having $$$mex(S_j) = m$$$. If we consider merging subsegments $$$S_i$$$ and $$$S_{i+1}$$$ ($$$i < k$$$), the $$$mex$$$ of the new subsegment would also be $$$m$$$ (as it too would contain all elements from $$$0$$$ to $$$m-1$$$ but would not feature $$$m$$$). Continuing this process will lead us to partition array $$$a$$$ into any number of subsegments less than $$$k$$$.

We can conclude that if there is a valid partition of $$$a$$$ into $$$k$$$ subsegments, a valid partition into $$$x (< k)$$$ subsegments must exist. As a corollary, if $$$a$$$ cannot be partitioned into $$$x$$$ segments, it cannot be partitioned into $$$k (> x)$$$ segments. This would prove your argument for $$$x = 2$$$.
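The merging argument can be sanity-checked with a small helper (names are mine):

```cpp
#include <set>
#include <vector>

// MEX of a list: the smallest non-negative integer not present.
int mex(const std::vector<int>& v) {
    std::set<int> s(v.begin(), v.end());
    int m = 0;
    while (s.count(m)) ++m;
    return m;
}

// Concatenate two segments, as in the merging argument above: if both parts
// have MEX m, the merged part contains 0..m-1 and still lacks m.
std::vector<int> concat(std::vector<int> x, const std::vector<int>& y) {
    x.insert(x.end(), y.begin(), y.end());
    return x;
}
```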

I solved C using DP and lazy propagation (I know, overkill). Basically $$$dp[i][j]$$$ is the minimum cost to get $$$j$$$ elements in the range $$$[i,n]$$$. Let's sort the elements by $$$b[i]$$$. In that case $$$dp[i][j] = \min_{k=i+1,i+2,\ldots,n+1} \left(dp[k][j-1] + b[k] - b[i]\right) + a[i]$$$. This so far is $$$O(n^3)$$$. We can optimize it by making a segment tree that supports lazy propagation for each $$$j$$$; however, that gets MLE. If we iterate over $$$j$$$, we notice that we only care about $$$j-1$$$, so we can use only 2 segment trees. Why do we need lazy? When we go from $$$i+1$$$ to $$$i$$$, the difference between $$$b[i]$$$ and all $$$b[j]$$$ such that $$$j>i$$$ will increase by $$$b[i+1]-b[i]$$$, so we can update them using lazy propagation. The answer is the maximum $$$j$$$ such that there is some $$$dp[i][j] \le l$$$; thus the solution is $$$O(n^2 \log n)$$$. Code: 249828732

Can anyone help me on why I'm getting munmap_chunk() RTE on pretest 1 for Problem C? If I run each test case on the 1st pretest one by one, then there is no RTE! Submission: 249970089

Can someone help me with B? I have the same idea as the editorial but I don't know why I am getting WA. Here is my code: 249976418

Consider the test case:

This might help.

In problem C, when I remove the max element from the multiset: if I use erase, I get WA, but if I use extract, I get AC. Why? Ex: mst.erase(mx); (WA), mst.extract(mx); (AC). But when I use extract, my compiler gives an error.

i found my mistake, sorry everybody
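For anyone hitting the same issue: with `std::multiset`, `erase(key)` removes every copy of the key, while `extract(key)` (C++17) or `erase(find(key))` removes a single copy. A small check:

```cpp
#include <set>

// erase(key) removes EVERY element equal to key.
int sizeAfterEraseByValue(std::multiset<int> ms, int key) {
    ms.erase(key);                       // drops all copies of key
    return (int)ms.size();
}

// extract(key) removes at most one copy (C++17).
int sizeAfterExtract(std::multiset<int> ms, int key) {
    ms.extract(key);                     // the extracted node is discarded
    return (int)ms.size();
}

// erase(iterator) also removes exactly one copy.
int sizeAfterEraseIter(std::multiset<int> ms, int key) {
    auto it = ms.find(key);
    if (it != ms.end()) ms.erase(it);
    return (int)ms.size();
}
```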

In the author's solution to problem C, can anyone please explain how it is ensured that the end points `v[l].second` and `v[r].second` are covered in the multiset and also in the sum $$$cur$$$ (as we are simply removing values greedily)?

I think if the multiset is not empty, there must be `v[l+a].second` ($$$a>=0$$$) and `v[r-b].second` ($$$b>=0$$$) in the multiset, so $$$v[r-b].first - v[l+a].first + cur <= v[r].first - v[l].first + cur <= L$$$.

In problem C, if we add the 'a' value of the left element to the multiset, aren't we messing up the boundary element while removing the max element? Like, if left is 0 and its value is (a,b)=(10,1), and in the future we remove 10 from the multiset, we are still holding 0 as the left element in the remaining run of the second loop, which seems wrong to me. Please, anyone, help me understand this.

This is my code; the only difference is that I am not adding the boundary element to the multiset. I don't know why I am getting WA, can someone help me?

for(i=0;i<n;i++) {

Found what's wrong: I assumed taking the current right element is always best, but that's not the case; sometimes we have to skip the current right element.

ac code

Same doubt I also have.

Why am I getting a Runtime Error in my solution of C? I have checked that I am not accessing unallocated memory. Please help me.

My Submission

Error in the comp function.

Try this:

`bool comp(vector<int>& u, vector<int>& v) { return u[1] < v[1]; }`

https://mirror.codeforces.com/blog/entry/70237

Thank you very much .

There is a nice DP way to solve problem C. After sorting the pairs according to b[i], let dp[pos][len][2] be the state, where pos means a prefix of the array. dp[pos][len][0] is the minimum possible total cost of any subsequence of length len of the prefix pos where we have stopped taking new values, and dp[pos][len][1] is the minimum possible total cost of any subsequence of length len of the prefix pos where we will take one or more further elements into the chosen subsequence. Here is my solution link: https://mirror.codeforces.com/contest/1935/submission/249842011

My Dp uses only 1 state

Yeah, it's possible to reduce one dimension. Actually I have done this in my solution; I used the extra dimension for simplification.

You don't have to use DSU or binary search machinery in problem F. You can simply join components as follows:

Has anyone done this for C?

For j such that b[j] < b[i]:

For j such that b[j] > b[i]:

Answer = max cnt over all dp(i, cnt) such that dp(i, cnt) < l

For finding $$$\min(dp(j, cnt - 1) - b[j])$$$ over $$$j < i$$$, use range-min segment trees over compressed b[i] values. Each cnt will have its own 2 segment trees (one with +b[j], one with -b[j]). Feeling lazy to implement.

Update: I misinterpreted that the chosen indices $$$p_i$$$ need to be in ascending order (should've seen the first test case explanation, smh). This trivially reduces the solution search space, as you would obviously sort by $$$b_i$$$ values. The above solution solves the harder variant where $$$p_i$$$ must be chosen in increasing order.

Anyway, enjoyed solving this (much) tougher variant although that required me to take 3 hours off a Sat for implementing and debugging

Code for the variant: https://mirror.codeforces.com/contest/1935/submission/250334354

For problem E, suppose n = 2, y1 = 5, y2 = 9, and consider the case x1 = 0 and x2 = 0. If we look at the highest bit = 3, this bit appears twice among the numbers <= y2 [number 8 (1000) and 9 (1001)]. I don't think we can set all bits less than 3 in the answer: you can only pick one number of the two. Here we choose 9, and in order to set all bits to 1 we would have to choose 6 (110) from the other number, which is not possible. Can somebody rephrase what the editorial is trying to say?

"Suppose we are iterating over bit $$$i$$$, then if such a bit occurs $$$c$$$ times in $$$y$$$ numbers there are several cases:"

This $$$c$$$ means the number of times that a bit occur in $$$y$$$, not in $$$[0, y]$$$. In your example, when you have $$$y_1 = 5$$$ and $$$y_2 = 9$$$, when we are looking for the highest bit = $$$3$$$, then $$$c = 1$$$, so you put the bit $$$3$$$ in the answer. After that, the next bit is $$$2$$$, that appears once in $$$5$$$, so you put the bit $$$2$$$ in res, then, bit $$$1$$$ doesn't appears in any of them, and bit $$$0$$$ appears both in $$$5$$$ and $$$9$$$, so, the answer is $$$(1101)_2$$$.

Ok, I got confused in the language.

Another thing is that after removing the values of $$$w_i$$$ we are ignoring the values of $$$x^{'}_i$$$ and only considering $$$y^{'}_i$$$. What allows us to do that? I believe that editorial does not explain that clearly.

The editorial solution of problem B works without the

`while (mex2 && !cnt2[mex2 - 1]) --mex2;` lines.

can you explain the code a little bit

I got it now after reading the code maybe 5 times. The part that you talked about is just there to make sure that mex2 is correct, but it's always going to be correct at this step.

I say also that it is always going to be correct

Hey, in case any of you are looking for a slightly detailed editorial on Problems B to E, here's my attempt: https://www.youtube.com/watch?v=TrnshSV0qy0&t=5061s&ab_channel=DecipherAlgorithmswithSaptarshi

Btw, I got a great comment seeking a solution of C in O(n^2) (which I didn't care to think about myself because my O(n^2 * log(n)) worked during the contest). But then, putting some thought into it, I could see that a very common DP approach, direct knapsack, would work in O(n^2). I'll consider uploading a video on that, as we all know how valuable DP knowledge is in these contests.

E done using 2D DP: https://mirror.codeforces.com/contest/1935/submission/250851075

clean implementation

What is the mistake in this code for problem C?

```
#include <bits/stdc++.h>
using namespace std;
#define fast ios_base::sync_with_stdio(false), cin.tie(nullptr), cout.tie(nullptr);
#define ll long long
#define pb push_back

int main() {
    fast
    int t;
    ll int n, x, a, b, a1, y, j;
    cin >> t;
    while (t--) {
        cin >> n >> x;
        a = 0;
        map<ll int, int> m;
        vector<pair<ll int, ll int>> v(n);
        for (int i = 0; i < n; ++i) {
            cin >> v[i].first >> v[i].second;
            m[v[i].second]++;
            a += v[i].first;
        }
        b = ((*(--m.end())).first - (*m.begin()).first);
        b += a;
        while (b > x) {
            if (v.size() == 1) { v.clear(); break; }
            j = 0;
            m[v[0].second]--;
            if (m[v[0].second] < 1) m.erase(v[0].second);
            a1 = a - v[0].first + ((*(--m.end())).first - (*m.begin()).first);
            m[v[0].second]++;
            for (int i = 1; i < v.size(); ++i) {
                m[v[i].second]--;
                if (m[v[i].second] < 1) m.erase(v[i].second);
                y = a - v[i].first + ((*(--m.end())).first - (*m.begin()).first);
                if (y < a1) { a1 = y; j = i; }
                m[v[i].second]++;
            }
            a -= v[j].first;
            m[v[j].second]--;
            if (m[v[j].second] < 1) m.erase(m[v[j].second]);
            b = a1;
            v.erase(v.begin() + j);
        }
        cout << v.size() << endl;
    }
}
```

Can anyone figure out what is wrong in my solution of problem c

my code:

Update: I found the mistake. I was using a map; instead of a map I should use a multimap. AC


Is it necessary to have this condition

`while (mex2 && !cnt2[mex2 - 1]) --mex2;`

in the problem B implementation in C++? Can anyone explain this?

Sample solution for 1935C - Messenger in MAC should get WA due to integer overflow, but passes tests...

Hack data:

Answer is 1 but prints 2.

Hm