Sorry for the delay in publishing the editorial. If anything is unclear or missing, please let me know in the comments.

In all the problems, reading the hints is a must, as the solutions continue from where the hints leave off.

Idea: Dominater069

Preparation: Dominater069

Editorial: Dominater069

**Hint 1**

What is the minimum number of bridges to burn if we want to make exactly $$$i$$$ islands visitable from $$$1$$$?

**O(n) Solution**

At least $$$i \cdot (n - i)$$$ bridges need to be burnt (the bridges connecting the $$$i$$$ reachable islands to the $$$n - i$$$ non-reachable islands).

A simple $$$O(n)$$$ solution: for every $$$i$$$ from $$$1$$$ to $$$n$$$, check whether $$$i \cdot (n - i) \le k$$$; for the first such $$$i$$$, print $$$i$$$ and stop.

**Hint 2**

What is the answer when $$$k \ge n - 1$$$?

**Hint 3**

When $$$k < n - 1$$$, is it possible to make any island non-visitable?

**O(1) Solution**

When $$$k \ge n - 1$$$, the answer is $$$1$$$ since we can just destroy all bridges $$$(1, i)$$$ for $$$2 \le i \le n$$$.

Otherwise, suppose we tried to make some set of $$$i$$$ islands non-visitable, and the other $$$n - i$$$ islands reachable from $$$1$$$. Then we need to burn at least $$$i \cdot (n - i)$$$ bridges (the bridges connecting the $$$2$$$ sets). It is not hard to see that, over $$$1 \le i < n$$$, this function attains its minimum value at $$$i = 1$$$ or $$$i = n - 1$$$. Hence at least $$$n - 1$$$ bridges must be burnt, which exceeds $$$k$$$, so the answer is $$$n$$$.

**Formal Proof**

The function $$$f(x) = x \cdot (n - x)$$$ is a downward-opening quadratic in $$$x$$$, which attains its maximum value at $$$x = \frac{n}{2}$$$ and decreases as the distance from $$$\frac{n}{2}$$$ increases. This means that $$$f(1) = f(n - 1)$$$, and $$$f(1) < f(i)$$$ for all $$$2 \le i \le n - 2$$$.

**Code (O(n))**

```
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; cin >> t;
    while (t--){
        int n, k; cin >> n >> k;
        for (int i = 1; i <= n; i++){
            if (i * (n - i) <= k){
                cout << i << "\n";
                break;
            }
        }
    }
    return 0;
}
```

**Code (O(1))**

```
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; cin >> t;
    while (t--){
        int n, k; cin >> n >> k;
        if (k >= n - 1) cout << 1 << "\n";
        else cout << n << "\n";
    }
    return 0;
}
```

Idea : satyam343

Preparation : satyam343

Editorial : Dominater069

**Hint 1**

Group numbers according to how many times they occur in $$$a[1...n]$$$.

**Hint 2**

The group of numbers having $$$0$$$ occurrences in $$$a[1...n]$$$ is of the same size as the group of numbers having $$$2$$$ occurrences in $$$a[1...n]$$$.

**Hint 3**

Try to use the $$$0$$$- and $$$2$$$-occurrence numbers first; if we still need more, we can use the $$$1$$$-occurrence numbers. Remember that we have to form sequences of size $$$2 \cdot k$$$, which is even.

**Solution**

We can append any $$$2$$$-occurrence number (twice) to our sequence $$$l$$$ and any $$$0$$$-occurrence number (twice) to our sequence $$$r$$$ without any issue, because the xor contributions cancel out. We do this while our sequence sizes are less than $$$2 \cdot k$$$. At the end of this process, $$$l$$$ and $$$r$$$ will have the same size due to Hint $$$2$$$.

Now, we append as many $$$1$$$-occurrence numbers as needed to both $$$l$$$ and $$$r$$$. Since we append each of them to both sequences, the xor values of the $$$2$$$ sequences remain equal.

If we had to solve for odd sequence sizes, we could take a $$$1$$$-occurrence number at the very start to make the remaining count even, and then run the same process; however, if there are no $$$1$$$-occurrence numbers at all, this method fails.

**Code**

```
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; cin >> t;
    while (t--){
        int n, k;
        cin >> n >> k;
        k = 2 * k;
        vector<int> a(2 * n), occ(n + 1, 0);
        for (auto &x : a) cin >> x;
        for (int i = 0; i < n; i++) occ[a[i]]++;
        vector<int> g0, g1, g2;
        for (int i = 1; i <= n; i++){
            if (occ[i] == 0) g0.push_back(i);
            else if (occ[i] == 1) g1.push_back(i);
            else g2.push_back(i);
        }
        int v = 0;
        for (auto x : g2){
            if (v < k){
                v += 2;
                cout << x << " " << x << " ";
            }
        }
        for (auto x : g1){
            if (v < k){
                v++;
                cout << x << " ";
            }
        }
        cout << "\n";
        v = 0;
        for (auto x : g0){
            if (v < k){
                v += 2;
                cout << x << " " << x << " ";
            }
        }
        for (auto x : g1){
            if (v < k){
                v++;
                cout << x << " ";
            }
        }
        cout << "\n";
    }
    return 0;
}
```

Idea : Dominater069

Preparation : Dominater069

Editorial : Dominater069

**Hint 1**

Alice can adapt to Bob's strategy. Try to keep that in mind.

**Big Hint 2**

Whenever Bob removes an occurrence of $$$i$$$, if there are any copies of $$$i$$$ left, Alice can take $$$i$$$ on her next move.

**Hint 3**

Let $$$f$$$ be the frequency array of $$$a$$$. You can ignore all $$$i$$$ with $$$f(i) \ge 2$$$ due to the previous hint. Now the answer is some simple casework.

**Solution**

For any $$$i$$$ s.t. $$$f_i \ge 2$$$, whenever Bob chooses to remove an occurrence of $$$i$$$, on the next move Alice simply chooses $$$i$$$ herself (if she hasn't already taken $$$i$$$ before that). Thus, we only focus on $$$f_i \le 1$$$ now.

The answer is min(first $$$i$$$ s.t. $$$f(i) = 0$$$, second $$$i$$$ s.t. $$$f(i) = 1$$$).

Obviously, Alice can't do better than the mex of the array (the first $$$i$$$ s.t. $$$f(i) = 0$$$). Further, among the $$$i$$$ with $$$f(i) = 1$$$, Alice can save at most one, after which Bob will remove the smallest $$$f(i) = 1$$$ element he can find. This is optimal for Bob as well, because he cannot do better when Alice removes the first $$$i$$$ s.t. $$$f(i) = 1$$$ on her first move.

**Code**

```
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; cin >> t;
    while (t--){
        int n; cin >> n;
        vector<int> f(n + 1, 0);
        for (int i = 0; i < n; i++){
            int x; cin >> x;
            f[x]++;
        }
        int c = 0;
        for (int i = 0; i <= n; i++){
            c += (f[i] == 1);
            if ((c == 2) || (f[i] == 0)){
                cout << i << "\n";
                break;
            }
        }
    }
    return 0;
}
```

1943B - Non-Palindromic Substring

Idea : errorgorn, Dominater069

Preparation : Dominater069

Editorial : Dominater069

**Hint 1**

When is a string not $$$k$$$-good? (Ignore the trivial edge cases of $$$k = 1$$$ and $$$k = n$$$).

**Hint 2**

What happens when $$$s[i....j]$$$ and $$$s[i + 1....j + 1]$$$ are both palindromes?

**Solution**

We first find the condition for a fixed string to be $$$k$$$-good.

Let $$$k = j - i + 1$$$, and suppose $$$s[i....j]$$$ (call it I) and $$$s[i + 1...j + 1]$$$ (call it II) are both palindromes.

Then $$$s_i = s_j$$$ (due to I)

$$$s_j = s_{i + 2}$$$ (due to II)

$$$s_{i + 2} = s_{j - 2}$$$ (due to I)

$$$s_{j - 2} = s_{i + 4}$$$ (due to II)

and so on.... you can see that it forces $$$s_i = s_{i + 2} = s_{i + 4} = ....$$$. Similar reasoning gives $$$s_{i + 1} = s_{i + 3} = s_{i + 5} = ....$$$.

Further, if $$$k$$$ is even, $$$i$$$ and $$$j$$$ have different parities, and $$$s_i = s_j$$$ links the two chains, so all characters must in fact be equal.

We mentioned that the edge cases are $$$1$$$ and $$$n$$$, but why exactly? How does the analysis fail for them? (Left as an exercise.)

So, the condition for a string to be $$$k$$$-good can be written as follows:

$$$k = 1$$$ : never possible

$$$1 < k < n$$$, odd : not an alternating string

$$$1 < k < n$$$, even : not all characters same

$$$k = n$$$ : non-palindromic string

Now onto substring queries. The second and third conditions are easy to handle: store, for each $$$i$$$, the next position where $$$s_i \ne s_{i + 2}$$$ and where $$$s_i \ne s_{i + 1}$$$ respectively. Checking if a substring is a palindrome is standard, with various methods such as string hashing or Manacher's algorithm.

**Code**

```
#include <bits/stdc++.h>
using namespace std;
#define INF (int)1e18
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());

vector<int> manacher_odd(string s) {
    int n = s.size();
    s = "$" + s + "^";
    vector<int> p(n + 2);
    int l = 1, r = 1;
    for(int i = 1; i <= n; i++) {
        p[i] = max(0, min(r - i, p[l + (r - i)]));
        while(s[i - p[i]] == s[i + p[i]]) {
            p[i]++;
        }
        if(i + p[i] > r) {
            l = i - p[i], r = i + p[i];
        }
    }
    return vector<int>(begin(p) + 1, end(p) - 1);
}

vector<int> manacher(string s) {
    string t;
    for(auto c: s) {
        t += string("#") + c;
    }
    auto res = manacher_odd(t + "#");
    return vector<int>(begin(res) + 1, end(res) - 1);
}

#define int long long
void Solve()
{
    int n, q; cin >> n >> q;
    string s; cin >> s;
    auto v = manacher(s);
    for (auto &x : v) x--;
    // we also need to know if all same, and all alternating
    set<int> s1, s2;
    for (int i = 0; i < n - 1; i++){
        if (s[i] != s[i + 1]) s1.insert(i);
        if (i + 2 < n && s[i] != s[i + 2]) s2.insert(i);
    }
    while (q--){
        int l, r; cin >> l >> r;
        l--;
        r--;
        if (l == r){
            cout << 0 << "\n";
            continue;
        }
        int len = r - l + 1;
        int ans;
        auto it = s1.lower_bound(l);
        if (it == s1.end() || (*it) >= r){
            ans = 0;
        } else {
            it = s2.lower_bound(l);
            if (it == s2.end() || (*it) >= r - 1){
                ans = ((len - 1) / 2) * (((len - 1) / 2) + 1);
            } else {
                ans = len * (len - 1) / 2 - 1;
            }
        }
        if (v[l + r] < (r - l + 1)) ans += len;
        cout << ans << "\n";
    }
}

int32_t main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t = 1;
    cin >> t;
    for(int i = 1; i <= t; i++)
    {
        Solve();
    }
    return 0;
}
```

Idea: Everule

Preparation: Dominater069

Editorial: Dominater069

**Hint 1**

Try to solve the line case first.

**Hint 2**

You may have to use more than $$$1$$$ node for certain cases.

**Hint 3**

Extend the solution for the line to the general tree version (consider the diameter).

**Solution**

For a line, an obvious bound on the answer is $$$\lceil \frac{n}{2} \rceil$$$, as we can colour at most $$$2$$$ nodes per operation. I claim this is achievable except when $$$n$$$ mod $$$4 = 2$$$, where we do $$$1$$$ worse. That is still provably optimal: bicolour the line, and note that each operation only colours nodes belonging to one class of the bicolouring.

**Construction for line**

When $$$n$$$ mod $$$2 = 1$$$, simply use the centre of the line and do operations of the form $$$(centre, i)$$$ ($$$0 \le i \le \lfloor \frac{n}{2} \rfloor$$$).

When $$$n$$$ mod $$$4 = 0$$$, for convenience let the line be $$$1, 2, ...., n$$$. Then, we can do operations like $$$(2, 1), (3, 1), (6, 1), (7, 1)....$$$.

When $$$n$$$ mod $$$4 = 2$$$, either of the above $$$2$$$ methods can be adapted to work because we are allowed $$$1$$$ "extra" operation.

Now that we have the solution for the line case, let's divide into $$$2$$$ cases based on the parity of the diameter (maximum number of nodes on a path):

diameter mod $$$2 = 1$$$ : Find the centre of the diameter. Then we can simply do operations of the form $$$(centre, i)$$$ (for all $$$0 \le i \le \lfloor \frac{diameter}{2} \rfloor$$$). If this doesn't colour all nodes, then one can easily check that the diameter we found is not the real diameter, as any node left uncoloured would be an endpoint of a longer diameter.

diameter mod $$$2 = 0$$$ : Find the $$$2$$$ centres of the diameter. Then the following set of operations satisfies the requirements: $$$(centre1, i)$$$ and $$$(centre2, i)$$$ for all odd $$$i$$$ satisfying $$$1 \le i \le \frac{diameter}{2}$$$. The intuition is to split the nodes into $$$2$$$ sets according to a bicolouring; one centre then colours all nodes of one colour, while the other centre colours all nodes of the other colour.

**Code**

```
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());

void Solve()
{
    int n;
    cin >> n;
    vector<vector<int>> E(n);
    for (int i = 1; i < n; i++){
        int u, v; cin >> u >> v;
        u--; v--;
        E[u].push_back(v);
        E[v].push_back(u);
    }
    auto bfs = [&](int s){
        vector<int> d(n, -1);
        d[s] = 0;
        queue<int> Q;
        Q.push(s);
        while (!Q.empty()){
            int v = Q.front();
            Q.pop();
            for (int w : E[v]){
                if (d[w] == -1){
                    d[w] = d[v] + 1;
                    Q.push(w);
                }
            }
        }
        return d;
    };
    vector<int> d1 = bfs(0);
    int a = max_element(d1.begin(), d1.end()) - d1.begin();
    vector<int> d2 = bfs(a);
    int b = max_element(d2.begin(), d2.end()) - d2.begin();
    vector<int> d3 = bfs(b);
    int diam = d3[max_element(d3.begin(), d3.end()) - d3.begin()] + 1;
    // if diam is 3 we want 1 centre; if 4 we want 2 centres
    vector<int> ans;
    for (int i = 0; i < n; i++){
        if ((d2[i] + d3[i] == diam - 1) && ((d2[i] == diam/2) || (d3[i] == diam/2)))
            ans.push_back(i);
    }
    if (diam & 1) assert(ans.size() == 1);
    else assert(ans.size() == 2);
    vector<pair<int, int>> ok;
    if (diam & 1){
        // print everything from 0 to diam/2
        for (int i = 0; i <= diam/2; i++){
            ok.push_back({ans[0], i});
        }
    } else {
        // 2 => 2 ops, 4 => 2 ops, 6 => 4 ops, 8 => 4 ops
        int need = (diam/2) - 1;
        while (need >= 0){
            ok.push_back({ans[0], need});
            ok.push_back({ans[1], need});
            need -= 2;
        }
    }
    cout << ok.size() << "\n";
    for (auto [u, r] : ok){
        cout << u + 1 << " " << r << "\n";
    }
}

int32_t main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t = 1;
    cin >> t;
    for(int i = 1; i <= t; i++)
    {
        Solve();
    }
    return 0;
}
```

1943D1 - Counting Is Fun (Easy Version)

Idea : satyam343

Preparation : satyam343

Editorial : Dominater069

**Hint 1**

Try to come up with some necessary and sufficient conditions of a good array.

**Hint 2**

Apply dp once you have a good condition.

**Solution**

All operations which include $$$i$$$ must also include either $$$i - 1$$$ or $$$i + 1$$$. Hence $$$a_i \le a_{i - 1} + a_{i + 1}$$$ must hold. Throughout the editorial, treat $$$a_i = 0$$$ for $$$i \le 0$$$ or $$$i > n$$$.

But is this sufficient? In fact, it is, and we can prove it by strong induction.

**Proof**

Group arrays according to the sum of the array. We will now apply strong induction on the sum of the array.

Base Cases (sum = 0, 1 or 2) are all trivial.

Now, assume that the condition is sufficient for all arrays with sum $$$< x$$$ ($$$x \ge 3$$$).

Let us consider some array $$$a_1, a_2, ...., a_n$$$ with sum $$$x$$$. Let $$$a_i$$$ be the first non-zero element of $$$a$$$. (Observe that $$$a_{i + 1}$$$ can't be $$$0$$$, since $$$a_i \le a_{i - 1} + a_{i + 1} = a_{i + 1}$$$.)

We claim that operating on either $$$[i, i + 1]$$$ or $$$[i, i + 2]$$$ still leaves an array satisfying the condition $$$a_j \le a_{j - 1} + a_{j + 1}$$$ for all $$$j$$$.

Let's check it. The only time the $$$[i, i + 1]$$$ operation causes an issue is when $$$a_{i + 2} > a_{i + 1} - 1 + a_{i + 3}$$$. In that case $$$a_{i + 2} > a_{i + 3}$$$ necessarily holds, so $$$a_{i + 3} \le a_{i + 2} - 1$$$, and hence $$$a_{i + 3} \le a_{i + 2} - 1 + a_{i + 4}$$$, meaning $$$[i, i + 2]$$$ is valid.

Now that we have the condition, we can define a dynamic programming solution as follows:

$$$dp(i, a, b)$$$ = number of ways to make an array upto the $$$i$$$-th index with $$$a_{i - 1} = a$$$, $$$a_i = b$$$ (since only the last $$$2$$$ elements are relevant).

Let the new element be $$$c$$$; then the transition is valid iff $$$b \le a + c$$$. So we have an $$$\mathcal{O}(n^4)$$$ solution by iterating over all possibilities.

As a final optimization, we can speed it up to $$$\mathcal{O}(n^3)$$$ using prefix sums. Note that the valid values of $$$a$$$ for fixed $$$b$$$ and $$$c$$$ satisfy $$$a \ge \max(0, b - c)$$$, so maintaining prefix sums over $$$a$$$ suffices.

**Code**

```
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());

void Solve()
{
    int n, k, mod; cin >> n >> k >> mod;
    vector<vector<int>> dp(k + 1, vector<int>(k + 1, 0));
    dp[0][0] = 1; // dp[a][b]
    for (int i = 1; i <= n + 2; i++){
        vector<vector<int>> ndp(k + 1, vector<int>(k + 1, 0)); // dp[b][c]
        vector<vector<int>> pref(k + 1, vector<int>(k + 1, 0)); // pref[b][a]
        for (int b = 0; b <= k; b++){
            pref[b][0] = dp[0][b];
            for (int a = 1; a <= k; a++){
                pref[b][a] = (pref[b][a - 1] + dp[a][b]) % mod;
            }
        }
        for (int b = 0; b <= k; b++){
            for (int c = 0; c <= k; c++){
                if (b > c){
                    // a must be at least b - c
                    ndp[b][c] = (pref[b][k] - pref[b][b - c - 1] + mod) % mod;
                } else {
                    ndp[b][c] = pref[b][k];
                }
            }
        }
        dp = ndp;
    }
    cout << dp[0][0] << "\n";
}

int32_t main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t = 1;
    cin >> t;
    for(int i = 1; i <= t; i++)
    {
        Solve();
    }
    return 0;
}
```

1943D2 - Counting Is Fun (Hard Version)

Idea : satyam343

Preparation : satyam343

Editorial : Dominater069

**Hint 1**

Try to apply Principle of Inclusion Exclusion (PIE).

**Hint 2**

You do not need to store both last elements! Only the last is enough.

**Solution**

Since there are $$$n^3$$$ states in our dp, we will have to optimize the number of states somehow.

Let us consider all arrays and not just good arrays. An element is bad if $$$a_i > a_{i - 1} + a_{i + 1}$$$.

Suppose $$$f(x_1, x_2, ...., x_k)$$$ gave us the number of arrays where each of the distinct positions $$$x_i$$$ is bad. (Note that other positions may be bad as well.) Then, by PIE, the answer is the sum of $$$(-1)^k \cdot f(x_1, x_2, ....., x_k)$$$ over all such sets of positions.

For example, for $$$n = 3$$$, we would want to compute $$$f([]) - f([1]) - f([2]) - f([3]) + f([1, 2]) + f([1, 3]) + f([2, 3]) - f([1, 2, 3])$$$. Note that $$$f([])$$$ is simply $$$(k + 1)^n$$$ as there are no restrictions placed on the array.

This sum has $$$2^n$$$ terms, so we need to optimize it.

First optimization: obviously, only $$$k$$$ matters and not the exact indices, so we only have to maintain the count of indices we have forced to be bad.

Second optimization: only the parity of $$$k$$$ matters, since the only dependence on $$$k$$$ is through $$$(-1)^k$$$.

We now define $$$dp(i, last, x)$$$ = number of arrays such that $$$a_i = last$$$ and the parity of bad elements (that we know of) till $$$i$$$ is $$$x$$$.

Transitions :

Without (necessarily) creating a bad element : $$$dp(i, last, x) += dp(i - 1, y, x)$$$ for all $$$0 \le y \le k$$$. We might accidentally create more bad elements, but remember that PIE allows us to not worry about that.

With creating a bad element: we view it as a transition from index $$$i - 2$$$. We need $$$a_{i - 1} > a_i + a_{i - 2}$$$ for position $$$i - 1$$$ to be bad, so fix $$$a_i = l1$$$ and $$$a_{i - 2} = l2$$$, and you get $$$dp(i, l1, x) += dp(i - 2, l2, 1 - x) \cdot \max(0, k - l1 - l2)$$$.

The $$$max(0, k - l1 - l2)$$$ term comes from the ways to choose $$$a_{i - 1}$$$ such that $$$a_{i - 1} > l1 + l2$$$.

Both transitions can be optimized with prefix sums and running totals, giving an $$$\mathcal{O}(n^2)$$$ solution.

**Code**

```
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());

void Solve()
{
    int n, k, mod; cin >> n >> k >> mod;
    vector<vector<int>> dp(2, vector<int>(k + 1, 0));
    auto prev2 = dp;
    dp[0][0] = 1;
    for (int i = 1; i <= n + 1; i++){
        vector<vector<int>> ndp(2, vector<int>(k + 1, 0));
        vector<int> sum(2, 0);
        for (int j = 0; j < 2; j++) for (int x = 0; x <= k; x++){
            sum[j] += dp[j][x]; sum[j] %= mod;
        }
        for (int j = 0; j < 2; j++){
            int s1 = 0, s2 = 0;
            for (int x = k; x >= 0; x--){
                ndp[j][x] += sum[j]; // normal transition
                ndp[j][x] += s2; ndp[j][x] %= mod; // with one wrong
                s1 += prev2[j ^ 1][k - x]; s1 %= mod;
                s2 += s1; s2 %= mod;
            }
        }
        prev2 = dp;
        dp = ndp;
    }
    int ans = (dp[0][0] - dp[1][0] + mod) % mod;
    cout << ans << "\n";
}

int32_t main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t = 1;
    cin >> t;
    for(int i = 1; i <= t; i++)
    {
        Solve();
    }
    return 0;
}
```

1943E1 - MEX Game 2 (Easy Version)

Idea: Dominater069

Preparation: Dominater069

Editorial: Dominater069

**Hint 1**

It might be optimal for Bob to reduce multiple elements at the same time, thus making Alice choose between which element she wants to take.

**Hint 2**

Suppose you are only checking whether $$$ans \ge i$$$ for now, what would Alice's strategy be?

**Hint 3**

Try fixing the element that Bob wants to make sure Alice is not able to get. What would be his strategy then?

**Solution**

For now, let's try to see if $$$ans \ge i$$$. This is equivalent to Alice being able to take $$$1$$$ occurrence of everything when the game is reduced to the numbers $$$[0, i - 1]$$$.

Alice's strategy here is actually easy to find. At every step, Alice will choose the $$$j$$$ with minimum $$$f_j$$$ such that $$$0 \le j < i$$$ and she hasn't taken $$$j$$$ yet. You can justify this with greedy-stays-ahead, an exchange argument, whatever you want.

This gives us a nice definition of Alice's moves; however, we seemingly have to always maintain the sorted sequence of $$$f$$$. But we can actually rewrite Bob's moves so that they never disturb the sorted order. Here, by sorted order, we mean some permutation $$$p = [p_1, p_2, ...., p_i]$$$ of $$$[0, i - 1]$$$ such that $$$f_{p_a} \le f_{p_b}$$$ whenever $$$a \le b$$$.

First, instead of subtracting $$$k$$$ at once, we view Bob's move as $$$k$$$ subtractions of $$$1$$$. Then, the only case when the sorted order can be destroyed is when there exist $$$k_1$$$ and $$$k_2$$$ such that $$$f_{k_1} = f_{k_2}$$$ and we do an operation on $$$k_1$$$ even though $$$k_2$$$ occurs before $$$k_1$$$ in the sorted order. This issue can simply be fixed by doing the operation on the smallest $$$x$$$ (according to sorted order) such that $$$f_x = f_{k_1}$$$.

Now, we have a good way of representing Alice's moves. Suppose we fix the element that Bob "wins" on. Then Bob's strategy is obviously to make the frequency of that element as small as possible, while making sure never to violate the sorted condition. Since Bob makes at most $$$m$$$ moves, you can just simulate them.

The main detail of the simulation is that you need to figure out up to what index all values become equal (or nearly equal, off by $$$1$$$) when doing the $$$k$$$ operations, and first take all elements to that state. Let $$$w$$$ be the operations remaining from the $$$k$$$ after this, and $$$l$$$ the length of the equal sequence. Then reduce every frequency by $$$\lfloor w / l \rfloor$$$, and finally reduce the first $$$w \bmod l$$$ numbers of this sequence by one more. Check the code for more details.

A naive implementation takes $$$O(m)$$$ per move, so $$$O(m^2)$$$ per fixed element, i.e. $$$O(m^3)$$$ total to check $$$ans \ge i$$$. With a binary search on the answer, you get $$$O(m^3 \log m)$$$. It can be optimized further, but that wasn't needed to pass; most other polynomial solutions should pass as well.

**Code**

```
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());

void Solve()
{
    int n, k; cin >> n >> k;
    vector<int> a(n + 1);
    for (auto &x : a) cin >> x;
    auto check = [&](int x){
        vector<int> b;
        for (int i = 0; i < x; i++){
            b.push_back(a[i] - k);
        }
        sort(b.begin(), b.end());
        for (int fix = 1; fix < x; fix++){
            // this is the element where bob wins
            deque<int> c;
            for (int i = 0; i <= fix; i++){
                c.push_back(b[i]);
            }
            assert(c.size() >= 2);
            while (c.size() != 2){
                c.pop_front();
                // find suffix which works
                int sz = c.size();
                int works = 0;
                int sum = 0;
                for (int j = 1; j < sz; j++){
                    // sum(elements of c - current element)
                    // this should be >= k
                    sum += c[sz - j];
                    int loss = sum - c[sz - j - 1] * j;
                    if (loss >= k){
                        works = sz - j;
                        break;
                    }
                }
                int have = k;
                // make everything = c[works]
                for (int j = works + 1; j < sz; j++){
                    have -= c[j] - c[works];
                    c[j] = c[works];
                }
                assert(have >= 0);
                for (int j = works; j < sz; j++){
                    c[j] -= have / (sz - works);
                }
                have %= (sz - works);
                for (int j = works; j < sz; j++){
                    if (have){
                        c[j]--;
                        have--;
                    }
                }
                for (int j = 0; j < sz - 1; j++){
                    assert(c[j] <= c[j + 1]);
                }
            }
            c.pop_front();
            if (c[0] <= 0) return false;
        }
        return true;
    };
    int l = 1, r = n + 1;
    while (l != r){
        int mid = (l + r + 1) / 2;
        if (check(mid)) l = mid;
        else r = mid - 1;
    }
    cout << l << "\n";
}

int32_t main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t = 1;
    cin >> t;
    for(int i = 1; i <= t; i++)
    {
        Solve();
    }
    return 0;
}
```

1943E2 - MEX Game 2 (Hard Version)

Idea: Dominater069

Solution : ffao

Preparation: Dominater069

Editorial: errorgorn

**Solution**

Instead of doing the check for $$$ans \geq i$$$ in $$$O(m^3)$$$, we will do it in $$$O(m)$$$.

For an array $$$f$$$ of length $$$n$$$, let $$$s = f_1 + f_2 + \ldots + f_n$$$ be the sum of its elements. $$$f$$$ will be called *flat* if $$$f_i = \lfloor \frac{s + i - 1}{n} \rfloor$$$. That is, it has the form $$$x, \ldots, x, x + 1, \ldots, x + 1$$$. Any flat array can be characterized by the $$$2$$$ integers $$$(n, s)$$$ alone.

Imagine simulating Bob's strategy without simulating Alice's moves of removing the first element of the array. For some prefix of the moves, the actual simulation will be a suffix of this simulation. This is because to subtract something from index $$$\leq i$$$, we must have $$$f_{i+1} = f_{i+2} = \ldots = f_n$$$.

As an example, let $$$k=4$$$.

With Alice moves: $$$[1,2,3,5,5] \to [2,3,3,3] \to [1,2,2]$$$

Without Alice moves: $$$[1,2,3,5,5] \to [1,2,3,3,3] \to [1,1,2,2,2]$$$

$$$[1,2,2]$$$ is not a suffix of $$$[1,1,2,2,2]$$$ and is the first time such a thing happens.

Suppose that the first time this happens is after $$$p$$$ moves. Then the resulting array is the flat array $$$(n-p,f_{p+1}+f_{p+2}+\ldots+f_n-pk)$$$. To find the necessary value of $$$p$$$, we can binary search or run $$$2$$$-pointers to check whether the suffix of the array can reach $$$f_p$$$ with the number of subtraction operations available until then. (What we basically did is find the suffix of the array that actually gets operated on, which makes the problem much easier to solve.)

Then, since the flat array $$$(n,s)$$$ becomes $$$(n-1,s-\lfloor \frac{s}{n} \rfloor - k)$$$, we can figure out whether each flat array eventually becomes a losing or winning state, as we can calculate, for each $$$n$$$, the minimum $$$s$$$ such that $$$(n,s)$$$ is a winning state.

**Code**

```
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());

int n, k;
int arr[200005];
int temp[200005];

bool solve(vector<int> v){
    sort(all(v));
    rep(x,0,sz(v)) temp[x] = 1e18;
    int l = 1, curr = 0;
    rep(x,1,sz(v)){
        curr += v[x];
        while (l < x && (curr - v[l]) - (x - l) * v[l] >= l * k){
            curr -= v[l];
            l++;
        }
        temp[x - l] = min(temp[x - l], curr - l * k);
    }
    rep(x,sz(v),1) temp[x - 1] = min(temp[x - 1], (temp[x] - temp[x] / (x + 1)) - k);
    return temp[0] > 0;
}

signed main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    cin.exceptions(ios::badbit | ios::failbit);
    int TC;
    cin >> TC;
    while (TC--){
        cin >> n >> k;
        rep(x,0,n+1) cin >> arr[x];
        int lo = 0, hi = n + 2, mi;
        while (hi - lo > 1){
            mi = (hi + lo) >> 1;
            if (solve(vector<int>(arr, arr + mi))) lo = mi;
            else hi = mi;
        }
        cout << lo << endl;
    }
}
```

1943F - Minimum Hamming Distance

Idea: satyam343

Preparation: satyam343

Editorial: satyam343

**Hint 1 / Claim 1**

Assumption: 0 is the mode of string t. If 1 occurs more times than 0 in t, we will flip all characters of s and t so that 0 is the mode of string t.

Let us call an index $$$i$$$ *nice* if there exist $$$l$$$ and $$$r$$$ ($$$1 \le l \le i \le r \le n$$$) such that $$$s_i$$$ is the mode of the substring $$$t[l, r]$$$.

So we might have some not-*nice* indices $$$i$$$ with $$$s_i = \mathtt{1}$$$. We will not have any not-*nice* index $$$i$$$ with $$$s_i = \mathtt{0}$$$, as $$$\mathtt{0}$$$ is the mode of $$$t$$$. So we need to fix the not-*nice* indices.

Now we will start changing some $$$\mathtt{0}$$$ to $$$\mathtt{1}$$$. So it might happen that in the final $$$t$$$, the frequency of $$$\mathtt{1}$$$ is more than that of $$$\mathtt{0}$$$, and $$$\mathtt{0}$$$ is no longer the mode. So should we worry about indices $$$i$$$ that were *nice* in the beginning but may become not *nice* after we make some flips?

No! It will never be possible.

In case $$$\mathtt{1}$$$ occurs more times than $$$\mathtt{0}$$$ in the updated $$$t$$$, we will have $$$frequency[1] = frequency[0] + 1$$$ and $$$t[1] = t[n] = \mathtt{1}$$$ (we will have such cases for pairs like $$$s = \mathtt{011}$$$, $$$t = \mathtt{100}$$$; for this pair the final $$$t$$$ should be $$$\mathtt{101}$$$). So the substrings $$$t[1, n - 1]$$$ and $$$t[2, n]$$$ will have equal numbers of $$$\mathtt{0}$$$ and $$$\mathtt{1}$$$, and thus all indices will be *nice*. So our claim is that we should change some $$$\mathtt{0}$$$ to $$$\mathtt{1}$$$ without worrying about initially *nice* indices $$$i$$$ with $$$s_i = \mathtt{0}$$$ becoming not *nice*.

**Hint 2**

We can use dynamic programming here.

So let us have a $$$dp$$$ such that $$$dp[i][j]$$$ gives the minimum number of flips required to make $$$t[1, i]$$$ a *friend* of $$$s[1, i]$$$ with maximum suffix sum equal to $$$j$$$.

A string $$$x$$$ is called a *friend* of string $$$y$$$ ($$$|x| = |y|$$$) if for every $$$i$$$ ($$$1 \le i \le |x|$$$), there exist indices $$$l$$$ and $$$r$$$ such that:

1. $$$1 \le l \le i \le r \le |x|$$$

2. $$$x_i$$$ is a mode of the string $$$y[l,r]$$$

**Solution**

Please read hints $$$1$$$ and $$$2$$$ if you haven't as they contain some claims and definitions.

Note that when we compute a sum, we add $$$1$$$ when we see $$$\mathtt{1}$$$ and subtract $$$1$$$ when we see $$$\mathtt{0}$$$.

Suppose we have found $$$dp$$$ values for the first $$$i - 1$$$ indices, and we want to find $$$dp[i][j]$$$ for $$$0 \le j \le n$$$. Now, we need to perform the transitions.

Let us first aim for an $$$O(n^3)$$$ solution, which we can optimise after making some observations.

Take some $$$l$$$ ($$$0 \leq l \leq i - 1$$$). We will iterate over $$$suff\_sum = 0$$$ to $$$n$$$, where $$$suff\_sum$$$ is the maximum suffix sum of the substring $$$t[1, l]$$$, and use $$$dp[l][suff\_sum]$$$ to find optimal values of $$$dp[i][x]$$$ for some $$$x$$$.

So we need to do some flips to the substring $$$t[l + 1, i]$$$: as $$$s[1, l]$$$ and $$$t[1, l]$$$ are already *friends*, we only care about making all indices $$$j$$$ ($$$l + 1 \le j \le i$$$) *nice*. There are two possibilities (either $$$\mathtt{1}$$$ occurs in the substring $$$s[l + 1, i]$$$ or not):

If $$$\mathtt{1} $$$ does not occur, we can perform the transition without making any flips.

Assume $$$\mathtt{1}$$$ occurs in the substring $$$s[l + 1, i]$$$. First, find the sum (say $$$cur\_sum$$$) of the substring $$$t[l + 1, i]$$$. If we do some flips in $$$t[l + 1, i]$$$, $$$cur\_sum$$$ will change accordingly. We will do the minimum number of flips such that $$$suff\_sum + cur\_sum \ge 0$$$ (note that we mean the updated $$$cur\_sum$$$ here). Since each $$$\mathtt{0} \to \mathtt{1}$$$ flip increases the sum by $$$2$$$, the minimum number of flips (say $$$cost$$$) is $$$\lfloor \frac{\max(0, 1 - d)}{2} \rfloor$$$, where $$$d = suff\_sum + initial\_cur\_sum$$$. So we know how many flips to make.

But which ones to flip? Here is one more claim: we should flip only the last $$$cost$$$ $$$\mathtt{0}$$$ s of the substring $$$t[l + 1, i]$$$.

So this is a sufficient condition, as we can certainly say that $$$t[1, i]$$$ will be a *friend* of $$$s[1, i]$$$ now. So we know the required number of flips, which is $$$dp[l][suff\_sum] + cost$$$. We need one more thing: what would the maximum suffix sum be if we flip the last $$$cost$$$ $$$\mathtt{0}$$$ s of $$$t[l + 1, i]$$$? We can precompute it.

But we have an issue now. We know that what we performed is sufficient. But is it necessary? What if we did not need to flip $$$cost$$$ characters of $$$t[l + 1, i]$$$?

It might be possible that we could have done a smaller number of flips and still made all indices $$$l + 1 \le j \le i$$$ *nice*. The reasoning is that we made sure that $$$suff\_sum + cur\_sum \ge 0$$$, but what if that was not needed?

For instance, it is possible that the total sum is negative, but all indices $$$j$$$ ($$$l + 1 \le j \le i$$$) with $$$s_j = \mathtt{1}$$$ are still satisfied. Here we can use an exchange argument to conclude that all such cases are nevertheless covered if we check all pairs $$$(l, suff\_sum)$$$ with $$$0 \le l, suff\_sum \le i - 1$$$.

Now we need to optimise this to $$$O(n^2)$$$.

Notice that when we do the flips, there will be a suffix (possibly empty when $$$cost = 0$$$) of $$$t[l + 1, i]$$$ containing only $$$\mathtt{1}$$$ s. Suppose we are at index $$$i$$$ and need to find $$$dp[i][j]$$$ for $$$0 \le j \le i$$$. We can iterate over all $$$j$$$ ($$$1 \le j \le i$$$), assume that all the characters in the substring $$$t[j, i]$$$ are $$$\mathtt{1}$$$ s, and find the $$$dp$$$ values. The maximum suffix sum will be $$$i - j + 1 + max\_suffix\_sum[j - 1]$$$. Then we can find the smallest index $$$p$$$ such that the sum of the elements in the substring $$$t[p, i]$$$ is greater than or equal to $$$0$$$ once we make all the characters in $$$t[j, i]$$$ equal to $$$\mathtt{1}$$$.

Notice that we already have the new suffix maximum, and we know the $$$cost$$$ too, which equals the number of $$$\mathtt{0}$$$ s in the original substring $$$t[j, i]$$$. So our transition will be $$$dp[i][new\_suffix\_max] = \min(dp[i][new\_suffix\_max], \min\limits_{k = p - 1}^{i - 1} best[k] + cost)$$$, where $$$best[l] = \min\limits_{j = 0}^{l} dp[l][j]$$$.

So our final complexity will be $$$O(n^2)$$$, as we can perform each transition in $$$O(1)$$$ after precomputing the needed values.

**Code**

```
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll int
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=998244353;
const ll MAX=10005;
ll dp[MAX][MAX];
ll suffix_min[MAX];
ll suffix_max[MAX];
ll can_go_till[MAX][MAX+5];
void solve(){
ll n; cin>>n;
ll shift=n;
for(ll i=-n;i<=0;i++){
can_go_till[0][shift+i]=0;
}
string s,t; cin>>s>>t;
s=" "+s,t=" "+t;
ll sum=0;
for(ll i=1;i<=n;i++){
sum+=2*(t[i]-'0')-1;
}
suffix_max[0]=0;
if(sum>=0){
for(ll i=1;i<=n;i++){
s[i]='0'+'1'-s[i];
t[i]='0'+'1'-t[i];
}
}
ll max_sum=0;
for(ll i=1;i<=n;i++){
max_sum+=2*(t[i]-'0')-1;
max_sum=max(0,max_sum);
suffix_max[i]=max_sum;
}
for(ll i=1;i<=n;i++){
sum=0;
for(ll j=-n;j<=0;j++){
can_go_till[i][shift+j]=i+1;
}
for(ll j=i;j>=1;j--){
sum+=2*(t[j]-'0')-1;
ll use=min(sum,0);
can_go_till[i][shift+use]=j;
}
for(ll j=n-1;j>=0;j--){
can_go_till[i][j]=min(can_go_till[i][j],can_go_till[i][j+1]);
}
}
for(ll i=0;i<=n+1;i++){
for(ll j=0;j<=n+1;j++){
dp[i][j]=MOD;
}
}
dp[0][0]=0;
vector<ll> best(n+5,MOD);
best[0]=0;
for(ll i=1;i<=n;i++){
for(ll l=0;l<=i-1;l++){
dp[i][l+1]=dp[i-1][l]+(t[i]=='0');
}
for(ll l=0;l<=i;l++){
ll new_sum=l+2*(t[i]-'0')-1;
if(s[i]=='1' and new_sum<=-1){
continue;
}
new_sum=max(0,new_sum);
dp[i][new_sum]=min(dp[i][new_sum],dp[i-1][l]);
}
suffix_min[i]=MOD;
for(ll j=i-1;j>=0;j--){
suffix_min[j]=min(suffix_min[j+1],best[j]);
}
ll cnt=0;
for(ll j=i;j>=1;j--){
cnt+=(t[j]=='0');
ll now=i-j+1;
ll cur_suff_max=now+suffix_max[j-1];
ll pos=max(0,can_go_till[j-1][shift-now]-1);
dp[i][cur_suff_max]=min(dp[i][cur_suff_max],suffix_min[pos]+cnt);
}
for(ll j=0;j<=n;j++){
best[i]=min(best[i],dp[i][j]);
}
}
ll ans=MOD;
s[0]='1';
for(ll i=n;i>=0;i--){
ans=min(ans,best[i]);
if(s[i]=='1'){
cout<<ans<<nline;
return;
}
}
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
```

no comment ?

The editorial shows as created 31 hours ago but it was only published 1 — 2 hours ago.

Quite clear editorial!

Blah

I solved the 1st and 3rd but got stuck on the 2nd and 4th. Thanks for the editorial guys, it was a fun round.

I have another proof for D1. If there is a 0 at the beginning of the array, remove it; otherwise subtract 1 from the longest non-decreasing prefix. It is easy to check that this algorithm is correct.

Can you explain your solution?

Nice

It’s great that this editorial can give me hints first.

Why do you do

in solution of C to check if the substring is a palindrome ?

Yes, Look at this.

No, my question was why use l+r ? He has made l and r 0 indexed while he computed the array for manacher considering 1 indexing. So, the center of any substring would get shifted towards left by 2 units. so, why use l+r ? Why not l+r+2 ?

Oh sorry, I thought u'r question was : "Do you .."

So, look at the return values of the functions `manacher_odd` and `manacher`. U'r welcome.

Well, the author has used the code from Link. The return values are that way because of these two lines in them —

It is mentioned there that — "Note that d[2i] and d[2i+1] are essentially the increased by 1 lengths of the largest odd- and even-length palindromes centered in i correspondingly." That means the array returned from the `manacher` function is still 1-indexed. Two modifications have been made: one in `manacher_odd` and one in `manacher`.

Yeah, got it. The return value in the `manacher` function makes it zero-indexed. Thanks!

"Note that d[2i] and d[2i+1] are essentially the increased by 1 lengths of the largest odd- and even-length palindromes centered in i correspondingly." Precisely!

Note that cp-algorithms (to the best of my knowledge) usually follows 0-indexing convention.

Why didn't you take max(d[l+r], d[l+r+1]) in your code to find the largest length among odd and even length palindromes centered at i? Why does looking only at odd lengths (d[l+r]) suffice?

Because the substring [l...r] has a well defined centre ((l + r)/2), why would i use smth else?

No, what I meant is this — the centre of the substring is (l+r)/2. In your code, you have used array v to denote the manacher array. According to cp-algorithms, d[2i] and d[2i+1] are the lengths of the largest odd- and even-length palindromes centered at i correspondingly. In our case v = d and (l+r)/2 = i. Now,

((l+r)/2)*2 = l+r, which is 2*i

((l+r)/2)*2+1 = l+r+1, which is 2*i+1

So, v[l+r] tells us the length of odd length palindrome whose centre is at index (l+r)/2 in the original string.

And, v[l+r+1] tells us the length of even length palindrome whose centre is at index (l+r)/2 in the original string.

In your code, you are doing

which means that if the length of odd length palindrome which is centered at index (l+r)/2 in the original string has a length which is lesser than the length of the substring, then the substring will not be a palindrome.

But such a case can also exist when v[l+r] < (r - l + 1) but v[l+r+1] >= (r - l + 1), which means the odd length palindrome centered at index (l+r)/2 doesn't have a length at least equal to the length of the substring, but the even length palindrome centered at index (l+r)/2 does. In that case the substring [l, r] will be a palindrome. So, we should not be doing

`ans+=len;`

in that case. Right?

Odd length palindromes are centered at a character, even length palindromes are centered between two characters. $$$d[2i+1]$$$ refers to the longest even-length palindrome centered between $$$i$$$ and $$$i+1$$$. It makes no sense to check both $$$d[2i]$$$ and $$$d[2i+1]$$$ as they have different centers.

C was really cool problem

Nice Div2 Round

Hey guys, I'm not very familiar with div2-Cs, but didn't this round's div2-C feel like a normal 800-1000 rated problem? It felt very simple!

It was pretty hit-or-miss, evidenced by the fact that a lot of people in Div 1 (including me) failed to solve it.

Other than the editorial solution, an obvious greedy idea also works..... (and i was nice enough to not cut it as div2C)

Just binary search and take the smallest frequency each time, nothing hit-or-miss about this solution at least

I think (at least for me) weak samples made it harder. But I also understand that if samples were too strong, then the risk of people guessing the correct answer increases. In summary: just a skill issue.

Created a 10 minute video editorial for

Div2D : Non-Palindromic Substrings. Experimented with a different format wherein I talk about how to arrive at the solution using 10 steps that logically follow from one another (instead of discussing the entire solution in detail). Let me know your feedback.

Nice problem related to Div2A: For graphs with $$$n$$$ vertices what is the minimum number of edges needed to guarantee the graph is connected? i.e. find the smallest $$$k$$$ such that any graph with $$$n$$$ vertices and $$$k$$$ edges is connected.

I have yet another proof for D1. Let's traverse the histogram from left to right. In other words, imagine that we have a rectangle $$$a_i\times 1$$$ standing vertically on the segment $$$[i-1, i]$$$ of the real line. We are a little ant, we go from $$$(0, 0)$$$ to $$$(n, 0)$$$, but when we encounter a rectangle, we have to go up, and when it ends, we have to go down. Let $$$m$$$ be the total distance we had to go up (which is the same as the total distance down as the total height difference is 0), let the $$$i$$$-th step up by $$$1$$$ have happened at position $$$l_i$$$ and the $$$i$$$-th step down have happened at position $$$r_i$$$. Clearly, $$$l_i\leq r_i$$$. If $$$l_i + 2\leq r_i$$$ for all $$$i$$$, then we can use the segments $$$[l_i, r_i]$$$ for our problem, and we are good. Otherwise, if $$$l_i + 1 = r_i = j$$$ for some $$$i$$$, then it is not hard to see that $$$a_j > a_{j-1} + a_{j+1}$$$.

Could anyone clarify problem C? From my pov, given an array of numbers: 0 1 1 2 2 3 3 3, then the answer should be 2 since Bob could draw every "2" before Alice could pick.

If the game starts as follows

Then after this Alice won't take 1 which she still has time to do later, but rather

After this Alice can always take 1 and 3 in some order. In particular, if Alice is determined to take all the numbers from $$$0$$$ to $$$k$$$ (and if she can do it), then at each move she can take the one among these numbers that she hasn't got yet and that there are the fewest number of copies of at the moment.

Nice explaination, thanks!

In MEX Game 01, for the test case 2 (08). 9 (1 0 7 7 7 6 1 2 0). the answer should be 2 instead of 3.. Since Bob is trying to minimize the score && there is only one 2 in the array so, he'll wipe out 2 first so that the score could be 2... why he would let Alice to have the score 3???

in this problem Bob's approach should be finding the smallest integer (x) which occurrence in the array is less than x+1.. and this smallest number should be the answer...

Wrong, Alice goes first and she will take 2 for herself

You can also think in terms of game symmetry. 0 0 1 1 2 2 3 4 5 If Bob takes 1, Alice can mirror him and take 1. However, if Bob takes 3, Alice cannot mirror Bob and take 3. Thus Alice should take 3 first. However, after she takes 3, Bob will take 4 and Alice will be restricted to a MEX of 4.

Why did you make the $$$n$$$ limit in div.1 C so small? I think Div. 1 participants can find the tree diameter in $$$O(n)$$$ time.

Why not? It doesn't allow simpler solutions (plus helps troll people who assume complexity) and I'm quite lazy to write O(nlogn) checkers

Ok

Trolling people who assume complexity. Genius strategy =)

How can one prove that, given a string S that isn't alternating or a single repeated character, we can always obtain a k-length substring that is not a palindrome, for all k from 2 to |S| - 1 inclusive?

For 1943A — MEX Game 1, What about the test case:

1

6

0 0 1 2 2 3

Shouldn't the output be 1 instead of 3? I'm sure I've gone wrong but I've checked multiple times and can't see where I am wrong.

Optimally Alice will pick 1 first, not 0 because she can always follow Bob if he chooses to pick a number that is present more than once within the array. For instance, if Alice picks 1 first, she is never worried about not being able to pick a 0 because if Bob picks 0 on the next turn, she can follow suit afterward. So given Alice picks a 1 on the first turn she will always be able to pick up a 0 and 2 no matter what Bob does, resulting in the answer being 3.

Of course, that makes sense I was blinded by Alice picking 0 for some reason. Thank you for the clear explanation.

But why are the constraints for Div1C $$$2 \cdot 10^3$$$? Because of the checker? Or is there an $$$O(N^2)$$$ solution?

Because of the checker.

what's a checker?

the program that checks your answer

Dominater069 thanks for such good quality editorial.

Can someone explain in detail the transitions for Div1 D2? I am unable to understand the editorial. What do you mean by ‘don’t need to worry about creating extra bad indices because of PIE’?

Some comments on the editorial of F2 (as I am having a hard time understanding it)

If f(x) is the number of arrays with at least x bad elements, then the answer should simply be f(0) - f(1). The claim that the answer will be the sum of (-1)^i f(i) is incorrect. Or am I wrong?

Seems like the editorial is in bad shape here. I can see some review comments (“I am not satisfied with the definition”) which have not been removed.

Also, the complexity should be O(nk), right? At least by reading the code it seems O(nk) and not O(n^2)

Is the editorial partial? I can see a line saying “let b denote the number bad elements in the array” but b is never used. Has the full editorial not been published?

Also, you have defined dp(i, last element, x), but in the definition you have not mentioned what the last element is. Hopefully it is a[i]?

I feel the transitions have also not been explained.

As someone who is doing PIE for first time I am struggling here

Thanks

I'm sorry, I'll try to rewrite it, and indeed the complexity is nk, but k <= n anyways

Thanks

I tried to make it clearer; if it's still not clear, please comment the part and I'll try to fix it

Shouldn't the example in E2 be: 1, 2, 3, 5, 5 -> !!!2, 3, 3, 3!!! -> 1, 2, 2 and, if not, can someone explain why?

K = 4 so 4 subtractions, [2, 2, 3, 3] would need 5 subtractions

I typed 2,3,3,3, not 2,2,3,3. Shouldn't it, by the same logic, with the example given, be impossible to go from 2,2,3,4 to 1,2,2? (We need 3 operations to make 3 and 4 equal to 2, and then Alice will just take the number we spend the remaining operation reducing to 1.)

You are absolutely correct, sorry for the error.

In MEX Game 01, for the test case 2 (08).

9

(1 0 7 7 7 6 1 2 0).

the answer should be 2 instead of 3.. Since Bob is trying to minimize the score && there is only one 2 in the array so, he'll wipe out 2 first so that the score could be 2... why he would let Alice to have the score 3???

in this problem Bob's approach should be finding the smallest integer (x) which occurrence in the array is less than x+1.. and this smallest number should be the answer...

I don't think it should be 3, since Alice on her first turn will choose 2 first since that is the only 2 that exists. After that Bob will play 6. After these two turns are done it doesn't really matter what they choose; however, Alice will never get 3. Thus that is the answer.

In 1943A, I got confused by the test case [0 0 1 1 2]: Alice is going to pick 0, then Bob will pick 1, then Alice will pick 1, and Bob will pick 2.

So the output should be 2, but the expected answer is 3. Isn't this the most optimal way for them both to play?

Alice picks 0, then Bob will pick 2, and then Alice will pick 1.

What do you mean by "We might accidentally create more bad elements, but remember that PIE allows us to not worry about that." in solution of Div. 1 D2?

I am not very familiar with PIE, so sorry in advance if it is a very basic rule.

I mostly understand PIE intuitively and it is clear to me when and how it is applicable in a problem (when a thing is a subset of another thing we do something with $$$(-1)^k$$$ or $$$(-1)^{k + 1}$$$). I've actually only once formally (mathematically) proved a solution involving non-trivial PIE. Here is how I would do it in this case (tho it definitely isn't the simplest method).

Recall the Wikipedia definition of the formula:

$$$\left| \bigcup_{i=1}^{n} A_i \right| = \sum_{\emptyset \neq S \subseteq \{1, \dots, n\}} (-1)^{|S| + 1} \left| \bigcap_{i \in S} A_i \right|$$$

So we need to use sets somehow. Let $$$A_i$$$ be the set of arrays where position $$$i$$$ is bad. Note that $$$|A_i| = f([i])$$$ from the editorial.

Notice that $$$|A_1 \cup A_2 \cup \dots \cup A_n|$$$ is precisely the number of bad arrays. So the answer is $$$(k + 1)^n - |A_1 \cup A_2 \cup \dots \cup A_n|$$$.

Now let's use the formula. Consider $$$|A_{x_1} \cap A_{x_2} \cap \cdots \cap A_{x_k}|$$$. This is the number of arrays where all of the elements at positions $$$x_1, x_2, \cdots, x_k$$$ are bad. It is $$$f([x_1, x_2, \cdots, x_k])$$$ from the editorial. So we get that

$$$ans = (k + 1)^n - |A_1 \cup A_2 \cup \dots \cup A_n|$$$

$$$ans = (k + 1)^n - \sum\limits_{i} |A_i| + \sum\limits_{i < j} |A_i \cap A_j| - \sum\limits_{i < j < k} |A_i \cap A_j \cap A_k| + \cdots + (-1)^{n} |A_1 \cap A_2 \cap \cdots \cap A_n|$$$

$$$ans = f([]) - \sum\limits_{i} f([i]) + \sum\limits_{i < j} f([i, j]) - \sum\limits_{i < j < k} f([i, j, k]) + \cdots + (-1)^{n} f([1, 2, \cdots, n])$$$

Which is what is claimed in the editorial. You absolutely shouldn't ever do this in a real contest, however, it might be useful for understanding the approach.