1975A - Bazoka and Mocha's Array

Idea: Bazoka13 Solution: Bazoka13 Prepared by: Bazoka13

**Hint 1**

The operation can't insert or delete elements; it is just a cyclic rotation of $$$a$$$.

**Hint 2**

If the answer is yes, we can make $$$a$$$ become non-decreasing in no more than one operation.

**Solution**

Read the hints.

If $$$a$$$ is non-decreasing initially, the answer is yes.

If $$$a$$$ isn't non-decreasing initially, we can find the smallest $$$i$$$ such that $$$a_i> a_{i+1}$$$. Then we choose $$$x=[a_1,a_2,...,a_i]$$$ and $$$y=[a_{i+1},a_{i+2},...,a_n]$$$. If the array $$$y+x$$$ is non-decreasing, the answer is yes. Otherwise, the answer is no.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
const int N = 1e5+10;
int a[N];
int main(){
    int t;
    cin>>t;
    while(t--){
        int n;
        cin>>n;
        for(int i=1;i<=n;i++) cin>>a[i];
        int pos=0;
        for(int i=1;i<n;i++){
            if(a[i]>a[i+1]){ pos=i; break; }
        }
        if(!pos) cout<<"Yes\n";
        else{
            int fl=0;
            for(int i=pos+1;i<=n;i++){
                int j=(i%n)+1;          // wraps from a[n] back to a[1]
                if(a[i]>a[j]) fl=1;
            }
            cout<<(fl?"No\n":"Yes\n");
        }
    }
}
```

1975B - 378QAQ and Mocha's Array

Idea: Bazoka13 Solution: Bazoka13 Prepared by: Bazoka13

**Hint 1**

How to solve the problem if we only need to find a number $$$i$$$($$$1\leq i\leq n$$$) such that $$$a_k$$$ is divisible by $$$a_i$$$ for all $$$k$$$($$$1\leq k\leq n$$$)?

**Hint 2**

We only need to check whether all elements are divisible by the minimum element of the array.

**Solution**

Read the hints.

Suppose the minimum element of $$$a$$$ is $$$x$$$. Then we iterate over $$$a$$$, if an element is not divisible by $$$x$$$, then we add it to $$$b$$$ ($$$b$$$ is initially empty).

If $$$b$$$ is still empty after iterating $$$a$$$, the answer is yes.

If $$$b$$$ isn't empty, we check whether all elements of $$$b$$$ are divisible by the minimum element of $$$b$$$. If so, the answer is yes. Otherwise, the answer is no.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
#define ll long long
const int N = 1e5+10;
int a[N];
int main(){
    int t;
    cin>>t;
    while(t--){
        int n;
        cin>>n;
        int fl=0;
        for(int i=1;i<=n;i++){
            cin>>a[i];
            if(a[i]==1)fl=1;
        }
        if(fl)cout<<"Yes\n";
        else{
            sort(a+1,a+1+n);
            vector<int> b;
            for(int i=2;i<=n;i++){
                if(a[i]%a[1])b.push_back(a[i]);
            }
            sort(b.begin(),b.end());
            n = b.size();
            for(int j=1;j<n;j++){
                if(b[j]%b[0]){ fl=1; break; }
            }
            cout<<(fl?"No\n":"Yes\n");
        }
    }
}
```

1975C - Chamo and Mocha's Array

Idea: Bazoka13 Solution: Bazoka13 Prepared by: Bazoka13

**Hint 1**

If a subarray of length at least $$$2$$$ contains only the same elements, we can change all elements of the array to that element by operations.

**Hint 2**

Suppose the answer is $$$x$$$. Then we can perform no more than one operation on the original array $$$a$$$ so that there is a subarray of length at least $$$2$$$ that contains only $$$x$$$.

**Hint 3**

If we can make all elements of a subarray become $$$x$$$ in one operation, then there must be a subarray of length $$$3$$$ with a median of $$$y$$$ ($$$y\geq x$$$).

**Solution**

Read the hints.

If $$$n=2$$$, the answer is the minimum element.

If $$$n\geq 3$$$, we iterate over all subarrays of length $$$3$$$, and the answer is the maximum value of the median of all subarrays of length $$$3$$$.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
const int N = 1e5+10;
int a[N];
int main(){
    int n,t;
    cin>>t;
    while(t--){
        cin>>n;
        for(int i=1;i<=n;i++) cin>>a[i];
        if(n==2) cout<<min(a[1],a[2])<<"\n";
        else{
            int ans = min(a[1],a[2]);
            for(int i=1;i<=n-2;i++){
                vector<int> tmp = {a[i],a[i+1],a[i+2]};
                sort(tmp.begin(),tmp.end());
                ans = max(ans,tmp[1]);   // median of the window
            }
            cout<<ans<<"\n";
        }
    }
}
```

Idea: SanweiTreap Solution: Atomic-Jellyfish Prepared by: Bazoka13

**Hint 1**

If two pieces overlap at the beginning, can you solve the problem?

**Hint 2**

Consider the first time a vertex is painted blue. After this event occurs, what happens next?

**Solution**

Read the hints.

In subsequent movements after the first time a vertex is painted blue, we can ignore the process of painting vertices red and then painting them blue.

Let $$$r$$$ be the first vertex painted blue. It is not difficult to see that $$$P_A$$$ arrived at this vertex earlier than $$$P_B$$$. After reaching $$$r$$$, $$$P_A$$$ can replay all subsequent movements of $$$P_B$$$ one step behind, so $$$P_B$$$ only ever steps onto vertices that have already been painted red.

If we know which vertex is $$$r$$$, this becomes a classic problem: letting $$$d$$$ be the distance from $$$r$$$ to the farthest vertex of the tree, the answer is $$$2(n-1)-d$$$. Now consider the strategies of $$$P_A$$$ and $$$P_B$$$ before that moment: the two pieces must move towards each other until the first vertex is painted blue. If some other vertex were chosen as $$$r$$$, the value of $$$d$$$ might increase, but every time $$$d$$$ increases by $$$1$$$, the time until $$$P_A$$$ and $$$P_B$$$ meet also increases by at least $$$1$$$, so the answer does not decrease.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
#define ll long long
const int N = 2e5+10;
vector<int> g[N];
int dep[N],f[N],mx,n,a,b;
void dfs(int x,int fa){
dep[x]=dep[fa]+1;
mx = max(mx,dep[x]);
f[x]=fa;
for(auto i:g[x]){
if(i==fa)continue;
dfs(i,x);
}
}
vector<int> move(int x,int y){
if (dep[x] > dep[y]) swap(x, y);
vector<int> track,ano;
int tmp = dep[y] - dep[x], ans = 0;
track.push_back(y);
while(tmp--){
y = f[y];
track.push_back(y);
}
if (y == x) return track;
ano.push_back(x);
while (f[x] != f[y]) {
x = f[x];
y = f[y];
ano.push_back(x);
track.push_back(y);
}
track.push_back(f[y]);
reverse(ano.begin(),ano.end());
for(auto i:ano)track.push_back(i);
return track;
}
int main(){
int t;
cin>>t;
dep[0]=-1;
while(t--){
mx = -1;
cin>>n;
for(int i=1;i<=n;i++)g[i].clear();
cin>>a>>b;
for(int i=1;i<n;i++){
int u,v;
cin>>u>>v;
g[u].push_back(v);
g[v].push_back(u);
}
if(a==b){
dfs(a,0);
cout<<2*(n-1)-mx<<"\n";
continue;
}
dfs(1,0);
auto tr = move(a,b);
int m = tr.size();
if(tr[0]!=a)reverse(tr.begin(),tr.end());
int x = tr[(m-1)/2];
mx = -1;
dfs(x,0);
cout<<2*(n-1)-mx+(m-1-(m-1)/2)<<"\n";
}
}
```

Idea: Bazoka13 Solution: Serval Prepared by: Bazoka13

**Hint 1**

Suppose the tree is a rooted tree, we only need to care about the number of black child vertices of each vertex.

**Hint 2**

If the black vertices form a chain, there is no black vertex that has three or more black child vertices. And there is at most one black vertex that has two black child vertices.

**Hint 3**

If the black vertices form a chain, there is at most one black vertex whose parent vertex is white.

**Solution**

Read the hints.

We can maintain the number of black vertices with $$$0/1/2/3+$$$ black child vertices in $$$O(1)$$$. When we flip the color of one vertex, only it and its parent node are affected.

If the black vertices form a chain:

- there is no black vertex with three or more black child vertices, and at most one black vertex with exactly two black child vertices;
- there is at most one black vertex whose parent vertex is white;
- if there is a black vertex with two black child vertices, its parent vertex must be white.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
const int N = 1e6+10;
int f[N];
vector<int> g[N];
int col[N],num[N];
int faw,sum_two,sum_more,tot_black,xor_two;
int n;
void init(){
sum_two=0;
tot_black=0;
sum_more=0;
faw=0;
xor_two=0;
for(int i=1;i<=n;i++){
g[i].clear();
num[i]=0;
}
}
void dfs(int x,int fa){
f[x]=fa;
if(col[x]==1)tot_black++;
int sum=0;
for(auto i:g[x]){
if(i==fa)continue;
dfs(i,x);
if(col[i]==1)sum++;
}
if(col[fa]==0&&col[x]==1)faw++;
if(col[x]==1){
if(sum==2)sum_two++,xor_two^=x;
if(sum>2)sum_more++;
}
num[x]=sum;
}
void flip(int x){
col[x]^=1;
int d = col[x]==1?1:-1;
tot_black+=d;
if(col[f[x]]==0)faw+=d;
if(num[x]==2)sum_two+=d,xor_two^=x;
if(num[x]>2)sum_more+=d;
faw-=d*num[x];
if(col[x]==1){
if(col[f[x]]==1&&num[f[x]]==2)sum_two--,sum_more++,xor_two^=f[x];
num[f[x]]++;
if(col[f[x]]==1&&num[f[x]]==2)sum_two++,xor_two^=f[x];
}else{
if(col[f[x]]==1&&num[f[x]]==2)sum_two--,xor_two^=f[x];
num[f[x]]--;
if(col[f[x]]==1&&num[f[x]]==2){
sum_two++;
sum_more--;
xor_two^=f[x];
}
}
}
bool check(){
if(!tot_black)return false;
if(sum_more||sum_two>1)return false;
if(faw>1)return false;
if(sum_two&&col[f[xor_two]]==1)return false;
return true;
}
int main(){
int t;
scanf("%d",&t);
while(t--){
init();
int q;
scanf("%d %d",&n,&q);
for(int i=1;i<=n;i++) cin>>col[i];
for(int i=1;i<n;i++){
int u,v;
scanf("%d %d",&u,&v);
g[u].push_back(v);
g[v].push_back(u);
}
dfs(1,0);
while(q--){
int x;
scanf("%d",&x);
flip(x);
if(check()) printf("Yes\n");
else printf("No\n");
}
}
}
```

Idea: Toxel Solution: errorgorn, Toxel Prepared by: Nerovix

**Hint 1**

Consider deciding, for each number from $$$0$$$ to $$$n-1$$$, whether it is contained in $$$S$$$. When the first $$$x$$$ numbers have been decided, only $$$2^{n-x}$$$ constraints remain.

**Solution**

Read the hints.

Consider deciding, for each number from $$$0$$$ to $$$n-1$$$, whether it is contained in $$$S$$$. Suppose the enumeration has reached $$$i$$$, and among the remaining constraints, $$$T_1$$$ and $$$T_2$$$ are two sets whose only difference is whether they contain $$$i$$$ ($$$T_1$$$ contains $$$i$$$):

- $$$S$$$ contains $$$i$$$. We can merge $$$T_1$$$ and $$$T_2$$$ into a new constraint $$$T'$$$ with $$$v_{T'}=(v_{T_1} \gg 1) \mathbin{\&} v_{T_2}$$$.
- $$$S$$$ doesn't contain $$$i$$$. We can merge $$$T_1$$$ and $$$T_2$$$ into a new constraint $$$T'$$$ with $$$v_{T'}=v_{T_1} \mathbin{\&} v_{T_2}$$$.

We can merge constraints quickly each time the enumeration reaches a new number, and the time complexity is $$$O(n\cdot 2^n)$$$.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
const int N = 24, S = (1 << 20) + 5;
int n = 0, f[N][S] = {};
vector<int> ans;
inline void dfs(int s = 0, int i = 0){
if(i < n){
int m = 1 << (n - i - 1);
for(int t = 0 ; t < m ; t ++) f[i + 1][t] = f[i][t] & f[i][m | t];
dfs(s << 1, i + 1);
for(int t = 0 ; t < m ; t ++) f[i + 1][t] = f[i][t] & (f[i][m | t] >> 1);
dfs(s << 1 | 1, i + 1);
}
else if(f[n][0] & 1) ans.push_back(s);
}
int main(){
scanf("%d", &n);
f[0][0] = 1;
for(int s = 1 ; s < 1 << n ; s ++) scanf("%d", &f[0][s]);
dfs();
printf("%d\n", int(ans.size()));
for(int s : ans) printf("%d\n", s);
return 0;
}
```

Idea: Atomic-Jellyfish, zimpha Solution: Atomic-Jellyfish Prepared by: Atomic-Jellyfish, errorgorn, MagicalFlower

**Hint 1**

If $$$s, t$$$ both have $$$*$$$, or both don't have $$$*$$$, then it will be a simple problem. Can you try to solve this problem?

**Hint 2**

According to the first hint, we have discussed two cases. In the remaining case, without loss of generality, we assume that only $$$t$$$ has $$$*$$$. Write $$$t$$$ as $$$t'_0*t'_1*t'_2*\dots*t'_k$$$, where each $$$t'_i$$$ contains no $$$*$$$. Assuming that $$$|t'_i| \leq 50$$$ holds for all $$$i$$$, can you solve this problem?

**Hint 3**

There is a strict subproblem of this problem: given two strings $$$s$$$ and $$$t$$$ consisting only of characters from $$$a$$$ to $$$z$$$ and $$$-$$$ (a single-character wildcard), you need to find all positions in $$$s$$$ where $$$t$$$ can be matched.

This problem has a classic solution: $$$-$$$ is assigned the value $$$0$$$, and $$$a$$$ to $$$z$$$ are sequentially assigned values from $$$1$$$ to $$$26$$$. Let $$$f_i = \sum_{j=1}^m [s_{i+j-1} > 0] \times [t_j > 0] \times (s_{i+j-1}-t_j)^2$$$; then the positions with $$$f_i = 0$$$ are exactly those where $$$s[i, i+m-1]$$$ can match $$$t$$$.

Decompose the calculation formula: $$$f_i = \sum_{j=1}^m \left(s_{i+j-1}^2+t_j^2\right) - 2\sum_{j=1}^m s_{i+j-1}t_j - \sum_{j=1}^m [s_{i+j-1} = 0]\,t_j^2 - \sum_{j=1}^m [t_j=0]\,s_{i+j-1}^2$$$. The first sum can be computed with prefix sums, and each of the remaining sums is a cross-correlation that FFT computes in $$$O(n \log n)$$$.

What is the use of this solution in the original problem?

**Solution**

Read the hints.

In the following, we consider $$$n, m$$$ to be of the same order.

Consider the case where $$$s$$$ and $$$t$$$ do not contain $$$*$$$: we only need to compare them character by character.

Consider the case where $$$s$$$ and $$$t$$$ both contain $$$*$$$. While the first character of $$$s$$$ or the first character of $$$t$$$ is not $$$*$$$:

- If one of the two first characters is $$$*$$$, or the two first characters can match, delete the first character of each string that is not $$$*$$$.
- Otherwise the answer is obviously `No`.

Performing the same operation on the last characters, it is not difficult to see that everything reduces to two strings $$$*s'$$$ and $$$t'*$$$, where $$$s'$$$ and $$$t'$$$ are arbitrary strings. Then the string $$$t's'$$$ matches both of them at the same time. Therefore, as long as the heads and tails can be successfully deleted, the answer must be `Yes`.

Consider the hardest case. Without loss of generality, we assume that only $$$t$$$ contains $$$*$$$ (otherwise $$$s$$$ and $$$t$$$ can be exchanged). We still delete the beginning and the end first. It is not difficult to see that afterwards $$$t$$$ can be written as $$$*t'_1*t'_2*\dots*t'_k*$$$.

Then we have a greedy solution. We iterate $$$i$$$ from $$$1$$$ to $$$k$$$: each time we find the first position where $$$t'_i$$$ matches in $$$s$$$, and delete these matched characters and all preceding characters (that is, we remove a prefix of $$$s$$$), continuing until all parts are matched or some $$$t'_i$$$ has no matching position in $$$s$$$.

Assume that we use FFT to solve this matching by brute force each time (see Hint 3 for the specific method); then the complexity is $$$O(nk \log n)$$$.

But it is not difficult to find that this is very wasteful. In fact, we can do this:

- When matching $$$t'_i$$$, take out only the first $$$2|t'_i|$$$ characters of $$$s$$$ each time and try to match $$$t'_i$$$ there. If the match succeeds, then since the matched position and everything before it is deleted, at least $$$|t'_i|$$$ characters are deleted. If the match fails, the first $$$|t'_i|$$$ characters of $$$s$$$ can never be useful again and can also be deleted. Either way, we remove at least the first $$$|t'_i|$$$ characters of $$$s$$$ in $$$O(|t'_i| \log |t'_i|)$$$ time.

Since the total length of $$$s$$$ is $$$n$$$, the total time complexity does not exceed $$$O(n \log n)$$$.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll N = (1 << 22) + 5, Mod = 2013265921, G = 31;
inline ll power(ll x, ll y){
ll ret = 1;
while(y){
if(y & 1) ret = ret * x % Mod;
x = x * x % Mod, y >>= 1;
}
return ret;
}
ll p[N] = {}, w[N] = {}, g[N] = {}, iv[N] = {};
inline void dft(ll n, ll a[], bool idft){
for(ll i = 0 ; i < n ; i ++) if(i < p[i]) swap(a[i], a[p[i]]);
for(ll m = 1 ; m < n ; m <<= 1) for(ll j = 0, k = 0 ; j < n ; j += m << 1, k ++) for(ll i = j ; i < j + m ; i ++){
ll x = a[i], y = a[i + m];
a[i] = x + y, a[i] >= Mod && (a[i] -= Mod);
a[i + m] = (x - y + Mod) * w[k] % Mod;
}
if(!idft) return;
reverse(a + 1, a + n);
for(int i = 0 ; i < n ; i ++) a[i] = a[i] * iv[n] % Mod;
}
inline ll sqr(ll x){
return x * x;
}
ll n = 0, m = 0, a[3][N] = {}, b[3][N] = {}, c[N] = {}, d[N] = {};
char s[N] = {}, t[N] = {};
inline ll work(ll L, ll R, ll l, ll r){
ll M = 1; while(M < R - L + r - l) M <<= 1;
w[0] = 1;
for(ll k = 1 ; k < M ; k <<= 1){
ll bit = M / 2 / k;
if(k == M / 2) for(ll i = 0; i < k ; i ++) p[i + k] = p[i] | bit;
else for(ll i = 0 ; i < k ; i ++){
w[i + k] = w[i] * g[k] % Mod;
p[i + k] = p[i] | bit;
}
}
for(ll i = 0 ; i < M ; i ++){
p[i] = p[i >> 1] >> 1;
if(i & 1) p[i] |= M >> 1;
}
ll z = 0;
for(ll i = 0 ; i < M ; i ++){
c[i] = 0;
for(ll f = 0 ; f < 3 ; f ++) a[f][i] = b[f][i] = 0;
}
for(ll i = L ; i < R ; i ++){
ll x = (s[i] == '-') ? 0 : (s[i] - 'a' + 1);
a[0][i - L] = x ? 0 : 1, a[1][i - L] = 2 * x, a[2][i - L] = sqr(x), d[i] = sqr(x);
}
d[R] = 0;
for(ll i = l ; i < r ; i ++){
ll x = (t[i] == '-') ? 0 : (t[i] - 'a' + 1);
b[0][r - i] = sqr(x), b[1][r - i] = x, b[2][r - i] = x ? 0 : 1, z += sqr(x);
}
for(ll f = 0 ; f < 3 ; f ++){
dft(M, a[f], 0), dft(M, b[f], 0);
for(ll i = 0 ; i < M ; i ++) c[i] = (c[i] + a[f][i] * b[f][i]) % Mod;
}
dft(M, c, 1);
for(ll i = 0 ; i < r - l ; i ++) z += d[i + L];
for(ll i = L ; i <= R - (r - l) ; z -= d[i], z += d[i + (r - l)], i ++) if(z % Mod == c[i - L + r - l]) return i;
return -1;
}
int main(){
for(ll i = 1 ; i < N ; i <<= 1) g[i] = power(G, Mod / 4 / i), iv[i] = power(i, Mod - 2);
scanf("%lld %lld", &n, &m);
scanf("%s %s", s, t);
while(n && m && s[n - 1] != '*' && t[m - 1] != '*'){
if(s[n - 1] != t[m - 1] && s[n - 1] != '-' && t[m - 1] != '-'){
printf("No");
return 0;
}
else n --, m --;
}
reverse(s, s + n), reverse(t, t + m);
while(n && m && s[n - 1] != '*' && t[m - 1] != '*'){
if(s[n - 1] != t[m - 1] && s[n - 1] != '-' && t[m - 1] != '-'){
printf("No");
return 0;
}
else n --, m --;
}
reverse(s, s + n), reverse(t, t + m);
if(min(n, m) == 0){
while(n && s[n - 1] == '*') n --;
while(m && t[m - 1] == '*') m --;
if(max(n, m) == 0) printf("Yes");
else printf("No");
return 0;
}
bool u = 0, v = 0;
for(ll i = 0 ; i < n ; i ++) if(s[i] == '*') u = 1;
for(ll i = 0 ; i < m ; i ++) if(t[i] == '*') v = 1;
if(u){
if(v){
printf("Yes");
return 0;
}
else swap(n, m), swap(s, t);
}
ll L = 0, R = 0;
for(ll l = 1, r = l ; l < m ; l = r + 1, r = l){
while(t[r] != '*') r ++;
if(r - l) while(1){
R = min(n, L + 2 * (r - l));
if(R - L < r - l){
printf("No");
return 0;
}
ll h = work(L, R, l, r);
if(h == -1) L = R - (r - l) + 1;
else{
L = h + r - l;
break;
}
}
}
printf("Yes");
return 0;
}
```

Idea: Bazoka13 Solution: Toxel Prepared by: Bazoka13

**Solution**

Consider the maximum character $$$m$$$ of the string $$$s$$$.

If $$$m$$$ appears only once, then obviously the suffix starting with it is the lexicographically largest. To make it as small as possible, $$$m$$$ should be placed at the end; the other characters can be arranged arbitrarily.

If $$$m$$$ appears at least twice, it can be proven that the answer string must both begin and end with $$$m$$$. If the string does not begin with $$$m$$$, moving its first character to a position just before some non-leading $$$m$$$ makes the suffix starting at that $$$m$$$ lexicographically smaller, while the suffixes starting at later occurrences of $$$m$$$ remain unchanged; suffixes that do not start with $$$m$$$ are certainly not the largest, so they need not be considered. If the string does not end with $$$m$$$, first remove the last character, which makes every suffix lexicographically smaller; it can then likewise be inserted before some $$$m$$$ to reduce the lexicographical order.

In this case, the answer is always of the form $$$ms_{1}ms_{2}m\dots ms_{t}m$$$, where $$$s_{1},s_{2},\dots,s_{t}$$$ consist of characters less than $$$m$$$ and may be empty. Now suppose the set $$$\{s_{1},s_{2},\dots,s_{t}\}$$$ is determined, and consider how to arrange its elements. Suppose we compare two suffixes $$$S_{i}=ms_{i}m\dots ms_{t}m$$$ and $$$S_{j}=ms_{j}m\dots ms_{t}m$$$, and let $$$s_{i+k}$$$ and $$$s_{j+k}$$$ be the first pair of substrings that differ. Then $$$ms_{i}m\dots ms_{i+k-1}$$$ and $$$ms_{j}m\dots ms_{j+k-1}$$$ must be equal, and the order of $$$S_{i}$$$ and $$$S_{j}$$$ is the same as the order of $$$ms_{i+k}$$$ and $$$ms_{j+k}$$$. Thus all $$$ms_{i}$$$ can be sorted lexicographically and treated as new characters, and it suffices to solve the original problem for $$$(ms_{1})(ms_{2})\dots(ms_{t})$$$.

Now consider how to determine the set $$$\{s_{1},s_{2},\dots,s_{t}\}$$$. Each $$$s_{i}$$$ should obviously be sorted, because this makes $$$ms_{i}$$$ smaller. There is another property: suppose there are $$$s_{i}=p_{i}c$$$ and $$$s_{j}$$$ such that $$$ms_{j}$$$ is the current lexicographically largest string and $$$mp_{i}<ms_{j}$$$, where $$$c$$$ is a character less than $$$m$$$. Then moving $$$c$$$ to the end of $$$s_{j}$$$ makes the answer smaller.

The proof of the property is as follows. Without loss of generality, let $$$f_{1}=p_{i}c$$$, $$$f_{2}=p_{i}$$$, $$$f_{3}=s_{j}c$$$, $$$f_{4}=s_{j}$$$, then we have $$$mf_{2}<mf_{1}<mf_{4}$$$ and $$$f_{3}m<f_{4}m$$$. Consider the subproblem after iteration, by definition, $$$f_{4}$$$ is the largest character in the problem. If $$$f_{1}$$$ and $$$f_{4}$$$ are used, and there is an $$$f_{4}$$$ to the left of some $$$f_{1}$$$ in the optimal solution string, then replacing this $$$f_{1}$$$ and its nearest $$$f_{4}$$$ on the left with $$$f_{2}$$$ and $$$f_{3}$$$ respectively, it is easy to prove that the largest suffix will definitely become smaller or unchanged. Otherwise, all $$$f_{1}$$$ are to the left of all $$$f_{4}$$$, at this time you can arbitrarily replace some $$$f_{1}$$$ and $$$f_{4}$$$ with $$$f_{2}$$$ and $$$f_{3}$$$ respectively, it is easy to prove that the largest suffix still becomes smaller or unchanged.

Based on the above discussion, we can now give the final greedy algorithm for the problem. Suppose the largest character $$$m$$$ has $$$t+1$$$ occurrences. If $$$t=0$$$, then put the only $$$m$$$ at the end, and the other characters can be arranged arbitrarily. Otherwise, suppose there are $$$p$$$ characters less than $$$m$$$, if $$$p\le t$$$, it can be proved that there is at most one character between any two $$$m$$$. Consider $$$|s_{i}|\ge2$$$, $$$s_{j}=\varepsilon$$$ (empty string), then they meet the above properties, and it is more optimal to exchange the last character of $$$s_{i}$$$ to $$$s_{j}$$$. Therefore, $$$s_{1},\dots,s_{p}$$$ is a single character, $$$s_{p+1}=\dots=s_{t}=\varepsilon$$$, and recursively solve the subproblem. Note that to ensure time complexity, when $$$t\gg p$$$, multiple rounds of recursion need to be combined.

If $$$p>t$$$, it can be similarly proved that there will not be an $$$s_{i}$$$ that is an empty string. In addition, consider the first character of all $$$s_{i}$$$, for those among these $$$t$$$ characters that are not the largest, using the same property, it can be proved that the length of the string they are in must be $$$1$$$, and no other characters will follow. Also, since $$$s_{i}$$$ is ordered, it can be proved that the first character of all $$$s_{i}$$$ must be the smallest $$$t$$$ characters less than $$$m$$$. Next, the remaining characters will only be filled after those largest characters among these $$$t$$$ characters, and it can be similarly proved that those filled in the second position of these strings are also the smallest among the remaining characters, and so on, until all characters are used up.

Notice that the complexity of each recursion is linear, and each recursion at least halves the number of characters, so the total complexity is $$$O(n\log n)$$$.

**Bonus**

How to solve this problem in $$$O(n)$$$?

**Code**

```
#include<bits/stdc++.h>
using namespace std;
struct ch{
string c;
ch(){c = "";}
ch(string cc){c=cc;}
bool operator == (const ch& p)const{
return c==p.c;
}
bool operator != (const ch& p)const{
return c!=p.c;
}
void add(ch& p){
c.append(p.c);
}
};
vector<ch> solve(vector<ch> cs){
int n = cs.size();
int t = 0;
if(cs.empty())return cs;
for(int i=n-2;i>=0;i--){
if(cs[i]!=cs[n-1])break;
t++;
}
if(t==0){
vector<ch> res;
res.push_back(cs[n-1]);
return res;
}
int p = n-(t+1);
if(p<=t){
vector<ch> res;
vector<ch> nxt;
int k = (t+1)/(p+1);
int le = (t+1)%(p+1);
ch m = cs[n-1];
for(int j=1;j<k;j++){
m.add(cs[n-1]);
}
for(int i=0;i<p;i++){
ch tmp = m;
tmp.add(cs[i]);
nxt.push_back(tmp);
}
for(int i=0;i<le;i++){
nxt.push_back(cs[n-1]);
}
auto nxt_solved = solve(nxt);
for(auto i:nxt_solved)res.push_back(i);
res.push_back(m);
return res;
}else{
vector<ch> res;
vector<ch> nxt;
ch m = cs[n-1];
for(int i=0;i<t;i++){
nxt.push_back(m);
}
int now = 0,beg=0;
for(int i=0;i<p;i++){
nxt[now].add(cs[i]);
if(now>=1){
if(cs[i]!=cs[i-1]){
beg=now;
}
}
now++;
if(now>=t)now=beg;
}
auto nxt_solved = solve(nxt);
for(auto i:nxt_solved)res.push_back(i);
res.push_back(m);
return res;
}
}
vector<ch> trans(string s){
vector<ch> tmp;
for(auto i:s){
string tmpp = "";
tmpp+=i;
tmp.push_back(tmpp);
}
return tmp;
}
int main(){
int t;
cin>>t;
while(t--){
int n;
cin>>n;
string s;
cin>>s;
sort(s.begin(),s.end());
auto tmp = trans(s);
auto ans= solve(tmp);
for(auto c:ans)cout<<c.c;
cout<<"\n";
}
}
```

Idea: Atomic-Jellyfish Solution: Atomic-Jellyfish Prepared by: Atomic-Jellyfish, errorgorn

**Hint 0**

Play "Slay the Spire". Unfortunately, it doesn't help much.

**Hint 1**

It may be helpful to calculate the probability that your hand becomes empty instead of the probability that the draw pile becomes empty.

**Hint 2**

The intended solution is a $$$O(n^5)$$$ DP. The constant factor is small enough to run.

**Solution**

If all the cards in your hand are $$$0$$$, you immediately lose. Therefore, we only need to consider whether we can empty our hand by playing cards with non-$$$0$$$ value.

Consider a sequence of playing cards that will lead to us losing. The sum of the probabilities of all these losing sequences is the probability of us losing.

To simplify the process, we will consider drawing $$$x$$$ random cards from the deck as drawing a random card from the deck $$$x$$$ times.

Consider a sequence of played cards as $$$z_1, z_2, \ldots, z_k$$$ where $$$z_i$$$ is the index of the card played. Consider how $$$z_i$$$ was drawn into our hand. It is not hard to see that either:

- $$$z_i$$$ was in our hand at the start
- $$$z_i$$$ was drawn while resolving card $$$z_j$$$, where $$$j$$$ is the maximum index such that $$$z_j \leq z_i$$$ and $$$j < i$$$.

From our previous observation, we can motivate the following range DP. Let $$$f(i,0/1,u,p,x)$$$ denote a sequence of played cards with the following:

- the minimum index of a card we played is $$$i$$$
- whether there are any cards with value greater than $$$i$$$ that were in our hand at the start
- before we played this sequence, we played a card of value $$$u$$$
- before we played this sequence, the deck had $$$p$$$ cards
- the **net** number of cards that we draw is $$$x$$$

We will use $$$g(i,0/1,u,p,x)$$$ to help us transition quickly. It is defined as the suffix sum (on dimension $$$i$$$) of $$$f$$$.

Consider transitioning based on the card of index $$$i$$$.

We have the following transition for $$$f(*,0,*,*,*)$$$:

- $$$f(i, 0, u, p, x) = \sum_{y=0}^{x-(c_i-1)} g(i+1, 0, u, p, y) \times g(i, 0, c_i, p - (c_i - 1) - y, x - (c_i - 1) - y) \times \frac {u+y} {(p-y+1)^{\underline c_i}}$$$
- $$$g(i, 0, u, p, x) = g(i+1, 0, u, p, x) + f(i, 0, u, p, x)$$$

For $$$f(*,1,*,*,*)$$$, to handle the fact that some cards were initially in our hand, let $$$h$$$ denote the number of $$$j>i$$$ such that $$$s_j=1$$$.

If $$$s_i = 1$$$, we have the following transitions:

- $$$f(i, 1, u, p, x) = \sum_{y=h}^{x-c_i} g(i+1, 1, u, p, y) \times g(i, 0, c_i, p - (c_i - 1) - y+h, x - c_i - y) \times \frac {1} {(p-y+1+h)^{\underline c_i}}$$$
- $$$g(i, 1, u, p, x) = f(i, 1, u, p, x)$$$

If $$$s_i = 0$$$, we have the following transitions:

- $$$f(i, 1, u, p, x) = \sum_{y=h}^{x-(c_i-1)} g(i+1, 1, u, p, y) \times g(i, 0, c_i, p - (c_i - 1) - y+h, x - (c_i - 1) - y) \times \frac {u+y} {(p-y+1+h)^{\underline c_i}}$$$
- $$$g(i, 1, u, p, x) = g(i+1, 1, u, p, x) + f(i, 1, u, p, x)$$$

The time complexity is $$$O(n^5)$$$.

Now consider adding the cards with value $$$1$$$. It is possible that our game never terminates when we keep on drawing cards with value $$$1$$$. We will instead force that when we play a card of value $$$1$$$, we are not allowed to draw a card of value $$$1$$$. It is not hard to observe that the answer will not change.

Consider $$$dp(i,x)$$$ to be the probability that we have in our hand:

- $$$i$$$ card of value $$$0$$$
- $$$x$$$ card of value $$$1$$$

It is not hard to observe that for the initial state, $$$dp(i, o+x)$$$ is $$$g(1, 1, i+x, 0, m) \times \binom {i+x} i \times (m-o)^{\underline x}$$$.

Next, consider the transitions. There are only two cases: we draw a card with value $$$0$$$, or a card with value greater than $$$0$$$:

$$$dp(i, x) \times \frac 1 {w+k-i}\rightarrow dp(i + 1, x - 1)$$$

$$$\forall j+y>1, dp(i, x) \times g(1, 0, 1, n-d-i-x,j+y-1) \times \frac 1 {w+k-i} \times \binom {j+y} j \times (m-x+1)^{\underline y}\rightarrow dp(i+j, x+y-1)$$$

Finally, the answer is $$$\sum_{i=0}^k dp(i, 0) \times k^{\underline i}$$$.

The time complexity is $$$O(n^4)$$$.

Therefore, the total complexity is $$$O(n^5)$$$. If we use FFT, it can be $$$O(n^4 \log n)$$$, but it was not required in our solution and would probably make the code slower.

Furthermore, we can record $$$v$$$ instead of $$$u$$$, where $$$v$$$ is the number of cards with index $$$i$$$ or greater drawn among the $$$u$$$ draws, so clearly $$$v \leq u$$$. When $$$c_i \leq \sqrt n$$$, we have $$$v \leq u \leq c_i \leq \sqrt n$$$, and when $$$c_i > \sqrt n$$$, we have $$$v \leq \frac n {c_i - 1} \leq \sqrt n$$$, so the complexity becomes $$$O(n^{4.5})$$$. With FFT, it can be $$$O(n^{3.5} \log n)$$$.

Let $$$k$$$ be the number of cards with value $$$0$$$. It is not hard to see that:

- the size of $$$i$$$ is $$$n-k$$$
- the size of $$$x$$$ is $$$k$$$
- the size of $$$u$$$ is $$$n-k$$$
- the size of $$$p$$$ is $$$n$$$

The number of states is $$$(n-k)^2k^2n$$$, which carries a constant of $$$\frac{1}{16}$$$. We also have $$$y \leq x$$$ and $$$u \leq c_i$$$, each of which gives another factor of $$$\frac{1}{2}$$$. Since we have a flag in our DP state, it contributes a factor of $$$2$$$. This gives a constant factor of roughly $$$\frac{1}{32}$$$, which is extremely small, so it is reasonable to expect the $$$O(n^5)$$$ solution to run in about $$$1$$$ second.

**Code**

```
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll N = 120 + 5, Mod = 1e9 + 7;
inline ll power(ll x, ll y){
ll ret = 1;
while(y){
if(y & 1) ret = ret * x % Mod;
x = x * x % Mod, y >>= 1;
}
return ret;
}
inline ll inv(ll x){
return power(x, Mod - 2);
}
ll n = 0, m = 0, w = 0, o = 0, d = 0, r = 0, k = 0, c[N] = {}, v[N] = {};
ll iv[N] = {}, a[N][N] = {}, b[N][N] = {}, binom[N][N] = {};
ll f[2][2][N][N][N] = {}, g[2][2][N][N][N] = {}, dp[N][N] = {};
char s[N] = {};
inline void init(){
n = m = w = o = d = r = k = 0;
memset(c, 0, sizeof(c)), memset(v, 0, sizeof(v)), memset(s, 0, sizeof(s));
memset(f, 0, sizeof(f)), memset(g, 0, sizeof(g)), memset(dp, 0, sizeof(dp));
}
inline void solve(){
scanf("%lld", &n);
v[0] = v[1] = 1;
for(ll i = 1 ; i <= n ; i ++){
scanf("%lld", &c[i]);
v[c[i]] = 1;
}
scanf("%s", s + 1);
for(ll i = 1 ; i <= n ; i ++){
if(s[i] == '0') r ++;
if(c[i] > 1) w ++;
else if(c[i] == 1){
if(s[i] == '1') o ++;
m ++;
}
else{
if(s[i] == '1') d ++;
else k ++;
}
}
if(c[n] <= 1){
for(ll i = 1 ; i <= n ; i ++) if(s[i] == '0'){
printf("0\n");
return;
}
printf("1\n");
return;
}
for(ll p = 0 ; p <= n ; p ++) for(ll u = 0 ; u <= c[n] ; u ++) if(v[u]){
f[1][0][u][p][0] = f[1][1][u][p][0] = 1;
g[1][0][u][p][0] = g[1][1][u][p][0] = 1;
}
for(ll i = n, h = 0 ; c[i] > 1 ; i --){
for(ll p = 0 ; p <= n ; p ++) for(ll u = 0 ; u <= c[i] ; u ++) if(v[u]){
g[0][0][u][p][0] = 1;
if(s[i] == '0') g[0][1][u][p][0] = g[1][1][u][p][0];
}
for(ll p = 0 ; p <= n ; p ++) for(ll u = 0 ; u <= c[i] ; u ++) if(v[u]) for(ll x = 1 ; x <= k ; x ++){
for(ll y = 0 ; y <= x - (c[i] - 1) ; y ++) f[0][0][u][p][x] = (f[0][0][u][p][x] + (g[1][0][u][p][y] * g[0][0][c[i]][p - (c[i] - 1) - y][x - (c[i] - 1) - y] % Mod) * ((u + y) * b[p - y + 1][c[i]] % Mod)) % Mod;
g[0][0][u][p][x] = (g[1][0][u][p][x] + f[0][0][u][p][x]) % Mod;
}
for(ll p = 0 ; p <= n ; p ++) for(ll u = 0 ; u <= c[i] ; u ++) if(v[u]) for(ll x = 1 ; x <= k ; x ++){
if(s[i] == '1'){
for(ll y = h ; y <= x - c[i] ; y ++) f[0][1][u][p][x] = (f[0][1][u][p][x] + (g[1][1][u][p][y] * g[0][0][c[i]][p - (c[i] - 1) - y + h][x - c[i] - y] % Mod) * b[p - y + 1 + h][c[i]]) % Mod;
g[0][1][u][p][x] = f[0][1][u][p][x];
}
else{
for(ll y = h ; y <= x - (c[i] - 1) ; y ++) f[0][1][u][p][x] = (f[0][1][u][p][x] + (g[1][1][u][p][y] * g[0][0][c[i]][p - (c[i] - 1) - y + h][x - (c[i] - 1) - y] % Mod) * ((u + y) * b[p - y + 1 + h][c[i]] % Mod)) % Mod;
g[0][1][u][p][x] = (g[1][1][u][p][x] + f[0][1][u][p][x]) % Mod;
}
}
for(ll p = 0 ; p <= n ; p ++) for(ll u = 0 ; u <= c[i] ; u ++) if(v[u]) for(ll x = 0 ; x <= k ; x ++){
f[1][0][u][p][x] = f[0][0][u][p][x], f[1][1][u][p][x] = f[0][1][u][p][x];
g[1][0][u][p][x] = g[0][0][u][p][x], g[1][1][u][p][x] = g[0][1][u][p][x];
f[0][0][u][p][x] = f[0][1][u][p][x] = g[0][0][u][p][x] = g[0][1][u][p][x] = 0;
}
if(s[i] == '1') h ++;
}
for(ll i = 0 ; i <= k ; i ++) for(ll x = 0 ; x <= m - o ; x ++) dp[i][o + x] = g[1][1][0][r][i + x] * (binom[i + x][i] * a[m - o][x] % Mod) % Mod;
for(ll i = 0 ; i <= k ; i ++) for(ll x = 1 ; x <= m ; x ++) if(dp[i][x]){
if(i < k) dp[i + 1][x - 1] = (dp[i + 1][x - 1] + dp[i][x] * iv[w + k - i]) % Mod;
for(ll j = 0 ; i + j <= k ; j ++) for(ll y = 0 ; x + y - 1 <= m ; y ++) if(j + y > 1) dp[i + j][x + y - 1] = (dp[i + j][x + y - 1] + (dp[i][x] * g[1][0][1][n - d - i - x][j + y - 1] % Mod) * (iv[w + k - i] * (binom[j + y][j] * a[m - x + 1][y] % Mod) % Mod)) % Mod;
}
ll ans = 1;
for(ll i = 0 ; i <= k ; i ++) ans = (ans + (Mod - dp[i][0]) * a[k][i]) % Mod;
printf("%lld\n", ans);
}
ll T = 0;
int main(){
for(ll x = 0 ; x < N ; x ++){
a[x][0] = b[x][0] = 1, iv[x] = inv(x), binom[x][0] = 1;
for(ll y = 1 ; y <= x ; y ++){
a[x][y] = a[x][y - 1] * (x - y + 1) % Mod, b[x][y] = inv(a[x][y]);
binom[x][y] = (binom[x - 1][y - 1] + binom[x - 1][y]) % Mod;
}
}
scanf("%lld", &T);
for(ll i = 1 ; i <= T ; i ++) init(), solve();
return 0;
}
```

Solution by Melania

**Solution**

Let $$$ S $$$ denote the set of cards currently in Jellyfish's hand.

We assume that $$$ a_1 = 0 $$$ and $$$ a_n \geq 2 $$$. Initially, $$$ |S| \in [1, n-1] $$$. Cases where these conditions do not hold are trivial.

We observe that Jellyfish cannot empty the draw pile if and only if she runs out of non-zero cards to play. In the following section, we compute the probability that she eventually runs out of non-zero cards.

It is clear that in any scenario where she runs out of non-zero cards, $$$ \max(S) $$$ must gradually decrease to zero. We can break down the entire process into sub-procedures, where in each sub-procedure, $$$ \max(S) $$$ decreases from $$$ i $$$ to $$$ \leq i-1 $$$, ultimately reaching zero.

Assume she is currently in a state where $$$ \max(S) = i $$$ and $$$ |S \setminus \{i\}| = x $$$. We denote $$$ F_{i,x,y} $$$ as the probability that she will eventually transition to a state where $$$ \max(S) \leq i-1 $$$, and in the very first state where $$$ \max(S) \leq i-1 $$$, the size of $$$ S $$$ is $$$ y $$$.

For a fixed and given pair $$$(i,x)$$$, we aim to compute all values of $$$F_{i,x,y}$$$ simultaneously.

To determine the values of $$$F_{i,x,y}$$$, consider the situation after Jellyfish plays card $$$a_i$$$. For the remaining $$$n-x$$$ cards, each card is equally likely to be one of the $$$a_i$$$ cards drawn. Therefore, for any individual card outside of the initial $$$x$$$ cards, the probability of possessing that card is $$$p = \dfrac{a_i}{n-x}$$$.

We define the initial event where Jellyfish plays card $$$a_i$$$ as the starting point of our timeline. Following this event, we consider $$$n-i+2$$$ key moments when $$$\max(S)$$$ becomes $$$\leq n$$$ for the first time, $$$\leq n-1$$$ for the first time, and so on, down to $$$\leq i+1$$$, $$$\leq i$$$, and finally $$$\leq i-1$$$ for the first time.

We denote $$$G_{j,y}$$$ as the probability that Jellyfish will eventually reach a state where $$$\max(S) \leq j$$$, and in the very first state where $$$\max(S) \leq j$$$, the size of $$$S$$$ is $$$y$$$. Initially, $$$G_{n,x + a_i} = 1$$$.

To understand the transitions of $$$G_{j,y}$$$, note that since all cards $$$\leq j$$$ (excluding the initial $$$x$$$ cards) are symmetrical, the probability of owning card $$$j$$$ is exactly $$$p=\dfrac{y-x}{j-x}$$$.

- If Jellyfish does not own card $$$j$$$, then the first state where $$$\max(S) \leq j-1$$$ is simply the first state where $$$\max(S) \leq j$$$. This describes a transition of $$$G_{j-1,y} \leftarrow (1-p)G_{j,y}$$$.
- If Jellyfish owns card $$$j$$$, we enumerate the number of cards $$$|S| = z$$$ in her hand when she first reaches $$$\max(S) \leq j-1$$$. The probability of this happening is $$$F_{j,y-1,z}$$$. Thus, we have $$$G_{j-1,z} \leftarrow pG_{j,y}F_{j,y-1,z}$$$.

Here’s a further explanation of why "all cards $$$\leq j$$$ (excluding the initial $$$x$$$ cards) are symmetrical": Because this state is the very first time $$$\max(S) \leq j$$$, it means that all cards played before this state and after Jellyfish initially plays $$$a_i$$$ are $$$\geq j+1$$$. From the perspective of those cards $$$\geq j+1$$$, all cards $$$\leq j$$$ (excluding the initial $$$x$$$ cards) are symmetrical.

In the end, the actual algorithm we derived for computing $$$F_{i,x,y}$$$ is quite straightforward:

- Iterate through $$$j$$$ backwards from $$$n$$$ to $$$i$$$.
- For each $$$G_{j,y}$$$, make two types of transitions:

- $$$G_{j-1,z} \leftarrow pG_{j,y}F_{j,y-1,z}$$$, where $$$p = \dfrac{y-x}{j-x}$$$;
- $$$G_{j-1,y} \leftarrow (1-p)G_{j,y}$$$.

Finally, $$$F_{i,x,y} = G_{i-1,y}$$$.
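
The loop above can be sketched directly. This is an illustrative version with real-valued probabilities instead of the modular arithmetic the actual solution uses; it assumes $$$a_i \geq 2$$$ (so no self-loops occur, see below) and that $$$F_{j,\cdot,\cdot}$$$ for $$$j > i$$$ is already known. The function name and layout are mine, not the author's code.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hedged sketch of one sub-procedure: compute F[i][x][*] given F[j][y1][z]
// for all j > i (passed in as F). Real probabilities are used here purely
// for illustration.
vector<double> computeF(int n, int i, int x, int ai,
                        const vector<vector<vector<double>>>& F) {
    // G[j][y]: probability that in the first state with max(S) <= j, |S| = y
    vector<vector<double>> G(n + 1, vector<double>(n + 1, 0.0));
    G[n][x + ai] = 1.0;                         // right after card a_i is played
    for (int j = n; j >= i; j--) {              // j = n, n-1, ..., i
        vector<double> nxt(n + 1, 0.0);
        for (int y = 0; y <= n; y++) {
            if (G[j][y] == 0) continue;
            // probability of owning card j (cards <= j are symmetric)
            double p = (y > x) ? double(y - x) / (j - x) : 0.0;
            nxt[y] += (1 - p) * G[j][y];        // G_{j-1,y} <- (1-p) G_{j,y}
            if (p > 0)                          // G_{j-1,z} <- p G_{j,y} F_{j,y-1,z}
                for (int z = 0; z <= n; z++) nxt[z] += p * G[j][y] * F[j][y - 1][z];
        }
        G[j - 1] = nxt;
    }
    return G[i - 1];                            // F_{i,x,y} = G_{i-1,y}
}
```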

A small catch is that we could have self-loops. If $$$a_i = 1$$$, then in the final step where $$$j = i$$$, there will be some transitions where $$$F_{i,x,y} \leftarrow cF_{i,x,y}$$$. For a given $$$F_{i,x,y}$$$, if all the other known values that contribute to it sum to $$$A$$$ and all the $$$c$$$ coefficients sum to $$$C$$$, then we have $$$F_{i,x,y} = A + CF_{i,x,y}$$$. This results in $$$F_{i,x,y} = \dfrac{A}{1-C}$$$. We can address this by calculating the inverse.
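
In code, resolving the self-loop is a single modular division; here is a minimal sketch (the modulus and names are placeholders, not the author's code):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll P = 998244353; // assumed prime modulus for this sketch

ll pw(ll x, ll y) {
    ll r = 1; x %= P;
    while (y) { if (y & 1) r = r * x % P; x = x * x % P; y >>= 1; }
    return r;
}

// Solve F = A + C*F (mod P): F*(1 - C) = A, so F = A * (1 - C)^{-1}.
ll resolveSelfLoop(ll A, ll C) {
    return A % P * pw(((1 - C) % P + P) % P, P - 2) % P;
}
```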

It can be shown that if we also iterate through $$$x$$$ from larger to smaller values, no $$$F_{i,x,y}$$$ will use other values of $$$F_{i',x',y'}$$$ that are neither known nor equal to $$$F_{i,x,y}$$$ itself. In other words, the value of $$$F_{i,x,y}$$$ does not invoke other unknown values of $$$F_{i',x',y'}$$$.

After computing all values of $$$F_{i,x,y}$$$, we proceed to determine the final answer.

We define $$$\text{ans}_{i,x}$$$ as the average probability that Jellyfish will eventually run out of non-zero cards (thus losing the game) if she starts with all $$$s_i$$$ cards in $$$[1, i]$$$ (which she owns initially) plus an additional $$$x$$$ cards drawn randomly from the remaining $$$i - s_i$$$ cards in $$$[1, i]$$$ (which she does not own initially).

- If $$$c_i = 1$$$, the transition is given by: $$$\text{ans}_{i,x}=\displaystyle\sum F_{i,x-1,y}\text{ans}_{i-1,y}$$$;
- If $$$c_i = 0$$$, the transition is given by: $$$\text{ans}_{i,x}=(1-p)\text{ans}_{i-1,x}+p\displaystyle\sum F_{i,x-1,y}\text{ans}_{i-1,y}$$$, where $$$p=\dfrac{x-s_i}{i-s_i}$$$.

Finally, we compute $$$1 - \text{ans}_{n,s_n}$$$ to obtain the probability that Jellyfish will eventually win the game.

The time complexity of this approach is $$$\mathcal{O}(n^5 + n^3 \log p)$$$, and the space complexity is $$$\mathcal{O}(n^3)$$$.

It is also possible to optimize the $$$\mathcal{O}(n^5)$$$ time complexity to $$$\mathcal{O}(n^4 \sqrt{n})$$$ by using Lagrange interpolation.

To achieve this speedup, we notice that the values of $$$i$$$ and $$$x$$$ do not play a crucial role in the transition of $$$G_{j,y}$$$. Specifically, the influence of the values $$$i$$$ and $$$x$$$ on the transition of $$$G_{j,y}$$$ can be described as follows:

- The initial value is $$$G_{n,x+a_i} = 1$$$.
- During the transition, $$$p = \dfrac{y-x}{j-x}$$$ and $$$1-p = \dfrac{j-y}{j-x}$$$.

We can fix the value of $$$k = x + a_i$$$ and treat $$$x$$$ as a variable, maintaining a polynomial in $$$x$$$ as the value of $$$G_{j,y}$$$.

We observe that it is only necessary to keep the point values between $$$[k-1, k-\sqrt{n}]$$$ to track the polynomial accurately.

- For $$$a_i > \sqrt{n}$$$, since each transition involving multiplication by $$$p$$$ will increase the value of $$$y$$$ by at least $$$a_i$$$, it is clear that the degree of the polynomial $$$G_{j,y}$$$ is at most $$$\sqrt{n}$$$. Thus, keeping track of $$$\sqrt{n}$$$ values is sufficient to restore the polynomial, and we can use linear Lagrange interpolation to obtain the point values outside these $$$\sqrt{n}$$$ points.
- For $$$a_i \leq \sqrt{n}$$$, since $$$x = k - a_i$$$, we only care about the point values in the range $$$[k-1, k-\sqrt{n}]$$$. Although the polynomial restored may be inaccurate beyond this range, these point values are the ones that matter for our calculations anyway.

By applying this optimization, we effectively reduce the time complexity to $$$\mathcal{O}(n^4 \sqrt{n} + n^3 \log p)$$$.
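
For reference, a generic sketch of Lagrange interpolation at consecutive points (a standard routine under an assumed prime modulus, not the author's implementation): given $$$f(0), \ldots, f(k)$$$ for a degree-$$$k$$$ polynomial, any $$$f(t)$$$ can be recovered in $$$\mathcal{O}(k \log p)$$$ here, or $$$\mathcal{O}(k)$$$ with batched inverses.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353; // any prime modulus works for the sketch

ll power(ll x, ll y) {
    ll r = 1; x %= MOD;
    while (y) { if (y & 1) r = r * x % MOD; x = x * x % MOD; y >>= 1; }
    return r;
}

// Evaluate the degree-k polynomial f with known values y[i] = f(i), i = 0..k,
// at an arbitrary point t, using prefix/suffix products of (t - i).
ll lagrange(const vector<ll>& y, ll t) {
    int k = (int)y.size() - 1;
    t %= MOD; if (t < 0) t += MOD;
    vector<ll> pre(k + 2, 1), suf(k + 2, 1), fact(k + 1, 1);
    for (int i = 0; i <= k; i++) pre[i + 1] = pre[i] * (((t - i) % MOD + MOD) % MOD) % MOD;
    for (int i = k; i >= 0; i--) suf[i] = suf[i + 1] * (((t - i) % MOD + MOD) % MOD) % MOD;
    for (int i = 1; i <= k; i++) fact[i] = fact[i - 1] * i % MOD;
    ll res = 0;
    for (int i = 0; i <= k; i++) {
        ll term = y[i] % MOD * (pre[i] * suf[i + 1] % MOD) % MOD
                * power(fact[i] * fact[k - i] % MOD, MOD - 2) % MOD;
        if ((k - i) & 1) term = (MOD - term) % MOD; // sign (-1)^{k-i}
        res = (res + term) % MOD;
    }
    return res;
}
```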

Any guesses on what game Jellyfish will be playing next?

Blue Whale

rotaeno

Fun facts for problem E: At first, this problem was Div. 2 D and required adding vertices dynamically.

E was Nice.

I had a different solution for C, where I was able to infer hint 1 but could not deduce hint 2, so I instead ended up simulating the entire process by considering each element as a candidate (in descending order) and optimizing the $$$O(n^2)$$$ solution using a data structure (set) to maintain the minimum distance between adjacent elements. Overkill, I know, but I just wanted to share my thought process during the contest.

In the contest, I almost implemented the following solution. In decreasing order of elements, let's see if we have a subarray with median >= X. In the original array, let's replace all the numbers < X by -1 and the numbers >= X by 1. A subarray with median >= X exists if and only if there exists a subarray with length >= 2 and sum >= 1.

In the segment tree, if we maintain the following in each node:

- sum
- max prefix sum
- max suffix sum
- max subarray sum

then after each -1 -> 1 update we can check if there exists a subarray with length >= 2 and sum >= 1.

262621530 262621561

Btw, once someone realises this idea, we don't need a segment tree; we can do a binary search on the median and check if a subarray exists with sum >= 1 and length >= 2.

262622439
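
A minimal sketch of this binary-search approach (the function `maxMedian` and its layout are mine, not from any linked submission): map elements to +1/-1 and run a Kadane-style scan restricted to subarrays of length at least 2.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Binary search the answer: the best median is the largest value x such that,
// after mapping a[i] >= x -> +1 and a[i] < x -> -1, some subarray of length
// >= 2 has sum >= 1. Assumes n >= 2, as in the problem.
int maxMedian(const vector<int>& a) {
    int n = a.size();
    auto ok = [&](int x) {
        auto val = [&](int i) { return a[i] >= x ? 1 : -1; };
        // best = max sum of a subarray of length >= 2 ending at i
        int best = val(0) + val(1), ans = best;
        for (int i = 2; i < n; i++) {
            best = max(best + val(i), val(i - 1) + val(i));
            ans = max(ans, best);
        }
        return ans >= 1;
    };
    vector<int> vals(a);
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    int lo = 0, hi = (int)vals.size() - 1; // ok(vals[0]) always holds
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (ok(vals[mid])) lo = mid; else hi = mid - 1;
    }
    return vals[lo];
}
```

Since every element maps to +1 under the smallest value, the search space is non-empty and the invariant holds.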

I have done this in O(N) 262595462. I couldn't formally prove it, but my intuition was: if there are numbers x, y, z such that x <= y <= z, then their median is y. Now take any window of size K whose median is x; this implies there are at least floor((K+1)/2) elements in that window with value <= x. For odd K = 2M-1, at least M elements are <= x, and since the window size is 2M-1, some two of those elements have indices at most 2 apart. In other words, there exists a smaller window [a, b, c] inside the window of size K such that two of a, b, c are <= x.

That's the solution in the editorial too.

Look at it like this.

Lets say we have an array of 11 elements and the median is 10.

What I will prove is that there exist 3 adjacent elements in this array whose median is >= 10.

Median = 10 implies that this array has at least 6 elements >= 10.

Let's look at the first 2 elements.

If both of them are >= 10, then we can take the first 3 elements, and their median will be >= 10.

Otherwise, either none or only one of them is >= 10. We can delete these two elements. In the remaining 9 elements we will still have at least 5 elements >= 10.

If we keep repeating this process, either we will find a first 2 elements that are both >= 10, in which case we are done, or at last we will be left with 3 elements, and among them 2 must be >= 10.

Among 11 elements, at least 6 of them are >= 10.

Among 9 elements, at least 5 of them are >= 10.

Among 7 elements, at least 4 of them are >= 10.

Among 5 elements, at least 3 of them are >= 10.

Among 3 elements, at least 2 of them are >= 10. If we reach this subarray, we can just use it to get a subarray of size 3 which has at least 2 elements >= 10.

The proof for even case is similar.

Similarly, if we have an array of 12 elements and the median is 10, there must exist at least 7 elements >= 10.

We can use similar reasoning to say that there exist at least 2 adjacent elements >= 10.

This is the reason why we only need to look at subarrays of length 2 and 3 to get the maximum possible median.

Using a subarray of size 5 (a1,a2,a3,a4,a5) can never give a better median than the size-3 subarrays it contains, namely (a1,a2,a3), (a2,a3,a4), and (a3,a4,a5).
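
A hedged sketch of the resulting O(n) rule (names are mine): the answer is the best median over all windows of length 2 and 3, where the median of two is the smaller element and the median of three is the maximum of the pairwise minimums.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Only windows of length 2 and 3 matter, so take the best median among them.
int maxMedianLinear(const vector<int>& a) {
    int n = a.size(), ans = INT_MIN;
    for (int i = 0; i + 1 < n; i++)
        ans = max(ans, min(a[i], a[i + 1]));       // median of 2 = the smaller one
    for (int i = 0; i + 2 < n; i++) {
        int x = a[i], y = a[i + 1], z = a[i + 2];
        // median of 3 = max of pairwise minimums
        ans = max(ans, max(min(x, y), max(min(y, z), min(x, z))));
    }
    return ans;
}
```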

How can we prove hint 2 given for problem C? As far as I understand, the hint means that if X is the answer, then X is definitely the median of some subarray (length >= 2) in the original array. Is it not possible that, though X is not the median of any subarray in the original array, it becomes the median of some subarray in the transformed array after applying the operation a few times?

Could you prove that if there is no subarray in the original array for which X is the median, then even after applying the said operation a few times, there won't be any subarray in the transformed array with X as the median?

nice

Fun facts for problem H: At first, this problem was Div. 2 F.(=・ω・=)

The intended solution for E is much simpler than thinking about HLD .~.

at first, I used DSU; after getting WA I noticed that I needed HLD. I almost gave up, but somehow I figured out that DSU works too

At first glance it's surely an HLD problem. However, I noticed that the graph is a tree and only the degrees of the nodes influence the answer, so I wrote something very close to the editorial with a higher complexity of $$$O(n \sqrt{n})$$$, which ran rather quickly. If you use a set to implement the editorial's idea, it is a much simpler solution than the $$$O(n)$$$ one. And the way the solution figured out the root which has two sons amazed me. Anyway, it's a problem worth thinking about.

DuongForeverAlone can you explain your solution for E please?

i also used HLD (and got TLE), but my approach was to keep the set of all minimal elements of the relation "a is ancestor of b", i.e. for each "path to the root" i only keep the lowest vertex (max depth) in the set. when adding/removing a black vertex you have to know the next black vertex on the path to the root which can be done using HLD and segtrees.

the rest is similar to the editorial; you have a few conditions with which you can check whether the minimal vertices form a chain (there must be at most 2, the path between them (==> euler tour trick + segtrees/fenwick tree) must only consist of black vertices ...)

The constraints were not generous. I do not know HLD very well too, so I did not go for that. I was trying something similar to editorial's approach but quite different in implementation.

Sorry, I am not very good at tree queries but I get your idea a bit. By "euler tour trick + segtrees/fenwick tree" do you mean the approach where you flatten the tree and build segtree/fwick tree over the flat tree?

with HLD you can answer path sum queries in $$$O(\log^2(n))$$$ and the constants (at least in my implementation) aren't that good. i couldn't come up with another approach though so i gave it a try

i used the euler tour trick to count the number of black vertices on the path between two "minimal" vertices. i could've also used the hld, but this would be another $$$O(\log^2(n))$$$ factor, so i used ETT after the first TLE. you basically have each vertex twice in the flattened tree, once at "dfs_innum" and one at "dfs_outnum" with a "1" at "innum" and "-1" at "outnum". then a path sum from the root to "u" is sum [0; dfsin[u]]. and to answer such queries you can use a FT/SegTree.

the total complexity is still $$$O(n \log^2(n))$$$
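
A minimal sketch of the +1/-1 Euler-tour trick described above (the struct and names are illustrative, not the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Fenwick {
    vector<int> t;
    Fenwick(int n) : t(n + 1) {}
    void add(int i, int v) { for (i++; i < (int)t.size(); i += i & -i) t[i] += v; }
    int sum(int i) { int s = 0; for (i++; i > 0; i -= i & -i) s += t[i]; return s; }
};

// +1 at dfs-in, -1 at dfs-out: the number of marked (black) vertices on the
// root->u path is then the prefix sum up to tin[u].
struct PathCounter {
    int n, timer = 0;
    vector<vector<int>> g;
    vector<int> tin, tout;
    Fenwick fw;
    PathCounter(int n) : n(n), g(n), tin(n), tout(n), fw(2 * n) {}
    void addEdge(int u, int v) { g[u].push_back(v); g[v].push_back(u); }
    void dfs(int u, int p) {
        tin[u] = timer++;
        for (int v : g[u]) if (v != p) dfs(v, u);
        tout[u] = timer++;
    }
    // toggle vertex u black (delta = +1) or back to white (delta = -1)
    void mark(int u, int delta) { fw.add(tin[u], delta); fw.add(tout[u], -delta); }
    // number of currently marked vertices on the path root..u (inclusive)
    int blacksToRoot(int u) { return fw.sum(tin[u]); }
};
```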

Thank you, I understood this part. I will try implementing it on my own. Thanks a lot!

Do you mean HLD approach?

No, the one that you submitted here 262667007.

Oh, I just implemented the approach of the editorial. Looking at the 4 "if"s at the end of the while loop, you will see the four cases exactly as the editorial said:

If you need the details of the variables I used: $$$cnt1$$$ and $$$cnt3$$$ are the numbers of black nodes which have $$$1$$$ and $$$3$$$ (or more) black children respectively. $$$st$$$ is the set of black vertices which have $$$2$$$ black children. white_cnt is the number of black vertices which have a white parent. I used a $$$cnt$$$ array to keep track of the number of black children of each vertex $$$i$$$, denoted by $$$cnt_i$$$. For each query, I need to update all those variables before checking the $$$4$$$ conditions mentioned.

This ensures there's no black vertex having 3 or more black children

This ensures there is at most 1 black vertex with 2 black children.

Exactly 1 black vertex has a white parent.

If there exists a black vertex with 2 black children, then its parent must be white.

PS: I just realized that $$$cnt1$$$ is pretty useless here, never mind it :v.
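
For reference, the four checks could be wired together roughly like this (a hedged sketch reusing the commenter's variable names; the per-query bookkeeping that maintains them is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// color[v] == 1 means black; par[v] is the parent (a dummy white root can be
// used above vertex 1). Names mirror the comment above and are illustrative.
bool isChain(int cnt3, const set<int>& st, int white_cnt,
             const vector<int>& color, const vector<int>& par) {
    if (cnt3 > 0) return false;       // no black vertex with >= 3 black children
    if (st.size() > 1) return false;  // at most one black vertex with 2 black children
    if (white_cnt != 1) return false; // exactly one black vertex has a white parent
    // the vertex with 2 black children (if any) must have a white parent
    if (!st.empty() && color[par[*st.begin()]] == 1) return false;
    return true;
}
```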

Thank you for the explanation. I had some doubts:

How do you handle the case with the root being a part of the chain? In that case there won't be any white parent. I put a dummy white node above the root, but I am not sure if that is the best way to go about it.

Also, could you please explain how the parent is being updated per query.

I did read the editorial but the implementation there is going on top my head.

I've already handled the case when the root node is a part of the chain. I use the dummy node $$$0$$$ as a white node and connect it to node $$$1$$$ (which is the root of the tree). Focus on the part before I used DFS, and that's it:

For the updating part, I recommend you do it yourself for better understanding, as different people have their own coding style. However, I'll still explain mine. It is similar to updating the sum of an array. Let's say I have an array $$$a$$$ with sum $$$s$$$. If I need to update $$$a_i = x$$$, then $$$s$$$ will be updated like:

which is pretty much similar to my implementation. It was like: remove the previous state, then add the new state to the variable.

PS: If you have any more questions, just DM me, as the comment section is pretty "flowy" right now.

Thank you, I will try thinking and coding it again myself. Thanks a lot for all the explanations, really kind of you.

And it is much simpler than thinking about the number of black chains of three vertices and debugging the counter's update function lol 262610439

I liked this contest very much, the best one I did this year I guess. F then E are my favourite problems in the contest.

Problem A,B,C Video Editorial : YOUTUBE VIDEO LINK --Click Here

Problem G has a linear solution. Or maybe it doesn't.

nice problems, E is very interesting, the tutorial solution is way easier than mine. any source to study how to solve problems like F? it took me much time to understand it, and I didn't even get close to a solution

What does xor_two represent in the tutorial solution? Could you please explain the editorial code of problem E?

there is a case for the chain, when

there is 1 vertex with 2 children

there are 2 vertices with 0 children

the rest must have 1 child

but you also have to check that the vertex having 2 children has a white parent.

So if you maintain the xor of all the vertices having 2 children, then when their count is 1, the xor will be exactly the vertex which has 2 children, so you can check whether it has a white parent or not.
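
The bookkeeping can be sketched in a few lines (names are mine, not the editorial's):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Maintain the count and xor of all vertices that currently have exactly
// 2 black children; when the count is 1, the xor equals that unique vertex,
// so its parent's colour can be checked in O(1).
struct TwoChildTracker {
    int cnt2 = 0, xor_two = 0;
    void add(int v)    { cnt2++; xor_two ^= v; } // v gains its 2nd black child
    void remove(int v) { cnt2--; xor_two ^= v; } // v drops below 2 black children
    int unique_vertex() const { return xor_two; } // valid only when cnt2 == 1
};
```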

problem D: idk why it works but it does

op1 = 2 * (n - 1) - max_dph_a

op2 = 2 * (n - 1) - max_dph_b + dist_bwn_a_b % 2

ans = min(op1, op2) + dist_bwn_a_b

Ok so I was working on this problem for a while and I think this is why your solution works:

you calculate the minimum steps required to paint all the nodes from red_start, as red doesn't depend on blue. you also calculate the minimum steps required to travel all the nodes from blue.

if red has a better start, blue simply goes to red and then follows its path, therefore: ans = (min_steps_red) + dist_ab;

if blue has a better start, then red and blue should try to meet each other on their path; if dist_ab is even, they can synchronize perfectly, otherwise blue stays one step behind. ans = (min_steps_blue + dist_ab % 2) + dist_ab;

You calculate your answer from the minimum of the two above approaches.

1975C - Chamo and Mocha's Array

Detailed Video Editorial

English Language => https://www.youtube.com/watch?v=J-xsU37pJoI

Hindi Language => https://www.youtube.com/watch?v=g6720vEw8r4

Can you explain the editorial of F more clearly? Firstly, what does the array $$$f[n][s]$$$ mean, and why such transitions on it?

I don't understand the editorial of F either. It would be nice if someone explained it in more detail.

It's some sort of SOS dp. f[n][s] is "for each set bit b of f[n][s] it means that after considering all masks of the first n bits you have b 1's to spare in the remaining bits".

Basically, when you use a 1 bit as 1, you consume 1 bit, which is the reason for the right shift. When you use a 1 as 0, or a 0 as anything, no bit is consumed.

I guess in fact they are enumerating $$$i$$$ from $$$n-1$$$ to $$$0$$$ in the sample implementation.

Crux of the idea is:

We are adding numbers from n-1 down to 0.

N = 6

Our current set is `***101`, i.e. we have added 3 and 5 to the set; we are yet to decide for 2, 1, 0. Let's say we have -

1. f(100101) allows set sizes 0,1,3,4

2. f(100001) allows set sizes 0,3,5

The idea is we can merge all f(100***) into just one constraint.

The intersection of 100101 and ***101 is a set of size 2. So we can say that among the remaining bits we are allowed to include -2, -1, 1, or 2 numbers. We can remove the negatives and say we must include 1 or 2 numbers only.

The intersection of 100001 and ***101 is a set of size 1. So we can say that among the remaining bits we are allowed to include -1, 2, or 4 numbers. We can remove the negative and say we must include 2 or 4 numbers only.

In fact, we can merge these two constraints and say we must include exactly 2 numbers among 0, 1, 2.

This way, when we have chosen `X` numbers so far, for the remaining numbers we have $$$2^{N-X}$$$ distinct combinations, and we can reduce the original $$$2^N$$$ constraints to $$$2^{N-X}$$$ distinct constraints. Basically, after deciding whether we should include `n-1` in our set or not, we can reduce the original $$$2^N$$$ constraints to $$$2^{N-1}$$$ constraints, by merging $$$s_0s_1s_2s_3s_40$$$ with $$$s_0s_1s_2s_3s_41$$$.

If we include $$$N-1$$$, we decrease all the allowed sizes by 1 in $$$s_0s_1s_2s_3s_41$$$ and then merge it with $$$s_0s_1s_2s_3s_40$$$.

If we do not include $$$N-1$$$, we directly merge $$$s_0s_1s_2s_3s_41$$$ with $$$s_0s_1s_2s_3s_40$$$.

Now, if you read the editorial and the model solution again, it should make sense what they are doing.

Oh, thanks. Now it makes sense. Looks very easy for problem F :)

So, why should `f[n][0]` be odd for s to be included in the answer? Also, why do we AND with `(f[i][m | t] >> 1)` in the not-take case? Why are we right-shifting by 1 (dividing by 2)? Thank you.

Can someone explain the binary search approach for problem C?

to determine if the answer is at least x, you check whether there are two indices i, j (i != j) such that the difference between i and j is smaller than 3 and a[i] >= x and a[j] >= x. you can see my submission for my implementation.

The idea is similar to what is explained in the GFG link below.

LINK — https://www.geeksforgeeks.org/find-largest-median-of-a-sub-array-with-length-at-least-k/

IDEA : we will apply binary search on the set of given values.

CHECK(_) : "check(mid)" validates whether there exists a subarray of length >= 2 having median mid or not.

LOGIC for CHECK(_) : we will build an array, pre[i] = 1 if a[i] >= mid, otherwise -1. After building the prefix sums, find the max_subarray_sum; if it's positive then the median is possible, otherwise not.

SUBMISSION LINK : https://mirror.codeforces.com/contest/1975/submission/262866407

`if it's positive then the median is possible, otherwise not.`

how do you make sure this always works?

is there any proof or logic behind it?

-> A subarray of a′ (the transformed array) having a non-negative sum means that the number of elements >= mid is at least the number of elements < mid in that subarray, and so if we sort the subarray and find the median it will come out to be >= mid.

CONDITION :- -> For a subarray to have mid as its median, at least half (or more) of its elements must be ≥mid.

-> Hence, if a subarray in a′ has a sum ≥0, then the subarray has at least as many 1's as −1's, implying mid can be a median.

I hope this makes sense.

yes indeed it makes sense, i tried to implement the same idea but i didn't figure out how!

thank you

too unfortunate that I didn't realize that the minimum number of steps to reach every vertex from a certain vertex in a tree is a classical problem: taking the furthest distance d, the answer is 2 * (n - 1) - d. I've tried to implement that from scratch but couldn't make it...😭

Do you have any resources for this classic problem?

You can implement it in two ways:

if you want to use the formula, get the maximum distance by simply using BFS/DFS.

Otherwise, to implement everything from scratch, here's what you do: do a DFS that returns the maximum depth from a node; for each node add twice the depth of each neighbor to a sum, keep track of the maximum depth among the neighbors, then simply subtract the maximum depth from the sum.

I did it like this. Sorry if my implementation is a bit naive, I am still new to graphs and trees
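
The formula variant can be sketched as follows (a hypothetical `minStepsToVisitAll`, assuming a connected tree given as an adjacency list): every edge is walked twice except those on the path to the deepest vertex, hence $$$2(n-1) - \text{maxDepth}$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of moves to visit every vertex of a tree starting at root,
// with no requirement to return: 2*(n-1) minus the depth of the deepest vertex.
int minStepsToVisitAll(const vector<vector<int>>& g, int root) {
    int n = g.size(), maxDepth = 0;
    vector<int> depth(n, -1), st = {root};
    depth[root] = 0;
    while (!st.empty()) {                       // iterative DFS computing depths
        int u = st.back(); st.pop_back();
        maxDepth = max(maxDepth, depth[u]);
        for (int v : g[u])
            if (depth[v] == -1) { depth[v] = depth[u] + 1; st.push_back(v); }
    }
    return 2 * (n - 1) - maxDepth;
}
```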

I was discussing with adamant about some different approaches to do the Wildcard Matching in problem G. I had some ideas to hack him, because his solution was randomized and had a pretty bad probability of failure. The basic idea is the following case:

Here, whenever the characters a and b accidentally match, the algorithm fails, because afterwards it can match all the a's with the wildcards. So with this you can build testcases that have failure probability of $$$(1 - \text{failure at one position})^{10^6}$$$. Unfortunately people do not always reset their randomness every time they do a wildcard matching, so then this case is pretty useless. But by putting multiple different kinds of strings in the places of a and b, and for example constructing similar testcases with words of $$$2$$$ and $$$3$$$ characters, you can also hack fishy solutions which don't reset their randomness.

This hack hacked some solution in Polygon, so it gave an Unexpected Verdict :(.

can someone explain why this logic doesn't work for question 3: why can't we just check the medians of all subarrays of length 2 and return the maximum among them?

Consider this case 1 1 3 2 3 2 3

Deleted. Irrelevant

For G, I am wondering why this submission is getting a WA but this one gets an AC.

I tried to use Atomic-Jellyfish's template cuz the MOD = 2013265921 seemed cool, but is the way I copied it somewhat wrong? I copied just as it is except that I made p[] and w[] for every 2^i beforehand.

a very good round with some traps; the guess for C looked so trivial but it wasn't. Also, I concluded the correct algo for D: when the distance between A and B is even it is quite simple, we just reach a common vertex and do an euler tour together from there, but I couldn't conclude that this also works when the distance is odd. Can someone explain why it works with an odd distance between A and B?

To find the best vertex where they can meet, I did a modified BFS from both vertices until I visit a vertex by B which was also visited by A; have a look at the getVertex function here 262674537

Editorial for 1975D - Paint the Tree is not clear to me.

Will anyone explain it to me in more detail?

Firstly, it's clear that Pb's movements are useless until it is moving on a red vertex. Once it reaches a red vertex, it's easy to see that we can make Pa move ahead of Pb (or with Pb), so after we reach a red vertex we just need to traverse the whole tree to make it all blue. We can traverse the whole tree and come back to where we were in 2*(n-1) moves, but once we color the last vertex blue we don't need to go back, so if the distance between the starting node and that last-colored vertex is d, we will need 2*(n-1)-d moves. Obviously we need to maximize d, so it'll be the furthest vertex from our starting position. The last thing is that we need to reach a red vertex ASAP, so we calculate the distance between Pa and Pb and divide it by 2, rounded up.

Thanks a lot.

In problem B, I retrieve the minimum even and minimum odd number of array a. Then I iterate over array a and check whether a_i is divisible by the minimum even or the minimum odd number. So what is the problem with this approach?

You can consider the case where the array contains only odd numbers, for example `3 5`.

Thanks.

we can simplify E further by considering another array pc[v] = xor (c[v] and costs of its children). Now, c and pc form a bijection. And a chain of black vertices is just 4 black vertices in pc that satisfy some conditions. 262582001

Please, someone tell me what is wrong with my solution for problem D: my submission (it's failing on test 3).

Or please provide a test case where it may fail.

hack

Thank you!!

There is an issue in the algorithm for finding the middle element on the path between a and b.

Can someone please explain the solution of Problem D in detail with "proof"? Thanks in advance

The key point is to notice that no matter how the vertices are painted red before they meet for the first time, the minimum cost will be $$$2*(n-1)-maxdep$$$ if we choose the vertex first painted blue as the root. The proof is that $$$P_A$$$ can choose to go to any subtree of the root before they meet, ensuring $$$2*(n-1)-maxdep$$$ can be reached (it's obvious that it's the minimum). And it is also obvious that delaying the time of the meeting will decrease $$$maxdep$$$ by at most 1, as the new root can only be a child of the original root.

my submission: can someone explain why I am getting WA? please give me one test case.

3

100 1 100

Thank you so much, I understand.

Probably overkill, but E can be solved using Euler Tour and Seg Tree. Code

Please explain how we find the first vertex that will be painted blue in Problem D; the editorial is not clear to me.

i did dfs (expanding out radially at each step) starting from both A and B, check out getVertex here 262674537

Using that after a node is colored blue (init), the number of moves would be $$$2(n-1) - d$$$, so the total number of moves would be $$$2(n-1)-d+\text{moves to reach color blue, init}$$$. To do this, we need the number of nodes in the unique path between A and B, and the first node of this path is colored blue. This can be found naively using DFS (https://mirror.codeforces.com/contest/1975/submission/272341781), but this exceeds the memory limit. Instead, we can find the distance of each node from A and B, respectively. Knowing that any path between A and B will include an arbitrary node C, we can iterate through these distances to find the minimum sum (distance from A to C + distance from B to C), and all of the nodes with this sum are on the path between A and B. We can easily take the node colored first from this subset, and compute the length of the longest path from this (https://mirror.codeforces.com/contest/1975/submission/272580770).
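
The distance-based path recovery described above can be sketched like this (function names are mine, not from the linked submissions):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> bfs(const vector<vector<int>>& g, int s) {
    vector<int> d(g.size(), -1);
    queue<int> q; q.push(s); d[s] = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : g[u]) if (d[v] == -1) { d[v] = d[u] + 1; q.push(v); }
    }
    return d;
}

// Vertices on the unique a-b path in a tree are exactly those c with
// distA[c] + distB[c] == distA[b]; this avoids storing the path explicitly.
vector<int> pathVertices(const vector<vector<int>>& g, int a, int b) {
    vector<int> da = bfs(g, a), db = bfs(g, b), path;
    for (int c = 0; c < (int)g.size(); c++)
        if (da[c] + db[c] == da[b]) path.push_back(c);
    return path;
}
```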

For problem C: why check for a length of 3? Initially I used a length of 2 and noticed that some tests failed; I then considered a length of 3. Can you help me?

did you find any answer ?

Now I do. Do you need my help?

I love the idea of problem B. You solved it in a beautiful way. Love you man :)

I was able to get the logic of C during the contest, but using the wrong data structure just killed me.

Please explain any approach to problem D (unable to understand the editorial). Also, if you were able to relate the problem to any classical problem (as mentioned in the editorial), then please provide a reference.

Problem C: why is this giving a wrong answer?

Just consider this test case: 4 3 2 1 5. You can try to find the answer, but it can't be more than 3, while your approach will give the answer as 4, which is wrong. Messing up the array will give wrong answers.

Got it. Thanks brother..

Why is my B wrong?

https://mirror.codeforces.com/contest/1975/submission/262576631

The 2nd number B must be the smallest number not divisible by A, not the smallest number not equal to A.

Because then B can divide those numbers that A can't.

Shit, I was so close. I didn't notice this. Btw, thanks!

Example: 2 4 5. You must pick 2 and 5, but your code picks 2 and 4.

Can anybody let me know what is wrong with my logic in Problem C:

My observation: for any array of size n, you can convert the entire array to any element that is not at the last position. So the answer must be the highest element in the subarray of size n-1 that excludes the last element. What is wrong with my observation?

For example, take an array [a, b, c, d, e, f]: with the given operations, you can convert all elements to a, b, c, d or e, but not f. Therefore the answer is the highest among a, b, c, d and e.

Edit: nvm I got it

shitty mistake

Hey, what's the mistake here? I tried the same but got WA.

If you are excluding values by position then fine, but don't do it by value, since the array can contain duplicates. For example, if

`4 3 2 5 5`

here the answer is

`5`

but if you discard by comparing it with the first and last element it will cause issues, and if you calculate the median ignoring the right 5 then you might get 3 as your answer.

My approach was to find the biggest element in [0, n-2], for the same reason kyojin27 mentioned. For the case you mentioned, the answer is still 5 with this approach, right? Could you give a test case where this fails?

Try doing it on

`4 3 2 1 5`

Here, according to your logic, the answer should be 4, but the answer is 3, since no matter how you form an array, 4 can never become the median.

Can't I select $$$(1,2)$$$ and then have the $$$\lfloor\frac{2+1}{2}\rfloor = 1$$$st element as the median? Then similarly select $$$(2,3)$$$, $$$(3,4)$$$, ... to make the entire array 4.

Bro, you misread the question, I guess: when we find the median, we always sort the array first, so for (1,2) the median will be 1, then for (1,3) it will be 1, and then for (1,4) it will be 1 again.

Thanks for the clarification. I was also stuck in this same kind of logic.

Ah sorry, stupid mistake, Thanks a lot :)

In question C, I think the answer should be the max of what is present between the first and second-to-last elements, because then we can change every element to that max, first taking a subarray of length 2 and then 3...

Correct me if I am wrong.

262692303 Can anyone tell me why this gets MLE, and any possible solution to remove it?

Problem C was very similar to a past problem, 1349B - Orac and Medians. It is exactly the same operation on a subarray, only it asks a slightly different question, but both problems can easily be reduced to each other and the same observations can be used.

What would be the rating of problem D?

In G, can somebody explain why I can take only |2*ti| of s and match it with ti?

Can problem D be solved using centroid of a tree? Please explain why it will/will not work!

Help needed. Can anyone explain my mistake in problem A? My approach was as follows: from the beginning, find the first element that breaks the non-decreasing sequence, and starting from there, again find an element that breaks the current non-decreasing sequence. If there is none (i == n), it's good. Also, make sure that the last element is actually less than or equal to the first.

I'm sure the bug is stupid, but I can't see it. The submission is here: 262528960

The code of your submission fails on cases like this one.

Your code gives YES, though it should be NO.

But when I use your snippet 262716265 it works.

Thanks. I didn't realize that the two codes were different, and I thought the change had no impact, so I didn't bother submitting the updated (and correct) one. My bad.

The presentation of problem D was absolutely mind-blowing. The concept of using the formula

`2 * (n - 1) - d`

to traverse a tree using DFS was cleverly presented and quite tricky. I don't think beginner programmers could have easily guessed the approach required for this problem!

The hint and solution provided for problem D were excellent and are highly recommended to go through. The contest questions overall were amazing!

Can anyone explain why a subarray of length 3 always suffices to provide the correct solution for C?

Think about determining whether any subarray of an array has a median $$$>x$$$. Clearly, the subarray needs to have a majority of $$$>x$$$ values (i.e. 3 values greater than $$$x$$$ out of 5, or 6 out of 9, 51 out of 100, etc.). So, for every $$$<x$$$ value, there has to be an equal (in fact, greater) number of $$$>x$$$ values.

Now, with that observation in mind, if the array starts to look like $$$x, <x, <x, <x,...$$$ (which is 1 value $$$x$$$ and 3 values less than $$$x$$$), then you'll need 3 more values $$$\geq x$$$ to have a 4-out-of-7 majority. So it's better to ignore the prefix (so that we don't have to carry the "burden" of the 3 $$$<x$$$ values) and start fresh from a later position. If the answer exists (for the previous example, if 3 more $$$>x$$$ values are indeed coming ahead), you'll find them later anyway.

Thus, the moment we see an $$$x$$$ followed by 2 values $$$<x$$$, we discard that prefix and start again from the next position. This is why every subarray of length $$$3$$$ has a median that is a potential answer (and we take the maximum of them). Hopefully I was clear enough.

Thank you for the contest; I really learnt something from E.

what?

Can someone explain the solution to Problem F? The tutorial mentions "constraints" and sets T1 and T2 without defining anything.

Problem C solution is not clear to me. Can anyone explain?

Also, the approach I used was not the right one, but I still don't know why. The median is the middle value of a non-decreasing sequence, and the middle position is floor((m+1)/2). Now, for 2 elements like [5, 4] the answer would be 4. For [1, 3, 5] the answer would be 3. So, for any array a, if I just sort it in non-decreasing order, can I say that the maximum median value possible after performing the operations and making every value the same is simply a[n-2]? Why is it wrong? The second-to-last value in a non-decreasing array with n >= 2 is always a valid candidate to be the median, as [a1, a2] has median a1 when sorted. Now, I just need to pick l, r in such a way that a[n-2] stays the median for the subarray a[l...r]. What is the wrong logic I am using here? Can anybody explain, if possible with a test case where this logic does not work?

Edit: Got the answer of my 2nd query.

Here's an alternative solution for E which I found easier to understand than the model.

Let's root the tree arbitrarily. If the forest formed by the black vertices has 1 leaf, it consists of a single chain leading upwards from that leaf. If there are 2 leaves, each leaf similarly forms a chain leading upward, except now we need to check if they meet and stop at their LCA.

This is as simple as checking if the highest black vertex has 2 black children. The 2 black children tell us it's the LCA we're looking for, and we know that each of them must lead directly to a leaf, because it would mean there are more than 2 leaves otherwise.

problem C hint 3

then there must be a subarray of length 3 with a median of $$$y$$$ ($$$y \geq x$$$).

This is not clear. Can someone please explain why length 3 is enough?

I can't prove it to you mathematically, but if a number cannot be the median of a 3-sized subarray, it never becomes the median of any sized subarray. Suppose [a1, a2, a3] is a sorted 3-sized subarray of {[a1, a2, a3], a4, a5, a6}, and suppose I want to make a3 the median of some subarray. Now if a4 >= a3, a3 isn't the median of the 4-sized subarray; if a4 < a3, then a3 becomes the max of the 4-sized subarray and hence is still not the median.

So do you see the problem? You can simulate this adding of elements to the subarray and check all possibilities; you will never be able to make a3 a median.

Problem G is very beautiful. Both the problem and the solution are.

`int j=(i%n)+1;`

Why has this been done in the first solution? What's the point?

Can someone please explain hints 2 & 3 for problem C? I couldn't connect them, actually.

What a nonsense question C is. Clearly we can do the same by taking two numbers at a time, converting the larger number into the smaller one, and then expanding to make all elements the same with subarrays of size 3.

Can anyone tell me why my solution 263565067 for problem A is wrong?

In problem $$$D$$$, you can try brute force over all the nodes to determine the best answer.

Let $$$sumTree[i]$$$ be the minimum number of steps needed to traverse the whole tree when $$$i$$$ is taken as the root. As we know, $$$sumTree[i] = 2(N - 1) - maxHeight(i)$$$. Now the problem is to find $$$maxHeight$$$ for each node. This value always equals the max distance from node $$$i$$$ to one of the endpoints of a tree diameter (as they are the farthest nodes apart). So now we can calculate $$$sumTree$$$ for any node in just $$$O(1)$$$ by determining the diameter and the distances from its endpoints to each node in the tree (which can simply be done with $$$3$$$ DFS).

Now, to calculate the $$$answer$$$, we just need the distances from our start nodes $$$P_A$$$ and $$$P_B$$$. For each node $$$i$$$, $$$ans[i] = distFromB[i] + sumTree[i]$$$ (don't forget to check that person $$$A$$$ has reached node $$$i$$$ before person $$$B$$$). The $$$answer$$$ will be the minimum over all nodes. This is the solution link.

In the tutorial of problem 1975C - Chamo and Mocha's Array, I couldn't understand the $$$3^{rd}$$$ hint.

Do you understand the 2nd hint?

Please explain. I didn't understand the 2nd and 3rd hints.

int d;

This is for problem D. Is this the correct code to find the number of steps taken before the first node is turned blue? d1 and d2 hold the path lengths from A and B to the i-th node, respectively.

In this question, i.e. part C, I tried to use a sliding window of length 2 during the contest, but the editorial uses a sliding window of length 3. What is the problem with a sliding window of length 2 in this question?

I have a simple proof for length 3, but not for 2. In fact, I can provide a counterexample.

Consider the array 5 1 4 2 3. The median of the array is 3, but the medians of the sliding windows of length 2 are 1 1 2 2. Since they are all smaller than 3, length 2 will not work.

Proof for length 3 (by contradiction): consider an array a1 a2 a3 ... an and let ak be the median. Let the number of elements less than ak be l and the number greater be r. Clearly l < 2*r. Now suppose the median of every subarray of length 3 is less than ak. Then every length-3 subarray that contains a number greater than ak must contain 2 numbers less than ak. If we sum over all those subarrays (notice they are disjoint), we get l >= 2*r. Thus, a contradiction.

I am also having the same issue. I don't understand why we can't take a subarray of length 2.

Check 4 1 3 2: the answer is 3, but you'd get 2.

For problem I, it looks like the second solution naturally extends to arbitrary $$$c_i$$$ as long as division by zero does not occur.

Here's an argument that division by zero does not occur when the $$$c_i$$$ are constrained to be nondecreasing:

So in either case $$$1-C\not\equiv 0\mod{10^9+7}$$$.

I wonder if there are other examples of nontrivial constraints on $$$c_i$$$ (other than $$$c_i$$$ nondecreasing) that guarantee that division by zero does not occur.

It is interesting that the original problem was actually to choose any card to play, maximizing the probability of emptying the draw pile.

A large amount of data verification showed that this problem is equivalent to the final problem. However, until the day before the start of the competition, no one was able to provide a rigorous proof, so the final problem was given instead.

Can anyone explain the proof in problem D?

Why do we need to sort vector b in problem B?

Otherwise it won't be able to find the smallest element of b, i.e. the smallest element not divisible by the smallest element of a.

I had some trouble with F and couldn't find a satisfactory explanation, so I decided to write one including motivation.

So first, notice that the structure of the problem isn't friendly. We're given constraints on things that heavily depend on one another in a non-obvious way. We can try merging conditions together without fixing anything, but the relationships are complex. This means that naturally, we should try fixing some elements in $$$S$$$.

Say we have fixed $$$x$$$ elements to definitively be/not be in $$$S, $$$ and denote the set of their indices as $$$X.$$$ Let's try to figure out how we can use the fact that we have fixed $$$x$$$ elements in order to reduce redundancies in the constraints.

We want to try to infer information about the non-decided elements from the decided ones. Consider one of the $$$2^{N-x}$$$ possible subsets of the non-decided elements, and call it $$$R$$$. Consider every $$$T$$$ that is the union of $$$R$$$ and some subset of $$$X.$$$ Notice that since we know $$$V_{T}$$$ and we know how many elements in $$$X$$$ are present in $$$S, $$$ we can immediately update $$$V_{R}$$$ to reflect this known information. Iterating over all possible subsets we can keep updating $$$V_{R}$$$ again and again. Now, we don't have to consider every subset of $$$X, $$$ we can consider $$$V_{R}$$$ as one constraint. This is what the editorial means when it says that there are $$$2^{N-x}$$$ constraints once we have fixed $$$x$$$ elements: this is because there is one constraint associated with every $$$R.$$$

Now, the problem is that computing the constraint for a fixed $$$R$$$ is expensive: it takes $$$\mathcal{O}(2^x)$$$ operations since we have to merge all sets of the form $$$R + \text{subset of X}.$$$ So instead, as we build up the set $$$X, $$$ we can try repeatedly merging constraints from the constraint set. Say we were trying to fix a new element $$$\text{pos}$$$ to be either in $$$S$$$ or out of $$$S, $$$ and let's say we have the constraint set for every $$$R \in [0, 1, 2, \dots, N-1] - X.$$$ How will the constraint sets change upon declaring $$$\text{pos}$$$ to either be in $$$S$$$ or not be in $$$S$$$?

For every set $$$R' \in [0, 1, 2, \dots, N-1] - [\text{pos}] - X, $$$ we can "merge" $$$V_{R'}$$$ and $$$V_{R' + \text{pos}}$$$ together depending on whether we declare $$$\text{pos}$$$ to be in $$$S$$$ or not. Now, we can simply DFS and find the potential answers by checking for contradictions at the end of the process.

Submission: 267928405

In problem D's solution, it is said that it is based on a classical problem. Can anyone please provide more information about this?
