Thanks for participating. We hope you liked the problems and enjoyed the round.
2226A - Disturbing Distribution
Idea: wakanda-forever
Preparation: wakanda-forever
Solution: wakanda-forever
For any two positive integers $$$x$$$ and $$$y$$$, observe that $$$x \times y \ge x + y$$$ when $$$x \gt 1$$$ and $$$y \gt 1$$$. Thus, it is never optimal to choose two elements that are more than $$$1$$$ in a single operation. Also, it is optimal to choose as many $$$1$$$'s as possible in a single operation.
Hence, we opt to choose subsequences of the form $$$[1, 1, \ldots, x]$$$ and delete them. Note that the cost of this operation is equal to $$$x$$$. Therefore, the answer is just the sum of the elements that are greater than $$$1$$$. However, if the last element is $$$1$$$, it cannot be combined with any element greater than $$$1$$$, so it adds $$$1$$$ to the answer.
Time Complexity: $$$\mathcal{O}(n)$$$
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin >> t;
    while(t--){
        int n;
        cin >> n;
        vector<int> a(n);
        for(int i = 0; i < n; i++) cin >> a[i];
        long long ans = 0;  // sum of elements greater than 1 can be large
        for(int i = 0; i < n; i++) if(a[i] > 1) ans += a[i];
        if(a.back() == 1) ans++;  // a trailing 1 cannot be merged with a larger element
        cout << ans << '\n';
    }
    return 0;
}
2226B - Everything Everywhere
Idea: wakanda-forever
Preparation: wakanda-forever
Solution: wakanda-forever
For any array $$$a$$$, let $$$\text{g}$$$ denote the $$$\text{gcd}$$$ of all the elements in the array $$$a$$$. Note that every element in $$$a$$$ is a multiple of $$$\text{g}$$$. Let the maximum element be equal to $$$M \cdot \text{g}$$$ and the minimum element be equal to $$$m \cdot \text{g}$$$.
If array $$$a$$$ is good, the given condition simplifies to $$$M - m = 1$$$. This implies that $$$a$$$ must contain exactly two distinct elements, and they should be of the form $$$m \cdot \text{g}$$$ and $$$(m+1) \cdot \text{g}$$$.
Note that you are given a permutation rather than an arbitrary array. Since all the elements in a permutation are distinct, any subarray whose length is not equal to $$$2$$$ is guaranteed not to be good. Thus, we only need to check the subarrays of length exactly $$$2$$$.
This is trivial. For every $$$i$$$, check whether the array $$$[p_i, p_{i+1}]$$$ satisfies the given condition.
Time Complexity: $$$\mathcal{O}(n)$$$
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin >> t;
    while(t--){
        int n;
        cin >> n;
        vector<int> a(n);
        for(int i = 0; i < n; i++) cin >> a[i];
        int ans = 0;
        // the pair (a[i], a[i+1]) is good iff their difference divides a[i],
        // i.e. gcd(a[i], a[i+1]) == |a[i] - a[i+1]| (works for either sign of the difference)
        for(int i = 0; i < n - 1; i++)
            if(a[i] % (a[i] - a[i + 1]) == 0) ans++;
        cout << ans << '\n';
    }
    return 0;
}
2226C - Mental Monumental (Easy Version)
Idea: wakanda-forever
Preparation: wakanda-forever
Solution: wakanda-forever
For a fixed value of $$$x$$$ and varying $$$y$$$, let us analyze what values $$$x \, \bmod \, y$$$ can attain.
To start with, if $$$y \gt x$$$, $$$x \, \bmod \, y = x$$$. Otherwise, if $$$y \le x$$$, $$$x \, \bmod \, y$$$ will always be less than $$$\frac{x}{2}$$$. We can prove this by breaking down the condition $$$y \le x$$$ into two cases:
- $$$y \le \frac{x}{2}$$$: Since the remainder must be less than the divisor, we have $$$x \, \bmod \, y \lt y \le \frac{x}{2}$$$. Thus, $$$x \, \bmod \, y \lt \frac{x}{2}$$$.
- $$$\frac{x}{2} \lt y \le x$$$: In this case, note that $$$x \, \bmod \, y = x - y \lt x - \frac{x}{2} = \frac{x}{2}$$$. Thus, $$$x \, \bmod \, y \lt \frac{x}{2}$$$.
And finally, we note that $$$x \, \bmod \, y$$$ can be equal to $$$x$$$ (choose any $$$y$$$ greater than $$$x$$$) or it can attain any value $$$z \lt \frac{x}{2}$$$ (choose $$$y = x - z$$$).
Now, let us try to solve the given problem. Let us call an integer $$$k$$$ good if we can attain all the values $$$0, 1, 2, \ldots, k-1$$$ in the array $$$a$$$ after performing the operation. Note that the answer for this problem is the maximum integer $$$k$$$, which is good. Furthermore, observe that if a particular value $$$k$$$ is good, then $$$k-1$$$ is also guaranteed to be good. This monotonic behavior naturally leads to the use of binary search to determine the maximum such $$$k$$$.
The only thing left now is "given an integer value $$$k$$$, how do we check if it is good?" To check if $$$k$$$ is good, we need to check whether we can attain all the values $$$0, 1, 2, \ldots, k-1$$$ in the array by performing an operation. To attain a value $$$x$$$ ($$$0 \le x \lt k$$$), we need at least one index $$$i$$$ that satisfies $$$a_i = x$$$ or $$$a_i \gt 2x$$$ (since $$$a_i \, \bmod \, b_i$$$ can be either $$$a_i$$$ or $$$ \lt \frac{a_i}{2}$$$). We must also make sure that every value is assigned to at most one position.
To handle this, let us process the required values in decreasing order, i.e., from $$$k-1$$$ to $$$0$$$. We maintain a multiset of all unused array elements. For a current value $$$x$$$, we do the following:
If $$$x$$$ itself is present in the multiset, then we remove one occurrence of $$$x$$$.
Otherwise, we need some unused value $$$a_i$$$ such that $$$a_i \gt 2x$$$. Any such value works: every element greater than $$$2x$$$ is also greater than $$$2x'$$$ for any smaller target $$$x'$$$, and it can never coincide with a smaller exact target (it exceeds $$$x$$$, while all remaining targets are below $$$x$$$). The implementation simply takes the largest.
If at some step neither of the above is possible, then the current value $$$x$$$ cannot be formed, and hence $$$k$$$ is not good.
The above strategy works because any value greater than $$$2x$$$ can always be reduced to $$$x$$$, and when both $$$x$$$ itself and some value $$$\gt 2x$$$ are available, it is optimal to use the exact match $$$x$$$, since larger elements are more flexible and may still be needed to form smaller values later.
Time Complexity: $$$\mathcal{O}(n \cdot \log^2 n)$$$
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin >> t;
    while(t--){
        int n;
        cin >> n;
        vector<int> a(n);
        for(int i = 0; i < n; i++) cin >> a[i];
        multiset<int> ms(a.begin(), a.end());
        // binary search for the smallest k that is NOT good; the answer is k - 1
        int l = 0, r = n + 1;
        while(l < r){
            int md = (l + r) / 2;
            bool is = true;
            vector<int> removed;
            for(int i = md - 1; i >= 0; i--){
                auto it = ms.find(i);
                if(it != ms.end()){
                    // exact match: use an element equal to i
                    removed.push_back(i);
                    ms.erase(it);
                }
                else{
                    // otherwise we need an unused element greater than 2*i; take the largest
                    int x = *ms.rbegin();
                    if(x < 2 * i + 1){
                        is = false;
                        break;
                    }
                    removed.push_back(x);
                    ms.erase(ms.find(x));
                }
            }
            for(auto& z : removed) ms.insert(z);  // restore the multiset for the next check
            if(is) l = md + 1;
            else r = md;
        }
        cout << l - 1 << '\n';
    }
    return 0;
}
2226D - Reserved Reversals
Idea: wakanda-forever
Preparation: wakanda-forever
Solution: wakanda-forever
Let us consider odd and even values separately. If odd values can be sorted within themselves, and even values can be sorted within themselves, then it is easy to see that the whole array can be sorted eventually (we can just apply adjacent reversals by choosing segments of length $$$2$$$ of which one is even and one is odd).
Now, let us consider even values separately. Let the subsequence of even values be $$$[e_1, e_2, \ldots, e_m]$$$. For each $$$i$$$ from $$$1$$$ to $$$m$$$, we aim to make the prefix up to index $$$i$$$ sorted. That is, we try to place $$$e_i$$$ in its correct position in the sorted order, assuming that the prefix up to index $$$i-1$$$ is already sorted.
Let the sorted prefix up to index $$$i-1$$$ be $$$[E_1, E_2, \ldots, E_{i-1}]$$$, where $$$E_1 \le E_2 \le \cdots \le E_{i-1}$$$. If $$$e_i \ge E_{i-1}$$$, we can directly extend the prefix and proceed to index $$$i+1$$$. Otherwise, if $$$e_i \lt E_{i-1}$$$, we must change the relative order of $$$e_i$$$ and $$$E_{i-1}$$$, so we must apply an operation involving both of them (otherwise, their relative order cannot change). To apply such an operation, we need at least one odd integer $$$z$$$ such that either $$$z \gt E_{i-1}$$$ or $$$z \lt e_i$$$. If no such $$$z$$$ exists, we can instantly conclude that the array $$$a$$$ cannot be sorted.
If we can find such $$$z$$$, it is sufficient, and we can use it to change the relative order of $$$e_i$$$ and $$$E_{i-1}$$$ without changing the relative order of any other pair of elements having the same parity in the array $$$a$$$. Proof is as follows:
Let the values $$$e_i$$$, $$$E_{i-1}$$$, $$$z$$$ appear in the array $$$a$$$ in the following manner: $$$[\ldots, E_{i-1}, \ldots, e_i, \ldots, z, \ldots]$$$. Note that there won't be any even value in between $$$E_{i-1}$$$ and $$$e_i$$$. So, let's move all the odd values between them to the left of $$$E_{i-1}$$$ by making adjacent reversals. There can be both even and odd values between $$$e_i$$$ and $$$z$$$; we propagate all the even values to the right of $$$z$$$ and all the odd values to the left of $$$E_{i-1}$$$ by making adjacent reversals. Now, the array $$$a$$$ looks like: $$$[\ldots, E_{i-1}, e_i, z, \ldots]$$$. Finally, choose the segment with all three elements and reverse it, we get the desired order: $$$[\ldots, z, e_i, E_{i-1}, \ldots]$$$.
We follow the same idea with $$$E_{i-2}, E_{i-3}, \ldots$$$ until $$$e_i$$$ reaches its correct position in the sorted order. So, we check the existence of such a $$$z$$$ for every inversion (easily done by storing the prefix maximum).
We follow the same process with odd values. To conclude, if we find any inversion within a parity and cannot find a valid $$$z$$$ of opposite parity, the answer is NO. Otherwise, the answer is YES.
Time Complexity: $$$\mathcal{O}(n)$$$.
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin >> t;
    while(t--){
        int n;
        cin >> n;
        vector<int> a(n);
        for(int i = 0; i < n; i++) cin >> a[i];
        vector<int> even, odd;
        for(auto& z : a){
            if(z % 2) odd.push_back(z);
            else even.push_back(z);
        }
        int even_min = 1e9, even_max = -1e9;
        int odd_min = 1e9, odd_max = -1e9;
        for(auto& z : even){
            even_min = min(even_min, z);
            even_max = max(even_max, z);
        }
        for(auto& z : odd){
            odd_min = min(odd_min, z);
            odd_max = max(odd_max, z);
        }
        bool is = true;
        if(even.size() > 1){
            int mx = even[0];  // prefix maximum of the even subsequence
            for(int i = 1; i < (int)even.size(); i++){
                if(mx > even[i]){
                    // inversion: we need an odd z with z < even[i] or z > mx
                    if(odd_min > even[i] && odd_max < mx){
                        is = false;
                        break;
                    }
                }
                else mx = even[i];
            }
        }
        if(odd.size() > 1){
            int mx = odd[0];  // prefix maximum of the odd subsequence
            for(int i = 1; i < (int)odd.size(); i++){
                if(mx > odd[i]){
                    // inversion: we need an even z with z < odd[i] or z > mx
                    if(even_min > odd[i] && even_max < mx){
                        is = false;
                        break;
                    }
                }
                else mx = odd[i];
            }
        }
        cout << (is ? "YES" : "NO") << '\n';
    }
    return 0;
}
2226E - Mental Monumental (Hard Version)
Huge thanks to chromate00 for suggesting this task.
Preparation: wakanda-forever
Solution: Proof_by_QED
First, read the tutorial for the easy version of the problem.
Let's call a number $$$m$$$ fixed if for some $$$i$$$, we have $$$a_i=m$$$, and we choose $$$b_i \gt m$$$ (so that $$$a_i \operatorname{mod} b_i = a_i$$$). Recall that from the easy version, when we want to determine whether an answer $$$k$$$ is possible to achieve, we should greedily fix as many numbers under $$$k$$$ as possible, and then check if assigning the other numbers so that $$$a_i \gt 2(a_i \operatorname{mod} b_i)$$$ is possible.
Our first useful observation for the hard version is that the answer is monotonic in terms of prefix. That is, the answer for the prefix of size $$$i$$$ must be at least the answer for the prefix of size $$$i-1$$$. This motivates us to use two pointers on the prefix and the answer. That is, for each prefix, increment our answer as much as possible.
Unfortunately, both updating to a new prefix and incrementing the answer by $$$1$$$ are hard. For example, suppose our current answer is $$$3$$$, and we are trying to increment it to $$$4$$$. It is possible that for some $$$a_i=4$$$, we have $$$b_i=3$$$ (so we get $$$a_i \operatorname{mod} b_i=1$$$). When we increment our answer to $$$4$$$, it becomes optimal to reassign $$$b_i=5$$$ so that $$$4$$$ is fixed. This creates a chain of updates of assignments, and maintaining all updates directly is too slow.
Therefore, we need a way to check whether an assignment is possible efficiently. Call the multiset of values in $$$a$$$ that are not used for fixing "offerers", and call the set of values of $$$a_i \operatorname{mod} b_i$$$ we need "receivers". Our job is to match each receiver to an offerer that is more than twice its value. Suppose we have already fixed our sets of receivers and offerers — how do we check whether a matching is possible? This is a classic result: loop through the receivers from smallest to largest and match each one to the smallest remaining offerer it can match to.
Let's look at it a different way. Consider each prefix of receivers, taken from largest to smallest. For it to be possible to match just the largest receiver (call it $$$x$$$), we need an offerer of at least $$$2x+1$$$. What about matching the two largest receivers (call them $$$x, y$$$ with $$$x \gt y$$$)? Then we need at least two offerers of at least $$$2y+1$$$, and the requirement for $$$x$$$ must also hold. Continuing this, for a prefix of $$$p$$$ receivers whose smallest element is $$$q$$$, we need at least $$$p$$$ offerers of at least $$$2q+1$$$, together with the requirements of all shorter prefixes.
Now, let's bring this set of requirements to a data structure. A segment tree is an appropriate choice. Define the segment tree such that the value at index $$$i$$$ is (the number of offerers that are at least $$$2i+1$$$) minus (the number of receivers that are at least $$$i$$$). To check whether the current answer is possible, we query the minimum in the segment tree; if it is nonnegative, then it is possible. Incrementing the prefix and the answer can both be described as range additions. Thus, the problem is solved in $$$O(n\log n)$$$ time.
Time Complexity: $$$\mathcal{O}(n \log n)$$$
#include <bits/stdc++.h>
#define int long long
using namespace std;

// segment tree with lazy range-add and range-min queries
struct segtree {
    const static int INF = 1e18;
    int l, r, v = 0, lz = 0;
    segtree *lc, *rc;
    segtree* getmem();
    segtree() : segtree(-1, -1) {};
    segtree(int l, int r) : l(l), r(r) {
        if (l == r) return;
        int m = (l + r) / 2;
        lc = getmem(); *lc = segtree(l, m);
        rc = getmem(); *rc = segtree(m + 1, r);
    }
    int getv() { return v + lz; }
    int op(int a, int b) { return min(a, b); }
    void upd(int i, int x) { add(i, i, x - q(i, i)); }   // point assignment
    void add(int ql, int qr, int x) {                    // range addition
        if (r < ql || qr < l) return;
        if (ql <= l && r <= qr) { lz += x; return; }
        lc->add(ql, qr, x); rc->add(ql, qr, x);
        v = op(lc->getv(), rc->getv());
    }
    int q(int ql, int qr) {                              // range minimum
        if (qr < l || r < ql) return INF;
        if (ql <= l && r <= qr) return getv();
        return op(lc->q(ql, qr), rc->q(ql, qr)) + lz;
    }
};
segtree mem[1000000]; int memsz = 0;
segtree* segtree::getmem() { return &mem[memsz++]; }

void solve() {
    int n; cin >> n;
    vector<int> v(n);
    unordered_map<int, int> m;
    segtree st(0, n + 2);
    int i = 0;                               // current answer
    st.upd(0, -1);
    for (int &r : v) {
        cin >> r;
        if (m[r] || r > i) {
            st.add(0, (r - 1) / 2, 1);       // r joins the offerers
        } else {
            st.upd(r, 1e9);                  // r becomes fixed
            st.add(0, r, 1);
        }
        m[r]++;
        while (1) {
            if (st.q(0, i) < 0) break;       // some prefix requirement fails
            i++;
            st.add(0, i, -1);                // i is now a required receiver
            if (m[i]) {
                st.upd(i, 1e9);              // fix i instead of offering it
                st.add(0, (i - 1) / 2, -1);
                st.add(0, i, 1);
            }
        }
        cout << i << " ";
    }
    cout << "\n";
}

int32_t main() {
    ios::sync_with_stdio(0); cin.tie(0);
    int t = 1; cin >> t;
    while (t--) solve();
}
2226F - Inversion Invasion
Idea: wuhudsm
Preparation: wuhudsm, wakanda-forever
Solution: wakanda-forever
The key observation lies in the symmetry
$$$\gcd(x, n) = \gcd(n - x, n).$$$
So for every value below $$$n$$$, replacing $$$x$$$ by $$$n-x$$$ keeps the $$$\gcd$$$ condition unchanged.
Now, let us focus on the values $$$1,2,\ldots,n-1$$$. If we take any valid arrangement of these values and replace every $$$x$$$ by $$$n-x$$$, then the arrangement is still valid. Also, for any pair of values $$$x, y \lt n$$$,
$$$x \lt y \iff n - x \gt n - y.$$$
So every inversion becomes a non-inversion, and every non-inversion becomes an inversion. There are exactly $$$\binom{n-1}{2}$$$ pairs among the values $$$1,2,\ldots,n-1$$$. Therefore, in each complementary pair of valid permutations, the total number of inversions coming from these values is exactly $$$\binom{n-1}{2}$$$. Hence, the average contribution is
$$$\frac{1}{2}\binom{n-1}{2}.$$$
This is the core trick.
Now, let us try to count the number of valid permutations of $$$1, 2, \ldots, n - 1$$$. For a divisor $$$d \mid n$$$, let $$$\text{cnt}_d$$$ denote the number of integers $$$v \in [1,n]$$$ with $$$\gcd(v,n)=d$$$. So if currently $$$\text{cur}_d$$$ positions require $$$\gcd$$$ value $$$d$$$, then the number of ways to assign values to those positions is
$$$\text{cnt}_d \cdot (\text{cnt}_d - 1) \cdots (\text{cnt}_d - \text{cur}_d + 1) = \frac{\text{cnt}_d!}{(\text{cnt}_d - \text{cur}_d)!}.$$$
Multiplying over all $$$d \lt n$$$ gives the number of ways to satisfy all nonzero constraints. Let this product be $$$B$$$. If for some $$$d$$$, we have $$$\text{cur}_d \gt \text{cnt}_d$$$, then there are no valid permutations.
Finally, we have to add the inversions contributed by $$$n$$$. If $$$n$$$ is placed at position $$$i$$$, then it contributes exactly $$$n-i$$$ inversions, since it is larger than every value to its right and smaller than none to its left.
So the whole answer is:
- the symmetric contribution of values $$$1,2,\ldots,n-1$$$,
- plus the position-based contribution of $$$n$$$.
The final formulas will be as follows.
Let $$$Z$$$ be the number of positions with $$$a_i=0$$$. Also define
We will have two cases depending on whether $$$n$$$ is fixed or not.
Case 1: one position is fixed to $$$n$$$
Suppose the position of $$$n$$$ is forced to be $$$pos_n$$$. Then the number of valid permutations is
The total inversion sum is
Case 2: no position is fixed to $$$n$$$
Then $$$n$$$ can be placed in any zero position. For a fixed position of $$$n$$$, the remaining values can be arranged in
ways.
So the total inversion sum is
Time Complexity: $$$\mathcal{O}(n \log n + q)$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const ll mod = 998'244'353;

ll power(ll x, ll y, ll z){
    ll res = 1;
    while(y > 0){
        if(y % 2) res = (res * x) % z;
        y /= 2;
        if(y) x = (x * x) % z;
    }
    return res % z;
}

#define MX 2'000'007
ll fac[MX + 1], numInv[MX + 1], facNumInv[MX + 1];

// factorials, modular inverses, and inverse factorials modulo z
void compfac(ll z){
    facNumInv[0] = facNumInv[1] = numInv[0] = numInv[1] = fac[0] = 1;
    for(ll i = 2; i <= MX; i++) numInv[i] = numInv[z % i] * (z - z / i) % z;
    for(ll i = 2; i <= MX; i++) facNumInv[i] = (facNumInv[i - 1] * numInv[i]) % z;
    for(ll i = 1; i <= MX; i++) fac[i] = (fac[i - 1] * i) % z;
}

// falling factorial n! / (n - r)! mod z
ll nPrmod(ll n, ll r, ll z){
    if(r > n) return 0;
    return (fac[n] * facNumInv[n - r]) % z;
}

int main(){
    ios_base::sync_with_stdio(false); cin.tie(NULL);
    int t;
    cin >> t;
    compfac(mod);
    while(t--){
        ll n, q;
        cin >> n >> q;
        // val = (1/2) * C(n-1, 2): average inversion count among the values 1..n-1
        ll val = (n - 2) * (n - 1) / 2 % mod;
        val = val * power(2, mod - 2, mod) % mod;
        vector<ll> cnt(n + 1, 0);
        for(ll i = 1; i <= n; i++) cnt[gcd(i, n)]++;
        cnt[0] = n;                        // a zero constraint accepts any value
        vector<ll> a(n, 0);
        vector<ll> cur(n + 1, 0);
        cur[0] = n - 1;
        ll zerocnt = 0;
        ll prod = 1;
        ll sum = n * (n - 1) / 2 % mod;    // contribution of the position of n
        while(q--){
            ll i, x;
            cin >> i >> x;
            --i;
            assert(n % x == 0);
            assert(a[i] == 0);
            cur[a[i]]--;
            a[i] = x;
            if(x == n) sum = n - 1 - i, cur[0]++;
            else if(cur[n] == 0) sum -= (n - i - 1);
            if(cur[0] < 0) zerocnt = 1;
            ll prev = nPrmod(cnt[a[i]], cur[a[i]], mod);
            prod = prod * power(prev, mod - 2, mod) % mod;
            cur[a[i]]++;
            ll after = nPrmod(cnt[a[i]], cur[a[i]], mod);
            prod = prod * after % mod;
            if(cur[a[i]] > cnt[a[i]]) zerocnt = 1;
            ll mul = 1;
            if(cur[n] == 0) mul = cur[0] + 1;   // n may occupy any free position
            if(zerocnt > 0) cout << 0 << '\n';
            else cout << ((prod * fac[cur[0]]) % mod * (val * mul % mod + sum)) % mod << '\n';
        }
    }
    return 0;
}
2226G - Stop Spot
Idea: wuhudsm
Preparation: wuhudsm, wakanda-forever
Solution: wuhudsm
Let the final array be:
$$$[a_1, a_2, \ldots, a_n, p_1, p_2, \ldots, p_m],$$$
where $$$p$$$ is a permutation of $$$[1,m]$$$.
Notice that every newly created even-length palindrome can only be of one type:
- the palindrome crosses the boundary between $$$a$$$ and $$$p$$$;
- its symmetry axis must lie on the $$$a$$$ side, or exactly on the boundary between $$$a$$$ and $$$p$$$.
The number of even-length palindromes completely inside $$$a$$$ is fixed. It can be computed by Manacher in $$$O(n)$$$ time, and it is independent of the permutation $$$p$$$.
So we only need to count:
For each permutation $$$p$$$, how many extra even-length palindromes crossing the boundary are created.
Consider an even-length palindrome crossing the boundary.
It must have the following form:
$$$S + R + \text{rev}(S),$$$
where:
- $$$R$$$ is an even-length palindromic suffix of $$$a$$$, possibly empty;
- $$$S$$$ is the segment immediately to the left of $$$R$$$;
- $$$rev(S)$$$ must appear as a prefix of the permutation $$$p$$$.
Since $$$p$$$ is a permutation, its prefix cannot contain duplicate characters.
Therefore, all characters in $$$S$$$ must also be pairwise distinct.
Thus, for every even-length palindromic suffix $$$R$$$, we extend to the left from the position immediately before $$$R$$$, and take a locally longest segment $$$S$$$ whose characters are pairwise distinct.
Then we insert $$$rev(S)$$$ into a Trie.
During insertion, every time we pass through a Trie node, we increase the counter of that node by one.
The meaning is:
If the prefix of a permutation reaches this Trie node, then it matches at least one valid $$$S + R$$$ structure (distinct-character segment plus palindromic suffix), so it creates one extra boundary-crossing even-length palindrome.
Thus, a path in the Trie with pairwise distinct characters corresponds to a prefix of some permutation $$$p$$$.
More specifically, if the prefix of a permutation $$$p$$$ walks along a path in the Trie, then the number of boundary-crossing palindromes it creates is the sum of counters on that path.
At first glance, every even-length palindromic suffix may extend a segment $$$S$$$ to the left and insert it into the Trie, so the total length may look large.
However, the total inserted length is actually $$$O(n)$$$.
The key claim is:
Each position $$$a_i$$$ enters the Trie at most twice.
Proof sketch:
Consider two even-length palindromic suffix structures:
$$$S_1 + R_1$$$
and
$$$S_2 + R_2,$$$
where $$$R_1, R_2$$$ are both palindromic suffixes of $$$a$$$, $$$|R_2| \gt |R_1|$$$, $$$S_1, S_2$$$ are the distinct-character segments extended to their left, and $$$S_1, S_2$$$ have an intersection (sharing at least one $$$a_i$$$).
Since both suffixes $$$R_1, R_2$$$ are palindromes, by the periodicity lemma of strings, these two palindromes induce a period whose length is $$$|R_2|-|R_1|$$$.
The key observation is that, since $$$S_1$$$ is a segment with pairwise distinct characters, $$$S_1 \cap R_2$$$ cannot be long.
If the length of $$$S_1 \cap R_2$$$ is greater than $$$2$$$, say $$$3$$$, a structure like the following would appear:
This contradicts the fact that $$$R_1,R_2$$$ are palindromes.
Therefore, if the $$$S_1,S_2$$$ parts of two structures $$$S_1+R_1$$$ and $$$S_2+R_2$$$ overlap, then $$$R_1,R_2$$$ can only be in two very special forms:
- $$$AA...AA$$$ type;
- $$$AB...AB$$$ type.
Now extend this conclusion to three structures.
If some position $$$a_i$$$ entered three different $$$S$$$ segments, then every pair of these structures would have to satisfy the special restriction above.
However, three different palindromic suffixes satisfying this restriction at the same time would force duplicate characters to appear inside some $$$S$$$.
This contradicts the fact that $$$S$$$ is a locally longest segment with pairwise distinct characters.
Therefore, the total number of characters inserted into the Trie is $$$O(n)$$$.
We perform DFS on the Trie.
Suppose we are currently at Trie node $$$u$$$:
- the current depth is $$$dep$$$, meaning the length of the already fixed prefix of the permutation;
- the sum of counters on the current path is $$$sum$$$;
- there are still $$$m-dep$$$ numbers not placed;
- the number of non-empty children of $$$u$$$ is $$$child(u)$$$.
If the next number is chosen as one of the Trie children, then the permutation prefix can still continue matching the Trie, and the contribution is handled by that child.
If the next number is not a Trie child, then the permutation prefix leaves the Trie from this position.
Once it leaves the Trie, no matter how the remaining numbers are arranged, no new boundary-crossing palindrome will be created.
Therefore, at node $$$u$$$, we need to count the contribution of going to an empty child.
The number of empty children is $$$(m - dep) - child(u)$$$.
For each empty child, the remaining $$$m-dep-1$$$ numbers can be arranged arbitrarily, so the number of corresponding permutations is $$$(m - dep - 1)!$$$.
All these permutations have the same number of boundary-crossing palindromes, namely the current $$$sum$$$.
So we add $$$\left((m - dep) - child(u)\right) \cdot (m - dep - 1)!$$$ to the number of permutations whose boundary-crossing count is $$$sum$$$.
Then for every non-empty child $$$v$$$, we continue the DFS as $$$\mathrm{dfs}(v,\, sum + w_v,\, dep + 1)$$$,
where $$$w_v$$$ is the counter stored at Trie node $$$v$$$.
If $$$dep=m$$$, then a complete permutation has already been formed, so we directly add one permutation with boundary-crossing count $$$sum$$$.
Finally, if a permutation creates $$$x$$$ boundary-crossing palindromes, then its total number of even-length palindromes is $$$x$$$ plus the fixed number of even-length palindromes completely inside $$$a$$$.
Therefore:
The answer is:
taken modulo $$$998244353$$$.
The overall complexity is $$$O(n + m)$$$ per test case.
Since $$$m \le n$$$, the complexity of each test case can also be regarded as $$$O(n)$$$.
#include <bits/stdc++.h>
#define int long long
using namespace std;

vector<int> manacher_odd(vector<int> s) {
    int n = s.size();
    s.insert(s.begin(), -2);   // sentinels that match nothing
    s.push_back(-3);
    vector<int> p(n + 2);
    int l = 0, r = 1;
    for (int i = 1; i <= n; i++) {
        p[i] = min(r - i, p[l + (r - i)]);
        while (s[i - p[i]] == s[i + p[i]]) p[i]++;
        if (i + p[i] > r) l = i - p[i], r = i + p[i];
    }
    return vector<int>(begin(p) + 1, end(p) - 1);
}

vector<int> manacher(vector<int> s) {
    vector<int> t;
    for (auto c : s) {
        t.push_back(-1);
        t.push_back(c);
    }
    t.push_back(-1);
    auto res = manacher_odd(t);
    return vector<int>(begin(res) + 1, end(res) - 1);
}

struct node {
    int count = 0;
    unordered_map<int, node> m;
};

// insert a (reversed) segment into the trie, incrementing counters along the path
void add(node &current, vector<int> &a, int d) {
    current.count++;
    if (d >= (int)a.size()) return;
    add(current.m[a[d]], a, d + 1);   // operator[] default-constructs a missing child
}

const int mod = 998244353, MX = 1e6 + 4;
int F[MX];
unordered_map<int, int> counts;

void init() {
    F[0] = 1;
    for (int i = 1; i < MX; i++) F[i] = F[i - 1] * i % mod;
}

int pw(int b, int e = mod - 2) {
    int x = 1;
    while (e) {
        if (e % 2) x = x * b % mod;
        b = b * b % mod;
        e /= 2;
    }
    return x;
}

void dfs(node &current, int cur, int d) {
    if (!d) {                  // a complete permutation has been formed
        counts[cur]++;
        return;
    }
    // leaving the trie here: any of the (d - child(u)) non-child values may come
    // next, followed by an arbitrary arrangement of the remaining d-1 numbers
    (counts[cur] += F[d - 1] * (d - (int)current.m.size())) %= mod;
    for (auto &[i, nd] : current.m) dfs(nd, cur + nd.count, d - 1);
}

void solve() {
    int n, m; cin >> n >> m;
    vector<int> v(n);
    for (int &r : v) cin >> r;
    auto t = manacher(v);
    int cur = 0;                         // even-length palindromes fully inside a
    vector<int> prev(m + 1, n), nxt(n);  // nxt[j]: next occurrence of v[j]
    for (int i = n - 1; ~i; i--) {
        nxt[i] = prev[v[i]];
        prev[v[i]] = i;
    }
    node head;
    for (int i = 1; i < n; i++) {
        int k = t[2 * i - 1] / 2;
        cur += k;
        if (k < n - i) continue;         // the palindrome must reach the end of a
        vector<int> a;
        for (int j = i - (n - i) - 1; j >= 0; j--) {
            if (nxt[j] <= i - (n - i) - 1) break;   // duplicate: stop extending S
            a.push_back(v[j]);
        }
        add(head, a, 0);
    }
    vector<int> a;                       // the case of the empty palindromic suffix R
    for (int j = n - 1; ~j; j--) {
        if (nxt[j] < n) break;
        a.push_back(v[j]);
    }
    add(head, a, 0);
    counts.clear();
    dfs(head, cur, m);
    int ans = 0;
    for (auto [i, c] : counts) ans = (ans + pw(c, i + 1)) % mod;
    cout << ans << "\n";
}

int32_t main() {
    ios::sync_with_stdio(0); cin.tie(0);
    init();
    int t = 1; cin >> t;
    while (t--) solve();
}

first. $$$B$$$ is a great segtree problem
You mean $$$E$$$?
No, for $$$B$$$ you can notice that if you fix an index $$$i$$$ in the array, then for any $$$j \ge i$$$ the value of $$$[\max(arr[i..j]) - \min(arr[i..j])] - \gcd(arr[i..j])$$$ is monotonic (it only increases as $$$j$$$ increases). That's because, whenever you add an element, the $$$\gcd$$$ can only decrease and the $$$\max - \min$$$ can only increase.
So since the property is monotonic, you can maintain two sliding windows $$$i..j$$$ and $$$i..k$$$, where $$$i..j$$$ represents the range of endpoints $$$e_1$$$ with $$$i \le e_1 \le j$$$ such that $$$[\max(arr[i..e_1]) - \min(arr[i..e_1])] - \gcd(arr[i..e_1]) \le 0$$$, and $$$i..k$$$ represents the range of endpoints $$$e_2$$$ with $$$i \le e_2 \le k$$$ such that $$$[\max(arr[i..e_2]) - \min(arr[i..e_2])] - \gcd(arr[i..e_2]) \lt 0$$$. And the answer for an $$$i$$$ is just $$$j - k$$$.
To easily query the $$$\max, \min, \gcd$$$ of a subarray, you can use a segment tree or a sparse table.
checking adjacent values is enough ig, you can never have a subarray of length three or more which satisfies the properties.
Ofc he knows that... But do you know the Segment tree approach? :)
Cannot be on the same page as the author in 2226D - Reserved Reversals
Was doing exactly the same stuff for C, but wasn't convinced with any greedy approach that came to mind.
i managed to find the mods for each one but i couldnt think of binary search ffs -elo
D's so hard :(
that C was great, I tunnel visioned on the wrong greedy
I tunnel visioned on a greedy and that's how E was made (not the full story but long story short yeah)
I am doing the exact same thing for C as in the editorial 372865973. Where am I going wrong?
Same question! I'm using binary search as well, I don't know what I'm missing here. 372869255 :/
Waitt...... I didn't know
my_map[key] creates an element if the key doesn't exist....... I can't believe I spent an hour on this problem and this was the issue (372875201). Ughhhh!!!! Goodbye Specialist ;(
problem C is awesome
Agreed
E reminded me of https://cses.fi/problemset/task/2425, was close to solving it :(
Fun fact: F can be done in $$$O(\sqrt n)$$$
i guess you are absolutely awesome guy, but i am too noob to even read that ques for now
For Problem E, I started thinking about going from smallest prefix to the longest but concluded it was not feasible under the time limit. Then it hit me to go from largest prefix to smallest. pretty amazing problem imo.
Screencast with commentary : Could only solve A and B. Had the correct idea for C, but had some bugs in implementation.
I imagine that some people tried an increasing greedy instead of a decreasing greedy for C. This does work but the solution is less clean. The idea is still to binary search on a valid $$$m$$$. We check if the first $$$m-1$$$ elements can be filled. For a given $$$m$$$, we can do the following:
Create a $$$cnt$$$ array, and maintain an unresolved left bound $$$L$$$ and right bound to pick elements $$$R$$$. Iterate $$$L$$$ from $$$0$$$ to $$$m-1$$$. If $$$cnt[L] \geq 1$$$, then the element already contributes to the mex. Otherwise, we need to find a location $$$R$$$ to take the next element from. Increment $$$R$$$ until:
Such a two-pointer approach works in $$$O(n \log n)$$$.
Submission: 372875264
A's statement was weird, like B is 10 times clearer, and the difference between C and D in difficulty is massive
I've spent 2 hours on C, but for some strange reason the output in the cf tester differed from the output in my IDE. Like sometimes the tester would give me WA on the first test, although I would get the correct output in my IDE. In an online compiler I also get the correct answer. What can be the reason for that? I also asked during the contest, but did not get an answer.
sorry if it was newbie question, but i gotta ask.
why do we in problem one (2226A) make the answer expecting at least a single 1 in the array? isn't it possible that the array doesn't have any ones? sorry again for the newbie question, and thanks.
We don't expect a 1. If there are no 1s, we just take a simple sum. The hard part starts when there is a 1.
oh, i see now, thanks for your help kind person!
One of the best C problems in the recent contests !
Binary search on C was so good
Highly recommend to check out 2093E - Min Max MEX if you liked C. It introduces another idea to MEX that this editorial didn't use. Anyways cool round!
Post contest discussion stream