Hope that everyone enjoyed the round. Feel free to ask questions in the comments if you do not understand any part of the editorial
1672A - Log Chopping
Author: errorgorn
No matter what move each player does, the result of the game will always be the same.
Count the number of moves.
Let us consider the ending state of the game. It turns out that at the ending state, we will only have logs of $$$1$$$ meter. Otherwise, players can make a move.
Now, at the ending state of the game, we will have $$$\sum\limits_{k=1}^n a_k$$$ logs. And each move we increase the number of logs by exactly $$$1$$$. Since we started with $$$n$$$ logs, there has been exactly $$$(\sum\limits_{k=1}^n a_k) - n$$$ turns.
Alternatively, a log of length $$$a_k$$$ will be cut $$$a_k-1$$$ times, so there will be $$$\sum\limits_{k=1}^n (a_k-1)$$$ turns.
If there is an odd number of turns, $$$\texttt{errorgorn}$$$ wins; otherwise, $$$\texttt{maomao90}$$$ wins.
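For a quick illustration (our own example, not from the statement): if $$$a=[2,4,1]$$$, the game lasts $$$(2-1)+(4-1)+(1-1)=4$$$ turns, which is even, so $$$\texttt{maomao90}$$$ wins.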
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
int n;
int arr[105];
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
int TC;
cin>>TC;
while (TC--){
cin>>n;
rep(x,0,n) cin>>arr[x];
int tot=0;
rep(x,0,n) tot+=arr[x]-1;
if (tot%2==0) cout<<"maomao90"<<endl;
else cout<<"errorgorn"<<endl;
}
}
1672B - I love AAAB
Author: errorgorn
Pretend that we can only insert $$$\texttt{AB}$$$.
What if we replaced $$$\texttt{A}$$$ and $$$\texttt{B}$$$ with $$$\texttt{(}$$$ and $$$\texttt{)}$$$?
Claim: The string is obtainable if and only if it ends in $$$\texttt{B}$$$ and every prefix of the string has at least as many $$$\texttt{A}$$$s as $$$\texttt{B}$$$s.
An alternative way to think about the second condition is to assign $$$\texttt{A}$$$ to have a value of $$$1$$$ and $$$\texttt{B}$$$ to have a value of $$$-1$$$. Then, we are just saying that each prefix must have a non-negative sum. This is pretty similar to bracket sequences.
Now, both conditions are clearly necessary, let us show that they are sufficient too.
We will explicitly construct the string (in the reverse direction). While there is more than one occurrence of $$$\texttt{B}$$$ in the string, remove the first occurrence of $$$\texttt{AB}$$$. Such an occurrence always exists: by the prefix condition the first $$$\texttt{B}$$$ must be immediately preceded by an $$$\texttt{A}$$$, and removing it preserves both conditions. After repeating this process, you will eventually end up with the string $$$\texttt{AAA...AAB}$$$.
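Below is a small standalone sketch (our own illustration, not the reference solution) of this reverse process: starting from a string that satisfies both conditions, we repeatedly delete the first occurrence of $$$\texttt{AB}$$$ until only one $$$\texttt{B}$$$ remains.
#include <bits/stdc++.h>
using namespace std;
int main(){
    string s = "AABAB"; // any string satisfying both conditions (example of ours)
    // While more than one 'B' remains, delete the first occurrence of "AB".
    // Both conditions are preserved, so such an occurrence always exists here.
    while (count(s.begin(), s.end(), 'B') > 1){
        size_t pos = s.find("AB");
        s.erase(pos, 2);
    }
    cout << s << '\n'; // prints a string of the form AAA...AAB ("AAB" for this example)
}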
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
int TC;
cin>>TC;
while (TC--){
string s;
cin>>s;
bool ok=(s.back()=='B');
int sum=0;
for (auto it:s){
if (it=='A') sum++;
else sum--;
if (sum<0) ok=false;
}
if (ok) cout<<"YES"<<endl;
else cout<<"NO"<<endl;
}
}
1672C - Unequal Array
Author: maomao90
If the array is $$$a=[1,1,\ldots,1]$$$, we need $$$0$$$ moves if $$$n \leq 2$$$ and $$$\max(n-3,1)$$$ moves otherwise.
The only way to reduce the number of $$$i$$$ such that $$$a_i = a_{i+1}$$$ is when $$$a_i = a_{i+1}$$$ and $$$a_{i+2} = a_{i+3}$$$, and you apply the operation on $$$a_{i+1}$$$ and $$$a_{i+2}$$$.
Suppose $$$l$$$ is the smallest index where $$$a_l = a_{l+1}$$$ and $$$r$$$ is the largest index where $$$a_r = a_{r+1}$$$. If $$$l=r$$$, or if $$$l$$$ and $$$r$$$ do not exist, the condition is already satisfied and we can do $$$0$$$ operations. Otherwise, the answer is $$$\max(1, r - l - 1)$$$. The proof is as follows:
- Suppose $$$l+1=r$$$. Then $$$a_l=a_{l+1}=a_{l+2}$$$, i.e. there are $$$3$$$ consecutive equal elements, so we can do a single operation with $$$i=l$$$ and $$$x=\infty$$$ to make the equality of the array $$$1$$$.
- Suppose otherwise. Then the array will look something like $$$[\ldots, X, X, \ldots, Y, Y, \ldots]$$$, with $$$r - l - 2$$$ elements between the second $$$X$$$ and the first $$$Y$$$. Then, we can do operations on $$$i \in \{l+1, l+2, \ldots, r-2, r-1\}$$$ to make the equality of the array $$$1$$$.
To see why we need at least $$$r - l - 1$$$ operations, observe that each operation will cause $$$r - l$$$ to decrease by at most $$$1$$$. This is because if we do not do an operation on $$$i \in \{l-1,l+1,r-1,r+1\}$$$, then both $$$a_l = a_{l+1}$$$ and $$$a_r = a_{r+1}$$$ will still hold. We see that $$$r-l$$$ only decreases when we do an operation on $$$i \in \{l+1,r-1\}$$$, and it is not too hard to show that it only decreases by $$$1$$$ in those cases while $$$r-l > 2$$$.
Hence, we keep doing the operation until $$$r - l = 2$$$, which will only require 1 operation to change both pairs and make the equality 1.
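For a concrete illustration (our own example): for $$$a=[1,2,2,3,1,1,2]$$$ we have $$$l=2$$$ and $$$r=5$$$, so the answer is $$$\max(1, 5-2-1)=2$$$, achieved by operating on $$$i=3$$$ and $$$i=4$$$.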
#include <bits/stdc++.h>
using namespace std;
template <class T>
inline bool mnto(T& a, T b) {return a > b ? a = b, 1 : 0;}
template <class T>
inline bool mxto(T& a, T b) {return a < b ? a = b, 1: 0;}
#define REP(i, s, e) for (int i = s; i < e; i++)
#define RREP(i, s, e) for (int i = s; i >= e; i--)
typedef long long ll;
typedef long double ld;
#define MP make_pair
#define FI first
#define SE second
typedef pair<int, int> ii;
typedef pair<ll, ll> pll;
#define MT make_tuple
typedef tuple<int, int, int> iii;
#define ALL(_a) _a.begin(), _a.end()
#define pb push_back
typedef vector<int> vi;
typedef vector<ll> vll;
typedef vector<ii> vii;
#ifndef DEBUG
#define cerr if (0) cerr
#endif
#define INF 1000000005
#define LINF 1000000000000000005ll
#define MAXN 200005
int t;
int n;
int a[MAXN];
int main() {
#ifndef DEBUG
ios::sync_with_stdio(0), cin.tie(0);
#endif
cin >> t;
while (t--) {
cin >> n;
REP (i, 0, n) {
cin >> a[i];
}
int mn = -1, mx = -1;
REP (i, 1, n) {
if (a[i] == a[i - 1]) {
if (mn == -1) {
mn = i;
}
mx = i;
}
}
if (mn == mx) {
cout << 0 << '\n';
} else {
cout << max(1, mx - mn - 1) << '\n';
}
}
return 0;
}
1672D - Cyclic Rotation
Author: errorgorn
The operation of cyclic shift is equivalent to deleting $$$a_i$$$ and duplicating $$$a_j$$$ where $$$a_i = a_j$$$.
Reverse the process.
We will solve the problem by reversing the steps and transforming array $$$b$$$ into array $$$a$$$.
We can do the following operation on $$$b$$$:
- pick index $$$i<j$$$ such that $$$b_j=b_{j+1}$$$ and remove $$$b_{j+1}$$$ and insert it after position $$$i$$$.
Now, for every consecutive block of identical elements in $$$b$$$, we can remove all but one element from it and move them further to the left.
If we process from right to left, we can imagine taking consecutive equal elements of $$$b$$$ out, placing them in a reserve, and using them to match elements of $$$a$$$ further to the left.
Using this idea, we can use the following greedy two-pointer algorithm:
Let $$$i$$$ and $$$j$$$ represent the size of $$$a$$$ and $$$b$$$ respectively (and hence $$$a_i$$$ and $$$b_j$$$ will refer to the last elements of $$$a$$$ and $$$b$$$ respectively). We also have an initially empty multiset $$$S$$$, which represents the reserve. We will perform the following operations in this order:
- While $$$b_{j-1}=b_j$$$, add $$$b_j$$$ to the multiset $$$S$$$ and decrement $$$j$$$
- If $$$a_i = b_j$$$, then decrement both $$$i$$$ and $$$j$$$
- Otherwise, we delete an occurrence of $$$a_i$$$ in $$$S$$$ and decrement $$$i$$$. If we cannot find $$$a_i$$$ in $$$S$$$, it is impossible to transform $$$b$$$ to $$$a$$$.
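A compact sketch of this greedy two-pointer procedure with an explicit multiset (our own illustration; the reference solutions appear further below):
#include <bits/stdc++.h>
using namespace std;
// Returns whether b can be transformed back into a, following the bullets above (0-indexed).
bool possible(const vector<int>& a, const vector<int>& b){
    int n = a.size();
    multiset<int> S;                                               // the reserve
    int i = n - 1, j = n - 1;
    while (i >= 0){
        while (j > 0 && b[j - 1] == b[j]){ S.insert(b[j]); j--; }  // stash duplicates of b[j]
        if (j >= 0 && a[i] == b[j]){ i--; j--; }                   // direct match
        else {
            auto it = S.find(a[i]);
            if (it == S.end()) return false;                       // a[i] cannot be matched
            S.erase(it); i--;                                      // match a[i] with a reserved copy
        }
    }
    return true;
}
int main(){
    cout << (possible({1, 2, 3, 3, 2}, {1, 3, 3, 2, 2}) ? "YES" : "NO") << '\n'; // YES
    cout << (possible({1, 2, 3, 3, 2}, {1, 2, 2, 3, 3}) ? "YES" : "NO") << '\n'; // NO
}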
Let us define an array $$$c$$$ where all elements of $$$c$$$ are $$$1$$$. We can rephrase the problem in the following way:
- Choose $$$i<j$$$ such that $$$a_i = a_j$$$ and $$$c_i>0$$$. Then update $$$c_i \gets c_i-1$$$ and $$$c_j \gets c_j+1$$$.
The final array $$$b$$$ is obtained by the following: Let $$$b$$$ be initially empty, then iterate $$$i$$$ from $$$1$$$ to $$$n$$$ and add $$$c_i$$$ copies of $$$a_i$$$ to $$$b$$$.
As such, we can consider mapping elements of $$$b$$$ to elements of $$$a$$$. More specifically, consider a mapping $$$f$$$ where $$$f$$$ is non-decreasing and $$$b_i=a_{f_i}$$$; this mapping determines $$$c$$$ by setting $$$c_j$$$ to the number of indices $$$i$$$ with $$$f_i=j$$$. All that remains is to determine whether we can choose such a mapping so that $$$c$$$ is valid.
Notice that if all elements of $$$a$$$ are identical, the necessary and sufficient condition for a valid array $$$c$$$ is that $$$c_1+c_2+\ldots+c_i \leq i$$$ for all $$$i$$$.
This motivates us to construct an array $$$pa$$$ where $$$pa_i$$$ is the number of indices $$$j \leq i$$$ such that $$$a_i=a_j$$$. Analogously, construct $$$pb$$$.
Then the necessary and sufficient conditions for a mapping $$$f$$$ is that $$$f$$$ is non-decreasing, $$$b_i=a_{f_i}$$$ and $$$pb_i \leq pa_{f_i}$$$. A greedy algorithm to construct $$$f$$$, if it exists, is trivial by minimizing $$$f_i$$$ for each $$$i$$$.
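As a small worked example (ours): for $$$a=[1,2,1,2,2]$$$ we get $$$pa=[1,1,2,2,3]$$$, and for $$$b=[1,1,2,2,2]$$$ we get $$$pb=[1,2,1,2,3]$$$; the greedy mapping is $$$f=[1,3,4,4,5]$$$, which satisfies all three conditions, so the answer is $$$\texttt{YES}$$$ (indeed, a single operation that deletes $$$a_2$$$ and duplicates $$$a_5$$$ produces $$$b$$$).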
#include <bits/stdc++.h>
using namespace std;
template <class T>
inline bool mnto(T& a, T b) {return a > b ? a = b, 1 : 0;}
template <class T>
inline bool mxto(T& a, T b) {return a < b ? a = b, 1: 0;}
#define REP(i, s, e) for (int i = s; i < e; i++)
#define RREP(i, s, e) for (int i = s; i >= e; i--)
typedef long long ll;
typedef long double ld;
#define MP make_pair
#define FI first
#define SE second
typedef pair<int, int> ii;
typedef pair<ll, ll> pll;
#define MT make_tuple
typedef tuple<int, int, int> iii;
#define ALL(_a) _a.begin(), _a.end()
#define pb push_back
typedef vector<int> vi;
typedef vector<ll> vll;
typedef vector<ii> vii;
#ifndef DEBUG
#define cerr if (0) cerr
#endif
#define INF 1000000005
#define LINF 1000000000000000005ll
#define MAXN 200005
int t;
int n;
int a[MAXN], b[MAXN];
int cnt[MAXN];
int main() {
#ifndef DEBUG
ios::sync_with_stdio(0), cin.tie(0);
#endif
cin >> t;
while (t--) {
cin >> n;
REP (i, 1, n + 1) {
cnt[i] = 0;
}
REP (i, 0, n) {
cin >> a[i];
}
REP (i, 0, n) {
cin >> b[i];
}
int i = 0, j = 0;
bool pos = 1;
while (j < n) {
if (i < n && j < n && a[i] == b[j]) {
i++; j++;
continue;
}
if (cnt[b[j]] > 0 && b[j] == b[j - 1]) {
cnt[b[j++]]--;
} else if (i < n) {
cnt[a[i++]]++;
} else {
pos = 0;
break;
}
}
if (pos) {
assert(i == n);
REP (i, 1, n + 1) {
assert(cnt[i] == 0);
}
cout << "YES\n";
} else {
cout << "NO\n";
}
}
return 0;
}
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
int n;
int arr[200005];
int brr[200005];
int crr[200005];
int num[200005];
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
int TC;
cin>>TC;
while (TC--){
cin>>n;
rep(x,1,n+1) cin>>arr[x];
rep(x,1,n+1) cin>>brr[x];
rep(x,1,n+1) num[x]=0;
rep(x,1,n+1){
num[arr[x]]++;
crr[x]=num[arr[x]];
}
rep(x,1,n+1) num[x]=0;
int idx=1;
rep(x,1,n+1){
num[brr[x]]++;
while (idx<=n && (arr[idx]!=brr[x] || crr[idx]<num[brr[x]])) idx++;
}
if (idx>n) cout<<"NO"<<endl;
else cout<<"YES"<<endl;
}
}
1672E - notepad.exe
Author: errorgorn, oolimry
Find the sum of lengths of words.
Given a height, how many "good" widths are there.
The first idea that we have is that we want to be able to find the minimum possible width of the text editor for a specific height. We can do this in $$$n\log (n \cdot 2000)$$$ queries using binary search for each height. This is clearly not good enough, so let us come up with more observations.
First, we can binary search for the minimum possible width for height $$$1$$$. This value is $$$(\sum_{i=1}^n l_i)+n-1$$$ which we will denote with $$$S$$$.
Let us consider what we might want to query for height $$$h$$$. Suppose that the words are arranged very nicely such that there are no trailing spaces in each line. Then, the total number of characters will be $$$S - h +1$$$. This means that the minimum possible area for height $$$h$$$ will be $$$S-h+1$$$. We also know that if the area is more than $$$S$$$, it will not be useful as the area for $$$h=1$$$ is already $$$S$$$.
Now, we know that the range of possible areas that we are interested in is $$$[S - h +1, S]$$$. There is a total of $$$h$$$ possible areas that it can take, and an area is only possible if $$$h \cdot w = a$$$, or in other words, the area is divisible by $$$h$$$. Among the $$$h$$$ consecutive possible values, exactly one of them will be divisible by $$$h$$$, hence we can just query that one value of $$$w$$$ which can very nicely be found as $$$\lfloor \frac{S}{h} \rfloor$$$.
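To see that this value is indeed valid (a one-line check of ours): $$$h \cdot \lfloor \frac{S}{h} \rfloor$$$ is the unique multiple of $$$h$$$ in $$$[S-h+1, S]$$$, because $$$S - h < h \cdot \lfloor \frac{S}{h} \rfloor \leq S$$$.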
The total number of queries used is $$$n + \log(n \cdot 2000)$$$ where $$$n$$$ comes from the single query for each height and the $$$\log(n \cdot 2000)$$$ comes from the binary search at the start.
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
int n;
int ask(int i){
if (i==0) return 0;
cout<<"? "<<i<<endl;
int temp;
cin>>temp;
if (temp==-1){
exit(0);
}
return temp;
}
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
cin>>n;
int lo=-2,hi=5e6,mi;
while (hi-lo>1){
mi=hi+lo>>1;
if (ask(mi)==1) hi=mi;
else lo=mi;
}
int ans=1e9;
rep(x,1,n+1){
int temp=ask(hi/x);
if (temp) ans=min(ans,temp*(hi/x));
}
cout<<"! "<<ans<<endl;
}
1672F1 - Array Shuffling and 1672F2 - Checker for Array Shuffling
Author: errorgorn
The number of occurrences of the most frequent element is important.
Let $$$X$$$ be the number of occurrences of the most common element. The maximal sadness is $$$N-X$$$.
For every minimal sequence of swaps, there is a corresponding decomposition into a maximal number of cycles when considering the graph with edges $$$a_i \to b_i$$$.
Let $$$N$$$ be the length of $$$A$$$ and $$$B$$$.
We want to prove that an optimal swapping from $$$B \to A$$$ is equivalent to sorting via some cycles. Suppose our swap order is $$$\{(l_1,r_1),(l_2,r_2),\ldots,(l_K,r_K)\}$$$. Let's consider a graph $$$G$$$ with edges being the swaps. Suppose the number of connected components in $$$G$$$ is $$$CC$$$, then there is a way to perform the transformation $$$B \to A$$$ using $$$CC$$$ cycles since we can view the labels of each connected component of $$$G$$$ as a permutation of the original vertices. One cycle of length $$$X$$$ uses $$$X-1$$$ swaps, so we use $$$N-CC$$$ swaps in total. Since $$$CC \geq N-K$$$, we can always change the swap order to swapping cycles while not performing a bigger number of moves. Now we have changed the problem to maximizing the number of cycles we use.
Let $$$cnt_x$$$ be the number of occurrences of $$$x$$$ in $$$A$$$. WLOG $$$cnt_1 \geq cnt_2 \geq \ldots$$$.
Let $$$s_A(B)$$$ denote the sadness of $$$B$$$ when the original array is $$$A$$$.
Claim: $$$\max(s_A) \leq N-cnt_1$$$
Proof: Suppose some cycle decomposition uses fewer than $$$cnt_1$$$ cycles. Then, by the pigeonhole principle, there exists a cycle with $$$2$$$ occurrences of the element $$$1$$$.
Consider a cycle that swaps $$$i_1 \to i_2 \to \ldots \to i_K \to i_1$$$ where $$$A_{i_1}=A_{i_z}=1$$$. Then we can increase the number of connected components while maintaining $$$B$$$ by splitting it into the $$$2$$$ cycles $$$i_1 \to i_2 \to \ldots \to i_{z-1} \to i_1$$$ and $$$i_z \to i_{z+1} \to \ldots \to i_K \to i_z$$$.
Therefore, in an optimal solution, there should not be a cycle that passes through the same value twice. $$$\blacksquare$$$
Therefore, we can assume that all occurrences of $$$1$$$ belong to different cycles, so $$$\#cyc \geq cnt_1$$$ cycles are used. The number of swaps used is $$$N-\#cyc \leq N-cnt_1$$$.
Therefore, $$$N-cnt_1$$$ is an upper bound of $$$s_A$$$.
Claim: $$$s_A(B)<N-cnt_1$$$ $$$\Leftrightarrow$$$ there exists a cycle $$$i_1 \to i_2 \to \ldots \to i_K \to i_1$$$ such that $$$A_{i_x} \neq 1$$$ for all $$$x$$$.
Proof: $$$(\Rightarrow)$$$ There exists a cycle decomposition of the graph that uses at least $$$cnt_1+1$$$ cycles. Since each occurrence of $$$1$$$ can only go to a single cycle, there exists a cycle without $$$1$$$.
$$$(\Leftarrow)$$$ Let's remove this cycle to form arrays $$$A'$$$ and $$$B'$$$. Then $$$s_{A'}(B') \leq N-K-cnt_1$$$. Now, we only need $$$K-1$$$ swaps to remove the cycle, so it must be that $$$s_A(B) \leq (N-K-cnt_1)+(K-1)=N-cnt_1-1$$$. $$$\blacksquare$$$
Constructing maximal $$$B$$$
To construct a permutation such that $$$s_A(B)=N-cnt_1$$$, let's construct a graph $$$G_{cnt}$$$ based on the number of occurrences of each element in $$$A$$$. We draw $$$cnt_{i+1}$$$ edges from $$$(i) \to (i+1)$$$ and $$$cnt_{i}-cnt_{i+1}$$$ edges from $$$(i) \to (1)$$$. It is impossible to find a cycle that does not contain $$$(1)$$$, since such a cycle could only use edges of the form $$$(i) \to (i+1)$$$, which cannot form a cycle.
Another way to construct this permutation is to assume that $$$A$$$ is sorted. Then we perform $$$cnt_1$$$ cyclic shifts on $$$A$$$ to obtain $$$B$$$.
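For a concrete illustration (our own example): if $$$A=[1,1,2,2,3]$$$ (already sorted, with $$$cnt_1=2$$$), cyclically shifting by $$$cnt_1$$$ gives $$$B=[2,2,3,1,1]$$$. The multigraph with edges $$$A_i \to B_i$$$ (namely $$$1\to 2$$$, $$$1\to 2$$$, $$$2\to 3$$$, $$$2\to 1$$$, $$$3\to 1$$$) can be decomposed into at most $$$2$$$ cycles, so $$$s_A(B)=5-2=3=N-cnt_1$$$.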
Checking if $$$B$$$ is maximal
Given the graph representation, finding such a cycle $$$i_1 \to i_2 \to \ldots \to i_K \to i_1$$$ that avoids the value $$$1$$$ is easy: remove the vertex $$$1$$$ from the graph, then check whether the remaining graph is a DAG.
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
int n;
int arr[200005];
int brr[200005];
int cnt[200005];
vector<int> al[200005];
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
int TC;
cin>>TC;
while (TC--){
cin>>n;
rep(x,0,n) cin>>arr[x];
rep(x,1,n+1) al[x].clear();
rep(x,1,n+1) cnt[x]=0;
rep(x,0,n) cnt[arr[x]]++;
int mx=0;
rep(x,1,n+1) mx=max(mx,cnt[x]);
rep(x,0,n) brr[x]=arr[x];
sort(brr,brr+n);
rep(x,0,n) al[brr[x]].pub(brr[(x+mx)%n]);
rep(x,0,n){
cout<<al[arr[x]].back()<<" \n"[x==n-1];
al[arr[x]].pob();
}
}
}
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
int n;
int arr[200005];
int brr[200005];
vector<int> al[200005];
bool onstk[200005];
bool vis[200005];
bool cyc;
void dfs(int i){
onstk[i]=vis[i]=true;
for (auto it:al[i]){
if (onstk[it]) cyc=true;
if (!vis[it]) dfs(it);
}
onstk[i]=false;
}
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
int TC;
cin>>TC;
while (TC--){
cin>>n;
rep(x,1,n+1) al[x].clear();
rep(x,1,n+1) vis[x]=onstk[x]=false;
rep(x,1,n+1) cin>>arr[x];
rep(x,1,n+1) cin>>brr[x];
rep(x,1,n+1) al[arr[x]].pub(brr[x]);
int mx=1;
rep(x,1,n+1) if (sz(al[x])>sz(al[mx])) mx=x;
vis[mx]=true;
cyc=false;
rep(x,1,n+1) if (!vis[x]) dfs(x);
if (cyc) cout<<"WA"<<endl;
else cout<<"AC"<<endl;
}
}
1672G - Cross Xor
Author: maomao90, errorgorn
We need to consider $$$4$$$ cases based on the parity of $$$r$$$ and $$$c$$$.
Let $$$R_i$$$ and $$$C_j$$$ denote $$$\bigoplus\limits_{j=1}^c a_{i,j}$$$ and $$$\bigoplus\limits_{i=1}^r a_{i,j}$$$ respectively, or the xor-sum of the $$$i$$$-th row and the xor-sum of the $$$j$$$-th column respectively. For some cases, $$$R$$$ and $$$C$$$ will be constant sequences no matter what sequence of operations we perform.
The obvious necessary conditions for $$$R$$$ and $$$C$$$ are actually sufficient.
When at least one of $$$r$$$ or $$$c$$$ is even, counting the valid grids is easy. When both $$$r$$$ and $$$c$$$ are odd, consider drawing the bipartite graph where edges are cells with $$$\texttt{?}$$$.
Determining which Grids are Obtainable
Let $$$R_i$$$ and $$$C_j$$$ denote $$$\bigoplus\limits_{j=1}^c a_{i,j}$$$ and $$$\bigoplus\limits_{i=1}^r a_{i,j}$$$ respectively, or the xor-sum of the $$$i$$$-th row and the xor-sum of the $$$j$$$-th column respectively.
We will split the problem into 3 cases.
Case 1: $$$r$$$ is even and $$$c$$$ is even
Choose some $$$(x,y)$$$ and do an operation on all $$$(i,j)$$$ where $$$i=x$$$ or $$$j=y$$$. The effect of this series of operations is toggling $$$(x,y)$$$.
All possible grids are reachable. Counting them is easy.
Case 2: $$$r$$$ is even and $$$c$$$ is odd
If $$$r$$$ is odd and $$$c$$$ is even, we can treat it as the same case by swapping a few variables.
Notice that every operation toggles all elements in $$$R$$$. It is necessary that all values in $$$R$$$ are the same; let us prove that this is sufficient as well.
Now suppose $$$R$$$ is all $$$0$$$; if $$$R$$$ is all $$$1$$$ instead, we can first perform the operation on $$$(1,1)$$$ so that $$$R$$$ becomes all $$$0$$$.
If we pick $$$1 \leq x \leq r$$$ and $$$1 \leq y < c$$$ and perform operations on all $$$(i,j)$$$ where $$$i \neq x$$$ and $$$j \in \{y, c\}$$$, then it is equivalent to toggling $$$(x,y)$$$ and $$$(x,c)$$$.
We can perform the following new operation:
- pick $$$1 \leq x \leq r$$$ and $$$1 \leq y < c$$$
- toggle $$$(x,y)$$$,$$$(x,c)$$$
Since $$$R$$$ is all $$$0$$$, each row has an even number of $$$1$$$s. If we apply the new operation on all $$$(x,y)$$$ where $$$a_{x,y} = 1$$$ and $$$y < c$$$, then $$$(x,c)$$$ will also be $$$0$$$ in the end. Hence, the whole grid will be $$$0$$$.
Case 3: $$$r$$$ is odd and $$$c$$$ is odd
Notice that every operation toggles all elements in $$$R$$$ and $$$C$$$. It is necessary that all values in $$$R$$$ and $$$C$$$ are the same; let us prove that this is sufficient as well.
Suppose $$$R$$$ is all $$$0$$$ and $$$C$$$ is all $$$0$$$; if $$$R$$$ and $$$C$$$ are all $$$1$$$ instead, we first apply the operation on $$$(1,1)$$$ to make $$$R$$$ and $$$C$$$ both all $$$0$$$.
Notice that if we pick $$$1 \leq x_1 < x_2 \leq r$$$ and $$$1 \leq y_1 < y_2 \leq c$$$ and let $$$S=\{(x_1,y_1), (x_1,y_2), (x_2,y_1),(x_2,y_2)\}$$$, then performing operations on all cells in $$$S$$$ is equivalent to toggling all cells in $$$S$$$.
We can perform the following new operation:
- pick $$$1 \leq x < r$$$ and $$$1 \leq y < c$$$
- toggle $$$(x,y)$$$,$$$(x,c)$$$,$$$(r,y)$$$,$$$(r,c)$$$
Since $$$R$$$ and $$$C$$$ are all $$$0$$$, each row and column has an even number of $$$1$$$s. If we apply the new operation on all $$$(x,y)$$$ where $$$a_{x,y} = 1$$$, $$$x < r$$$ and $$$y < c$$$, then $$$(x,c)$$$ will be $$$0$$$ for $$$0 < x < r$$$ and $$$(r,y)$$$ will be $$$0$$$ for $$$0 < y < c$$$ in the end. And hence, $$$a_{r,c} = 0$$$ too since $$$R$$$ and $$$C$$$ are all $$$0$$$. Hence, the whole grid will be $$$0$$$.
Alternate Justification
Thanks to dario2994 for writing this.
Let $$$V = Z_2^{rc}$$$. $$$V$$$ is endowed with the natural scalar product, which induces the concept of orthogonality.
Let $$$M$$$ be the subspace generated by the moves. Let $$$M^{\perp}$$$ be the space orthogonal to $$$M$$$. It is a basic result in linear algebra that $$$(M^{\perp})^{\perp} = M$$$.
One can see that $$$\{(x_1, y_1), (x_1, y_2), (x_2, y_1), (x_2, y_2)\}$$$ belongs to $$$M$$$ (it is a combination of $$$4$$$ moves). Thus one deduces that if $$$u \in M^{\perp}$$$ then $$$u_{x,y} = a_x \oplus b_y$$$ for two vectors $$$a\in Z_2^r, b \in Z_2^c$$$. Given $$$a, b$$$, the scalar product between $$$u$$$ and the move centered at $$$(x, y)$$$ is $$$xor(a) \oplus xor(b) \oplus (c+1)a_x \oplus (r+1)b_y$$$. Assume that $$$u$$$ is in $$$M^{\perp}$$$:
- If $$$r, c$$$ are both even, then $$$a_x$$$ and $$$b_y$$$ must be constant and equal each other. Thus $$$M^{\perp}$$$ is only the $$$0$$$ vector.
- If $$$r$$$ is even and $$$c$$$ is odd, then $$$b_y$$$ is constant. Hence $$$M^{\perp}$$$ is generated by any two rows.
- If $$$r$$$ is odd and $$$c$$$ is even, analogous.
- If $$$r$$$ and $$$c$$$ are both odd, then the only condition is $$$xor(a) \oplus xor(b) = 0$$$. This is necessary and sufficient for the orthogonality. And it implies that $$$M^{\perp}$$$ is generated by any two rows and any two columns.
Since we determined $$$M^{\perp}$$$, we have determined also $$$M$$$.
Counting
Case 1 and 2 are the easy cases while counting case 3 is more involved.
Case 1: $$$r$$$ is even and $$$c$$$ is even
All grids are obtainable, so if $$$\#?$$$ denotes the number of $$$\texttt{?}$$$s in the grid, the answer is simply $$$2^{\#?}$$$.
Case 2: $$$r$$$ is even and $$$c$$$ is odd
If $$$r$$$ is odd and $$$c$$$ is even, we can treat it as the same case by swapping a few variables.
Let us fix whether we want $$$R=[0,0,\ldots,0]$$$ or $$$R=[1,1,\ldots,1]$$$. We will count the number of valid grids for each case.
Let $$$\#?_i$$$ denote the number of $$$\texttt{?}$$$s in the $$$i$$$-th row. If $$$\#?_i>0$$$, then the number of ways to set the $$$i$$$-th row is $$$2^{\#?_i-1}$$$. Otherwise, the number of ways is either $$$0$$$ or $$$1$$$ depending on the initial value of $$$R_i$$$.
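Putting the two targets together (our restatement, consistent with the reference code below): the answer is $$$\prod_{i=1}^{r} w_i^{(0)} + \prod_{i=1}^{r} w_i^{(1)}$$$, where $$$w_i^{(v)}=2^{\#?_i-1}$$$ if $$$\#?_i>0$$$, and $$$w_i^{(v)}=[R_i=v]$$$ otherwise.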
Case 3: $$$r$$$ is odd and $$$c$$$ is odd
Let us define a bipartite graph with $$$r+c$$$ vertices, labelled $$$V_{R,i}$$$ for $$$1 \leq i \leq r$$$ and $$$V_{C,j}$$$ for $$$1 \leq j \leq c$$$. If $$$a_{i,j}=\texttt{?}$$$, then we will add an (undirected) edge $$$V_{R,i} \leftrightarrow V_{C,j}$$$. Now we assume that each $$$\texttt{?}$$$ is set to $$$\texttt{0}$$$ at first. We will choose a subset of them to turn into $$$\texttt{1}$$$. When we do this on $$$a_{i,j}$$$, the values of $$$R_i$$$ and $$$C_j$$$ will toggle. In terms of the graph, this corresponds to assigning $$$0$$$ or $$$1$$$ to each edge. When we assign $$$1$$$ to the edge connecting $$$V_{R,i}$$$ and $$$V_{C,j}$$$, then $$$R_i$$$ and $$$C_j$$$ will toggle. We can consider $$$R_i$$$ and $$$C_j$$$ to be the weights of the vertices $$$V_{R,i}$$$ and $$$V_{C,j}$$$ respectively.
Consider a connected component of this bipartite graph. Choose an arbitrary spanning tree of this connected component. By assigning the weights of the edges in the spanning tree, we can arbitrarily set the weights of all but one vertex. We cannot arbitrarily set the weights of all vertices, as the xor-sum of the vertex weights is an invariant.
Let us show that we can arbitrarily choose the weights of all but one vertex of this connected component using the spanning tree. Root the tree arbitrarily. Choose some arbitrary leaf of the tree; if the weight of the leaf is correct, assign the edge connected to that vertex weight $$$0$$$, otherwise assign it weight $$$1$$$. Then remove the leaf and its corresponding edge. In fact, this shows that there is a one-to-one correspondence between the possible weights of the edges and the possible weights of the vertices.
For the edges not in the spanning tree we have chosen, we can arbitrarily set their weights while still being able to choose the weights of all but one vertex of this connected component by properly assigning weights to the edges in the spanning tree.
Suppose we want this constant value of $$$R$$$ and $$$C$$$ to be $$$v$$$, where $$$v$$$ is either $$$0$$$ or $$$1$$$.
Suppose that the connected component has size $$$n$$$, has $$$m$$$ edges and the xor of all the initial vertex weights is $$$x$$$.
If $$$n$$$ is even:
- If $$$x=0$$$, then there are $$$2^{m-n+1}$$$ ways to assign weights to edges.
- If $$$x=1$$$, then there are $$$0$$$ ways to assign weights to edges.
If $$$n$$$ is odd:
- If $$$x=v$$$, then there are $$$2^{m-n+1}$$$ ways to assign weights to edges.
- If $$$x\neq v$$$, then there are $$$0$$$ ways to assign weights to edges.
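Summarizing how these per-component counts combine (our restatement, consistent with the reference code below): letting $$$f_C(v)$$$ denote the count above for component $$$C$$$ and target value $$$v$$$, the final answer is $$$\prod_{C} f_C(0) + \prod_{C} f_C(1)$$$.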
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
const int MOD=998244353;
ll qexp(ll b,ll p,int m){
ll res=1;
while (p){
if (p&1) res=(res*b)%m;
b=(b*b)%m;
p>>=1;
}
return res;
}
int n,m;
char grid[2005][2005];
int w[4005];
vector<int> al[4005];
bool vis[4005];
int ss,par,edges;
void dfs(int i){
if (vis[i]) return;
vis[i]=true;
ss++;
par^=w[i];
edges+=sz(al[i]);
for (auto it:al[i]) dfs(it);
}
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
cin>>n>>m;
rep(x,0,n) cin>>grid[x];
if (n%2>m%2){
swap(n,m);
rep(x,0,2005) rep(y,0,2005) if (x<y) swap(grid[x][y],grid[y][x]);
}
// rep(x,0,n){
// rep(y,0,m) cout<<grid[x][y]<<" "; cout<<endl;
// }
if (n%2==0 && m%2==0){
int cnt=0;
rep(x,0,n) rep(y,0,m) if (grid[x][y]=='?') cnt++;
cout<<qexp(2,cnt,MOD)<<endl;
}
else if (n%2==0 && m%2==1){
int cnt0=1,cnt1=1;
rep(x,0,n){
int val=0;
int cnt=0;
rep(y,0,m){
if (grid[x][y]=='?') cnt++;
else val^=grid[x][y]-'0';
}
if (cnt==0){
if (val==0) cnt1=0;
else cnt0=0;
}
else{
cnt0=(cnt0*qexp(2,cnt-1,MOD))%MOD;
cnt1=(cnt1*qexp(2,cnt-1,MOD))%MOD;
}
}
cout<<(cnt1+cnt0)%MOD<<endl;
}
else{
rep(x,0,n) rep(y,0,m){
if (grid[x][y]!='?'){
w[x]^=grid[x][y]-'0';
w[y+n]^=grid[x][y]-'0';
}
else{
al[x].pub(y+n);
al[y+n].pub(x);
}
}
int cnt0=1,cnt1=1;
rep(x,0,n+m) if (!vis[x]){
ss=0,par=0,edges=0;
dfs(x);
edges/=2;
edges-=ss-1; //extra edge
int mul=qexp(2,edges,MOD);
if (ss%2==0){
if (par) mul=0;
cnt0=(cnt0*mul)%MOD;
cnt1=(cnt1*mul)%MOD;
}
else{
if (par==0){
cnt0=(cnt0*mul)%MOD;
cnt1=0;
}
else{
cnt0=0;
cnt1=(cnt1*mul)%MOD;
}
}
}
cout<<(cnt0+cnt1)%MOD<<endl;
}
}
1672H - Zigu Zagu
Author: maomao90, errorgorn
We will always remove a maximal segment, or in other words we will select $$$l$$$ and $$$r$$$ such that $$$A_{l-1} = A_l$$$ and $$$A_r = A_{r+1}$$$.
Try to find an invariant.
We can first split string $$$A$$$ into the minimum number of sections of the form $$$\texttt{010101}\ldots$$$ and $$$\texttt{101010}\ldots$$$. Let the number of sections be $$$K$$$. Since we can simply delete each section individually, the worst answer that we can get is $$$K$$$. Also, there is no reason to only delete part of a segment, so from here on we assume that we only delete maximal segments.
Now, we can decompose $$$A$$$ based on its $$$K$$$ sections and write it as a string $$$D$$$. The rules for the decomposition is as follows:
- $$$10\ldots01 \to x$$$
- $$$01\ldots10 \to x'$$$
- $$$10\ldots10 \to y$$$
- $$$01\ldots01 \to y'$$$
For example, the string $$$A=[0101][1][1010]$$$ becomes $$$D=y'xy$$$. Now, let us look at what our operation does on $$$D$$$.
When we remove a section of even length ($$$y$$$ or $$$y'$$$) that is not on the endpoint of the string, the left and right sections will get combined. This is because the two ends of an even section are opposite, allowing the left and right sections to merge. Otherwise, it results in no merging.
When some sections get combined, the length of string $$$D$$$ gets reduced by $$$2$$$, while otherwise the length of $$$D$$$ gets reduced by $$$1$$$. Clearly, we want to maximize the number of deletions of even-length sections that are not at the endpoints of the string. We will call such a move a power move.
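For a small illustration (our own example): $$$A=\texttt{1100}$$$ decomposes into the sections $$$[\texttt{1}][\texttt{10}][\texttt{0}]$$$, i.e. $$$D=xyx'$$$. Deleting the middle even section $$$\texttt{10}$$$ (a power move) merges the two neighbouring sections into the single section $$$\texttt{10}$$$, so $$$D$$$ becomes $$$y$$$ and its length drops by $$$2$$$.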
Let us classify strings that have no power moves. They actually come in $$$8$$$ types:
- $$$x x \ldots x$$$
- $$$y' x x \ldots x$$$
- $$$x x \ldots x y$$$
- $$$y' x x \ldots x y$$$
- $$$x' x' \ldots x'$$$
- $$$y x' x' \ldots x'$$$
- $$$x' x' \ldots x' y'$$$
- $$$y x' x' \ldots x' y'$$$
We can prove that for any string not of this form, there will always be a character $$$y$$$ or $$$y'$$$ that is not at the ends of the string. Suppose that the string contains both $$$x$$$ and $$$x'$$$; then $$$xyx'$$$ or $$$x'y'x$$$ must be a substring. Also, the number of $$$y$$$s or $$$y'$$$s at each end cannot be more than $$$1$$$. Note that strings such as $$$y$$$ or $$$yy'$$$ may fall under multiple types.
Furthermore, for string of these types, the number of moves we have to make is equal to the length of the string.
Let us define the balance of $$$x$$$ as the number of $$$x$$$ minus the number of $$$x'$$$. We will define the balance of $$$y$$$ similarly. When we perform a power move, notice that the balance of the string is unchanged. Indeed, each power move either removes a pair of $$$x$$$ and $$$x'$$$ or $$$y$$$ and $$$y'$$$ from the string.
With this, we can easily find which type of ending string we will end up with based on the previously mentioned invariants, except for the case of differentiating between the strings $$$x x \ldots x$$$ and $$$y' x x \ldots x y$$$ (and the analogous case for $$$x'$$$).
To differentiate between these $$$2$$$ cases, we can note that the first character of our string does not change when we perform power moves. And indeed, $$$x$$$ and $$$y'$$$ have different starting characters.
Note that we have to be careful when the balance of $$$x$$$ and the balance of $$$y$$$ are both $$$0$$$ in the initial string, as for strings such as $$$yy'$$$, the final string is not $$$\varnothing$$$ but $$$yy'$$$. With this, we can answer queries in $$$O(1)$$$ since we can query the balance of $$$x$$$, the balance of $$$y$$$ and the total length of the decomposed string in $$$O(1)$$$.
Furthermore, there is an implementation trick here. Notice that if $$$a_{l-1}\neq a_l$$$, then the answer for $$$s[l-1,r]$$$ will be equal to the answer for $$$s[l,r]$$$. So in implementation, it is easier to "extend" $$$l$$$ and $$$r$$$ to find the balance of $$$x$$$ and $$$y$$$.
#include <bits/stdc++.h>
using namespace std;
int n,q;
string s;
int l[200005];
int r[200005];
int psum[200005];
int balance[200005];
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
cin>>n>>q;
cin>>s;
s=s[0]+s+s[n-1];
for (int x=1;x<=n;x++){
if (s[x-1]==s[x]) l[x]=x;
else l[x]=l[x-1];
}
for (int x=n;x>=1;x--){
if (s[x]==s[x+1]){
r[x]=x;
psum[x]=1;
if ((x-l[x])%2==0){
balance[x]=(s[x]=='1'?1:-1);
}
}
else r[x]=r[x+1];
}
for (int x=1;x<=n;x++){
psum[x]+=psum[x-1];
balance[x]+=balance[x-1];
}
int a,b;
while (q--){
cin>>a>>b;
a=l[a],b=r[b];
int bl=balance[b]-balance[a-1];
int sum=psum[b]-psum[a-1];
int ans=(sum+abs(bl))/2;
if ((sum+abs(bl))%2==1) ans++;
else if (abs(bl)==0) ans++;
else if (bl>0 ^ s[a]=='1') ans++;
cout<<ans<<"\n";
}
}
1672I - PermutationForces
Author: errorgorn
When removing an element, consider how the costs of the other elements change.
When removing an element, consider the elements whose cost increases. What are their properties?
Is there a greedy algorithm?
Yes there is. How to make it run fast?
Use monotonicity to reduce one dimension
Let us rephrase the problem. Let $$$x$$$ and $$$y$$$ be arrays where $$$x_i=p_i$$$ and $$$y_i=i$$$ initially. For brevity, let $$$c_i = |x_i - y_i|$$$.
We want to check if we can do the following operation $$$n$$$ times on the array:
- Choose an index $$$i$$$ such that $$$c_i \leq s$$$.
- For all $$$j$$$ where $$$x_i < x_j$$$, update $$$x_j \gets x_j-1$$$.
- For all $$$j$$$ where $$$y_i < y_j$$$, update $$$y_j \gets y_j-1$$$.
- Set $$$x_i \gets \infty$$$ and $$$y_i \gets -\infty$$$
Let us fix $$$s$$$ and solve the problem of checking whether a value of $$$s$$$ allows us to transform the permutation into the empty permutation.
Lemma 1
Let $$$(x,y,c)$$$ be the arrays before some arbitrary operation and $$$(x',y',c')$$$ be the arrays after that operation. If we only perform moves with $$$c_i \leq s$$$, then $$$c_j \leq s$$$ implies that $$$c'_j \leq s$$$ i.e. if something was removable before, it will be removable later if we only use valid moves.
Proof: Note that $$$x'_j = x_j$$$ or $$$x'_j=x_j-1$$$. The case for $$$y$$$ is the same.
We can see that $$$c'_j \leq c_j+1$$$. So the only case where $$$c'_j > s$$$ is when $$$c_j=s$$$.
Case $$$1$$$: $$$x_j \leq y_j$$$
Then it must be that $$$x'_j=x_j-1$$$ and $$$y'_j=y_j$$$. By the definition of our operation, we have the following inequality: $$$x_i < x_j \leq y_j < y_i$$$.
This implies that $$$c_i>s$$$, which is a contradiction.
Case $$$2$$$: $$$x_j \geq y_j$$$
By similar analysis we see that $$$c_i>s$$$. $$$\blacksquare$$$
Suppose that we only remove points with $$$c_i \leq s$$$ for some fixed $$$s$$$. The following greedy algorithm works here: at each step, choose any point with $$$c_i \leq s$$$ and remove it; if no such point exists, then $$$s$$$ does not work.
Proof:
Given any permutation, let $$$a$$$ be any point with $$$c_a \leq s$$$. Consider any optimal sequence of moves $$$[b_1,b_2,\ldots,b_w,a,\ldots]$$$. We can transform it into another optimal solution by moving $$$a$$$ to the front.
Let the element before $$$a$$$ be $$$b_w$$$. We will swap $$$a$$$ and $$$b_w$$$. $$$a$$$ is already removable at the start, so it will still be removable after removing $$$b_1,b_2,\ldots,b_{w-1}$$$ by Lemma $$$1$$$. $$$b_w$$$ is removable after removing $$$b_1,b_2,\ldots,b_{w-1}$$$, so it will still be removable after additionally removing $$$a$$$ by Lemma $$$1$$$. Hence we can move $$$a$$$ to the front of the sequence of moves by repeatedly swapping adjacent elements.
By this exchange argument, the greedy solution of removing any point with $$$c_a \leq s$$$ is an optimal solution.
Time Complexity Speedups
By extension, the following greedy algorithm works:
Set $$$s \gets 0$$$.
- At each step, choose index $$$i$$$ with minimal $$$c_i$$$
- Update $$$s \gets \max(s,c_i)$$$
- Remove point $$$i$$$
Let's start with $$$s=0$$$ and remove elements while we can. If we get stuck, increment $$$s$$$. When we increment $$$s$$$, the moves that we have done before are still valid with this new value of $$$s$$$. We simply keep incrementing $$$s$$$ until we can remove the entire permutation; the final value of $$$s$$$ is the answer.
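Before the speedups, here is a naive $$$O(n^2)$$$ sketch of this incremental greedy (our own illustration of the rephrased operation; it is far too slow for the real constraints, and it assumes the input is simply $$$n$$$ followed by the permutation):
#include <bits/stdc++.h>
using namespace std;
int main(){
    int n;
    scanf("%d", &n);
    vector<long long> x(n), y(n);
    vector<bool> alive(n, true);
    for (int i = 0; i < n; i++){ scanf("%lld", &x[i]); y[i] = i + 1; }
    long long ans = 0;                                   // smallest s that works
    for (int step = 0; step < n; step++){
        int best = -1;                                   // alive index with smallest cost |x_i - y_i|
        for (int i = 0; i < n; i++) if (alive[i])
            if (best == -1 || llabs(x[i] - y[i]) < llabs(x[best] - y[best])) best = i;
        ans = max(ans, llabs(x[best] - y[best]));        // s must be at least this cost
        alive[best] = false;
        for (int j = 0; j < n; j++) if (alive[j]){
            if (x[j] > x[best]) x[j]--;                  // values above the removed value shift down
            if (y[j] > y[best]) y[j]--;                  // positions after the removed position shift down
        }
    }
    printf("%lld\n", ans);
}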
Now the only difficult part about this is maintaining the array $$$c_i$$$ (the cost) for the points we have not removed.
Let's define a point as good as follows:
If $$$y < x$$$, the point is good if there exist no other point $$$(x',y')$$$ such that $$$y < y' \leq x' < x$$$.
Otherwise, the point is good if there exist no other point $$$(x',y')$$$ such that $$$x < x' \leq y' < y$$$.
We maintain only the good elements, because only good elements are candidates for the minimum $$$c_i$$$. Suppose an element is not good but has the minimal cost; then the point that causes it to be not good has a strictly smaller cost, an obvious contradiction.
Now we will use data structures to maintain $$$c_i$$$ of good points. We will split the good points into the left good and right good points which are those of $$$x_i \leq y_i$$$ and $$$y_i \leq x_i$$$ respectively. Notice that if $$$x_i = y_i$$$, then it is both left good and right good.
We will focus on the left good points. Suppose $$$i$$$ and $$$j$$$ are both left good with $$$x_i < x_j$$$, then $$$y_i < y_j$$$. Suppose otherwise, then we have $$$x_i < x_j \leq y_j < y_i$$$, making $$$i$$$ not good. As such $$$x$$$ and $$$y$$$ of the left good points are monotone.
To find this monotone chain of left good points, we can maintain a max segment tree which stores the maximum $$$y$$$ over all alive $$$x$$$. Using binary search on the segment tree, from a point $$$(x,y)$$$ on the chain we can find the next point $$$(x',y')$$$, namely the unique point with $$$x' > x$$$ such that $$$y'$$$ is minimized. We can repeatedly do this to find the entire chain of left good elements.
We can store a segment tree where $$$i$$$ is the key and $$$c_i$$$ is the value. If an element is left good, it will always be left good until it is removed.
The following two operations are simply range updates on the segment tree since $$$y_i$$$ is monotone:
- For all $$$j$$$ such that $$$x_j>x_i$$$, set $$$x_j \leftarrow x_j-1$$$.
- For all $$$j$$$ such that $$$y_j>y_i$$$, set $$$y_j \leftarrow y_j-1$$$.
Now, when we remove some left good point, some other points will become left good, and we will need to add them. We do this by starting from the previous element of the left good chain, and then keep repeating the same algorithm using descent on the segment tree.
When we add a new left good point, we need to know the cost at the current point in time. If we consider a point which is initially $$$(x,y)$$$, and all other previously removed $$$(x',y')$$$, $$$x$$$ decreases by 1 per $$$x' < x$$$ and $$$y$$$ decreases by 1 per $$$y' < y$$$. Hence, we can maintain a fenwick tree of the removed point's $$$x$$$ and $$$y$$$, and using that we can determine the $$$x$$$ and $$$y$$$ at the time when we add it to the left good chain (and hence to the segment tree).
Time Complexity: $$$O(n \log n)$$$
Quad Trees
Thanks to dario2994 for pointing this out.
Surprisingly, quad trees are provably $$$O(n \sqrt n)$$$ here. Take the $$$k$$$-th layer of the quad tree. The $$$n \cdot n$$$ grid will be split into $$$4^k$$$ squares in the $$$k$$$-th layer. Since we are doing half-plane covers, our query range will only touch $$$2^k$$$ squares. At the same time, the width of those $$$2^k$$$ squares is $$$\frac{n}{2^k}$$$. Since each column only has a single element, our query range will also be bounded by $$$\frac{n}{2^k}$$$. The time complexity for a single update is given by $$$\sum\limits_{k=1}^{\log n} \min(2^k,\frac{n}{2^k}) = O(\sqrt n)$$$.
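For completeness (our own one-line bound): $$$\sum\limits_{k=1}^{\log n} \min\left(2^k,\frac{n}{2^k}\right) \leq \sum\limits_{2^k \leq \sqrt n} 2^k + \sum\limits_{2^k > \sqrt n} \frac{n}{2^k} \leq 2\sqrt n + 2\sqrt n = O(\sqrt n)$$$.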
// Super Idol的笑容
// 都没你的甜
// 八月正午的阳光
// 都没你耀眼
// 热爱105°C的你
// 滴滴清纯的蒸馏水
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;
#define ii pair<int,int>
#define fi first
#define se second
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
struct FEN{
int fen[500005];
FEN(){
memset(fen,0,sizeof(fen));
}
void upd(int i,int j){
while (i<500005){
fen[i]+=j;
i+=i&-i;
}
}
int query(int i){
int res=0;
while (i){
res+=fen[i];
i-=i&-i;
}
return res;
}
} fval,fidx;
struct dat{
struct node{
int s,e,m;
ii val;
int lazy=0;
node *l,*r;
node (int _s,int _e){
s=_s,e=_e,m=s+e>>1;
val={1e9,s};
if (s!=e){
l=new node(s,m);
r=new node(m+1,e);
}
}
void propo(){
if (lazy){
val.fi+=lazy;
if (s!=e){
l->lazy+=lazy;
r->lazy+=lazy;
}
lazy=0;
}
}
void update(int i,int j,int k){
if (s==i && e==j) lazy+=k;
else{
if (j<=m) l->update(i,j,k);
else if (m<i) r->update(i,j,k);
else l->update(i,m,k),r->update(m+1,j,k);
l->propo(),r->propo();
val=min(l->val,r->val);
}
}
void set(int i,int k){
propo();
if (s==e) val.fi=k;
else{
if (i<=m) l->set(i,k);
else r->set(i,k);
l->propo(),r->propo();
val=min(l->val,r->val);
}
}
} *root=new node(0,500005);
struct node2{
int s,e,m;
int val=-1e9;
node2 *l,*r;
node2 (int _s,int _e){
s=_s,e=_e,m=s+e>>1;
if (s!=e){
l=new node2(s,m);
r=new node2(m+1,e);
}
}
void update(int i,int k){
if (s==e) val=k;
else{
if (i<=m) l->update(i,k);
else r->update(i,k);
val=max(l->val,r->val);
}
}
ii query(int i,int key){ //find key<=val where i<=s
if (e<i || val<key) return {-1,-1};
if (s==e) return {s,val};
else{
auto temp=l->query(i,key);
if (temp!=ii(-1,-1)) return temp;
else return r->query(i,key);
}
}
} *root2=new node2(0,500005);
set<ii> s={ {500005,500005} };
//root stores the values of each pair
//root2 stores the left endpoint of each pair to add non-overlapping ranges
//s stores the pairs are still alive so its easy to do searches
dat *d; //we also store the other guy
bool orien; //false for i->arr[i]
int pp[500005];
void push(int i,int j){
pp[j]=i;
root2->update(j,i);
}
void add(int i,int j){
root2->update(j,-1e9);
s.insert({i,j});
int val;
if (!orien) val=fval.query(j)-fidx.query(i);
else val=fidx.query(j)-fval.query(i);
root->set(j,val);
}
void del(int j){
ii curr={-1,-1};
int lim=500005;
if (j!=-1){
int i=pp[j];
auto it=d->s.ub({j,1e9});
d->root->update(i,(*it).se-1,-1);
if (!orien) fidx.upd(i,-1),fval.upd(j,-1);
else fval.upd(i,-1),fidx.upd(j,-1);
it=s.find({i,j});
if (it!=s.begin()) curr=*prev(it);
lim=(*next(it)).se;
s.erase(it);
root->set(j,1e9);
root2->update(j,-1e9);
}
while (true){
auto temp=root2->query(curr.se,curr.fi);
swap(temp.fi,temp.se);
if (temp==ii(-1,-1) || lim<=temp.se) break;
add(temp.fi,temp.se);
curr=temp;
}
}
} *l=new dat(),*r=new dat();
int n;
int main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
//cyclic mapping to each other
l->d=r;
r->d=l;
r->orien=true;
cin>>n;
rep(x,1,n+1){
int y;
cin>>y;
if (x<=y) l->push(x,y);
else r->push(y,x);
}
rep(x,1,n+1) fidx.upd(x,1),fval.upd(x,1);
l->del(-1);
r->del(-1);
int ans=0;
rep(x,0,n){
if (l->root->val.fi<=r->root->val.fi){
ans=max(ans,l->root->val.fi);
l->del(l->root->val.se);
}
else{
ans=max(ans,r->root->val.fi);
r->del(r->root->val.se);
}
}
cout<<ans<<endl;
}
Testdata for E was somewhat weak; I passed with parallel binary search with some optimizations. On the actual testdata it took no more than $$$500$$$ queries to find the answer.
Can anyone hack me?
Try this:
In fact, there is a $$$20\%$$$ probability that your solution will use more than $$$n+30$$$ queries, when $$$n=6$$$ and $$$l_i=2001-i-d_i$$$, where $$$d_i$$$ is randomly chosen in $$$[0,10]$$$.
P.S. I didn't do enough experiments so that maybe the probability is not exact at all.
Me Mister Bebra!
My solution for D failed pretest 3. What I did is I went from right to left in array b maintaining a set current that contains all the indices in a that are being used, and a map extra that stores a count of all the extra values I can use.
Case1 b[i] = b[i - 1]: I will consider b[i] and b[i - 1] together. I find the greatest element in the set, let's call it g. I then iterate from g decreasing. At the current index k, if it isn't in current I skip it. Otherwise if it's equal to b[i], I'll remove k from current and break out of the loop. If it's not equal to b[i], I check if I have an extra a[k] in my map. If not, the answer is NO. Otherwise, I decrease the count by 1, remove k from current, and continue. At the end, I add b[i - 1] to the map extra so that I can use it in the future.
Case2 b[i] != b[i - 1]: I'll consider b[i] by itself. Then I'll do the exact same process as in Case1. The only difference is that I won't add b[i - 1] to extra and I'll just move on to the next i.
At the very end, array a might have some leftover elements. I will iterate over all these elements and see if I have enough values in extra to match them all. If I do, then the answer is YES. Otherwise, the answer is NO.
Is my logic wrong? 154756012
Input:
Expected Output:
YES
Your Output:
NO
I'm a little puzzled myself. I'm doing more or less the same thing, and I pass your given test case.
154775344
Edit — ah I see the issue, nvm
I've spent 1 hour 30 minutes looking for a bug in my solution of D (got WA on the 3rd test)... After the round finished I updated the page every minute, waiting for testcases to be shown... And what I see now is only "wrong answer expected YES, found NO [515th token]", because the test data is truncated.
I'm begging to look at the full testdata for test case 3, this is my only dream now
I'm in a similar situation.
154781966
test case 515
20
3 2 2 1 1 3 1 3 3 1 3 2 1 3 1 3 1 1 3 1
3 2 1 1 3 1 1 3 2 2 3 3 1 1 3 1 3 3 1 1
Input:
Expected Output:
NO
Your Output:
YES
I think that problem H was really misplaced. Reading it afterward, it seems pretty straightforward, but I didn't try it during the contest because I thought it would have been much harder.
Also, making 1 1 a system test rather than a pretest for E really screwed a lot of people (including me) over.
Overall, it was a good contest!
MyCodeD
I passed both your above testcases. Not getting what's going wrong.
20
3 2 2 1 1 3 1 3 3 1 3 2 1 3 1 3 1 1 3 1
3 2 1 1 3 1 1 3 2 2 3 3 1 1 3 1 3 3 1 1
Expected yes
Thanks man.
In Problem B, I couldn't understand how AABBABAAAABABABAABAB can be achieved (the output is YES), as there are two successive B's and good strings are AB, AAB, AAAB, ..., so there must be at least one A between two successive B's, and 'B' is a bad string. I believe deletion of one character or substring is not allowed. Sorry if it is too dumb, but could anyone explain this to me? Thanks
The problem says "Choose any position of s1 and insert some good string in that position."
So we can insert AB into the middle of another AB which results in AABB
We can obtain the string AABB like this: 1) empty -> AB, 2) AB -> AABB (A + AB + B). So we get the string AABB.
Ohh I got it now
Thank you so much
// Super Idol的笑容 // 都没你的甜 // 八月正午的阳光 // 都没你耀眼 // 热爱105°C的你 // 滴滴清纯的蒸馏水
Now this is epic. Where is that from? I mean, why is everyone posting this in the comments?
It's in the solutions above? If you mean what does it mean then it's an epic beautiful song
I found a different, possibly simpler, solution to H (Zigu Zagu). As in the hints I always remove a maximal segment. There are four possibilities:
The endpoints of the segment are the same (e.g. we have ..1|101|1..). If we remove this segment we remove two pairs of consecutive 0s or 1s, but create one new pair of consecutive 0s or 1s
Its endpoints are different (e.g. ..0|01|1..). In this case we remove one pair of consecutive 0s and one pair of consecutive 1s, without creating any new pairs of consecutive 0s or 1s
One of its end points is at the end of the substring. Removing the segment simply removes one pair of consecutive 0s or 1s.
The segment is the whole remaining substring. We always finish like this.
This means that every segment of type 1 or 3 reduces either the numbers of pairs of 0s or the number of pairs of 1s by 1. This reduces the number of remaining maximal segments by 1. Every segment of type 2 reduces both the number of pairs of 0s and the number of pairs of 1s by one, hence reducing the number of maximal segments by 2. As such our sequence should be:
Remove all possible segments of type 2. As long as there are pairs of both 0s and 1s there will always be at least one segment of type 2; so this will take min(0 pairs, 1 pairs) steps
Remove all remaining type 1 and type 3 segments. This will take a number of steps equal to the number of remaining 0 or 1 pairs, which will be max(0 pairs, 1 pairs) - min(0 pairs, 1 pairs)
Remove the last segment.
This gives a total of max(0 pairs, 1 pairs) + 1 steps.
To implement this, do a pass through the string calculating the number of 0 pairs and 1 pairs before each element, and we can then trivially calculate the number 0 pairs and 1 pairs for each query, and hence the answer to each query.
See https://mirror.codeforces.com/contest/1672/submission/154753079 (Python)
Anyone interested can check my code 154764444 for problem F1 (easier to understand than editorial).
brrrrrrr
br
Can you explain your solution?
I took the next greater element for every element in A which will result in making the longest chain of substitutions which then results in the highest sadness. And for every cycle we have of length L, the number of swaps will be L-1.
Naming the function "brrrr" was key. Editorial missed it totally.
that's pretty cool
About A's tutorial, it seems the $$$\sum_{k=1}^na_i$$$ there is a mistake.
Thanks it is fixed.
Finally I got FST on problem E because I set the upperbound of binary search 4e9 but not 5e9, and I got WA on test 20. Anyway, it's a good contest with excellent problems.
I don't know why I get TLE on problem D 154740846. I think it's O(N). Can someone help me? Orz
I could have reached cm today if i didn't fail in (E) Main tests, (〒﹏〒)
Misprint: In the editorial of D, tutorial 1, case 3: "we delete an occurance of ai in S and decrement j". We should decrement i instead of j.
Does anyone have a segtree solution for H? I was stuck on how to correctly merge two segments together... And I guess there must be some smart way of doing so.
In the editorial of D, tutorial 2, I can't understand what this is saying: "As such, we can consider mapping elements of b into elements of a. More specifically, consider a mapping f where f is non-decreasing, bi=afi and we increment cfi by 1 for all i". Can anyone elaborate please?
Thanks in advance.
Nice contest, although I did not perform well.
Problem A is amazing. I use SG function to solve it, even though I felt that there must be a simpler solution during the contest. It turns out that my guess is right when I read tutorials and notice that I have missed the observation mentioned there.
It does not take me too much time to find a solution to Problem B, but I forget a corner case which leads to a WA.
It also needs some observation for problem C. I use almost the same idea as in tutorials. It is lucky that my first try is correct.
I get stuck in problem D from the beginning, and, until the end of contest. I find that somehow I am quite bad at such kind of problems. During the whole contest, I have never got any chance to get close to the idea in the tutorials.
I really get depressed when I see the top ones in the first page of standings, solving D within 15 minutes. This problem is an almost impossible task for me, while it is just like a warm up practice for them.
But now, what I can do is only to work much harder and practice more. There is no time for depression.
Permutation problems are irritating.
Instead of taking sadness from this round, take this: whenever there is a question about applying operations some number of times, try to think of applying the operations in reverse.
Last line of tutorial of problem H: shouldn't it be "if $$$a_{l-1}\neq a_l$$$" ?
It is changed. Thank you!
If you prefer command line, checkout CF Doctor for more flexibility and faster feedback.
If you are/were getting a WA/RE verdict on problems from this contest, you can get the smallest possible counter example for your submission on cfstress.com. To do that, click on the relevant problem's link below, add your submission ID, and edit the table (or edit compressed parameters) to increase/decrease the constraints.
If you are not able to find a counter example even after changing the parameters, reply to this thread (only till the next 7 days), with links to your submission and ticket(s).
My approach for F1 https://mirror.codeforces.com/contest/1672/submission/154768409
OperationForces xd
Is H too similar to this problem? 1329D - Dreamoon Likes Strings
Problem I is only *3000?? I think the difficulty is underestimated for such a code-heavy data structure problem.
By the way, ♪♪Super Idol的笑容 都没你的甜♪♪ XD. Did that song also get popular in Singapore?
It is because rainboy decides to deflate the rating numbers by only solving I.
I got "wrong answer expected NO, found YES [32nd token]" on problem D (test #2). Can I find this test case somehow?
I think there is a typo in editorial for D. For the line
Otherwise, we delete an occurance of ai in S and decrement j. If we cannot find ai in S, it is impossible to transform b to a.
Instead of decrement j, it should be decrement i.
It is changed. Thank you!
In editorial of F1: "Proof: (⇒) There exists a cycle decomposition of the graph that uses at least cnt1+1 cycles. Since a single element of 1 can only go to a single cycle, there exists a cycle without 1."
We are basically decomposing such that each cycle has only one occurrence of element 1. Why don't we need to do for elements 2, 3,...
Because we only want to prove that there exists a cycle without $$$1$$$? So we do not have to care about what happens to any other element.
F1 the Checker comment said "wrong answer B does not have the minimum possible sadness (test case 33)"
I Print
4 7 2 5 2 7 4
Why is it a wrong answer?
My Submission
Your array only has sadness $$$4$$$, while the maximum has sadness $$$5$$$.
4 7 2 5 2 7 4
4 2 2 5 7 7 4
7 2 2 5 7 4 4
7 2 5 2 7 4 4
7 2 5 4 7 4 2
thx
Dislike for referring to politics in task E
I am still confused about the graph representation of problem F1/F2. Why can we build the graph by
rep(x,1,n+1) al[arr[x]].pub(brr[x]);
in the sample solution of F2. As far as I understand, a permutation has a cycle representation, for example, the permutation
1 2 3 4 5 -> 1 3 5 2 4
can be represented by the cycles $$$(1)(2,4,5,3)$$$. But it seems that the construction in the solution is different from this kind. So far I couldn't understand where the idea comes from. Can anyone post some detailed tutorials or other related resources or problems? Edit: Deleted.
In problem F1, why is the lower limit on the number of swaps for every cycle of length x equal to (x-1)?
For example, if $$$x=5$$$, we can turn 1 2 3 4 5 into 2 3 4 5 1. And that costs at least 4 swaps.
OK, but why can't the number of swaps be less than x-1 in this example?
You can think of it as there being an edge from $$$a_i$$$ to $$$b_i$$$. If we let $$$b_i=a_{i+1}$$$, it will form a cycle. In each swap we can only put at most $$$1$$$ number into its right place, so we need at least $$$x-1$$$ swaps to turn $$$a$$$ into $$$b$$$.
can you please explain the construction part in F1?
Consider the following example: 1 2 3 1 2. It contains two cycles: (1 2 3) and (1 2).
Find all of these cycles and shift them left by 1, so the cycles after the shift will be (2 3 1) and (2 1) respectively.
Then put these cycles back in their original positions in the array, so the array becomes 2 3 1 2 1.
The proof by SpadeA261 isn't too correct
Take the cycle of size 2, in 1 swap we can move both numbers into its right place.
A correct proof is as follows.
There are $$$2$$$ types of swaps (pretend the trivial swap that swaps an element with itself doesn't exist): swapping two elements that lie in the same cycle, and swapping two elements that lie in different cycles.
In the first type, the number of cycles will always increase by $$$1$$$ (the cycle splits). In the second type, the number of cycles will always decrease by $$$1$$$ (the two cycles merge).
Anyways, we can see that the number of cycles can only increase by at most $$$1$$$ per swap. So when there is a single cycle of length $$$x$$$, we need at least $$$x-1$$$ moves to break it into $$$x$$$ cycles, each of length $$$1$$$.
F1 can easily be solved by a derangement of the array.
my solution