2173A-Sleeping Through Classes Idea & Solution: HHH666666
From the statement we can conclude that a class can be slept through $$$\textbf{if and only if}$$$:
- The class itself is not an important class.
- The class is not within $$$k$$$ classes after any important class.
Here's an $$$O(nk)$$$ approach:
We directly simulate the process: for each important class at position $$$i$$$, mark positions $$$i+1, i+2, \dots, \min(i+k, n)$$$ as “must stay awake”. After processing all important classes, count the classes that are neither important nor marked. This $$$O(nk)$$$ solution is acceptable under the constraints.
There's also an $$$O(n)$$$ solution:
For each unimportant class, only the nearest important class before it matters. So we maintain a variable $$$last$$$ storing the index of the most recent important class (or $$$-\infty$$$ if none has been seen yet). Then, for each $$$i$$$ from $$$1$$$ to $$$n$$$: if class $$$i$$$ is important, we set $$$last = i$$$; otherwise, if $$$last + k \lt i$$$, the class can be slept through and we add $$$1$$$ to the answer.
#include<bits/stdc++.h>
using namespace std;
int n,k;string s;
void solve(){
cin>>n>>k>>s;
int ans=0;
for(int i=0;i<n;i++){
if(s[i]=='0'){
int ok=1;
for(int j=max(0,(int)i-k);j<i;j++){
if(s[j]=='1'){
ok=0;
break;
}
}
ans+=ok;
}
}
cout<<ans<<endl;
}
int main(){
ios_base::sync_with_stdio(false);
cin.tie(0),cout.tie(0);
int T;cin>>T;
for(;T--;)solve();
return 0;
}
2173B-Niko's Tactical Cards Idea & Solution: HHH666666
Consider a simple greedy algorithm: For each turn, choose the option that gives the higher immediate score. That is, if our current score is $$$k$$$, it becomes $$$\max(k - a_i, b_i - k)$$$ after this turn. Why doesn't the algorithm work?
Try to improve the algorithm in Hint 1.
The greedy approach in Hint 1 fails because it assumes that if we know the maximum possible score after turn $$$i$$$ (denoted by $$$max_i$$$), we can determine $$$max_{i + 1}$$$ from it. However, this assumption is wrong and here is a counterexample:
$$$a = [2, 0], b = [1, 0]$$$
Here $$$max_1 = 1$$$ and $$$max_2 = 2$$$. The greedy approach takes the blue card in turn $$$1$$$. But to reach $$$max_2$$$, we must choose the red card in turn $$$1$$$, which turns our score into $$$-2$$$, and then choose the blue card in turn $$$2$$$. So in this example, $$$max_1$$$ does not give $$$max_2$$$.
We see that $$$max_1$$$ alone does not let us find $$$max_2$$$ when the blue card is chosen in the second turn. The reason: to maximize $$$b_i - k$$$, we want the current score $$$k$$$ to be as small as possible, so for the blue card we need the minimum possible score rather than the maximum.
From this we can see that if we have both the maximum possible score $$$max_{i - 1}$$$ and the minimum possible score $$$min_{i - 1}$$$ after turn $$$i - 1$$$, we can determine $$$max_{i} = \max(max_{i - 1} - a_i, b_i - min_{i - 1})$$$, because these two expressions cover the best outcomes from the red and blue options respectively. Similarly, the new minimum $$$min_{i} = \min(min_{i - 1} - a_i, b_i - max_{i - 1})$$$ comes from either taking the red card from the previous minimum, or the blue card from the previous maximum. And from $$$max_{i}, min_{i}$$$ we can move forward to $$$max_{i + 1}, min_{i + 1}$$$ and so on.
Thus, we start with $$$max_0 = min_0 = 0$$$ and calculate each $$$max_i, min_i$$$ throughout the process using the formulas above. The answer is $$$max_n$$$. This runs in $$$O(n)$$$ time complexity.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int MAXN = 1e5 + 10;
int n;
int a[MAXN], b[MAXN];
int main() {
int t; cin >> t;
while (t--) {
int n; cin >> n;
for (int i = 1; i <= n; ++i) cin >> a[i];
for (int i = 1; i <= n; ++i) cin >> b[i];
ll mx = 0, mn = 0;
for (int i = 1; i <= n; ++i) {
ll new_mx = max(mx - a[i], b[i] - mn);
ll new_mn = min(mn - a[i], b[i] - mx);
mx = new_mx, mn = new_mn;
}
cout << mx << endl;
}
return 0;
}
2173C-Kanade's Perfect Multiples Idea & Solution: paulzrm
Consider the cases $$$n = 1$$$ and $$$n = 2$$$. What do you observe?
Try to determine the "first" element that must belong to $$$B$$$.
Assume the minimum element in $$$A$$$ is $$$x$$$. We now prove that $$$x$$$ must be included in $$$B$$$.
There are no elements smaller than $$$x$$$, which means no element of $$$A$$$ other than $$$x$$$ itself is a divisor of $$$x$$$. If we do not put $$$x$$$ into $$$B$$$, then the second rule would be violated.
Thus we can determine the first element of $$$B$$$, and then check whether all its multiples $$$\le k$$$ are contained in $$$A$$$. We mark these multiples as "finished".
Now think greedily. We never need to process elements that are already marked "finished". Each time, we find the smallest unfinished element in $$$A$$$, check it, and add it to $$$B$$$ if possible. For instance, with $$$A = \{2, 3, 4, 6\}$$$ and $$$k = 6$$$: we add $$$2$$$ to $$$B$$$ and mark $$$2, 4, 6$$$ finished, then add the smallest unfinished element $$$3$$$ and mark $$$3, 6$$$; now everything is finished and $$$B = \{2, 3\}$$$.
A loose upper bound on the running time is $$$O\bigl(n d(n) \log n\bigr)$$$, where $$$d(n)$$$ is the divisor function. However, this bound is rarely attained in practice: many operations are skipped thanks to the "finished" marks, so the algorithm runs much faster than this worst-case estimate.
#include<bits/stdc++.h>
using namespace std;
const int mxn=2e5+5;
set<int>s;
map<int,int>u;
int n,a[mxn],k;
void solve(){
s.clear(),u.clear();
cin>>n>>k;
for(int i=1;i<=n;i++)cin>>a[i],s.insert(a[i]),u[a[i]]=1;
vector<int>ans;ans.clear();
while(s.size()){
int t=*s.begin();
ans.push_back(t);
s.erase(s.begin());
for(int cur=t;cur<=k;cur+=t){
if(!u[cur]){
cout<<"-1\n";
return;
}
auto it=s.find(cur);
if(it!=s.end())s.erase(it);
}
}
cout<<ans.size()<<'\n';
for(int i:ans)cout<<i<<' ';cout<<'\n';
}
int main(){
ios_base::sync_with_stdio(false);
cin.tie(0),cout.tie(0);
int T=1;
cin>>T;
for(;T--;)solve();
return 0;
}
2173D-Taiga's Carry Chains Idea: chen_zida Solution: paulzrm
What is the answer when $$$k \geq 32$$$?
We can use dynamic programming to solve it.
When $$$k \geq 32$$$, at each step we can add $$$2^{t}$$$ for the smallest $$$t$$$ such that the addition produces a carry; this $$$t$$$ is the position of the lowest set bit of the current $$$n$$$. The position of the lowest set bit strictly increases with every such step, so after at most $$$32$$$ steps $$$n$$$ becomes a power of $$$2$$$, and adding a power of $$$2$$$ to itself yields again a power of $$$2$$$. The final popcount is therefore $$$1$$$, and the answer is $$$\operatorname{popcount}(n) + k - 1$$$.
Now we observe that the total number of carries at the end is $$$\operatorname{popcount}(n) + k - \operatorname{popcount}(n')$$$, where $$$n'$$$ is the final value of $$$n$$$ after all operations: every operation adds one set bit, and every carry merges two set bits into one, so the number of carries equals the total number of set bits lost. To maximize this quantity, we want to minimize the final $$$\operatorname{popcount}(n')$$$.
To this end, let $$$dp[i][j][c]$$$ denote the optimal value we can obtain after processing the $$$i$$$ least significant bits, having used $$$j$$$ operations in total, where $$$c \in \{0, 1\}$$$ indicates whether there is a carry from the $$$(i-1)$$$-th bit.
For the $$$i$$$-th least significant bit, we have two options: either select this bit (i.e., use one operation on this bit) or not select it.
Let $$$n_i$$$ be the $$$i$$$-th bit of the original $$$n$$$, and let $$$t_i$$$ be the sum at this bit after adding everything in. If we select this bit, then $$$t_i = 1 + c + n_i$$$; otherwise, $$$t_i = c + n_i$$$. From $$$t_i$$$, we can determine whether this bit produces a carry to the next position (the next carry is $$$c' = \lfloor t_i / 2 \rfloor$$$) and whether the $$$i$$$-th bit of $$$n'$$$ is $$$1$$$ or $$$0$$$ (the resulting bit is $$$t_i \bmod 2$$$), which allows us to perform the DP transition.
The time complexity is $$$O((\log_2 n)^2)$$$.
#include<bits/stdc++.h>
#define int long long
using namespace std;
const int inf=1e9+7;
const int mxb=64,mxk=32;
int dp[mxb+2][mxk+2][2];
void Min(int&x,int y){x=min(x,y);}
void solve(){
int n,k;cin>>n>>k;
int pc=__builtin_popcountll(n);// n is 64-bit here, so use the ll variant
if(n==0){
cout<<max(0ll,k-1)<<'\n';
return;
}
if(k>=32){
cout<<k+pc-1<<'\n';
return;
}
for(int i=0;i<=mxb;i++)for(int j=0;j<=k;j++)dp[i][j][0]=dp[i][j][1]=inf;
dp[0][0][0]=0;
for(int i=0;i<mxb;i++){
int ni=(n>>i)&1ll;
for(int u=0;u<=k;u++)for(int c=0;c<=1;c++){
int cur=dp[i][u][c];
if(cur>=inf)continue;
{
int sum=ni+c,bit=sum&1,nc=sum>>1;
Min(dp[i+1][u][nc],cur+bit);
}
if(u+1<=k){
int sum=ni+1+c,bit=sum&1,nc=sum>>1;
Min(dp[i+1][u+1][nc],cur+bit);
}
}
}
int bst=inf;
for(int u=0;u<=k;u++)for(int c=0;c<=1;c++){
int val=dp[mxb][u][c];
if(val<inf)Min(bst,val+c);
}
int ans=k+pc-bst;
cout<<ans<<'\n';
}
signed main(){
ios_base::sync_with_stdio(false);
cin.tie(0),cout.tie(0);
int T;cin>>T;
for(;T--;)solve();
return 0;
}
2173E-Shiro's Mirror Duel Idea & Solution: paulzrm It's my favourite problem :-)
$$$\lfloor 2.5n\rfloor$$$ is the expected number of operations. We add $$$800$$$ more to ensure that every correct solution will pass.
Think about how to solve it within $$$3n$$$ queries, and how we can improve on that.
We split the process into two phases. In the first phase, we make every pair $$$x$$$ and $$$n - x + 1$$$ symmetric; in the second phase, we turn the permutation into $$$1, 2, \dots, n$$$.
Let $$$pos[x]$$$ be the current position of $$$x$$$. We iterate $$$x$$$ from $$$1$$$ to $$$n$$$ in increasing order. If $$$pos[x] + pos[n + 1 - x] \neq n + 1$$$, then $$$x$$$ and $$$n + 1 - x$$$ are not symmetric, and we query $$$(pos[x],\, n - pos[n + 1 - x] + 1)$$$. Whether or not the interactive judge mirrors the array before performing the swap, this operation always makes $$$x$$$ and $$$n + 1 - x$$$ symmetric. When $$$n$$$ is odd, we should first move $$$(n+1)/2$$$ to the middle.
Next, we move these groups to their correct target positions in order. Suppose the target positions of a pair are $$$i$$$ and $$$n - i + 1$$$, while it is currently at $$$j$$$ and $$$n - j + 1$$$. We query $$$(i, j)$$$; this will surely place either the pair itself or its mirror into the correct positions.
Now we compute the expectation. Let $$$T$$$ be the expected number of queries spent on a single group in this second phase. After we query once, with probability $$$1/2$$$ the group is placed correctly, at an expected cost of $$$1$$$ more query to restore it fully; with probability $$$1/2$$$ it is mirrored into another group's target positions, in which case we expect $$$T + 1$$$ further queries. Thus $$$T = 1 + \frac{1}{2} \cdot 1 + \frac{1}{2}(T + 1)$$$, which solves to $$$T = 4$$$.
There are $$$n/2$$$ groups, so this phase needs $$$2n$$$ queries in expectation. The first phase uses exactly $$$n/2$$$ queries (one per group), so the total expected number of queries is $$$5n/2$$$.
#include<bits/stdc++.h>
using namespace std;
const int mxn=40004;
int n,a[mxn],pos[mxn];
int mirror(int x){
return n-x+1;
}
pair<int,int>ask(int x,int y){
cout<<"? "<<x<<' '<<y<<endl;
fflush(stdout);
int u,v;cin>>u>>v;
return {u,v};
}
void solve(){
cin>>n;
for(int i=1;i<=n;i++){
cin>>a[i];
pos[a[i]]=i;
}
if(n%2==1){
int mid=(n+1)/2;
while(pos[mid]!=mid){
auto p=ask(mid,pos[mid]);
int u=p.first,v=p.second;
swap(a[u],a[v]);
pos[a[u]]=u,pos[a[v]]=v;
}
}
for(int i=1;i<=n/2;i++){
int x=pos[i],y=pos[mirror(i)];
if(x+y!=n+1){//not mirror
auto p=ask(x,mirror(y));
int u=p.first,v=p.second;
swap(a[u],a[v]);
pos[a[u]]=u,pos[a[v]]=v;
}
}
for(int i=1;i<=n/2;i++){
while(pos[i]!=i or pos[mirror(i)]!=mirror(i)){
if(pos[i]!=i){
auto p=ask(i,pos[i]);
int u=p.first,v=p.second;
swap(a[u],a[v]);
pos[a[u]]=u,pos[a[v]]=v;
}else{
auto p=ask(mirror(i),pos[mirror(i)]);
int u=p.first,v=p.second;
swap(a[u],a[v]);
pos[a[u]]=u,pos[a[v]]=v;
}
}
}
cout<<"!"<<endl;
fflush(stdout);
}
int main(){
int T=1;
cin>>T;
for(;T--;)solve();
return 0;
}
2173F-Isla's Memory Thresholds Idea & Solution: paulzrm
Consider the process. First note that, because the array $$$a$$$ is non-increasing, the successive segments, each cleared as soon as its sum first reaches $$$\ge x$$$, have nondecreasing lengths.
We use the square-root decomposition idea. We fix a threshold $$$B$$$ that separates “short” segments from “long” ones. For segments of length at most $$$B$$$, we handle them using precomputed data; for segments of length greater than $$$B$$$, we use a slower direct procedure.
Since there are at most $$$q$$$ distinct values of $$$x$$$, we can first collect all queries offline and discretize these values. Then we build a table $$$\text{f[len][i]}$$$ that records the rightmost starting position of a segment of length $$$\text{len}$$$ whose sum is at least the $$$i$$$-th smallest discretized value of $$$x$$$.
When answering a query, we simply enumerate $$$\text{len} = 1, 2, \dots, B$$$ and, starting from the current left endpoint, quickly skip over and process all segments of length $$$\text{len}$$$. The time complexity for this part is $$$O(B)$$$ per query.
For segments of length $$$\gt B$$$, we directly binary search the ending position of each such segment. Since every such segment has length greater than $$$B$$$, we make at most $$$n/B$$$ jumps per query, so this part runs in $$$O\bigl((n/B)\log n\bigr)$$$ time.
If we choose $$$B = \sqrt{n\log n}$$$, the two parts balance and the total time complexity becomes $$$O\bigl(n\sqrt{n\log n}\bigr)$$$.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int mxn=2e5+5;
const int B=512;
int T,f[B+3][mxn],n,a[mxn],q;
ll sum[mxn],N=1000000001;
int l_[mxn],r_[mxn],x_[mxn];
int id[mxn];
void solve(){
cin>>n>>q;
vector<int>v;v.clear();
for(int i=1;i<=n;i++)
cin>>a[i],
sum[i]=sum[i-1]+a[i];
for(int i=1;i<=q;i++)
cin>>l_[i]>>r_[i]>>x_[i],
v.push_back(x_[i]);
v.push_back(0);
v.push_back(1000000008);
sort(v.begin(),v.end());
v.erase(unique(v.begin(),v.end()),v.end());
for(int i=1;i<=q;i++)
id[i]=lower_bound(v.begin(),v.end(),x_[i])-v.begin();
for(int i=1;i<=B;i++){
int k=0;
for(int j=n-i;j>=0;j--)
for(;v[k]<=min(sum[j+i]-sum[j],N);k++)
f[i][k]=j+1;
}
for(int t=1;t<=q;t++){
int l,r,x,cnt=0;
l=l_[t],r=r_[t],x=x_[t];
for(int i=1;i<=B;i++)
if(f[i][id[t]]>=l){
int w=min(r-l+1,f[i][id[t]]-l+i)/i;
cnt+=w;
l+=w*i;
}l--;
while(1){
if(sum[r]-sum[l]<x){
cout<<cnt<<' '<<sum[r]-sum[l]<<'\n';
break;
}cnt++;
int lo=l-1,hi=n+1,md;
ll req=sum[l]+x;
for(;lo<hi-1;){
md=lo+hi>>1;
if(sum[md]<req)lo=md;
else hi=md;
}
l=hi;
}
}
for(int i=0;i<max(n,(int)v.size())+3;i++)sum[i]=0;
for(int j=0;j<B+3;j++)for(int i=0;i<max(n,(int)v.size())+3;i++)f[j][i]=0;
}
signed main(){
ios_base::sync_with_stdio(0);
cin.tie(0),cout.tie(0);
T=1;cin>>T;
while(T--)solve();
return 0;
}
There exists an $$$O(n\sqrt{n})$$$ solution.