2173A-Sleeping Through Classes Idea & Solution: HHH666666
From the statement we can conclude that a class can be slept through $$$\textbf{if and only if}$$$:
- The class itself is not an important class.
- The class is not within $$$k$$$ classes after any important class.
Here's an $$$O(nk)$$$ approach:
We directly simulate the process: for each important class at position $$$i$$$, mark positions $$$i+1, i+2, \dots, \min(i+k, n)$$$ as “must stay awake”. After processing all important classes, count the classes that are neither important nor marked. This $$$O(nk)$$$ solution is acceptable under the constraints.
There's also an $$$O(n)$$$ solution:
For each unimportant class, we only need to look at the nearest important class before it. We maintain a variable $$$last$$$ storing the index of the most recent important class (or $$$-\infty$$$ if none has been seen yet). Then, for each $$$i$$$ from $$$1$$$ to $$$n$$$: if class $$$i$$$ is important, we set $$$last = i$$$; otherwise, if $$$last + k \lt i$$$, the class can be slept through and we add $$$1$$$ to the answer.
#include <bits/stdc++.h>
using namespace std;

int n, k;
string s;

// O(nk) simulation: a class can be slept through iff it is unimportant
// and none of the previous k classes is important.
void solve() {
    cin >> n >> k >> s;
    int ans = 0;
    for (int i = 0; i < n; i++) {
        if (s[i] == '0') {                          // unimportant class
            int ok = 1;
            for (int j = max(0, i - k); j < i; j++) {
                if (s[j] == '1') {                  // an important class within k before it
                    ok = 0;
                    break;
                }
            }
            ans += ok;
        }
    }
    cout << ans << "\n";
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    int T;
    cin >> T;
    while (T--) solve();
    return 0;
}
2173B-Niko's Tactical Cards Idea & Solution: HHH666666
Consider a simple greedy algorithm: For each turn, choose the option that gives the higher immediate score. That is, if our current score is $$$k$$$, it becomes $$$\max(k - a_i, b_i - k)$$$ after this turn. Why doesn't the algorithm work?
Try to improve the algorithm in Hint 1.
The greedy approach in Hint 1 fails because it assumes that if we know the maximum possible score after turn $$$i$$$ (denoted by $$$max_i$$$), we can determine $$$max_{i + 1}$$$ from it. However, this assumption is wrong and here is a counterexample:
$$$a = [2, 0], b = [1, 0]$$$
Here $$$max_1 = 1$$$ and $$$max_2 = 2$$$. The greedy approach takes the blue card in turn $$$1$$$. But to reach $$$max_2$$$, we must choose the red card in turn $$$1$$$, which turns our score into $$$-2$$$, and then choose the blue card in turn $$$2$$$. So in this example, $$$max_1$$$ does not give $$$max_2$$$.
We see that $$$max_1$$$ alone does not determine $$$max_2$$$ when we choose the blue card in the second turn. The reason is that choosing a blue card turns the score $$$k$$$ into $$$b_i - k$$$, which is maximized when the current score $$$k$$$ is as small as possible, so we also need to track the minimum possible score.
From this we can see that if we have both the maximum possible score $$$max_{i - 1}$$$ and the minimum possible score $$$min_{i - 1}$$$ after turn $$$i - 1$$$, we can determine $$$max_{i} = \max(max_{i - 1} - a_i, b_i - min_{i - 1})$$$, because these two expressions cover the best outcomes from the red and blue options respectively. Similarly, the new minimum $$$min_{i} = \min(min_{i - 1} - a_i, b_i - max_{i - 1})$$$ comes from either taking the red card from the previous minimum, or the blue card from the previous maximum. And from $$$max_{i}, min_{i}$$$ we can move forward to $$$max_{i + 1}, min_{i + 1}$$$ and so on.
Thus, we start with $$$max_0 = min_0 = 0$$$ and calculate each $$$max_i, min_i$$$ throughout the process using the formulas above. The answer is $$$max_n$$$. This runs in $$$O(n)$$$ time complexity.
2173C-Kanade's Perfect Multiples Idea & Solution: paulzrm
Consider the cases $$$n = 1$$$ and $$$n = 2$$$. What do you observe?
Try to determine the "first" element that must belong to $$$B$$$.
Assume the minimum element in $$$A$$$ is $$$x$$$. We now prove that $$$x$$$ must be included in $$$B$$$.
There are no elements smaller than $$$x$$$, so no element of $$$A$$$ other than $$$x$$$ itself divides $$$x$$$. If we do not put $$$x$$$ into $$$B$$$, the second rule is violated.
Thus we can determine the first element of $$$B$$$, and then check whether all its multiples $$$\le k$$$ are contained in $$$A$$$. We mark these multiples as "finished".
Now think greedily. We never need to process elements that are already marked "finished". Each time, we just find the smallest unfinished element in $$$A$$$, check it, and add it to $$$B$$$ if possible.
A loose upper bound on the running time is $$$O\bigl(n d(n) \log n\bigr)$$$, where $$$d(n)$$$ is the divisor function. However, this bound is rarely attained in practice: many operations are skipped thanks to the "finished" marks, so the algorithm runs much faster than this worst-case estimate.
2173D-Taiga's Carry Chains Idea: chen_zida Solution: paulzrm
What is the answer when $$$k \geq 32$$$?
We can use dynamic programming to solve it.
When $$$k \geq 32$$$, we can, at each step, add the smallest $$$t$$$ such that adding $$$2^{t}$$$ to the current $$$n$$$ produces a carry. After $$$k$$$ such steps, $$$n$$$ will become a power of $$$2$$$, and the answer will be $$$\operatorname{popcount}(n) + k - 1$$$.
Now we observe that the total number of carries in the end is $$$\operatorname{popcount}(n) + k - \operatorname{popcount}(n')$$$, where $$$n'$$$ is the final value of $$$n$$$ after all operations. To maximize this quantity, we want to minimize the final $$$\operatorname{popcount}(n')$$$.
To this end, let $$$dp[i][j][c]$$$ denote the minimum possible number of $$$1$$$ bits among the $$$i$$$ least significant bits of $$$n'$$$, having used $$$j$$$ operations in total, where $$$c \in \{0,1\}$$$ indicates whether there is a carry out of the $$$(i-1)$$$-th bit.
For the $$$i$$$-th least significant bit, we have two options: either select this bit (i.e., use one operation on this bit) or not select it.
Let $$$n_i$$$ be the $$$i$$$-th bit of the original $$$n$$$, and let $$$t_i$$$ be the sum at this bit after adding everything in. If we select this bit, then $$$t_i = 1 + c + n_i$$$; otherwise, $$$t_i = c + n_i$$$. From $$$t_i$$$, we can determine whether this bit produces a carry to the next position (the next carry is $$$c' = \lfloor t_i / 2 \rfloor$$$) and whether the $$$i$$$-th bit of $$$n'$$$ is $$$1$$$ or $$$0$$$ (the resulting bit is $$$t_i \bmod 2$$$), which allows us to perform the DP transition.
The time complexity is $$$O((\log_2 n)^2)$$$.
2173E-Shiro's Mirror Duel Idea & Solution: paulzrm It's my favourite problem :-)
$$$\lfloor 2.5n\rfloor$$$ is the expected number of operations; we add $$$800$$$ more to the limit to ensure that every correct solution passes.
Think about how to solve it within $$$3n$$$ queries, and how to improve on that.
We split the process into two phases. In the first phase, we make every pair $$$x$$$ and $$$n - x + 1$$$ symmetric; in the second phase, we turn the permutation into $$$1, 2, \dots, n$$$.
Let $$$pos[x]$$$ be the current position of $$$x$$$. We iterate $$$x$$$ from $$$1$$$ to $$$n$$$ in increasing order. If $$$pos[x] + pos[n + 1 - x] \neq n + 1$$$, then $$$x$$$ and $$$n + 1 - x$$$ are not symmetric. In this case, we query $$$(pos[x],\, n - pos[n + 1 - x] + 1)$$$. No matter whether the interactive judge mirrors the array and then swaps or not, this operation will always make $$$x$$$ and $$$n + 1 - x$$$ symmetric.
Next, we move these groups to their correct target positions in order. Suppose the target positions of a pair are $$$i$$$ and $$$n - i + 1$$$, while it is currently at $$$j$$$ and $$$n - j + 1$$$. We query $$$(i, j)$$$; this will surely place either the pair itself or its mirror into the correct positions.
Now we compute the expectation. Let $$$T$$$ be the expected number of queries spent in this second phase for a single group. Each attempt costs one query; with probability $$$1/2$$$ the group is placed correctly at the cost of one additional query, and with probability $$$1/2$$$ it is mirrored into another group's target positions, in which case we have spent the query and still expect $$$T$$$ more. Thus $$$T = 1 + \frac{1}{2} \cdot 1 + \frac{1}{2}(T + 1)$$$, which solves to $$$T = 4$$$.
There are $$$n/2$$$ groups, so this phase needs $$$2n$$$ queries in expectation. The first phase uses at most $$$n/2$$$ queries (at most one per group), so the total expected number of queries is at most $$$5n/2$$$.
2173F-Isla's Memory Thresholds Idea & Solution: paulzrm
Consider the process. First note that, because the array $$$a$$$ is non-increasing, the length of each segment whose sum first reaches $$$\ge x$$$ and is then cleared is nondecreasing.
We use the square-root decomposition idea. We fix a threshold $$$B$$$ that separates “short” segments from “long” ones. For segments of length at most $$$B$$$, we handle them using precomputed data; for segments of length greater than $$$B$$$, we use a slower direct procedure.
Since $$$x \le n$$$, we can build a table $$$f[i][\text{len}]$$$ that records the rightmost starting position of a segment of length $$$\text{len}$$$ whose sum is at least $$$i$$$. We only need to build this table for $$$\text{len} \le B$$$.
When answering a query, we simply enumerate $$$\text{len} = 1, 2, \dots, B$$$ and, starting from the current left endpoint, quickly skip over and process all segments of length $$$\text{len}$$$. The time complexity for this part is $$$O(B)$$$ per query.
For segments of length $$$ \gt B$$$, we directly binary search the ending position of each such segment. Since every such segment has length at least $$$B$$$, we make at most $$$n/B$$$ jumps, so this part runs in $$$O\bigl((n/B)\log n\bigr)$$$ time.
If we choose $$$B = \sqrt{n\log n}$$$, the two parts balance and the total time complexity becomes $$$O\bigl(n\sqrt{n\log n}\bigr)$$$.