Hello, Codeforces!
Sorry for the late editorial. Our national olympiad just finished and we were sorting everything out.
We also apologize for weak tests on D that made an incorrect solution pass. Many testers submitted the same solution and we mistakenly thought that it was correct given the low bound on the number of batteries. Still, we hope that the contest was a fun and valuable experience for all contestants.
The editorial was prepared by efishel and me. If you have any questions about the problems or the solutions, please ask in the comments.
2155A - El Fucho
When two teams in the same group face off against each other, exactly one of them stays in the group.
The answer for all $$$n$$$ is $$$2n-2$$$.
Let's consider the winners' group first. Note that exactly one team drops to the losers' bracket for every match in this group. Only one team is left in the winners' group right before the end, so $$$n-1$$$ matches must have been played in the winners' group right before the final match.
Similar logic applies to the losers' group. $$$n-1$$$ teams in total drop down to the losers' group over the course of the tournament. For every match played between two teams in this group, exactly one team is eliminated from the tournament. A single team is left in this group right before the final match, so $$$n-2$$$ matches must have been played in the losers' group right before the final match.
Finally, we add the final match to our calculation, bringing the total to $$$(n-1) + (n-2) + 1 = 2n-2$$$ matches.
#include <bits/stdc++.h>
using namespace std;
int main() {
    int n, t;
    cin >> t;
    while (t--) {
        cin >> n;
        cout << 2*n-2 << "\n";
    }
}
Alternatively, we can simulate the process directly.
Suppose there are $$$a$$$ teams in the winners' group and $$$b$$$ teams in the losers' group right before a round.
First, $$$\left \lfloor \frac{b}{2} \right \rfloor$$$ matches are played among the teams in the losers' group, so $$$\left \lfloor \frac{b}{2} \right \rfloor$$$ teams are eliminated in this step. We subtract this number from $$$b$$$ and add $$$\left \lfloor \frac{b}{2} \right \rfloor$$$ to the total number of matches played.
Then, $$$\left \lfloor \frac{a}{2} \right \rfloor$$$ matches are played among the teams in the winners' group, so $$$\left \lfloor \frac{a}{2} \right \rfloor$$$ of them drop down to the losers' group. We subtract this number from $$$a$$$, add it to $$$b$$$, and add $$$\left \lfloor \frac{a}{2} \right \rfloor$$$ to the total number of matches played.
After simulating $$$\mathcal{O}(\log (n))$$$ rounds as described above, each bracket is left with a single team, so we add $$$1$$$ to our total for the final match.
Time complexity: $$$\mathcal{O}(\log (n))$$$
#include <bits/stdc++.h>
using namespace std;
int main() {
    int n, t, winners, losers, ans;
    cin >> t;
    while (t--) {
        cin >> n;
        winners = n;
        losers = 0;
        ans = 0;
        while (max(losers, winners) > 1) {
            ans += losers/2;           // losers' matches eliminate teams
            losers = (losers+1)/2;     // an odd team out gets a bye
            ans += winners/2;          // winners' matches send teams down
            losers += winners/2;
            winners = (winners+1)/2;   // an odd team out gets a bye
        }
        ans++;                         // the grand final
        cout << ans << "\n";
    }
}
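As an added sanity check (not part of the original editorial), a short Python sketch comparing the closed form $$$2n-2$$$ against the round-by-round simulation described above:

```python
def formula(n):
    # Closed form: (n-1) winners' matches + (n-2) losers' matches + 1 final.
    return 2 * n - 2

def simulate(n):
    # Round-by-round simulation of both brackets, mirroring the C++ solution.
    winners, losers, matches = n, 0, 0
    while max(winners, losers) > 1:
        matches += losers // 2          # losers' matches eliminate teams
        losers = (losers + 1) // 2      # an odd team out gets a bye
        matches += winners // 2         # winners' matches send teams down
        losers += winners // 2
        winners = (winners + 1) // 2
    return matches + 1                  # the grand final

assert all(formula(n) == simulate(n) for n in range(2, 1000))
```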
2155B - Abraham's Great Escape
There is exactly one value of $$$k$$$ that doesn't work
Two arrows facing each other trap Abraham forever.
We claim that the only value of $$$k$$$ for which the answer is $$$\texttt{NO}$$$ is $$$k = n^2-1$$$.
If $$$k = n^2-1$$$ were achievable, Abraham would have to be trapped forever from exactly $$$n^2-(n^2-1) = 1$$$ starting cell. However, the arrow in that single cell must point either off the grid or towards a cell from which Abraham eventually escapes, so this situation is impossible and we print $$$\texttt{NO}$$$.
All other values of $$$k$$$ are achievable with the following construction:
Fill the cells of the grid with $$$\texttt{U}$$$, going from left to right and from top to bottom. You should place $$$k$$$ of this letter in total.
Fill every cell that doesn't already have a $$$\texttt{U}$$$ and isn't in the last row with a $$$\texttt{D}$$$. (There may be no such cells.)
Fill every remaining cell in the last row with an $$$\texttt{R}$$$, except for the bottom-right cell, which gets an $$$\texttt{L}$$$. (Since $$$k \le n^2-2$$$, at least one $$$\texttt{R}$$$ and one $$$\texttt{L}$$$ are placed.)
This works because, if Abraham steps on a $$$\texttt{U}$$$, he will be pushed onto another $$$\texttt{U}$$$, and so on until he leaves the board.
On the other hand, if he steps on a $$$\texttt{D}$$$, he is carried all the way down to the $$$\texttt{R}$$$ and $$$\texttt{L}$$$ arrows in the last row, which bounce him back and forth forever.
Time complexity: $$$\mathcal{O}(n^2)$$$
t = int(input())
for q in range(t):
    n, k = (int(x) for x in input().split())
    if k == n*n-1:
        print("NO")
    else:
        print("YES")
        for i in range(n):
            s = ""
            for j in range(n):
                if k > 0:
                    s += "U"
                    k -= 1
                elif i == n-1 and j == n-1:
                    s += "L"
                elif i == n-1:
                    s += "R"
                else:
                    s += "D"
            print(s)
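A small Python checker (an addition, not part of the original solution) that builds the grid exactly as above and verifies, by following the arrows from every starting cell, that Abraham escapes from exactly $$$k$$$ starting cells:

```python
def build(n, k):
    # The construction from the solution: k U's, then D's, then R...RL in the last row.
    grid, left = [], k
    for i in range(n):
        row = ""
        for j in range(n):
            if left > 0:
                row += "U"; left -= 1
            elif i == n - 1 and j == n - 1:
                row += "L"
            elif i == n - 1:
                row += "R"
            else:
                row += "D"
        grid.append(row)
    return grid

def escapes(grid, i, j):
    # Follow the arrows; revisiting a cell means Abraham is trapped in a loop.
    n = len(grid)
    seen = set()
    while 0 <= i < n and 0 <= j < n:
        if (i, j) in seen:
            return False
        seen.add((i, j))
        di, dj = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}[grid[i][j]]
        i, j = i + di, j + dj
    return True

def count_escaping(n, k):
    g = build(n, k)
    return sum(escapes(g, i, j) for i in range(n) for j in range(n))

for n in range(2, 6):
    for k in range(n * n + 1):
        if k == n * n - 1:
            continue
        assert count_escaping(n, k) == k
```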
2155C - Cloaks of the Ancient Wizards
What's the maximum value that $$$|a_{i+1} - a_i|$$$ can attain for any $$$i$$$?
Bound the number of possible cape arrangements
Suppose that Harry's list corresponds to at least one arrangement of capes. The main claim is that any two consecutive values in his list differ by at most $$$1$$$.
Note that when Harry walks from position $$$i$$$ to position $$$i+1$$$, the visibility of wizards $$$1$$$ through $$$i-1$$$ and $$$i+2$$$ through $$$n$$$ is unaffected, so there are four possible situations regarding the other two wizards:
Wizards $$$i$$$ and $$$i+1$$$ both wear their capes to their left. Wizard $$$i+1$$$ becomes visible and wizard $$$i$$$ can still be seen from position $$$i+1$$$, so $$$a_{i+1}-a_i = 1$$$.
Wizards $$$i$$$ and $$$i+1$$$ both wear their capes to their right. Wizard $$$i+1$$$ remains visible while wizard $$$i$$$ becomes invisible from position $$$i+1$$$, so $$$a_{i+1}-a_i = -1$$$.
Wizard $$$i$$$ wears his cape to his left while wizard $$$i+1$$$ wears his to his right. Both wizards are still visible from position $$$i+1$$$, so $$$a_{i+1}-a_i = 0$$$.
Wizard $$$i$$$ wears his cape to his right while wizard $$$i+1$$$ wears his to his left. Wizard $$$i+1$$$ becomes visible, but wizard $$$i$$$ becomes invisible when moving to position $$$i+1$$$, so $$$a_{i+1}-a_i = 0$$$.
In summary, if both wizards wear their capes on the same side, the corresponding entries in Harry's list differ by exactly $$$1$$$ in absolute value; otherwise, they are equal. Therefore, as long as $$$|a_{i+1}-a_i| \lt 2$$$ for all $$$1 \le i \lt n$$$, the list tells us for every adjacent pair whether the two wizards wear their capes the same way, so choosing how wizard $$$1$$$ wears his cape determines the entire arrangement (which might still be invalid).
Thus, there are at most two possible arrangements for the wizards' capes. We can construct both of them and check whether they are consistent with Harry's list.
Time complexity: $$$\mathcal{O}(n)$$$
def validar(x, n, a):
    flag = False
    visibles = 1
    for i in range(2, n+1):
        if x[i] == 1:
            visibles += 1
    if visibles == a[1]:
        flag = True
    for i in range(1, n):
        if x[i] == 1 and x[i+1] == 1:
            visibles -= 1
        elif x[i] == 0 and x[i+1] == 0:
            visibles += 1
        if a[i+1] != visibles:
            flag = False
            break
    return flag

t = int(input())
for i in range(t):
    sol1 = [0, 0]
    sol2 = [0, 1]
    flag = True
    cont = 0
    n = int(input())
    arr = [0]
    arr.extend(map(int, input().split()))
    for j in range(1, n):
        if arr[j+1]-arr[j] > 1:
            flag = False
            break
        if arr[j+1]-arr[j] == 0:
            sol1.append(1-sol1[j])
            sol2.append(1-sol2[j])
        else:
            sol1.append(sol1[j])
            sol2.append(sol2[j])
    if not flag:
        print(0)
        continue
    if validar(sol1, n, arr):
        cont += 1
    if validar(sol2, n, arr):
        cont += 1
    print(cont)
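For small $$$n$$$, a brute force over all $$$2^n$$$ cape arrangements (an added sanity check, using the same visibility rules that `validar` above encodes, with `x[i] = 1` marking one of the two cape sides) confirms the claim that every achievable list admits at most two arrangements:

```python
from itertools import product
from collections import Counter

def harry_list(x):
    # x is a 0-indexed tuple of bits, one per wizard.
    # From position 1, wizard 1 is always visible; wizard i > 1 is visible iff its bit is 1.
    n = len(x)
    vis = 1 + sum(x[1:])
    a = [vis]
    for i in range(n - 1):                  # move from position i+1 to i+2
        if x[i] == 1 and x[i + 1] == 1:
            vis -= 1
        elif x[i] == 0 and x[i + 1] == 0:
            vis += 1
        a.append(vis)
    return tuple(a)

for n in range(1, 10):
    counts = Counter(harry_list(x) for x in product((0, 1), repeat=n))
    # No list is produced by more than two distinct arrangements.
    assert max(counts.values()) <= 2
```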
2155D - Batteries
Place the batteries in order on a circumference.
If there are $$$a$$$ working batteries, can we bound the distance between some pair of them in terms of $$$n$$$ and $$$a$$$?
Arrange the batteries in clockwise order on a circumference such that the arc length between any two consecutive batteries is $$$1$$$, and let the indices of the working batteries be $$$1 \le b_1 \lt b_2 \lt \dots \lt b_a \le n$$$.
Let the distance between two working batteries $$$b_i$$$ and $$$b_{i+1}$$$ be $$$b_{i+1}-b_i$$$ if $$$i \lt a$$$ and $$$b_1+n-b_a$$$ if $$$i = a$$$ (taking $$$b_{a+1}$$$ to mean $$$b_1$$$). Note that this matches the length of a circle arc between $$$b_i$$$ and $$$b_{i+1}$$$ that doesn't contain any other working battery.
Observe that the sum of the distances between $$$b_i$$$ and $$$b_{i+1}$$$ for $$$1 \le i \le a$$$ (indices taken $$$\bmod a$$$) is exactly $$$n$$$, since the corresponding circle arcs make up the entire circumference. Since there are $$$a$$$ pairs of working batteries, by the Pigeonhole Principle some distance is at most $$$\left \lfloor \frac{n}{a} \right \rfloor$$$.
Consequently, the following strategy works: for every $$$i$$$ from $$$1$$$ to $$$\left \lfloor \frac{n}{a} \right \rfloor$$$, and for every $$$j$$$ from $$$1$$$ to $$$n$$$, query batteries $$$j$$$ and $$$(j+i) \bmod n$$$ (if $$$(j+i) \bmod n = 0$$$, query battery $$$n$$$ instead), which is equivalent to querying battery $$$j$$$ and the one $$$i$$$ positions ahead clockwise.
The above takes at most $$$n \cdot \left \lfloor \frac{n}{a} \right \rfloor \le \frac{n^2}{a}$$$ trials, as desired.
#include <bits/stdc++.h>
using namespace std;
void main_() {
    int n, x;
    cin >> n;
    for (int i = 1; i < n; i++)
        for (int j = 1; j <= n; j++) {
            x = (i+j)%n;
            if (x == 0) {
                x = n;
            }
            cout << j << " " << x << endl;
            int res;
            cin >> res;
            if (res) {
                return;
            }
        }
}
int main() {
    int t = 1;
    cin >> t;
    while (t--)
        main_();
    return 0;
}
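The real problem is interactive, but the strategy can be checked offline. The Python mock below (an addition, not part of the editorial) fixes a hidden set of working batteries, replays the query order of the solution above, and verifies that a working pair is always found within $$$n \cdot \left \lfloor \frac{n}{a} \right \rfloor$$$ queries:

```python
from itertools import combinations

def find_pair(n, working):
    # working: set of 1-indexed battery positions that actually work (|working| >= 2).
    # Replays the solution's query order; returns the number of queries used.
    queries = 0
    for i in range(1, n):                # distance along the circle
        for j in range(1, n + 1):
            x = (i + j) % n
            if x == 0:
                x = n
            queries += 1
            if j in working and x in working:
                return queries
    return None

n = 8
for a in range(2, n + 1):
    for working in combinations(range(1, n + 1), a):
        q = find_pair(n, set(working))
        # Pigeonhole bound: at most n * floor(n/a) queries are needed.
        assert q is not None and q <= n * (n // a)
```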
2155E - Mimo and Yuyu
When $$$n = 1$$$, the total number of turns is fixed.
When $$$n = 2$$$, the winning and losing states depend only on the parity of the amount of tokens in each column.
For convenience, let's denote with $$$d_i$$$ the number of tokens located in column $$$i$$$.
It's worth noting that the game lasts a finite number of turns because the array $$$[ d_m, d_{m-1}, \dots, d_1 ]$$$ becomes strictly lexicographically smaller after each turn.
Note that we can think of each token independently, as tokens are only removed one at a time by the players.
We first deal with the case $$$n = 1$$$. Let $$$c_i$$$ be the column where token $$$i$$$ is located. We will prove by induction that token $$$i$$$ contributes $$$2^{c_i-2}$$$ turns to the game whenever $$$c_i \ge 2$$$. That is, $$$2^{c_i-2}$$$ turns are necessary and sufficient to remove token $$$i$$$ and all the tokens it spawns from the board.
Base case $$$c_i = 2$$$: A player moves token $$$i$$$ to column $$$1$$$. No tokens are added to the board, so $$$1 = 2^0$$$ turns are played in total.
Otherwise, $$$c_i-2$$$ tokens will be added to the board after choosing token $$$i$$$, one in each of columns $$$2, 3, \dots, c_i-1$$$. By the induction hypothesis, removing these tokens and all the tokens they spawn takes $$$\sum_{j=2}^{c_i-1} 2^{j-2} = 2^{c_i-2}-1$$$ turns. Adding the initial turn to the total, we get $$$2^{c_i-2}-1+1 = 2^{c_i-2}$$$ turns in total.
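The induction step relies on the geometric sum $$$\sum_{j=2}^{c-1} 2^{j-2} = 2^{c-2}-1$$$; a one-line numeric check:

```python
# Spawned tokens land in columns 2..c-1; by induction each contributes 2^(j-2) turns.
for c in range(3, 60):
    assert sum(2 ** (j - 2) for j in range(2, c)) == 2 ** (c - 2) - 1
```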
The answer is determined by the parity of the total number of turns, obtained by summing the contribution of each token. Only tokens in column $$$2$$$ contribute an odd number of turns ($$$2^0 = 1$$$ turn), while every other token contributes an even number ($$$0$$$ turns for column $$$1$$$, $$$2^{c_i-2}$$$ turns otherwise), so we just need to print $$$\texttt{Mimo}$$$ if there is an odd number of tokens in column $$$2$$$ and $$$\texttt{Yuyu}$$$ otherwise.
The only other case is when $$$n \ge 2$$$. We claim that the second player wins if and only if $$$d_i \equiv 0 \; \bmod 2$$$ for all $$$i \ge 2$$$:
- If $$$d_i = 0$$$ for all $$$i \ge 2$$$: The game is automatically a losing state.
- $$$d_i \equiv 0 \; \bmod 2$$$ for all $$$i \ge 2$$$ is also a losing state: any move causes some $$$d_j$$$ (in particular, the column of the chosen token) to go from even to odd; thus, the opponent is given a winning game, as we will prove below.
We claim that any state with $$$d_j \equiv 1 \; \bmod 2$$$ for some $$$j \ge 2$$$ is a winning state. Denote the maximum such $$$j$$$ by $$$j_{max}$$$. Our objective is to hand the opponent a state covered by the first or second bullet point ($$$d_i \equiv 0 \; \bmod 2$$$ for all $$$i \ge 2$$$). If this is always possible, then after a finite number of moves the opponent is necessarily left with exactly the state of the first bullet point (the "minimal" losing state), proving that these states are winning.
By definition of $$$j_{max}$$$, $$$d_i \equiv 0 \; \bmod 2$$$ for all $$$i \gt j_{max}$$$. Performing a turn with the chosen token in column $$$j_{max}$$$ makes $$$d_{j_{max}}$$$ even. For every column $$$i \lt j_{max}$$$ (with $$$i \ge 2$$$), a move that achieves $$$d_i \equiv 0 \; \bmod 2$$$ is as follows:
- If $$$d_i \equiv 0 \; \bmod 2$$$: Make sure $$$2$$$ tokens are added to column $$$i$$$.
- If $$$d_i \equiv 1 \; \bmod 2$$$: Make sure $$$1$$$ token is added to column $$$i$$$.
Such a move is possible because $$$n \ge 2$$$. Here is an example of how it could look:

Time complexity: $$$\mathcal{O}(k)$$$ or $$$\mathcal{O}(k\log m)$$$ depending on implementation.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
void tc () {
    ll n, m, k;
    cin >> n >> m >> k;
    set <ll> st;  // columns >= 2 holding an odd number of relevant tokens
    for (ll id = 0; id < k; id++) {
        ll i, j;
        cin >> i >> j;
        if (j == 1) continue;            // tokens in column 1 never move
        if (n == 1 && j != 2) continue;  // for n = 1, only column 2 affects the parity
        if (st.count(j))
            st.erase(j);
        else
            st.insert(j);
    }
    cout << (st.size() > 0 ? "Mimo" : "Yuyu") << '\n';
}
int main () {
    cin.tie(nullptr) -> sync_with_stdio(false);
    ll T; cin >> T; while (T--) { tc(); }
    return 0;
}
2155F - Juan's Colorful Tree
Consider the graph formed by the vertices with color $$$x$$$ and the edges between them. What happens if a query contains two vertices in one of this graph's connected components ("color components")?
After assigning IDs to each "color component" (for all colors), we can calculate the answer for two vertices by counting the number of components which contain both of them.
We can precompute the answer for all possible queries containing vertex $$$x$$$ with DFS.
Determining all answers with only one of the two aforementioned methods is very expensive. Is there some way to balance the complexity by using one over the other in certain situations?
There are some solutions with square-root complexity with an additional log factor that can pass if optimized enough, but we will present an $$$\mathcal{O}((n+q+s)\sqrt s)$$$ solution.
Let's first transform the problem from calculating the intersection of $$$C_w$$$ over all nodes $$$w$$$ on the path from $$$a_i$$$ to $$$b_i$$$ ($$$\left| C_{a_i} \cap \dots \cap C_{b_i} \right|$$$) into calculating the intersection of two sets, $$$D_{a_i}$$$ and $$$D_{b_i}$$$. We will calculate $$$\left|D_{a_i} \cap D_{b_i}\right|$$$ in amortized square-root complexity, using the better of two algorithms for each query.
First, let's define the sets $$$D_1, D_2, \dots, D_n$$$. For a color $$$x$$$, call a set of vertices a maximal color component if all of its vertices contain color $$$x$$$ in their sets, it is connected, and no vertex outside the set with color $$$x$$$ is adjacent to it. We assign an arbitrary ID to each maximal color component (over all colors), and $$$D_u$$$ contains the IDs of the maximal color components that node $$$u$$$ belongs to.
Note that $$$\left| D_u \right| = \left| C_u \right|$$$, and the answer to a query is now simply $$$\left|D_{a_i} \cap D_{b_i}\right|$$$. A possible $$$\mathcal{O}\left(n+s+k\right)$$$ algorithm for calculating the sets $$$D_u$$$ is to maintain $$$k$$$ stacks during a DFS such that, when visiting node $$$u$$$, stack $$$c$$$ contains the ancestors of node $$$u$$$ with color $$$c$$$. Checking the top of the stack suffices to know whether a new component is created or inherited from the parent.
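As a cross-checkable alternative (an addition, not the editorial's stack-based DFS), the same component IDs can be obtained with a union-find over one element per (vertex, color) pair, merging along edges whose endpoints share a color:

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def color_component_ids(n, edges, cols):
    # cols[u] is the set of colors of vertex u; one DSU element per (vertex, color) pair.
    pairs = [(u, c) for u in range(n) for c in cols[u]]
    idx = {p: i for i, p in enumerate(pairs)}
    dsu = DSU(len(pairs))
    for u, v in edges:
        for c in cols[u] & cols[v]:        # merge (u,c) and (v,c) along shared-color edges
            dsu.union(idx[(u, c)], idx[(v, c)])
    # D[u] = set of component IDs vertex u belongs to; a query (a, b) is |D[a] & D[b]|.
    D = [set() for _ in range(n)]
    for (u, c), i in idx.items():
        D[u].add(dsu.find(i))
    return D
```

For example, on the path 0-1-2 with `cols = [{0, 1}, {0}, {1}]`, vertices 0 and 1 share one component of color 0, while the color-1 components of vertices 0 and 2 are distinct because vertex 1 lacks color 1.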
Here are two algorithms which calculate the cardinality of the intersection: one answers a query for a given pair of nodes $$$a_i$$$, $$$b_i$$$ in $$$\mathcal{O}\left( |D_{a_i}|+|D_{b_i}| \right)$$$; the other calculates the answer between a fixed node $$$u$$$ and all $$$n$$$ nodes in $$$\mathcal{O}\left( n+s \right)$$$.
The first algorithm sorts the sets $$$D_u$$$ (stored in lists or arrays) and uses the two-pointers technique to compute the intersection of two sorted arrays in linear time. The second algorithm roots the tree at $$$u$$$ and runs a DFS that maintains the frequency of each of the $$$k$$$ colors among the current ancestors; for every node $$$v$$$, it checks whether the frequency of every color in $$$C_v$$$ equals $$$v$$$'s depth.
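The first algorithm is the classic sorted-list intersection; a minimal Python sketch:

```python
def intersection_size(a, b):
    # a, b: sorted lists of component IDs; two pointers, O(len(a) + len(b)).
    i = j = cnt = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:            # common ID found
            cnt += 1
            i += 1
            j += 1
    return cnt
```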
Let $$$B$$$ be a fixed integer. We precompute answers when $$$|D_u| \ge B$$$ using the $$$\mathcal{O}\left( n+s \right)$$$ algorithm. Otherwise, we use the $$$\mathcal{O}\left( |D_{a_i}|+|D_{b_i}| \right)$$$ algorithm for queries in which $$$|D_{a_i}|, |D_{b_i}| \lt B$$$. The complexity of this approach is $$$\mathcal{O}\left( qB + (s/B)(n+s) \right)$$$ because there are at most $$$s/B$$$ vertices whose sets have cardinality at least $$$B$$$. Setting $$$B$$$ to be around $$$\sqrt s$$$, we get that our procedure runs in $$$\mathcal{O}\left(q\sqrt s + \sqrt s (n+s) \right) = \mathcal{O}\left( (n+q+s)\sqrt s \right)$$$.
Time complexity: $$$\mathcal{O}\left( (n+q+s)\sqrt s \right)$$$
Memory complexity: $$$\mathcal{O}\left( s + n\sqrt s \right)$$$
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vll = vector <ll>;
const ll MAXN = 3E5+16, B = 800;
vll adj[MAXN];
vll cols[MAXN], comps[MAXN];
ll comp_col[MAXN];
ll timer = 1, cur = 1;
ll last[MAXN];
bool active[MAXN];
ll curMap[MAXN];
int preProc[MAXN/B+16][MAXN];
// Assigns component IDs: comps[u][i] is the ID of the color component
// of color cols[u][i] that node u belongs to.
void dfs1 (ll u, ll par) {
    vll th(cols[u].size()), th2(cols[u].size());
    comps[u].resize(cols[u].size());
    for (ll i = 0; i < cols[u].size(); i++) {
        ll c = cols[u][i];
        th[i] = last[c];
        th2[i] = comp_col[c];
        // Inherit the parent's component if the parent also has color c;
        // otherwise open a new component.
        comp_col[c] = (last[c] == par ? comp_col[c] : timer++);
        comps[u][i] = comp_col[c];
        last[c] = u;
    }
    for (ll v : adj[u]) {
        if (v == par) continue;
        dfs1(v, u);
    }
    for (ll i = 0; i < cols[u].size(); i++) {
        ll c = cols[u][i];
        last[c] = th[i];
        comp_col[c] = th2[i];
    }
}
void tc () {
    ll n, k, s, q;
    cin >> n >> k >> s >> q;
    fill(adj, adj+n, vll({}));
    for (ll i = 1; i < n; i++) {
        ll u, v;
        cin >> u >> v;
        u--; v--;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    timer = 1;
    cur = 1;
    fill(cols, cols+n, vll({}));
    fill(comps, comps+n, vll({}));
    fill(comp_col, comp_col+k, -16);
    fill(last, last+k, -16);
    for (ll i = 0; i < s; i++) {
        ll x, c;
        cin >> x >> c;
        x--; c--;
        cols[x].push_back(c);
    }
    dfs1(0, 0);
    for (ll u = 0; u < n; u++) {
        sort(comps[u].begin(), comps[u].end());
    }
    // Precompute answers for every "heavy" vertex (|D_u| >= B) against all nodes.
    for (ll u = 0; u < n; u++) {
        if (cols[u].size() < B) continue;
        for (ll c : comps[u]) active[c] = true;
        for (ll v = 0; v < n; v++) {
            preProc[cur][v] = 0;
            for (ll c : comps[v]) {
                preProc[cur][v] += active[c];
            }
        }
        for (ll c : comps[u]) active[c] = false;
        curMap[u] = cur++;
    }
    while (q--) {
        ll u, v;
        cin >> u >> v;
        u--; v--;
        if (cols[u].size() >= B) {
            cout << preProc[curMap[u]][v] << ' ';
            continue;
        }
        if (cols[v].size() >= B) {
            cout << preProc[curMap[v]][u] << ' ';
            continue;
        }
        // Both endpoints are "light": two-pointer intersection of the sorted ID lists.
        ll ans = 0;
        ll p1 = 0;
        for (ll p2 = 0; p2 < comps[v].size(); p2++) {
            while (p1 < comps[u].size() && comps[u][p1] < comps[v][p2]) p1++;
            if (p1 == comps[u].size() || comps[u][p1] != comps[v][p2]) continue;
            ans++;
        }
        cout << ans << ' ';
    }
    cout << '\n';
}
int main () {
    cin.tie(nullptr) -> sync_with_stdio(false);
    ll T; cin >> T; while (T--) { tc(); }
    return 0;
}