Unfortunately, the round turned out to be very unbalanced :( I'm deeply sorry about this issue.
I hope you still enjoy the problems.
2134A - Painting With Two Colors
Check the parity of numbers $$$a$$$, $$$b$$$, and $$$n$$$.
Let us first consider the $$$b$$$ blue cells. Since all $$$b$$$ cells appear in the final painting, for the coloring to be symmetric, they must be placed at the middle of the grid. Consequently, $$$b$$$ and $$$n$$$ must have the same parity.
Next, we examine two possible cases:
- If $$$b \geq a$$$, then the blue cells can fully cover all red cells. In this situation, the value of $$$a$$$ does not affect the final coloring.
- If $$$b \lt a$$$, then the red cells are not completely covered by the blue ones. Therefore, the red cells must also be placed in the middle of the grid, which means $$$a$$$ and $$$n$$$ must have the same parity as well.
Thus, by checking these conditions, we can determine whether a symmetric coloring exists. This requires only $$$\mathcal{O}(1)$$$ time.
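The parity test can be sanity-checked against brute force on small inputs. The sketch below is a testing aid (not part of the editorial) and assumes the statement is: paint one segment of $$$a$$$ cells red, then one segment of $$$b$$$ cells blue on top of it, and ask whether some placement yields a symmetric (palindromic) coloring.

```python
def symmetric_possible(n, a, b):
    # the editorial's parity conditions
    return b % 2 == n % 2 and (b >= a or a % 2 == n % 2)

def brute(n, a, b):
    # try every placement of the red segment (length a) and then the
    # blue segment (length b, painted on top); accept if some final
    # coloring reads the same forwards and backwards
    for i in range(n - a + 1):
        for j in range(n - b + 1):
            col = ['.'] * n
            for x in range(i, i + a):
                col[x] = 'R'
            for x in range(j, j + b):
                col[x] = 'B'
            if col == col[::-1]:
                return True
    return False

for n in range(1, 9):
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            assert symmetric_possible(n, a, b) == brute(n, a, b)
print("ok")
```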
#include <iostream>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n, a, b;
        cin >> n >> a >> b;
        if (a <= b) {
            cout << ((n % 2) == (b % 2) ? "YES\n" : "NO\n");
        } else {
            cout << ((n % 2) == (b % 2) && (n % 2) == (a % 2) ? "YES\n" : "NO\n");
        }
    }
}
t = int(input())
for _ in range(t):
    n, a, b = map(int, input().split())
    if a <= b:
        if n % 2 == b % 2:
            print('YES')
        else:
            print('NO')
    else:
        if n % 2 == a % 2 and n % 2 == b % 2:
            print('YES')
        else:
            print('NO')
There are (at least) two solutions to the problem. Vote for your solution!
Find an integer $$$g \gt 1$$$ and integers $$$c_1, c_2, \ldots, c_n$$$ such that each $$$a_i + k \cdot c_i$$$ is divisible by $$$g$$$.
Decide the value of $$$g$$$ based only on $$$k$$$. There may be more than one possible choice.
The problem is equivalent to finding an array of $$$n$$$ non-negative integers $$$c_1, c_2, \ldots, c_n$$$ such that $$$0 \le c_i \le k$$$ for $$$1 \le i \le n$$$ and $$$\gcd(a_1 + k \cdot c_1, a_2 + k \cdot c_2, \ldots, a_n + k \cdot c_n) > 1$$$.
The idea is as follows. If we can find an integer $$$g \gt 1$$$ and integers $$$c_1, c_2, \ldots, c_n$$$ such that each $$$a_i + k \cdot c_i$$$ is divisible by $$$g$$$ (for $$$1 \le i \le n$$$), then the GCD condition is satisfied.
There are (at least) two ways to construct such a solution. The simplest one is to take $$$g = k+1$$$ and define $$$c_i = a_i \bmod (k+1)$$$. This works because $$$k \equiv -1 \pmod{k+1}$$$, so
$$$a_i + k \cdot c_i \equiv a_i - (a_i \bmod (k+1)) \equiv 0 \pmod{k+1}.$$$
Moreover, $$$0 \le c_i \le k$$$, so at most $$$k$$$ operations are used on each element. Hence every term is divisible by $$$k+1$$$.
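A quick randomized check of this claim (a testing aid, not part of the editorial):

```python
import random

# Since k = -1 (mod k+1), adding k to a_i exactly (a_i mod (k+1)) times
# cancels the remainder modulo k+1, and never uses more than k operations.
random.seed(0)
for _ in range(1000):
    k = random.randint(1, 10**9)
    a = random.randint(1, 10**9)
    c = a % (k + 1)
    assert 0 <= c <= k                 # at most k operations per element
    assert (a + k * c) % (k + 1) == 0  # the term becomes divisible by k+1
print("ok")
```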
Alternatively, you can also find the smallest integer $$$x$$$ such that $$$\gcd(x,k) = 1$$$, and set $$$g = x$$$. Note that $$$g \leq 29$$$, since $$$k \leq 10^9$$$ and the product of all prime numbers from $$$2$$$ to $$$29$$$ already exceeds $$$10^9$$$.
Then, for each $$$a_i$$$, we can add $$$k$$$ to it until it becomes divisible by $$$g$$$. In fact, the value of $$$c_i$$$ can be determined explicitly (though it is not required) as $$$c_i = (-a_i \cdot k^{-1}) \bmod g$$$, where $$$k^{-1}$$$ is the modular inverse of $$$k$$$ modulo $$$g$$$.
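Both the $$$g \leq 29$$$ bound and the explicit formula for $$$c_i$$$ can be checked directly (a testing aid, not part of the editorial):

```python
from math import gcd, prod
import random

# The product of the primes up to 29 exceeds 1e9, so every k <= 1e9
# misses at least one of them; hence the smallest g coprime to k is <= 29.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert prod(primes) > 10**9

def smallest_coprime(k):
    g = 2
    while gcd(g, k) != 1:
        g += 1
    return g

random.seed(0)
for _ in range(1000):
    k = random.randint(1, 10**9)
    g = smallest_coprime(k)
    assert g <= 29
    a = random.randint(1, 10**9)
    c = (-a * pow(k, -1, g)) % g  # explicit formula using the inverse of k mod g
    assert 0 <= c <= k and (a + k * c) % g == 0
print("ok")
```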
Thus, in either method, we can always construct a valid array so that the GCD of the resulting numbers is greater than $$$1$$$.
#include <iostream>
#include <vector>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        vector<long long> a(n);
        for (auto &i : a) {
            cin >> i;
            i += (i % (k + 1)) * k;
        }
        for (auto i : a)
            cout << i << ' ';
        cout << '\n';
    }
}
t = int(input())
for _ in range(t):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    for i in range(n):
        a[i] += (a[i] % (k + 1)) * k
    print(*a)
#include <iostream>
#include <numeric>
#include <vector>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        vector<long long> a(n);
        for (auto &i : a) {
            cin >> i;
        }
        for (int g = 2;; g++) {
            if (gcd(g, k) != 1)
                continue;
            for (auto &i : a) {
                while (i % g != 0)
                    i += k;
            }
            for (auto i : a)
                cout << i << ' ';
            cout << '\n';
            break;
        }
    }
}
import math

t = int(input())
for _ in range(t):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    for g in range(2, k + 2):
        if math.gcd(g, k) > 1:
            continue
        ok = True
        b = list(a)
        for i in range(n):
            if b[i] % g == 0:
                continue
            for j in range(1, k + 1):
                b[i] += k
                if b[i] % g == 0:
                    break
            if b[i] % g != 0:
                ok = False
        if ok:
            print(*b)
            break
Only subarrays with length equal to $$$2$$$ or $$$3$$$ are useful.
Propose a greedy algorithm to decide which element to perform an operation on.
For convenience, if $$$n$$$ is even, we append $$$a_{n+1}=0$$$ to the end of the array. Note that $$$a_{n+1}$$$ cannot be modified.
The first key observation is the following:
Observation. If all subarrays $$$[a_l, a_{l+1}, \ldots, a_r]$$$ with the following properties satisfy the condition, then all subarrays satisfy the condition:
- $$$r = l + 2$$$,
- $$$l$$$ and $$$r$$$ are both odd.
Proof. Consider any subarray $$$[a_l, a_{l+1}, \ldots, a_r]$$$ where both $$$l$$$ and $$$r$$$ are odd. Summing the left-hand sides of the conditions for the important subarrays contained in it, we get
$$$\begin{array}{rcl} \displaystyle \sum_{\substack{l \leq j \leq r-2 \\ j \ \text{odd}}} \left( a_j + a_{j+2} \right) &=& a_l \;+\; \displaystyle \left( \sum_{\substack{l+2 \leq i \leq r-2 \\ i \ \text{odd}}} 2a_i \right) \;+\; a_r \\[1.2em]
&\geq& a_l \;+\; \displaystyle \left( \sum_{\substack{l+2 \leq i \leq r-2 \\ i \ \text{odd}}} a_i \right) \;+\; a_r \\[1.2em]
&=& \displaystyle \sum_{\substack{l \leq i \leq r \\ i \ \text{odd}}} a_i \end{array}$$$
The inequality holds because all numbers are non-negative.
Thus, checking only these subarrays is sufficient: all other subarrays will automatically satisfy the condition because they involve fewer elements at odd positions. $$$\square$$$
We will call these subarrays important in the following discussion.
Now we find the minimum number of operations. Instead of directly minimizing the number of operations, we can think in terms of constructing the final array $$$b$$$ after the operations. Our goal is to maximize the sum of the elements in $$$b$$$.
Initially, we set $$$b_i = 0$$$ for all $$$i$$$. Notice that it is never beneficial to reduce values at even indices. Therefore, we can safely set $$$b_i = a_i$$$ for every even $$$i$$$.
The task is then to determine the values of $$$b_i$$$ at odd indices. It turns out that the following greedy algorithm works:
- Repeatedly pick the smallest odd index $$$i$$$ such that $$$b_i \lt a_i$$$ and incrementing $$$b_i$$$ by $$$1$$$ does not violate the condition for any important subarray; then increment $$$b_i$$$ by $$$1$$$.
This greedy algorithm ensures that we maximize the sum of $$$b$$$ while preserving validity. It can be simulated in $$$\mathcal{O}(n)$$$ time.
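The greedy can be cross-checked against exhaustive search on small arrays. The sketch below is a testing aid (not the reference solution); it encodes the important-subarray constraints, in 0-based form, as $$$b_i \le a_i$$$ and $$$b_i + b_{i+2} \le a_{i+1}$$$ for even $$$i$$$ (with missing terms treated as $$$0$$$), mirroring the implementation that follows.

```python
from itertools import product
import random

def greedy(a):
    # the editorial's left-to-right greedy (same logic as the code below)
    n = len(a)
    b = [0] * (n + 1)
    ans = 0
    for i in range(0, n, 2):
        mn = a[i]
        if i >= 2:
            mn = min(mn, a[i - 1] - b[i - 2])
        if i + 1 < n:
            mn = min(mn, a[i + 1])
        b[i] = mn
        ans += a[i] - b[i]
    return ans

def brute_min(a):
    # exhaustive search over all final values at even 0-based indices
    n = len(a)
    evens = list(range(0, n, 2))
    best = None
    for bs in product(*(range(a[i] + 1) for i in evens)):
        b = dict(zip(evens, bs))
        ok = all(b[i] + b.get(i + 2, 0) <= a[i + 1]
                 for i in evens if i + 1 < n)
        if ok:
            cost = sum(a[i] - b[i] for i in evens)
            best = cost if best is None else min(best, cost)
    return best

random.seed(0)
for _ in range(300):
    n = random.randint(1, 7)
    a = [random.randint(0, 4) for _ in range(n)]
    assert greedy(a) == brute_min(a)
print("ok")
```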
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n);
        for (auto &i : a)
            cin >> i;
        vector<int> b(n + 1);
        long long ans = 0;
        for (int i = 0; i < n; i += 2) {
            int mn = a[i];
            if (i >= 2)
                mn = min(mn, a[i - 1] - b[i - 2]);
            if (i + 1 < n)
                mn = min(mn, a[i + 1]);
            b[i] = mn;
            ans += (a[i] - b[i]);
        }
        cout << ans << '\n';
    }
}
t = int(input())
for _ in range(t):
    n = int(input())
    a = list(map(int, input().split()))
    b = [0] * (n + 1)
    ans = 0
    for i in range(0, n, 2):
        mn = a[i]
        if i >= 2:
            mn = min(mn, a[i - 1] - b[i - 2])
        if i + 1 < n:
            mn = min(mn, a[i + 1])
        b[i] = mn
        ans += (a[i] - b[i])
    print(ans)
Consider the diameter of the tree. What do you discover?
Each sliding operation increases the diameter of the tree by at most $$$1$$$.
Now for the construction: how do we perform an operation at each step that increases the diameter by exactly $$$1$$$?
We begin with the following observation:
Observation. Each sliding operation increases the diameter of the tree by at most $$$1$$$.
Proof. Consider an operation $$$(a,b,c)$$$, any pair of indices $$$(u,v)$$$, and the unique path $$$p$$$ between $$$u$$$ and $$$v$$$. If $$$p$$$ does not pass through $$$b$$$, or if $$$b$$$ is an endpoint of $$$p$$$, then the length of $$$p$$$ increases by at most $$$1$$$ after the operation. Otherwise, let $$$i_1$$$ and $$$i_2$$$ denote the two neighbors of $$$b$$$ on the path $$$p$$$. Then:
- If $$$i_1 = a$$$ and $$$i_2 = c$$$ (or vice versa), the length of $$$p$$$ does not change.
- If $$$i_1 = a$$$ and $$$i_2 \neq c$$$ (or vice versa), the length of $$$p$$$ increases by $$$1$$$.
- If $$$i_1 \neq a$$$ and $$$i_2 = c$$$ (or vice versa), the length of $$$p$$$ decreases by $$$1$$$.
- In all other cases, the length of $$$p$$$ does not change.
Thus, for any path in the tree, its length can increase by at most $$$1$$$. Consequently, the diameter of the tree increases by at most $$$1$$$ after a single operation. $$$\square$$$
Let $$$l$$$ denote the diameter of the input tree. To transform it into a path graph (whose diameter is $$$n-1$$$), it is clear that at least $$$(n-1) - l$$$ operations are required.
The remaining question is whether it is always possible to perform an operation at each step that increases the diameter by exactly $$$1$$$. This turns out to be true. To see this, take any diameter of the tree and root the tree at one of its endpoints. In each operation, select an edge adjacent to the diameter. Suppose this edge is $$$(u,v)$$$, where $$$u$$$ lies on the diameter and $$$v$$$ does not. Then perform the operation with $$$a$$$ as the parent of $$$u$$$ (in the rooted tree), $$$b = u$$$, and $$$c = v$$$. One can verify that this increases the diameter by $$$1$$$.
From the above construction, we conclude not only that the minimum number of operations is $$$(n-1)-l$$$, but also that we can explicitly construct a sequence of operations that transforms the given tree into a path graph with this minimum number of moves.
Finally, finding the first sliding operation can be done in $$$\mathcal{O}(n)$$$ time.
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<vector<int>> G(n);
        for (int i = 0; i < n - 1; i++) {
            int u, v;
            cin >> u >> v;
            u--, v--;
            G[u].push_back(v);
            G[v].push_back(u);
        }
        vector<int> dist(n), p(n);
        function<void(int, int)> dfs = [&](int now, int par) {
            p[now] = par;
            for (int i : G[now]) {
                if (i != par) {
                    dist[i] = dist[now] + 1;
                    dfs(i, now);
                }
            }
        };
        dist[0] = 0;
        dfs(0, -1);
        int x = max_element(dist.begin(), dist.end()) - dist.begin();
        dist[x] = 0;
        dfs(x, -1);
        int y = max_element(dist.begin(), dist.end()) - dist.begin();
        if (dist[y] == n - 1) {
            cout << -1 << '\n';
        } else {
            vector<int> on_diameter(n);
            int now = y;
            while (now != -1) {
                on_diameter[now] = 1;
                now = p[now];
            }
            int a = -1, b = -1, c = -1;
            for (int u = 0; u < n; u++) {
                for (int v : G[u]) {
                    if (on_diameter[u] && !on_diameter[v]) {
                        a = p[u], b = u, c = v;
                        break;
                    }
                }
                if (a != -1)
                    break;
            }
            cout << a + 1 << ' ' << b + 1 << ' ' << c + 1 << '\n';
        }
    }
}
An intuitive starting point may be to perform a throw query on every coordinate $$$i$$$. For which $$$i$$$ can we decide the value of $$$a_i$$$ immediately?
For those indices whose $$$a_i$$$ is unknown, try to perform some swap queries on them to get their values of $$$a_i$$$.
No two unknown indices are adjacent.
An intuitive starting point is to perform a throw query on every coordinate $$$i$$$ ($$$1 \le i \le n$$$). Let $$$d_i$$$ denote the total number of jumps the ball makes when starting from coordinate $$$i$$$. For convenience, define $$$d_{n+1} = d_{n+2} = 0$$$.
Notice that for $$$1 \le i \le n$$$, if $$$d_{i+1} \neq d_{i+2}$$$, then we can immediately determine $$$a_i$$$: if $$$d_i = d_{i+1} + 1$$$, then $$$a_i = 1$$$; otherwise $$$a_i = 2$$$.
The difficult part of the problem is when $$$d_{i+1} = d_{i+2}$$$. We call an index $$$i$$$ unknown if $$$d_{i+1} = d_{i+2}$$$. In such cases, we can still deduce $$$d_i = d_{i+1} + 1$$$ without issuing a query (the ball makes one jump and then $$$d_{i+1} = d_{i+2}$$$ more, whichever box it lands in), saving the query for later use. The following observation is crucial:
Observation. No two unknown indices are adjacent.
Proof. If $$$i$$$ is unknown, then $$$d_i = d_{i+1} + 1$$$, hence $$$d_i \neq d_{i+1}$$$. Therefore, $$$i-1$$$ cannot be unknown. $$$\square$$$
To exploit this fact, we adopt the following strategy: After computing all $$$d_i$$$, for each unknown index $$$i$$$ (in increasing order), we swap the boxes at positions $$$i$$$ and $$$i+1$$$, and then perform a throw query at coordinate $$$i+1$$$. Since index $$$i+1$$$ is not unknown, this reveals $$$a_i$$$. The special case is $$$i = n$$$: here, we swap boxes $$$n-1$$$ and $$$n$$$, then perform a throw query on coordinate $$$n-1$$$ to obtain $$$a_n$$$.
To sum up:
- For $$$i$$$ from $$$n$$$ down to $$$1$$$: if $$$d_{i+1} = d_{i+2}$$$, mark $$$i$$$ as unknown and deduce $$$d_i = d_{i+1} + 1$$$ without a query; otherwise, perform a throw query on coordinate $$$i$$$ to determine both $$$a_i$$$ and $$$d_i$$$.
- For each $$$i$$$ from $$$1$$$ to $$$n-1$$$: if $$$i$$$ is unknown, swap box $$$i$$$ with box $$$i+1$$$ and perform a throw query at coordinate $$$i+1$$$. This reveals $$$a_i$$$.
- Finally, swap $$$n-1$$$ and $$$n$$$, and perform a throw query on coordinate $$$n-1$$$ to determine $$$a_n$$$.
For every unknown index we use $$$2$$$ queries, and for each known index we use $$$1$$$. Since no two unknown indices are adjacent, there are at most $$$\lceil \frac{n}{2} \rceil$$$ unknown indices. Hence, the total number of queries is at most $$$\left\lceil \frac{3n}{2} \right\rceil$$$.
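The whole strategy can be verified offline. The sketch below is an assumption-laden simulation (query semantics inferred from the reference solution: a ball thrown at position $$$p$$$ repeatedly jumps to $$$p + a_p$$$ until it leaves the boxes, and the response is the number of jumps); it checks that the recovered array is correct and that at most $$$\lceil 3n/2 \rceil$$$ queries (throws plus swaps) are used.

```python
import random

def throws(arr, p):
    # number of jumps when a ball is thrown at 0-based position p
    cnt = 0
    while p < len(arr):
        p += arr[p]
        cnt += 1
    return cnt

def solve(hidden):
    n = len(hidden)
    arr = list(hidden)     # the judge's boxes, mutated by swap queries
    queries = 0
    a = [0] * n            # recovered values; 0 marks "unknown"
    jumps = [0] * (n + 2)
    for i in range(n - 1, -1, -1):
        if jumps[i + 1] == jumps[i + 2]:
            jumps[i] = jumps[i + 1] + 1      # deduced for free
        else:
            jumps[i] = throws(arr, i); queries += 1
            a[i] = 1 if jumps[i] == jumps[i + 1] + 1 else 2
    for i in range(n - 1):
        if a[i] == 0:
            arr[i], arr[i + 1] = arr[i + 1], arr[i]; queries += 1  # swap
            res = throws(arr, i + 1); queries += 1
            a[i] = 1 if res == jumps[i + 2] + 1 else 2
    arr[n - 2], arr[n - 1] = arr[n - 1], arr[n - 2]; queries += 1
    res = throws(arr, n - 2); queries += 1
    a[n - 1] = 1 if res == 2 else 2
    return a, queries

random.seed(0)
for _ in range(500):
    n = random.randint(2, 50)
    hidden = [random.choice([1, 2]) for _ in range(n)]
    got, q = solve(hidden)
    assert got == hidden and q <= (3 * n + 1) // 2
print("ok")
```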
#include <iostream>
#include <vector>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n);
        auto swap = [&](int x) {
            cout << "swap " << x + 1 << endl;
        };
        auto throw_ball = [&](int x) {
            cout << "throw " << x + 1 << endl;
            int ret;
            cin >> ret;
            return ret;
        };
        auto answer = [&]() {
            cout << "!";
            for (auto i : a)
                cout << ' ' << i;
            cout << endl;
        };
        vector<int> jumps(n + 2);
        for (int i = n - 1; i >= 0; i--) {
            if (jumps[i + 1] == jumps[i + 2]) {
                jumps[i] = jumps[i + 1] + 1;
            } else {
                jumps[i] = throw_ball(i);
                if (jumps[i] == jumps[i + 1] + 1)
                    a[i] = 1;
                else
                    a[i] = 2;
            }
        }
        for (int i = 0; i + 1 < n; i++) {
            if (a[i] == 0) {
                swap(i);
                int jumps_i = throw_ball(i + 1);
                if (jumps_i == jumps[i + 2] + 1)
                    a[i] = 1;
                else
                    a[i] = 2;
            }
        }
        swap(n - 2);
        int jumps_last = throw_ball(n - 2);
        if (jumps_last == 2)
            a[n - 1] = 1;
        else
            a[n - 1] = 2;
        answer();
    }
}
Thanks to Misuki for coming up with a better solution! My original solution has a much larger constant.
Consider two groups of numbers: one is $$$\{0, 2\}$$$ and the other is $$$\{1, 3\}$$$. What do you discover?
Consider the segments formed by $$$\{0, 2\}$$$ (same for $$$\{1, 3\}$$$). For each $$$i$$$ and $$$j$$$, find the number of ways to arrange them into $$$i$$$ segments so that the adjacent pairs inside all segments contribute total oddness $$$j$$$.
Enumerate all binary strings consisting of $$$c_0$$$ zeros and $$$c_2$$$ twos. Then split these strings into several substrings.
Brute force over the number of runs of the binary string. Find the contribution to the count.
Consider any rearrangement $$$b$$$. We color all elements $$$b_i$$$ with value $$$0$$$ or $$$2$$$ as red, and those with value $$$1$$$ or $$$3$$$ as blue. Then the array can be viewed as alternating red and blue segments.
We observe the following:
- Adjacent pairs within a red segment contribute $$$0$$$ or $$$2$$$ to the oddness. The same holds for blue segments.
- Adjacent pairs between red and blue segments contribute exactly $$$1$$$ to the oddness.
- The number of red and blue segments differs by at most $$$1$$$.
Hence, if we can calculate the following:
- $$$r_{i,j}$$$: the number of ways to arrange $$$c_0$$$ zeros and $$$c_2$$$ twos into $$$i$$$ segments, so that the adjacent pairs inside all segments contribute total oddness $$$j$$$.
- Similarly define $$$b_{i,j}$$$ for the blue digits ($$$1$$$ and $$$3$$$).
Then the final answer can be computed in $$$\mathcal{O}(n^3)$$$ by performing convolutions between the array pairs $$$(r_i, b_i), (r_{i+1}, b_i), (r_i, b_{i+1})$$$.
Now about computing $$$r_{i,j}$$$. This reduces to counting tuples of $$$i$$$ binary strings formed from $$$c_0$$$ zeros and $$$c_2$$$ twos, such that the total number of alternating adjacent pairs inside them equals $$$j$$$.
The idea is as follows. First, enumerate all binary strings consisting of $$$c_0$$$ zeros and $$$c_2$$$ twos. Then split these strings into several substrings.
Suppose a binary string has $$$l$$$ runs. Let $$$l_1 = \left\lceil \frac l 2 \right\rceil$$$ be the number of runs of $$$0$$$, and $$$l_2 = l - l_1$$$ the number of runs of $$$2$$$ (or vice versa). The number of such binary strings is $$$\binom{c_0 - 1}{l_1 - 1} \binom{c_2 - 1}{l_2 - 1}$$$.
In such strings, there are $$$l-1$$$ adjacent pairs with different numbers, and $$$c_0 + c_2 - l$$$ adjacent pairs with the same numbers.
Now split the string into several substrings. Assume we choose $$$k_1$$$ cut positions among the $$$l-1$$$ differing adjacencies, and $$$k_2$$$ cut positions among the $$$c_0 + c_2 - l$$$ equal adjacencies. The number of ways to do this is $$$\binom{l-1}{k_1} \binom{c_0 + c_2 - l}{k_2}$$$.
This partition produces $$$k_1 + k_2 + 1$$$ substrings, and the differing adjacencies that remain inside them contribute total oddness $$$2 \cdot (l - 1 - k_1)$$$.
Thus, the contribution to $$$r_{k_1 + k_2 + 1, \, 2 \cdot (l - 1 - k_1)}$$$ is $$$\binom{c_0 - 1}{l_1 - 1} \binom{c_2 - 1}{l_2 - 1} \binom{l-1}{k_1} \binom{c_0 + c_2 - l}{k_2}$$$.
Enumerating all possible tuples $$$(l, k_1, k_2)$$$ suffices to compute all $$$r_{i,j}$$$. This computation takes $$$\mathcal{O}(n^3)$$$ time.
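The counting above can be cross-checked by brute force for small $$$c_0, c_2$$$. In the sketch below (a testing aid, not part of the editorial), the table's second index stores the number of remaining differing adjacent pairs, i.e. oddness divided by $$$2$$$, following the reference implementation.

```python
from itertools import combinations
from math import comb

def formula(c0, c2):
    # table[i][j]: arrangements into i segments with j differing pairs left
    n = c0 + c2
    table = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # l = number of runs
        if l % 2:
            segs = [(l // 2, l - l // 2), (l - l // 2, l // 2)]
        else:
            segs = [(l // 2, l // 2)] * 2     # string starts with 0 or with 2
        for l1, l2 in segs:
            if c0 < l1 or c2 < l2:
                continue
            ways = comb(c0 - 1, l1 - 1) * comb(c2 - 1, l2 - 1)
            diff, same = l - 1, n - l
            for k1 in range(diff + 1):
                for k2 in range(same + 1):
                    table[k1 + k2 + 1][diff - k1] += (
                        ways * comb(diff, k1) * comb(same, k2))
    return table

def brute(c0, c2):
    # enumerate every string of c0 zeros / c2 twos and every set of cuts
    n = c0 + c2
    table = [[0] * (n + 1) for _ in range(n + 1)]
    for pos in combinations(range(n), c2):
        s = ['2' if t in pos else '0' for t in range(n)]
        for mask in range(1 << (n - 1)):
            segs = bin(mask).count('1') + 1
            j = sum(1 for t in range(n - 1)
                    if not mask >> t & 1 and s[t] != s[t + 1])
            table[segs][j] += 1
    return table

for c0 in range(1, 5):
    for c2 in range(1, 5):
        assert formula(c0, c2) == brute(c0, c2)
print("ok")
```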
By the same process, we can compute $$$b_{i,j}$$$ in $$$\mathcal{O}(n^3)$$$. Finally, convolving the results yields the number of distinct permutations for each oddness value. The entire algorithm runs in $$$\mathcal{O}(n^3)$$$ time.
#include <iostream>
#include <vector>
using namespace std;

constexpr int MAXN = 800;

// Modified from https://github.com/ToxicPie/NaCl
template <typename T> struct M {
    constexpr static T MOD = 1'000'000'007;
    T v;
    M(T x = 0) {
        v = (-MOD <= x && x < MOD) ? x : x % MOD;
        if (v < 0)
            v += MOD;
    }
    explicit operator T() const { return v; }
    bool operator==(const M &b) const { return v == b.v; }
    bool operator!=(const M &b) const { return v != b.v; }
    M operator-() { return M(-v); }
    M operator+(M b) { return M(v + b.v); }
    M operator-(M b) { return M(v - b.v); }
    M operator*(M b) { return M((long long)v * b.v % MOD); }
    M operator/(M b) { return *this * (b ^ (MOD - 2)); }
    friend M operator^(M a, int b) {
        M ans(1);
        for (; b; b >>= 1, a *= a)
            if (b & 1)
                ans *= a;
        return ans;
    }
    friend M &operator+=(M &a, M b) { return a = a + b; }
    friend M &operator-=(M &a, M b) { return a = a - b; }
    friend M &operator*=(M &a, M b) { return a = a * b; }
    friend M &operator/=(M &a, M b) { return a = a / b; }
};
using Mod = M<int>;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    vector<vector<Mod>> binom_(MAXN + 1, vector(MAXN + 1, Mod(0)));
    for (int i = 0; i <= MAXN; i++)
        for (int j = 0; j <= i; j++) {
            if (j == 0 || j == i)
                binom_[i][j] = 1;
            else
                binom_[i][j] = binom_[i - 1][j] + binom_[i - 1][j - 1];
        }
    auto binom = [&](int x, int y) {
        if (x >= y && y >= 0 && x >= 0)
            return binom_[x][y];
        else
            return Mod(0);
    };
    int t;
    cin >> t;
    while (t--) {
        vector<int> cnt(4);
        for (auto &i : cnt)
            cin >> i;
        int n = cnt[0] + cnt[1] + cnt[2] + cnt[3];
        vector<vector<Mod>> r(n + 1, vector(n + 1, Mod(0))),
            b(n + 1, vector(n + 1, Mod(0)));
        auto fill_table = [&](int x, int y, vector<vector<Mod>> &table) {
            for (int i = 2; i <= x + y; i++) {
                vector<pair<int, int>> segments;
                if (i & 1)
                    segments = {{i / 2, i - i / 2}, {i - i / 2, i / 2}};
                else
                    segments = {{i / 2, i / 2}, {i / 2, i / 2}};
                for (auto [l1, l2] : segments) {
                    if (x < l1 || y < l2)
                        continue;
                    auto mul = binom(x - 1, l1 - 1) * binom(y - 1, l2 - 1);
                    int different = i - 1, same = (x + y) - 1 - different;
                    for (int j = 0; j <= different; j++)
                        for (int k = 0; k <= same; k++) {
                            auto mul2 = binom(different, j) * binom(same, k);
                            int c = different - j;
                            table[j + k + 1][c] += mul * mul2;
                        }
                }
            }
        };
        fill_table(cnt[0], cnt[2], r);
        fill_table(cnt[1], cnt[3], b);
        vector<Mod> ans(2 * (n - 1) + 1);
        for (int i = 1; i <= (n + 1) / 2; i++)
            for (int j = 0; j <= n; j++)
                for (int k = 0; k <= n; k++) {
                    int oddness = (j + k) * 2 + (i * 2 - 1);
                    if (oddness >= 0 && oddness <= 2 * (n - 1))
                        ans[oddness] += r[i][j] * b[i][k] * 2;
                }
        for (int i = 1; i <= (n + 1) / 2 && i + 1 <= n; i++)
            for (int j = 0; j <= n; j++)
                for (int k = 0; k <= n; k++) {
                    int oddness = (j + k) * 2 + i * 2;
                    if (oddness >= 0 && oddness <= 2 * (n - 1)) {
                        ans[oddness] += r[i][j] * b[i + 1][k];
                        ans[oddness] += r[i + 1][j] * b[i][k];
                    }
                }
        for (int i = 0; i <= 2 * (n - 1); i++)
            cout << ans[i].v << " \n"[i == 2 * (n - 1)];
    }
}








Speedforces, but still better than no contest. Thanks for the contest!
thanks for speedy editorial for speedyforces round
Only if the writers had swapped D and E
And B and C :)
deleted
Fun round, problems were really nice. I think I also hit Master for the first time! Somehow I bricked A for an eternity and thought that I was on the path to a total disaster...
I am the opposite, speedy boi as usual, stuck on the harder one, got 3 bugs for E
Same for me. I solved the first three problems in 25 minutes, but in the remaining time I only solved E.
I wasn't able to solve B. I'm new to Codeforces; any advice for solving this type of problem?
Me too, I actually ended up getting Problem B before Problem A!
Who got stuck on D and didn't see E?
me , but was E easier to solve ?
I didn't even have time to try it lol
I'm usually not able to solve D, so I generally don't read E unless I see a high solve count on the dashboard page.
for me yes
I finished E 15 minutes after contest. It feels sad :(
thank you
Unfortunately, I got
wow that D problem was out of this world
Nice solution for D, the "diameter increment" property is a very interesting "invariant". I wish I had observed it during the contest.
Very nice problems but E was by far too easy. Obviously easier than D and arguably even div2C difficulty
Queueforces, but great problem, thank you!!
yeah for me also pretests were stuck in queue.. was it because of too many submissions
And between C and D there's a huge gap……
Swapping D and E makes a lot of sense. Wasn't it the case with testers?
My original proposal was D and E swapped. During testing we found the gap between C and D (which is current E), so we decided to swap DE but E got nerfed to only output the minimum number of operations. Most testers tested with this version, and it seemed like the right difficulty. Then ~10 days before the contest my coordinator advised me to add the first operation output and made the current problem set.
So I don't know, maybe the first operation output really adds a lot to the problem, or the gap is still huge even with current DE swapped, it is really hard to tell.
D was very easy for me, maybe because initially I started to consider the diameter of the tree and understood that optimal operation will increase diameter with one.
I am surprised that the problem D is hard for the most participants.
I don't know why , but my dumbass brain was convinced that sliding to the leaf node is only way to increase the path length
I mean it is true that the endpoints of the diameters have to be leaves
I was stuck doing E, didn't even finish it.
but I did consider finding diameter on D, with like 30% confidence,
which isn't enough for me to try it before E.
I still feel it's more non-trivial than E
Can anyone tell why this is giving wrong answer https://mirror.codeforces.com/contest/2134/submission/335699285
It works on my VSCode :(
It seems like you're not doing cout.flush()?
C is too easy to guess, especially when D is this hard
Couldn't solve D but the idea is so cool.
Does there exist an $$$O(n^2)$$$ or $$$O(n^2 \log n)$$$ solution for F? The last convolution can be sped up, but I am not sure how to speed up the other part.
I am not sure about convolution, but I was trying generating functions for $$$dp[c0][c1][c2][c3][len][last\_symbol]$$$, and you can create it with 5 vars for $$$c0, c1, c2, c3, len$$$, and 4 functions for $$$last\_symbol$$$, and there will be a very unpleasant system, $$$l = len$$$:
$$$F_0 = a(1 + l^0F_0 + l^1F_1 + l^2F_2 + l^1F_3)$$$ $$$F_1 = b(1 + l^1F_0 + l^0F_1 + l^1F_2 + l^2F_3)$$$ $$$F_2 = c(1 + l^2F_0 + l^1F_1 + l^0F_2 + l^1F_3)$$$ $$$F_3 = d(1 + l^1F_0 + l^2F_1 + l^1F_2 + l^0F_3)$$$
Maybe if you solve it, there will be explicit and efficient formula for coefficients.
I am not sure if this direction even works. I was trying to optimize the first part of editorial
Sorry for the late reply, but it seems that we can solve F in $$$\mathcal{O}(n^2 \log n)$$$ time with an adaptation from maspy's write-up.
Let's define $$$r'_{i,j}$$$: number of ways to arrange $$$c_0$$$ zeros and $$$c_2$$$ twos into $$$i$$$ segments, so that the total number of adjacent pairs inside all segments is $$$j$$$.
Let's calculate $$$r'_{i,j}$$$ in increasing order of $$$i$$$. For $$$i = 1$$$ it's easy. For $$$i \gt 1$$$, we try to take a combination of $$$i - 1$$$ segments and make one more split to make $$$i$$$ segments. We have the following formula:
$$$r'_{i,j} = (j + 1) \cdot r'_{i - 1, j + 1} + (c_0 + c_2 - 1 - j) \cdot r'_{i - 1, j}$$$
This is simply the addition of two cases whether the split happens at an adjacent pair with same/different numbers.
However $$$r'_{i,j}$$$ is not the real count. We have to divide it by $$$i!$$$ because we overcount each way of splitting by $$$i!$$$ times.
Therefore $$$r_{i,j}$$$ can be calculated in $$$\mathcal{O}(n^2)$$$ time and the whole algorithm runs in $$$\mathcal{O}(n^2 \log n)$$$ time.
That's interesting, thanks a lot
Hold on, im new here, so the contest is not rated?
No, it just takes a bit after the contest before the ratings are updated.
Oh i see the rating now, thanks
No problem! Happy to help.
Although my performance in this contest was good, frankly this one is speedforces. Don't be surprised if you find a specialist who outperformed or a CM who underperformed.
Thanks for the contest! I think I'll finally be MathModel. It was a fun speed run.
A is good.
B was funny. The construction is $$$a_i := a_i + (a_i \bmod (k+1)) \cdot k$$$; my favorite problem of the round.
C good as well.
Given that speedrunning, I wonder how I'm estimated to reach expert, because I feel like I'm far from expert smh.
Funny round , Thanks henotrix !
This might be stupid, but how does someone come up with this elegant looking formula for B.
Number theory stuff and working through some test cases by hand. The desired gcd is a number you can reach by adding k between 0 and k times, so the number of possible variations for an element is k + 1. If we set our desired gcd to k + 1, we can change an element's remainder mod (k + 1) however we want, since each time we add k the remainder decreases by 1. We want to make all elements 0 mod (k + 1), so if an element is congruent to a mod (k + 1), we add k exactly a times.
Number theory is always a bane to my conscience.
Thanks for breaking it down. Cheers.
I solved E 15 minutes after contest. It was such a cool problem, but my coding skills really need to be improved.
C felt much easier than B.
It was easy, but I got stuck on B the whole time and didn't even bother to check C as I assumed it to be hard :(
I got stuck on it too. Solved C in like 15 minutes while it took around 50 for B
how did you approach B i am not getting it
I got the modulus idea immediately. After that I just worked out some cases. I am not practicing much right now so the implementation took some time.
For D, we can construct the whole operation sequences in $$$\mathcal{O}(n)$$$.
And implement the checker in $$$\mathcal{O}(n\log^2n)$$$
If the problem D is changed to this, how would it be rated in the cf rating?
I would prefer to buff D to the checker version and swap with E.
Why was there no hacking phase? For people in Python doing B and doing naive modulo each element, you could make a brutal test with a lot of testcases, with k very huge and odd, and the mod step taking a long time, and every element in the arrays odd so that there was a better solution, but you didn't implement it, there would be a TLE IMO unless I am mistaken.
Edit: I underestimated Python lol. But, it would definitely trip people using print for huge numbers.
Because an unsuccessful hack attempt in div. 1 or 2 costs you 50 points.
That was unexpected lol
C is pretty good; it took me quite a while to solve. Wonder why everyone got it so quickly?? Anyway, thanks for the contest :3
Yeah, C was great, everyone got it because they just observed it for subarray size 2 and size 3, which directly gives away the answer to C, that greedily remove from ahead odd index then behind index
Almost got E. Spent too much time on D but failed. Only a little time was left to figure out how to determine a[1] when the length is an odd number...
I don't know how in B we get $$$a_i + k \cdot c_i \equiv c_i + k \cdot c_i$$$, i.e., why $$$a_i \equiv c_i$$$.
Can someone explain it to me?
We defined $$$c_i = a_i \bmod (k + 1)$$$, hence $$$a_i$$$ and $$$c_i$$$ are congruent modulo $$$k + 1$$$.
What is c[i] bro? I don't understand what you said..
Consider this: you want all of the values $$$a_i$$$ to have some common factor, and for each one you can add $$$k$$$ between $$$0$$$ and $$$k$$$ times. Hence, we can say $$$a_i \leftarrow a_i + c_i \cdot k$$$ where $$$c_i$$$ depends on $$$a_i$$$. Now, one observation is that if you take $$$c_i = a_i \bmod (k+1)$$$, you can write the new value as $$$a_i + (a_i \bmod (k+1)) \cdot (k+1-1)$$$, which simplifies to $$$(a_i - (a_i \bmod (k+1))) + (k+1) \cdot (a_i \bmod (k+1))$$$, and this is divisible by $$$k+1$$$. So we can do this for every element so that it becomes divisible by $$$k+1$$$. Also notice the factor multiplying $$$k$$$ is $$$a_i \bmod (k+1)$$$, which is at most $$$k$$$, so we use at most $$$k$$$ operations.
No worries about the imbalance. Any contest where the LLM crowd is not on the top of the leaderboard is a good one these days.
One of the best contests in CF.
Is this round rated or not?
In problem B's editorial, it says, you can also find the smallest integer x such that gcd(x,k)=1. Why should it be coprime with k? I didn't get this point. Can anyone explain this, please
modular inverse of a (mod m) exists only if a and m are relatively prime
I am not sure what coprime means formally, but the goal is to find a common divisor (factor) for all of the values.
Let's say we try some x. If x is a (prime) factor of k, then adding k (where k % x == 0) will not change the remainder modulo x.
Hence, you have to find a prime p that is not a factor of k (k % p != 0). Then you are guaranteed to change the remainder, and it will at some point become divisible by p.
Since k < 10**9, you only need to try primes up to 30 or so.
When you have X and K coprime, and a number A:
If you keep adding K to A and taking it modulo X, then you'll see that, within X additions, you will have covered the whole range 0 to X - 1.
I'm not getting it: in problem B, if we solve it with the first approach, how do we ensure that the number of operations required is at most k?
The goal is to make the GCD of all the elements equal to k + 1, so in the end each a[i] % (k + 1) = 0. Suppose initially a[i] % (k + 1) = x; then x belongs to [0, k].
If x = 0, we never need to add k. Otherwise, (a[i] + k) % (k + 1) = x - 1, so every time you add k to a[i], its remainder modulo (k + 1) decreases by 1. Since a[i] % (k + 1) is at most k, adding k exactly a[i] % (k + 1) times makes the remainder 0, using at most k operations.
Very nice explanation, thank you so much!!!
I did not understand, why do we need to make gcd k + 1 ?
we need a number coprime with k, and k + 1 is the simplest and guaranteed to be coprime with k
swap(B, C); swap(D, E);
I have a doubt about problem E: we use n throw queries first, and then if there are n/2 unknown indices, don't we perform n/2 swap and n/2 throw queries on top of that, for a total of 2n queries? Or am I mistaken?
I got it now: we skip the unknown indices in the first pass, so only the known indices are queried at first. Really good question.
We are not using n throw queries first. We can know which indices will be unknown and can skip them. Let u be the no. of unknown indices, the total ops will be: n-u throws + u swaps + u throws = n + u. u can be at most n/2, so it turns out to be <= 3n/2
yeah got it, Thanks
Can anybody please help me with this D submission? Why does it fail on pretest 2, test 33?
335722832
On a quick glance, your bfs function assumes 1-based indexing for nodes whereas solve uses 0-based indexing. The only problematic line I can think of is cf(i, 1, n) {. Change it to for (int i = 0; i < n; ++i) {.
Thank you very, very, very much!
Fun round! Thanks for letting me hit CM orz
I want to share some feedback about problem D. I very much like this version of the problem even though it is much harder compared to the usual Div2 D. I saw in one of your comments that it was initially about just asking for the minimum number of operations. That would be the exact type of problem that I (and hopefully many others) hate, because the difference in difficulty between guessing the solution and proving the solution is like 400 points, and it basically encourages everyone to guess solutions in contests if they want to improve their rating. I am fine with having an imbalanced round that doesn't have any guess problems. However, if a problem's proof is pretty much of the same difficulty as guessing it (example: today's problem B), then I don't mind it. Thanks for the contest!
Could anyone share their motivation behind thinking of the diameter of the tree in problem D (other than taking an educated guess)?
I think that the idea of making a tree a line can be translated to increasing the diameter of the tree until it is the maximum. The line pushes you to the diameter.
The AC rate between ABC and DEF is insane lol
A, B, and C are too easy; they should be A, B, and C in a Div 3.
Very good contest, the diff between C and D is crazy
The time in which I solved problems A and B beat most people in this contest, because I took almost 2 hours...
And you solved Problem C in 5 minutes.:)
:( :D
Please tell me if my approach for Problem D is correct or not
My approach for D The best operation would be: To do the operation from a node with degree > 2 in the direction of the leaf node. Since there could be multiple combinations of leaf nodes and nodes with degree > 2, we need to choose the combination that has the minimum distance between the leaf node and the node with degree > 2.
To find the best combination, I am using BFS simultaneously from all the leaf nodes, and as soon as I encounter any node with the maximum degree, I return the BFS for that particular combination. This, I believe, is O(n) time complexity because I don't add any node in the queue twice.
In my approach b --> node with maximum degree (>2) c --> either a leaf node (if the leaf node is directly connected to b), or a directly connected node in the direction of the leaf node (computed above using BFS) a --> It will be any node directly connected to b (doesn't matter)
But, my solution is getting TLE on TC-2 (335747332)
I believe my solution's time complexity is O(n). Please correct me if I am wrong (Currently, I am not worried if my approach is correct or not; I am more worried about why my solution is giving TLE)
I also got the idea the same way: multi-source BFS from the leaves to find which node is closest to any leaf and has an a-b-c type of configuration, and then print the parent of that node coming from the smallest-distance leaf, together with the node and any other child. The criterion for selecting a node is degree greater than 2. @henotrix
henotrix, please shed some light on my idea.
Everyone said C is easy, but not in my case :( Maybe I'm stupid or something, but the approach feels quite greedy and seems like it could be wrong. Can someone explain it mathematically?
Try to see that if the condition A[i] >= A[i - 1] + A[i + 1] holds for all possible values of 2 <= i <= N - 1, then no matter what subarray size you take, it will never be violated.
During the update you just need to make sure the gap A[i] - (A[i - 1] + A[i + 1]) is first used to reduce A[i + 1] as far as possible, down to zero if it can be, so that you get the benefit for the later length-3 subarray segment {A[i], A[i + 1], A[i + 2]}.
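If it helps, the invariant described above can be written as a tiny check (a sketch of just the condition, not the full greedy; `A` is 0-indexed and `condition_holds` is my name for it):

```python
def condition_holds(A):
    # The invariant from the comment above: every interior element must be
    # at least the sum of its two neighbours.
    return all(A[i] >= A[i - 1] + A[i + 1] for i in range(1, len(A) - 1))
```

For example, `condition_holds([3, 10, 4])` is True (10 >= 3 + 4), while `condition_holds([5, 3, 5])` is False.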
In the editorial for 2134B - Add 0 or K, the statement "take $$$k+1$$$ as $$$g$$$" did not seem very obvious to me, but note that $$$\{0 \cdot k, 1 \cdot k, \dots, k \cdot k\}$$$ is a complete residue system modulo $$$k+1$$$, which means that by adding these to a set of numbers you can always make each of them divisible by $$$k+1$$$. That's how I got it; maybe you'll find it useful.
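The complete-residue-system claim is easy to verify for a concrete k (7 here is an arbitrary example; it works for any k >= 1 because gcd(k, k+1) = 1):

```python
k = 7
# Multiples 0*k, 1*k, ..., k*k taken modulo k+1.
residues = [(j * k) % (k + 1) for j in range(k + 1)]
print(sorted(residues))  # [0, 1, 2, 3, 4, 5, 6, 7]: every residue appears once
```

So whatever residue a_i has modulo k + 1, some number of additions of k (between 0 and k) cancels it.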
Yeah helpful
No, my rating dropped after I participated in this contest!
Bruh, I solved D when the contest had only 20 minutes left.
I feel lucky to have solved it, but also sad because I didn't even see E.
Problem B is so, so good.
How does the checker for problem D work?
I think it computes the set of triples that satisfy: b is on a diameter (if there are many, any of them), a is connected to b and on the diameter too, and c is connected to b. Then it checks whether your answer is in that set. I guess the answer has to be in the set; any other operation cannot extend the diameter and therefore would fail to reach the minimum number of operations.
If the diameter's length increased after performing an operation, operation was optimal.
I spent a long time on problem B and solved it with exgcd
They should have swapped E and D. I solved the first three problems very fast and tried very hard to solve D in the remaining time but failed; I didn't even look at E... After the contest I found E is easier than D (D's property about the diameter is really hard to notice). Anyway, thanks for the contest!
In my opinion, the problem statement for D was incorrect: it was written that every neighbour of 'b' will be slid to 'c', but in the solution we can slide each neighbour independently to either 'b' or 'c'! The problem was not well framed! ^_^
I have a slightly different solution for problem E that only does one pass.
Start by finding $$$a_n$$$ and $$$a_{n-1}$$$. Also find $$$a_{n-2}$$$ if $$$n$$$ is odd. This can be done in $$$3$$$ or $$$5$$$ operations. (always query $$$n-1$$$).
Now assume you know $$$d_i$$$ and $$$d_{i+1}$$$. You want to use at most $$$3$$$ queries to find $$$a_{i-1}$$$ and $$$a_{i-2}$$$ (and then find corresponding $$$d$$$).
Case 1: $$$d_i = d_{i+1} \implies$$$ query $$$i-2$$$. If the answer is $$$d_i + 1$$$ then $$$a_{i-2} = 2$$$, otherwise it is $$$1$$$. Swap $$$i-2$$$ and $$$i-1$$$ and redo the query. In total we have $$$3$$$ queries.
Case 2: $$$d_i \neq d_{i+1} \implies$$$ query $$$i-1$$$. If the answer is $$$d_i$$$ then $$$a_{i-1} = 2$$$, otherwise it is $$$1$$$. Swap $$$i-2$$$ and $$$i-1$$$ and redo the query. In total we have $$$3$$$ queries.
You figured out $$$2$$$ values in $$$3$$$ queries. The total is $$$\frac{3n}{2}$$$ for even $$$n$$$ and $$$\frac{3n+1}{2}$$$ for odd $$$n$$$.
This is basically the same solution given in the editorial, but only one pass is required.
I wonder if there is a strategy that can do better in general (i.e. no matter the original configuration do at most $$$\left\lceil{\frac{3n}{2}}\right\rceil-1$$$)
Why not swap D and E
I solved 2134B - Add 0 or K using a slightly different approach.
Let $$$g$$$ be some integer $$$\gt 1$$$ such that each $$$a_i + k \cdot c_i$$$ is divisible by $$$g$$$; then:
Now, we need to apply Fermat's Little Theorem. So, $$$g$$$ and $$$k$$$ must be coprime.
The immediate value for $$$g$$$ that came to my mind is $$$k + 1$$$ as $$$gcd(k,\ k + 1) = 1$$$. So,
Then the smallest $$$c_i$$$ that satisfies the equation is $$$(-a_i \cdot k^{k-1}) \bmod (k + 1)$$$, and the final answer is:
I hope my explanation is clear :) I know that I overthink it, but this is the first approach that came to my mind, and it took me a lot of time to verify during the contest.
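Since $$$k \equiv -1 \pmod{k+1}$$$, we have $$$k \cdot k \equiv 1$$$, so $$$k$$$ is its own inverse modulo $$$k+1$$$ and the inverse can be computed directly instead of through an exponent. A minimal sketch (the function name is mine; `pow` with exponent -1 needs Python 3.8+):

```python
def c_of(a, k):
    # k ≡ -1 (mod k+1) implies k*k ≡ 1, so k is its own modular inverse.
    inv_k = pow(k, -1, k + 1)   # equals k itself
    return (-a * inv_k) % (k + 1)
```

For example, `c_of(117, 11)` gives 9, and 117 + 11·9 = 216 is divisible by 12; the result is always in [0, k], so it fits the operation budget.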
Problem E is clever!
I will write down my initial approach, which gives the correct answer but exceeds the query limit (at most $$$2n-1$$$ queries). Observations: $$$\newline$$$ $$$(1)$$$ It is easy to see that throwing at coordinate $$$n-1$$$ will directly reveal $$$a_{n-1}$$$ (in fact it's given as hint 1). $$$\newline$$$ $$$(2)$$$ If $$$jumps[i+1] \ne jumps[i+2]$$$, then we can determine $$$a_i$$$ in exactly one query. $$$\newline$$$ $$$(3)$$$ If we write down $$$jumps[i]$$$, then $$$jumps[i] = jumps[i+1] \implies jumps[i+1] \ne jumps[i+2]$$$. $$$\newline$$$
Now, my algo was as follows: $$$\newline$$$ 1) Query $$$throw \, n-1$$$ [uniquely determines $$$a_{n-1}$$$ from observation $$$(1)$$$ ] $$$\newline$$$ 2) Perform $$$swap \, n-1$$$ and $$$throw \, n-1$$$, this will determine $$$a_n$$$ and $$$jumps[n]$$$ $$$\newline$$$ 3) For $$$i = n-2 \, \text{to} \, 1$$$, if $$$jumps[i+1] \ne jumps[i+2]$$$, query $$$throw \, i$$$ and get $$$a_i , jumps[i]$$$, else : $$$swap \, i$$$, and query $$$throw \, i+1$$$ and get $$$a_i$$$.
$$$\newline$$$ The trouble? It exactly follows the observations as the editorial, but in the worst case, it can go in the else part of step 3, for every $$$i$$$, thus giving $$$2n-1$$$ queries :(
Consider when $$$a = [ \underbrace{1 \dots 1}_{n-1 \text{ times}} 2 ]$$$, my described algo will hit $$$2n-1$$$ queries exactly.
My idea and observations were the same, but done this way you would exceed the query limit :(
My solution is quite similar to yours but doesn't exceed query limit. It is easy to show that if throws[i + 1] == throws[i + 2] then a_{i-1} can be determined in one query.
Mine is
1) Find out a_n and a_{n-1} in the same way as you.
2) Then for i = n - 2 down to 2:
Similarly determine a_1 in the case of odd n.
Since I either use 1 query per element or 3 queries for 2 elements. The total queries is bounded by ceil(3n/2).
Code
This fails on TC140 in Test 2 and I have no idea why.
Can someone help me fix it?
I think I was making a similar mistake: when you perform a swap, you are actually swapping the values of $$$a_i$$$, but when outputting, are you making sure to output the original $$$a$$$ and not the swapped one?
PS: I checked your code on some manual test case, it works fine, maybe you can try some edge cases, logic looks fine, you might be doing something silly.
Yeah I'm maintaining an ans array and a cur array, swaps only happen in cur array and I output the ans array. I already tested it with a lot of cases, can't figure out where it fails at all.
If you know jumps[i+2] and jumps[i+3], you can work out positions i and i+1 in 3 operations:
This uses 3 queries for every 2 values.
At the end we might be left with position 1 if n is odd, but we have two queries to play with. If jumps[2] != jumps[3], we can query position 1 directly; otherwise we swap positions 1 and 2 and query the new position 2 (as jumps[3] and jumps[4] must be different in this case).
Note that we can set jumps[n+1] = jumps[n+2] = 0 to make things easier.
Wow, thats a really nice way! Thanks, I appreciate the approach!
In problem 2134B - Add 0 or K, can anyone explain why $$$a_i \leftarrow a_i + k \cdot (a_i \bmod (k+1))$$$ is correct?
This is how I thought during the contest:
Consider $$$a_i \equiv x \pmod{k+1}$$$, where $$$0 \le x \le k$$$. Now, observe that $$$k \equiv -1 \pmod{k+1}$$$. Thus, adding $$$k$$$ each time lowers the residue by 1: $$$x \rightarrow x-1 \rightarrow \dots \rightarrow 0 \rightarrow k \rightarrow \dots$$$ (since we can use at most $$$k$$$ operations, it is guaranteed we will hit $$$0$$$ exactly once).
Hence, you find $$$a_i + x \cdot k \equiv a_i - x \equiv 0 \pmod{k+1}$$$.
For example: consider $$$a_i = 117, k = 11$$$. Clearly $$$117 \equiv 9 \pmod{12}$$$, hence $$$117 + 9 \times 11 = 216 \equiv 0 \pmod{12}$$$.
I hope that is clear.
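The closed-form version of the argument above fits in one line and can be checked against the worked example:

```python
def finish(a, k):
    # Add k exactly (a mod (k+1)) times; each addition lowers the residue by 1.
    return a + k * (a % (k + 1))

print(finish(117, 11))  # 216, which is 18 * 12
```

The number of additions used, `a % (k + 1)`, never exceeds k, so the operation limit is respected.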
Thank you, but why did you think about $$$k + 1$$$?
Mostly intuition, but you can see that the problem statement says that it is always possible, so in at most $$$k+1$$$ operations (adding 0 itself is an operation), we can always reach $$$0$$$ (mod $$$g$$$), which when applied to $$$k+1$$$ fits perfectly. It was kind of reverse engineering, and testing out some cases.
Let
x = a[i] mod (k+1), and note that k = -1 mod (k+1).
Thanks for your explanation, but I mean: how does one come up with this formula initially? Why $$$k + 1$$$?
The key observation / thing to know is that
k = -1 mod (k+1), e.g. 4 = -1 mod 5, or 999 = -1 mod 1000. And if we have a remainder of x, we can make it 'disappear' by adding x times -1.
Thanks!
After reading the editorial --> D is such an amazing problem.
For problem B, I didn't get why k+1 is the best choice of GCD. Can anyone explain, please?
Can anyone tell me why my F code always gets TLE on test #16? Is it just me? If not, I desperately need a tutorial on optimizing the constant factor.
OK, I solved it myself. It was only because of the pre-processing of fact and inverse fact used to get combinations; it was too slow.
Guys, why in problem B can we say that $$$a_i + k \cdot c_i \equiv c_i + k \cdot c_i$$$? Is $$$a_i = c_i$$$?
I have a solution for problem E that seems slightly simpler in my opinion: going backwards from n to 1, consider pairing adjacent indices, and we will attempt to solve the values for each of the pairs in 3 queries. If n is odd, we shall simply try to solve for i=1 in 2 queries after having solved for the rest of the indices.
I maintain two arrays, val[1..n] and dp[1..n + 2], where val[i] is the value of the i th index and dp[i] corresponds to the value of the throw(i) (in the current hidden array configuration) for i <= n, and dp[n + 1] = dp[n + 2] = 0.
I also maintain the list of swaps that are made, and in the end I shall go in the reverse order of this list to restore the original array.
For each pair (i + 1, i) belonging to {(n,n-1), (n-2,n-3)..( ,2 or 1)} we calculate val[i] and val[i + 1] by observing the following:
If n is odd, we have 2 queries left to use for ascertaining val[1].
We notice that either dp[2] != dp[3] or dp[3] != dp[4]
(proof by contradiction: if dp[2] == dp[3] == dp[4], then since val[2] is 1 or 2, we would have dp[2] == 1 + dp[3] or dp[2] == 1 + dp[4], which contradicts the assumption).
If dp[2] != dp[3], we simply ask throw(1) and if it equals 1 + dp[2], then val[1] == 1. If it is not equal to 1 + dp[2], then val[1] == 2.
If dp[2] == dp[3], then dp[3] != dp[4]. So we first swap the values at indices 1 and 2, and then query throw(2) to determine the value of the new val[2] which corresponds to the previous val[1]. If throw(2) == 1 + dp[3] then val[2] == 1 else val[2] == 2.
I am solving problem A: if the test case is n=5, a=4, b=1, what will be the answer? I think it should be YES, but according to the editorial it gives NO. Can you review this and tell me where I went wrong?
Attention!
Your solution 335622185 for the problem 2134A significantly coincides with solutions MASTER_RAFAT/335621768, wwwglll/335622185. Such a coincidence is a clear rules violation. Note that unintentional leakage is also a violation. For example, do not use ideone.com with the default settings (public access to your code). If you have conclusive evidence that a coincidence has occurred due to the use of a common source published before the competition, write a comment to post about the round with all the details. More information can be found at http://mirror.codeforces.com/blog/entry/8790. Such violation of the rules may be the reason for blocking your account or other penalties. In case of repeated violations, your account may be blocked.
First of all, I don't know this person, and the time between his code submission and mine was less than a minute. Secondly, this is a relatively simple problem, and having similar logic in the code is quite common. You can check all my submitted code—it was entirely written by me.
How should one start to think when they come to a problem like this?
I was unfamiliar with modular arithmetic until I began programming, and it's hard even on paper. So I was clueless, as $$$g$$$ and $$$c_i$$$ were both unknown.
For solution 2 of problem B in the editorial, how do we prove that $$$c_i$$$ is guaranteed to be less than or equal to $$$k$$$?
Problem D & E is a bit tricky and ad-hoc, but I like them!
Can anyone explain why the idea of choosing g = k+1 helps, or recommend any good resources to understand this better, with follow-up questions?
Honestly, the editorial for C was kinda bad. Why is there no proof for the greedy approach?
There is a small mistake in the editorial for D.
If $$$b$$$ is an endpoint of $$$p$$$ and $$$p$$$ does not pass through $$$a$$$ or $$$c$$$, the length of $$$p$$$ will increase by one.
Consider the figure above. Suppose $$$p$$$ is the unique path between nodes $$$3$$$ and $$$2$$$. After applying the operation $$$4$$$, $$$3$$$, $$$2$$$; the length of $$$p$$$ changes from $$$2$$$ to $$$3$$$.
Tagging: henotrix
It may remain the same length or increase by $$$1$$$. I have updated the statement, thanks.