I hope you all enjoyed the contest!
| Predictor | A | B | C | D | E | F | G | H |
|---|---|---|---|---|---|---|---|---|
| AksLolCoding | 800 | 900 | 1000 | 1400 | 1500 | 1800 | 1800 | — |
| ALnQ417 | 800 | 900 | 1100 | 1300 | 1500 | 1600 | — | — |
| bookcat | 800 | 1200 | 1200 | 1500 | 1600 | 1600 | 1600 | 2000 |
| chromate00 | 800 | 800 | 1000 | 1400 | 1400 | 1800 | 1800 | 2200 |
| cry | 800 | 1000 | 1200 | 1600 | 1700 | 2100 | 2100 | 2400 |
| Edeeva | 800 | 800 | 1000 | 1300 | 1400 | 1600 | 1900 | — |
| efishel | 800 | 1000 | 1000 | 1500 | 1500 | 1800 | 1900 | 2200 |
| _Equinox | 800 | 800 | 800 | 1200 | 1500 | 1600 | 1800 | — |
| fatalerror | 800 | 1400 | 1200 | 1600 | 1700 | 1900 | 2000 | 2100 |
| -firefly- | 800 | 800 | 900 | 1400 | 1400 | 1600 | 1800 | 2500 |
| nifeshe | 800 | 800 | 1000 | 1500 | 1600 | 1800 | 1900 | 2400 |
| Non-origination | 800 | 800 | 1000 | 1600 | 1300 | 1600 | 1800 | — |
| reirugan | 800 | 900 | 1200 | 1500 | 1500 | 1800 | 1800 | — |
| temporary1 | 800 | 800 | 900 | 1400 | 1300 | 1800 | 1900 | 2200 |
| white_two | 800 | 800 | 1200 | 1400 | 1400 | 1900 | — | — |
Idea: -firefly-
Preparation: -firefly-
Analysis: temporary1
Step $$$2$$$ doesn't affect step $$$1$$$.
```python
import sys
input = lambda: sys.stdin.readline().rstrip()
ii = lambda: int(input())
mii = lambda: map(int, input().split())
lii = lambda: list(mii())

def solve():
    n = ii()
    A = lii()
    B = lii()
    return sum(a - b for a, b in zip(A, B) if a > b) + 1

for _ in range(ii()):
    print(solve())
```
Idea: -firefly-
Preparation: -firefly-
Analysis: reirugan
There is not much difference between the sequences for $$$n=k$$$ and $$$n=k+1$$$.
Construct the series from the left to the right greedily.
```python
import sys
input = lambda: sys.stdin.readline().rstrip()
ii = lambda: int(input())
mii = lambda: map(int, input().split())
lii = lambda: list(mii())

def solve():
    n = ii()
    if n == 1:
        return [0]
    ans = [-1, 3] * (n // 2)
    if n % 2:
        ans.append(-1)
    else:
        ans[-1] = 2
    return ans

for _ in range(ii()):
    print(*solve())
```
Idea: Tobo
Preparation: Tobo, -firefly-, Friedrich
Analysis: Tobo
Consider the operations modulo $$$k$$$.
```python
import sys
input = lambda: sys.stdin.readline().rstrip()
ii = lambda: int(input())
mii = lambda: map(int, input().split())
lii = lambda: list(mii())

from collections import defaultdict
from random import getrandbits

RD = getrandbits(31)

def solve():
    n, k = mii()
    A = lii()
    B = lii()
    cnt = defaultdict(int)
    for x in A:
        r = x % k
        cnt[min(r, k - r) ^ RD] += 1
    for x in B:
        r = x % k
        cnt[min(r, k - r) ^ RD] -= 1
    return all(v == 0 for v in cnt.values())

for _ in range(ii()):
    print('YES' if solve() else 'NO')
```
Idea: -firefly-
Preparation: -firefly-
Analysis: __baozii__
What is the minimum diameter achievable?
How are leaves significant?
```python
import sys
input = lambda: sys.stdin.readline().rstrip()
ii = lambda: int(input())
mii = lambda: map(int, input().split())
lii = lambda: list(mii())

def solve():
    n = ii()
    g = [[] for _ in range(n)]
    deg = [0] * n
    for _ in range(n - 1):
        u, v = mii()
        u -= 1; v -= 1
        g[u].append(v)
        g[v].append(u)
        deg[u] += 1
        deg[v] += 1
    if n <= 3:
        return 0
    c1 = sum(deg[u] == 1 for u in range(n))
    mx = max(sum(deg[v] == 1 for v in g[u]) for u in range(n))
    return c1 - mx

for _ in range(ii()):
    print(solve())
```
Idea: -firefly-
Preparation: -firefly-
Analysis: __baozii__
```python
import sys
input = lambda: sys.stdin.readline().rstrip()
ii = lambda: int(input())
mii = lambda: map(int, input().split())
lii = lambda: list(mii())

def solve():
    n = ii()
    A = lii()
    B = lii()
    if A[-1] != B[-1]:
        return False
    for i in range(n - 2, -1, -1):
        if A[i] != B[i] and A[i] ^ A[i+1] != B[i] and A[i] ^ B[i+1] != B[i]:
            return False
    return True

for _ in range(ii()):
    print('YES' if solve() else 'NO')
```
Idea: efishel, __baozii__
Preparation: -firefly-
Analysis: temporary1, reirugan
The grid possesses a significant property. Try to generate some grids with random $$$a$$$ and $$$b$$$, and understand the distribution of $$$0$$$'s and $$$1$$$'s.
For each point $$$(x, y)$$$, you can derive an answer formula of the form $$$\min(f(x,y), g(x,y))$$$. Note that $$$\displaystyle\min(f,g)=\frac{f+g}{2}-\frac{|f-g|}{2}$$$
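A quick numeric check of this min identity (an illustration added here, not part of the reference material):

```python
# Check min(f, g) == (f + g)/2 - |f - g|/2 on a few sample pairs.
for f, g in [(3, 7), (10, 2), (5, 5), (-4, 9)]:
    assert min(f, g) == (f + g) / 2 - abs(f - g) / 2
print("identity holds")
```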
```cpp
#include <bits/stdc++.h>
using namespace std;
#define f first
#define s second
#define ll long long
#define pii pair<int,int>
#define pli pair<ll,int>
#define pll pair<ll,ll>
#define tiii tuple<int,int,int>
#define tiiii tuple<int,int,int,int>
#define pb push_back
#define eb emplace_back
#define emp emplace
#define mkp make_pair
#define mkt make_tuple
#define vctr vector
#define arr array
#define all(x) x.begin(), x.end()
#define amin(a,b) a = min(a,b)
#define amax(a,b) a = max(a,b)
#define brick(x) cout << #x << " = " << (x) << " | "
#define dbg(x) cout << #x << " = " << (x) << '\n'
#define vdbg(a) cout << #a << " = "; for(auto _x : a)cout << _x << ' '; cout << '\n'
#define adbg(a,n) cout << #a << " = "; for(int _i = 1; _i <= n; ++_i)cout << a[_i] << ' '; cout << '\n'
#define adbg0(a,n) cout << #a << " = "; for(int _i = 0; _i < n; ++_i)cout << a[_i] << ' '; cout << '\n'

mt19937 rng(static_cast<uint32_t>(chrono::steady_clock::now().time_since_epoch().count()));
int uid(int a, int b) { return uniform_int_distribution<int>(a,b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a,b)(rng); }

const int MOD = 1e9+7; // 998244353;

int a[200005];
int b[200005];
int c[200005][2];
int d[200005][2];
ll pf[400005][2];
int cnt[400005];

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int tt;
    cin >> tt;
    while (tt--) {
        int n;
        cin >> n;
        string t;
        cin >> t;
        for (int i = 1; i <= n; ++i) {
            a[i] = (t[i-1] == '1');
        }
        cin >> t;
        for (int i = 1; i <= n; ++i) {
            b[i] = (t[i-1] == '1');
        }
        const int o = n;
        for (int i = 0; i <= 2*n; ++i) {
            pf[i][0] = 0;
            pf[i][1] = 0;
            cnt[i] = 0;
        }
        for (int i = 1; i <= n; ++i) {
            c[i][0] = c[i-1][0]+(a[i] == 0);
            c[i][1] = c[i-1][1]+(a[i] == 1);
            pf[o+c[i][0]-c[i][1]][0] += c[i][0];
            pf[o+c[i][0]-c[i][1]][1] += c[i][1];
            cnt[o+c[i][0]-c[i][1]] += 1;
        }
        for (int i = 1; i <= 2*n; ++i) {
            pf[i][0] += pf[i-1][0];
            pf[i][1] += pf[i-1][1];
            cnt[i] += cnt[i-1];
        }
        for (int i = 1; i <= n; ++i) {
            d[i][0] = d[i-1][0]+(b[i] == 0);
            d[i][1] = d[i-1][1]+(b[i] == 1);
        }
        ll ans = 0;
        for (int i = 1; i <= n; ++i) {
            int z = d[i][1]-d[i][0];
            // 0 worth it when delta <= z
            ans += d[i][0]*(ll)cnt[o+z]+pf[o+z][0];
            ans += d[i][1]*(ll)(n-cnt[o+z])+pf[2*n][1]-pf[o+z][1];
        }
        cout << ans << '\n';
    }
    return 0;
}
```
Idea: -firefly-
Preparation: -firefly-
Analysis: __baozii__
For a set $$$S$$$, how many operations are required to transform $$$S$$$ into $$$S \setminus \{\min(S)\}$$$?
$$$S$$$ can be represented as a binary number (though very large).
How to calculate the answer when elements in $$$S$$$ are very large?
```python
MOD = 10 ** 9 + 7

dp = [0] * 31
dp[0] = 1
for i in range(1, 31):
    dp[i] = i + 1
    for j in range(i):
        dp[i] = dp[i] * dp[j] % MOD

def dfs(v, k):
    if k == 0:
        return 1
    ans = v + 1
    v = min(v - 1, 30)
    k -= 1
    for j in range(v + 1):
        if k >= 1 << j:
            ans = ans * dp[j] % MOD
            k -= 1 << j
        else:
            ans = ans * dfs(j, k) % MOD
            break
    return ans

for _ in range(int(input())):
    n, k = map(int, input().split())
    s = list(map(int, input().split()))
    for i in range(n):
        s[i] -= 1
    s.sort()
    ans = 1
    for v in s:
        if v <= 30 and k >= 1 << v:
            k -= 1 << v
            ans = ans * dp[v] % MOD
        else:
            ans = ans * dfs(v, k) % MOD
            break
    print(ans)
```
Idea: __baozii__, -firefly-
Preparation: -firefly-
Analysis: -firefly-
Let's consider an easier problem: how do we find one coprime pair in $$$a$$$?
If there exists a coprime pair, it means there exists an $$$i$$$ such that $$$a_i$$$ is coprime to at least one other element.
In other words, there is at least one other index $$$j$$$ such that $$$\gcd(a_i, a_j)=1$$$.
Therefore, we strengthen the problem: we find $$$\displaystyle\sum_{j=1}^{n}[\gcd(a_i, a_j)=1]$$$ for each $$$i$$$. Here, we introduce two ways to evaluate this sum.
In the following discussion, we denote by $$$f(d)$$$ the count of elements in $$$a$$$ divisible by $$$d$$$. We can precompute $$$f(1), f(2), \dots, f(m)$$$ in $$$O(m\log m)$$$ time using the harmonic-series trick.
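This precomputation can be sketched as follows (a minimal standalone illustration; the name `divisible_counts` is made up, not from the reference solution):

```python
# f[d] = how many elements of a are divisible by d, for all d in [1, m].
# The double loop does m/1 + m/2 + ... + m/m = O(m log m) iterations.
def divisible_counts(a, m):
    cnt = [0] * (m + 1)            # cnt[x] = occurrences of value x in a
    for x in a:
        cnt[x] += 1
    f = [0] * (m + 1)
    for d in range(1, m + 1):
        for x in range(d, m + 1, d):
            f[d] += cnt[x]
    return f

f = divisible_counts([2, 3, 4, 6, 9], 9)
print(f[1], f[2], f[3])  # → 5 3 3
```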
For each $$$a_i$$$, find the set of distinct prime factors of $$$a_i$$$. Let's call this set $$$P_i$$$. For each prime factor $$$p_k \in P_i$$$, let $$$S_k$$$ be the set of elements in the array $$$a$$$ that are divisible by $$$p_k$$$.
Any number that is NOT coprime to $$$a_i$$$ must share at least one prime factor with $$$a_i$$$, so it must belong to the union $$$S_1 \cup S_2 \cup \dots \cup S_k$$$. By the principle of inclusion-exclusion (PIE), we can calculate the size of the union:
$$$\displaystyle|S_1 \cup S_2 \cup \dots \cup S_k| = \sum_{r}|S_r| - \sum_{r \lt s}|S_r \cap S_s| + \sum_{r \lt s \lt t}|S_r \cap S_s \cap S_t| - \cdots$$$
Here:
- $$$|S_r|$$$ is the count of elements in $$$a$$$ divisible by $$$p_r$$$, which is $$$f(p_r)$$$.
- $$$|S_r \cap S_s|$$$ is the count of elements in $$$a$$$ divisible by both $$$p_r$$$ and $$$p_s$$$, that is, divisible by $$$p_rp_s$$$, which is $$$f(p_rp_s)$$$.
- Similarly, $$$|S_r \cap S_s \cap S_t|$$$ is $$$f(p_rp_sp_t)$$$.
Since $$$f(d)$$$ has been precomputed, we can compute $$$|S_1 \cup S_2 \cup \dots \cup S_k|$$$ for each $$$i$$$ in $$$O(2^{|P_i|})$$$ time. Because $$$2 \times 3 \times 5 \times 7 \times 11 \times 13 \times 17 \times 19 = 9699690 \gt 10^6$$$, the size of $$$P_i$$$ won't exceed $$$7$$$. Therefore, the number of operations is roughly $$$2^7 \cdot n \le 3 \cdot 10^7$$$, which is acceptable.
After finding the count of elements that are NOT coprime to $$$a_i$$$, we can easily obtain $$$\displaystyle\sum_{j=1}^{n}[\gcd(a_i, a_j)=1]$$$ by subtraction.
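The PIE counting can be sketched end to end like this (a standalone illustration with made-up names; a production solution would factor via a smallest-prime-factor sieve rather than trial division):

```python
# For each a_i: count elements NOT coprime to a_i via inclusion-exclusion
# over subsets of a_i's distinct prime factors, then subtract from n.
# Note: j = i itself is counted too, matching the formula in the editorial.
def coprime_counts(a):
    n, m = len(a), max(a)
    cnt = [0] * (m + 1)
    for x in a:
        cnt[x] += 1
    f = [0] * (m + 1)              # f[d] = number of elements divisible by d
    for d in range(1, m + 1):
        for x in range(d, m + 1, d):
            f[d] += cnt[x]
    res = []
    for x in a:
        ps, t, p = [], x, 2        # distinct prime factors of x
        while p * p <= t:
            if t % p == 0:
                ps.append(p)
                while t % p == 0:
                    t //= p
            p += 1
        if t > 1:
            ps.append(t)
        not_coprime = 0
        for mask in range(1, 1 << len(ps)):   # PIE over non-empty subsets
            d, bits = 1, 0
            for b, q in enumerate(ps):
                if mask >> b & 1:
                    d *= q
                    bits += 1
            not_coprime += f[d] if bits % 2 else -f[d]
        res.append(n - not_coprime)
    return res

print(coprime_counts([2, 3, 4, 9]))  # → [2, 2, 2, 2]
```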
Another, more advanced approach is Möbius inversion. In the following discussion, we assume that you know the definition and properties of the Möbius function $$$\mu(n)$$$.
Since we know that $$$\displaystyle[\gcd(x, y) = 1] = \sum_{d|\gcd(x, y)}\mu(d)$$$, we can expand the original formula as:
$$$\displaystyle\sum_{j=1}^{n}[\gcd(a_i, a_j)=1] = \sum_{j=1}^{n}\sum_{d|\gcd(a_i, a_j)}\mu(d)$$$
Let's swap the order of summation and get
$$$\displaystyle\sum_{d=1}^{m}\mu(d)\sum_{j=1}^{n}[d \mid \gcd(a_i, a_j)]$$$
Because $$$d \mid \gcd(a_i, a_j)$$$ is equivalent to $$$d \mid a_i$$$ and $$$d \mid a_j$$$, we can simplify the formula to
$$$\displaystyle\sum_{d \mid a_i}\mu(d)\,f(d)$$$
Thus, we can enumerate the divisors of $$$a_i$$$ for each $$$i$$$ and compute the summation above by brute force, which takes $$$O(n\sqrt{m})$$$ or $$$O(n\,d(m))$$$ time, where $$$d(m)$$$ is the maximum divisor count of a value up to $$$m$$$.
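Putting the Möbius formula into code, a self-contained sketch (illustrative names; a linear sieve computes $$$\mu$$$, mirroring the C++ reference solution at the end of this editorial):

```python
# Linear sieve for the Möbius function mu(1..m).
def mobius_upto(m):
    mu = [1] * (m + 1)
    is_comp = [False] * (m + 1)
    primes = []
    for i in range(2, m + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > m:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

# deg(i) = sum of mu(d) * f(d) over divisors d of a_i.
def coprime_counts_mobius(a):
    m = max(a)
    mu = mobius_upto(m)
    cnt = [0] * (m + 1)
    for x in a:
        cnt[x] += 1
    f = [0] * (m + 1)              # f[d] = number of elements divisible by d
    for d in range(1, m + 1):
        for x in range(d, m + 1, d):
            f[d] += cnt[x]
    res = []
    for x in a:
        s, d = 0, 1                # enumerate divisors of x in O(sqrt(x))
        while d * d <= x:
            if x % d == 0:
                s += mu[d] * f[d]
                if d * d < x:
                    s += mu[x // d] * f[x // d]
            d += 1
        res.append(s)
    return res

print(coprime_counts_mobius([2, 3, 4, 9]))  # → [2, 2, 2, 2]
```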
Let's build a graph $$$G$$$ where vertices $$$i$$$ and $$$j$$$ are joined by an undirected edge if and only if $$$\gcd(a_i, a_j) = 1$$$. The problem becomes finding a matching of size $$$2$$$ in this graph. However, from the discussion above we only have each vertex's degree. How do we find the matching based only on degrees?
We use a greedy approach to obtain a matching of size $$$2$$$. Suppose $$$G$$$ has at least one edge.
- Choose a vertex $$$u$$$ with the maximum degree.
- Among all vertices adjacent to $$$u$$$, pick a vertex $$$v$$$ with the minimum degree.
- Remove vertices $$$u$$$ and $$$v$$$ and their incident edges from $$$G$$$ to get a new graph $$$G^\prime$$$.
- If at least one edge remains in $$$G^\prime$$$, then $$$G$$$ has a matching of size $$$2$$$ consisting of $$$(u, v)$$$ and that remaining edge. Otherwise, $$$G$$$ has no matching of size $$$2$$$.
We will prove that if $$$G^\prime$$$ contains no edges, $$$G$$$ has no matching of size $$$2$$$.
Because $$$G^\prime$$$ has no edges, every non-isolated vertex of $$$G$$$ is adjacent to $$$u$$$, $$$v$$$, or both. It means $$$G$$$ has only one connected component that contains more than two vertices. To make the discussion easier, we will use $$$G$$$ to refer to this major connected component. Then every vertex other than $$$u$$$ and $$$v$$$ has degree $$$1$$$ or $$$2$$$ in $$$G$$$.
- Suppose all vertices other than $$$u$$$ and $$$v$$$ have degree $$$1$$$. If at least one vertex is adjacent to $$$u$$$, the minimum degree among $$$u$$$'s neighbours is $$$1$$$. Since the algorithm chooses $$$v$$$ with the minimum degree, $$$v$$$ also has degree $$$1$$$, so all such vertices are adjacent to $$$u$$$. In this case, $$$G$$$ is a star-shaped tree, which has no matching of size $$$2$$$ for obvious reasons. On the other hand, if all vertices are adjacent to $$$v$$$, the degree of $$$v$$$ is greater than that of $$$u$$$, which is impossible because the degree of $$$u$$$ is maximum.
- Suppose exactly one vertex other than $$$u$$$ and $$$v$$$ has degree $$$2$$$; it must be adjacent to both $$$u$$$ and $$$v$$$. Similarly to the discussion above, we can prove that no other vertex can be adjacent to either $$$u$$$ or $$$v$$$. In this case, $$$G$$$ is a three-vertex cycle, i.e. $$$K_3$$$, which has no matching of size $$$2$$$.
- Suppose more than two vertices other than $$$u$$$ and $$$v$$$ have degree $$$2$$$; then $$$v$$$ has degree at least $$$3$$$. Denote by $$$w$$$ a vertex adjacent to both $$$u$$$ and $$$v$$$; then $$$\mathrm{deg}(w) = 2 \lt 3 \le \mathrm{deg}(v)$$$. Because $$$w$$$ is also adjacent to $$$u$$$, this contradicts step $$$2$$$ of the algorithm.
For our problem, the degree of each $$$i$$$ is $$$\displaystyle\sum_{j=1}^{n}[\gcd(a_i, a_j)=1] - [a_i = 1]$$$. We can then use brute force to find the first edge, remove it, and try to find the second edge.
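The greedy steps can be illustrated on an explicit graph as follows (a hypothetical helper for demonstration only; the actual solution never materializes the graph and tests adjacency with $$$\gcd$$$):

```python
# Find two vertex-disjoint edges using the editorial's four greedy steps.
# Returns the pair of edges, or None if no matching of size 2 exists.
def find_two_disjoint_edges(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = [len(s) for s in adj]
    if not edges:
        return None
    u = max(range(n), key=lambda x: deg[x])          # step 1: max degree
    v = min(adj[u], key=lambda x: deg[x])            # step 2: min-degree neighbour
    rest = [e for e in edges if u not in e and v not in e]  # step 3: build G'
    if not rest:                                     # step 4: check G'
        return None
    return (u, v), rest[0]

# A path 0-1-2-3 contains two disjoint edges; a triangle does not.
print(find_two_disjoint_edges(4, [(0, 1), (1, 2), (2, 3)]))
print(find_two_disjoint_edges(3, [(0, 1), (1, 2), (0, 2)]))
```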
```cpp
#include <bits/stdc++.h>
using i64 = long long;

constexpr int V = 1e6 + 10;
std::array<int, V> mu, comp;
std::vector<int> primes;

void solve() {
    int n, m = 0;
    std::cin >> n;
    std::vector<int> a(n);
    for (int i = 0; i < n; ++i) {
        std::cin >> a[i];
        m = std::max(m, a[i]);
    }
    std::vector<int> mul(m+1), deg(n), mp(m+1);
    for (int i = 0; i < n; ++i) {
        mp[a[i]]++;
    }
    for (int j = 1; j <= m; ++j) {
        for (int x = j; x <= m; x += j) {
            mul[j] += mp[x];
        }
    }
    for (int i = 0; i < n; ++i) {
        for (int j = 1; j * j <= a[i]; ++j) {
            if (a[i] % j == 0) {
                deg[i] += mu[j] * mul[j];
                if (j * j < a[i]) {
                    deg[i] += mu[a[i] / j] * mul[a[i] / j];
                }
            }
        }
        if (a[i] == 1) --deg[i];
    }
    std::array<int, 4> ans;
    int u = 0, v = 0, mind = INT32_MAX;
    for (int i = 1; i < n; ++i) {
        if (deg[i] > deg[u]) {
            u = i;
        }
    }
    if (!deg[u]) {
        std::cout << "0\n";
        return;
    }
    deg[u] = 0;
    for (int i = 0; i < n; ++i) {
        if (i == u) continue;
        if (std::gcd(a[i], a[u]) == 1) {
            --deg[i];
            if (mind > deg[i]) {
                mind = deg[i];
                v = i;
            }
        }
    }
    ans[0] = u;
    ans[1] = v;
    deg[v] = 0;
    for (int i = 0; i < n; ++i) {
        if (i == u || i == v) continue;
        if (std::gcd(a[i], a[v]) == 1) {
            --deg[i];
        }
    }
    u = 0;
    for (int i = 1; i < n; ++i) {
        if (deg[i] > deg[u]) {
            u = i;
        }
    }
    if (!deg[u]) {
        std::cout << "0\n";
        return;
    }
    v = -1;
    for (int i = 0; i < n; ++i) {
        if (i == u || !deg[i]) continue;
        if (std::gcd(a[i], a[u]) == 1) {
            v = i;
            break;
        }
    }
    assert(v >= 0);
    ans[2] = u; ans[3] = v;
    for (int i = 0; i < 4; ++i) {
        std::cout << ans[i] + 1 << " \n"[i == 3];
    }
}

int main() {
    std::cin.tie(nullptr)->sync_with_stdio(false);
    int T = 1;
    std::cin >> T;
    mu[1] = 1;
    for (int i = 2; i < V; ++i) {
        if (!comp[i]) {
            primes.emplace_back(i);
            mu[i] = -1;
        }
        for (auto p: primes) {
            int q = i * p;
            if (q >= V) break;
            comp[q] = true;
            if (i % p == 0) {
                mu[q] = 0;
                break;
            } else {
                mu[q] = -mu[i];
            }
        }
    }
    while (T--) solve();
    return 0;
}
```

fast editorial :)
this is the leetcode equivalent of people saying like 1/4 gang ->
I'm really into reading editorial after bombing H with 12 submissions :)
Really like problem H's Easter egg. Noticed this small detail after bombing H :)
H is prolly the hardest question that I have ever encountered during my cp career (at least for me)
I tried spamming H with my definitely-WA code for no reason :)
Ohh, lol , but like H was really hard
you are a pupil and he is an expert and I am a specialist
Ur right but like what’s the point of this?
fast editorial, nice :D
All the "rate the problem" sections are linked with each other. This shouldn't be the case, right?
I.e., if I rated E, it also shows up on A, B, C, D, F, G and H
Fixed now
I like editorials that explain the solutions step by step, hint by hint. Thanks -firefly- for the great and fast editorial!!!
Cool H, love it. Sadly I don't have time to do other problems.
NICE EDITORIAL!
333317238 can anyone plz tell why is this incorrect for C
can you try using keys of one of the maps(cs for example) to iterate over as r? also try breaking when r becomes bigger than k / 2. Basically i want you to avoid unordered_set usage to see if TLE will become accepted.
ohk thx I'll update then check
What's the problem with my solution to problem B?
333439457
You allocate a vector of size k. However k can be as large as $$$10^9$$$.
Yeah, but it should work; how exactly does allocating a vector of size 1e9 cause a runtime error? Sorry for being silly)
An int in C++ is 4 bytes, therefore $$$4 \times 10^9$$$ bytes $$$\approx$$$ 3.7 GiB of memory would be allocated.
thx)
Size of `long long int` in C++: $$$64$$$ bits ($$$8$$$ bytes). Thus, a `vector` of `long long int` of size $$$10^9$$$ will occupy $$$8 \times 10^9$$$ bytes (and 1 GB = $$$10^9$$$ bytes).
Thus, its size will be 8 GB :(
thx)
A failed allocation will result in a `std::bad_alloc` exception.
thx)
yeah bro, I faced the same problem. Try using a map, which selectively allocates memory, as you don't require all 1e9 values that an array allocates by default.
thx)
Thanks for an Amazing Contest..
The input is 1 6 and the output is -1 3 -1 1 -1 1; why not?
Because sum of last two elements is not positive
if we choose the subarray from index 3 to 5 (1-based indexing), your sum comes out to be 0, hence it's wrong.
can anyone please provide code for the 2131D - Arboris Contractio??
Submission
Could you please link the submission link, as your code took up a lot of space? UPD: Now it doesn't :D
Link to my submission: 333403089
Thanks
dfs: 333953619 with explanation in comments
Great problems and fast editorials, thanks!
Great problems!
I don't understand why my solution for D fails. I find the node that has the maximum number of leaf nodes at depth 1, then I root the tree at this node; now attaching all the other leaf nodes to this root takes one operation per leaf. Can someone tell me what's wrong with this approach?
I did something similar (or maybe the same) that worked. I found the node that has the most leaves attached. Then the answer was the {total number of leaves} minus {most leaves on one node}.
It works now I messed up the implementation a little bit
I did not see your code, but I have a suspicion that there is a difference between what you wrote in plain English and what you wrote in code. Consider this tree:
1 — 2 — 3 — 4 — 5
Now, does your code treat $$$3$$$ as root? If so, it will find the 2 leaves $$$1$$$ and $$$5$$$, whereas $$$2$$$ or $$$4$$$ as root will only find leaf $$$5$$$ or $$$1$$$ respectively, resulting in an answer of 1.
I implemented my code so that I take the highest-degree vertex as root and find the leaves that are not connected to it. This was "almost" correct, except that the case above needs a tie-breaker to treat $$$2$$$ or $$$4$$$ as root instead of $$$3$$$, even though they all have the same degree.
Problem H can also be solved if sum of M is not bounded by $$$10^6$$$.
Submission: 333367734
Problems C and D are interesting, although E seems a little too easy for its place. Unfortunately, I lost too much time on B and couldn't solve F. But overall this was a nice contest!
I solved H using random. I shuffled the initial array and searched for the first coprime pair in the prefix, then the second one. 22 iterations were enough.
Nowcoder has a problem that is nearly the same as 2131H: 牛客练习赛137F (Nowcoder Practice Round 137F). 2131H is just the special case of that problem with g=1.
I just submitted my code after changing g=1 and got AC.
An interesting thing is that this round's writer __baozii__ had participated in that contest on Nowcoder. Maybe just a coincidence(
how to solve for gcd values >=2 with queries?
I reduced H to having a bipartite graph of prime factors and indices, then there is $$$ gcd(a_i, a_j) = 1 $$$ iff $$$ dist(i, j) \gt 4 $$$ but I couldn't solve further. Can it be solved like this? here dist refers to minimum distance
Am I the only one who thinks problem F was slightly more difficult than problem G?
yeah... I didn't solve G but pretty sure I already know how to do it without looking at editorial. But I got stuck doing F not knowing how to quickly sum all the costs/contributions. Should've done G instead.
Definitely, I couldn't do F in contest, and now I did G with little trouble.
Hey everyone,
I've come across some people cheating again. I already wrote a blog about it last time and reported them in the known cheaters’ database, but unfortunately, they’re still getting away with it—and even showing off the ratings they earned dishonestly, especially around campus.
I'm honestly not sure what more can be done, but it's frustrating to see people benefiting from this while others are putting in real effort. Any suggestions on how to handle this in a more effective way?
https://mirror.codeforces.com/blog/entry/134156 rating is just a number
If anyone struggling to find corner case for D, consider this tree:
1 — 2 — 3 — 4 — 5
Now, does your code treat $$$3$$$ as root? If so, it will find 2 leaves ($$$1$$$ and $$$5$$$) that are not connected to the root, whereas $$$2$$$ or $$$4$$$ as root will only find leaf $$$5$$$ or $$$1$$$ respectively, resulting in an answer of 1.
I implemented my code so that I take the highest-degree vertex as root and find the leaves that are not connected to it. This was "almost" correct, except that the case above needs a tie-breaker to treat $$$2$$$ or $$$4$$$ as root instead of $$$3$$$, even though they all have the same degree.
To Get answer basically you have to take all leaf nodes and connect them to your root along with all that come in between so you can count how many leaf nodes current node is conncted to and treat that as degree and take maximum of that and leafs which are not connected to that root you selected with maximum leaf is your answer.
the solution for E is explained very clearly!
thanks for a good div 3 round, enjoyed a lot
Personally, I found E to be a bit easier than it should have been for its position. It was an exciting contest. Kudos to the authors and organisers!
Great problems and fast editorials, thanks!
If anyone codes in Java please reply to this comment
In E, please tell me what is wrong with this approach. For every element of the array, if a[i] != b[i], I move towards the right taking the XOR of elements on the way, until I reach a point where the cumulative XOR becomes equal to the element I started with; let's say the index of that element is 'r'. Then I move backwards from 'r' to the initial index, updating all the elements cumulatively. If in doing so I encounter any index where a[i] != b[i], I return "NO"; otherwise I repeat the same algorithm from index 'r+1'.
https://mirror.codeforces.com/contest/2131/submission/333453490 this is the code, please help!
If I understand correctly, in the problem statement, it says the XOR operation can be performed at most once for each index.
Yes, I know. To frame my algorithm in other words: I start from indices 'l' and 'r' such that the XOR of elements from l to r gives a[i]; then all the elements from l to r should become:
a[i] = a[i] ^ a[i+1] ^ a[i+2] ^ ... ^ a[r]
a[i+1] = a[i+1] ^ a[i+2] ^ a[i+3] ^ ... ^ a[r]
a[i+2] = a[i+2] ^ a[i+3] ^ a[i+4] ^ ... ^ a[r]
I found it! It arises from repeating from the 'r+1' index, which neglects index 'r'. Here's a test case that outputs "YES" on yours, which is incorrect: a=[101,100,011,001], b=[010,111,10101010,01].
Taking a look at index 3, $$$10101010$$$ of course cannot be created with XOR of any of the elements in array $$$a$$$. However, your algorithm looks at 101^100^011==010, and skips to index 4 without taking a look at index 3.
I ran it and it gives "NO", and that checking of index 3 is being done in this part of code
https://ide.usaco.guide/OXKfvhJptJSF_8NYFqW
input:
1
4
5 4 3 1
2 7 100000 1
The compiler here outputs 'YES'.
yes, you are right, it worked
https://mirror.codeforces.com/contest/2131/submission/333469695
the mistake was that I assigned the value of ind to i; ind was where it would start checking again, i.e. 'r', but I forgot the increment done by the for loop.
what is wrong in my code? same approach as yours
link to my submission
the problem in your code is that you set prev technically to b[i], which gets updated if you enter the while loop, but the prev remains b[i+1], when you get a[i]==b[i] as then there is no condition to update the prev to b[i].
this works, just changed the prev update statements.
https://mirror.codeforces.com/contest/2131/submission/333494286
thanks..understood.
In problem E, why should a[i] ^ b[i+1] be equal to b[i]? Can anyone explain?
b[i+1] essentially represents a[i+1] ^ ... ^ a[j], where i+1 <= j <= n.
For a particular a[i], we can only XOR with a[i+1] that is fixed. Now a[i+1] can be as in original a array or it has already XORed itself with its neighbour(a[i+2]) such that it is wanting to be equal to b[i+1] after xoring it with its neighbour. So, for a[i] we have two choices , either XOR it with original a[i+1], or changed a[i+1] which is aimed to be equal to its own corresponding element, i.e., b[i+1].
got it thanks :)
I find problem B statement is very confusing to me. Also I'd rate E <= D.
Why does B in every contest choke me even harder than C :(
Nice contest. When I solved A in 3-4 minutes I thought I'd finally get around pupil, but then B humbled me by failing one test case, and C was just :). I tried solving it in Python: TLE. Tried optimized code: TLE. Then I converted the Python code to C++ using tools and boom, it worked :) I guess this time it's negative delta for me
Very nice E
For me, C was harder than E, lol. I don't know how a lot of people solved it.
Same for me
I didn't find C too difficult. You have to check if S and T are equal after taking each element mod k.
Can anyone explain me the solution of C?
My solution on C gave Time limit exceeded on PyPy3 but the same code worked on Python3. I think it's due to hash-collision problems.
E was pretty easy this time. When I thought of my logic, I was very sure that it would fail on the 2nd test case (I thought it could not be this easy), but unexpectedly, it got accepted. I was very happy :)
I just traversed the array two times:-
1st time from starting, updating the values for every a[i], where a[i]^a[i+1] == b[i] and if a[i]==b[i] then continue.
2nd time from behind, again checking the same conditions, but with updated a[i] values.
This was a great contest, thanks for the authors.
I have some idea to discuss:
I don't really know how to implement the fft algorithm, but I have a speculation that problem F can be solved using that algorithm (which I was surprised that tags did not include) ...
If we consider each cell to be of depth $$$d_{(i,j)}$$$, resembling the number of operations needed to reach that cell starting from the origin cell $$$(0,0)$$$, then say: if a neighboring cell is unreachable (without performing an operation) from the current cell $$$(i,j)$$$, therefore, the depth of that cell is of depth $$$d_{(i + \Delta a, j + \Delta b)} = d_{(i,j)} + 1$$$
Then, noticing the connected areas, each consisting of cells with the same depth, we see that the grid looks like this (assuming cell $$$(0,0)$$$ has a depth of $$$0$$$):
We can say that the total cost (the answer of the problem) is the sum, over each depth, of the total area at that depth multiplied by the depth. In other words: $$$\displaystyle\text{ans} = \sum_{k=0}^{\text{count}(\text{distinct depths})} \left( k \times \sum_{d_{\text{Area}}=k} \text{Area} \right)$$$
Then, given the width and height of each rectangular area (which can be found using the cumulative count of ones over each dimension), we can get the total area and the depth of each rectangle; however, this is still $$$O(n^2)$$$
But with some fft implementation (which I don't honestly know how), we can use the fact that the total sum of areas of each depth equals $$$\sum_{k=0}^{depth} \text{length}_k \times \text{width}_{depth - k}$$$, which I recognize as the convolution of two arrays of some length, say $$$l_1$$$ for the lengths array, and $$$l_2$$$ for the widths array, resulting in an array of length $$$l_1 + l_2 - 1$$$, which can be computed (somehow) using fft.
I apologize if I ran into some incorrect conclusion due to my lack of experience and knowledge on the topic; I hope you can help me prove this conclusion either right or wrong.
If I understood you correctly, d(i,j)'s formula is wrong. When doing an operation, table's row or column is inverted, while your formula works as if only 1 cell was impacted. For example, in a case a = "1", b = "000000", d(1,6) = 6, which isn't true.
To be honest, I did not understand why the algorithm would fail when more than 1 cell is impacted by an operation. I would like to suggest an example so we can communicate better; consider the following input:
We can conclude that the shape would look like this:
The when cutting into areas and marking them with their depth (cost of reaching them if we start from $$$(0,0)$$$) we get the following:
Or we can represent using arrays of heights and widths as:
meaning the areas would be distributed in the following manner:
meaning the final output should equal $$$32$$$ (based on my calculations).
If you still see an issue with this approach I would be thankful to have an insight.
Sum should be equal to 28 and the matrix d should look like this:
0 0 1 2
1 1 2 3
1 1 2 3
2 2 3 4
I now realize what you were telling me about the single cell, thanks for pointing this out. The issue emerged because I mistakenly thought that the whole width/height segment (all rows that intersect the area) is affected by a single operation, and therefore there was an issue in area division: the depth does not go up when the bit switches, but rather when you encounter a $$$1$$$ bit. In this case (as you kindly mentioned) the area partition would look like this:
Meaning the calculations would be:
precisely what you told me, meaning the algorithm (after adjusting the definition/method of area partitioning) is supposed to work well for any test case (for as far as I see)
In conclusion, do you think there is an fft algorithm optimization that would make this algorithm solvable in $$$O(n log n)$$$ time complexity?
And thanks for your input.
I guess that is still an incorrect transition. For example, let's look at another example:
1
5
00111
00000
Let's only focus on the first row of the matrix:
0 0 1 1 1
d's first row should look like the following:
0 0 1 2 2
I don't think we can efficiently calculate matrix d, without the observation described in the editorial.
To be honest, I struggled to understand the solution described in the editorial, so I tried to come up with an alternative solution, building on the thoughts I had during contest, but I froze when I reach the fft form in my solution, and because I did not reach that topic yet, I tried to get insight through comments.
Back to the example you provided:
We would have the grid:
Therefore, the area partition would be:
Meaning the answer should look like:
Which makes perfect sense to me, is there anything wrong with this?
In addition, I would really like to have your say on the fft matter, It is practically the main purpose of the comment. Thanks for your input.
I made a mistake in my last response, here is another testcase:
1
5
11110
00000
matrix d:
1 1 1 1 2
1 2 2 2 3
1 2 3 3 4
1 2 3 4 4
1 2 3 4 4
It can't be described by the 2 vectors height and width. If every such matrix could be described in that way, your solution could indeed be optimized with FFT.
Thanks for the confirmation, I'll demonstrate my approach for the example provided:
Area partition:
Calculations:
Which makes perfect sense to me.
Again, I'm grateful you helped me confirm my assumptions, I hope you have a wonderful day/night!
I tried to hack one solution to the problem H, but got "Unexpected verdict"
For example, hack number 1142852
UPD: fixed :)
My Solution to F
Thanks for the nice contest! Indeed, I believe G was more approachable than F, but maybe that's just because I'm bad xd
Anyways, there is an alternate $$$O(n)$$$ solution to F (then again, you could argue that the main solution is also solvable in $$$O(n)$$$ with counting sort since the differences are at most [-2*n, 2*n]) which is achievable even if you forget that math exists (happened to me).
Can you help me with some tips for thinking while problem solving? When I was solving F, I immediately recognized that the grid represented the 'b' string row-wise and the 'a' string column-wise, and I wrote down the matrix for the last test case. An intuition also crossed my mind that the minimum of the counts of 0s and 1s on a path would be the answer. But this was an initial, vague idea; then I started thinking along the lines of DP, trying to formulate transitions for 0->1, 0->0, 1->0 and 1->1 (basically got lost). So what could I have done differently?
Ok, so, there was this really nice blog post written some time ago. There, they mentioned that it can often be useful to pretend that you're inside that world. That is to say, imagine you had to move over the board to reach a certain $$$(x, y)$$$.
This line of reasoning can help you understand that, for example, if we need to reach $$$(x, ?)$$$, then it never makes sense to first go to some $$$(x + 1, ?)$$$ hoping for a better answer. After that, if you pretend to move across the board, you may notice the fundamental idea: whenever we go from one value to a different one (say, from 0 to 1 or from 1 to 0) within a certain column, we are forced to change some value. Therefore, we arrive at the idea of using the minimum of the zeros and ones.
Basically, pretending that you're in that world can help sometimes.
Can someone please tell me what's wrong with my solution for E? I feel like it follows the idea in the editorial, which is to change the values of a[i] such that a[i]^a[i+1] == b[i], and then change the values of a[i] such that a[i]^b[i+1] == b[i].
333476770
Since the == operator has higher precedence than ^ in C++, you have to put the a[i]^a[i+1] part inside brackets. Otherwise it evaluates as a[i]^(a[i+1]==b[i]).
yea i figured that out. thx man!
lol I literally did prob a by simulating the operations and passed
I tried to implement a stupid solution: "pick 4 random positions and repeat 100 times". Of course, it did not work :))))))
Had a clean solution for E. It follows this logic:
Find the $$$x$$$ such that $$$a_i \oplus x = b_i$$$; if $$$x \ne a_{i+1}$$$ and $$$x \ne b_{i+1}$$$, it's impossible. Also, $$$a_{n-1}$$$ must equal $$$b_{n-1}$$$. Now we can do a left-to-right sweep making every $$$a_i$$$ that depends on $$$a_{i+1}$$$ ok. Then do a right-to-left sweep making every $$$a_i$$$ that depends on $$$b_{i+1}$$$ ok, which works only if $$$a_{i+1} == b_{i+1}$$$, i.e., position $$$i+1$$$ is already ok.
333481914
My rating prediction is far away from real estimation, plz ignore it XD
C and D are good! D tells us that we should understand the sample tests better when solving problems.
I think E is easier than D.
I didn't realize the Div. 3 was unrated for me until I finished D, haha. Then I turned to H. H is really a good problem.
Can anyone please share some tips for problems like C? I have learned a lot of data structures and algorithms, like segment trees or MST, and I have already solved some problems rated higher than this one, over 1500. But these kinds of problems always make me feel frustrated.
I proved it mathematically first, then coded it. I don't know if that works for everyone, but it helped me. (Though I used unordered_map, which made it fail; I should have used map.) Proving something mathematically helps a lot.
F is such a beautiful maths problem, loved it!!
Guys did ratings change for y'all?
Thanks to -firefly- for maintaining Div-3 standard
We can calculate the sum (problem F) in a different way:
The sum is $$$\sum_{i,j} (\text{if } c_{0,0,i}+c_{0,1,j} \geq c_{1,0,i}+c_{1,1,j} \text { then } c_{1,0,i}+c_{1,1,j}\text{ else } c_{0,0,i}+c_{0,1,j})$$$.
We can transform the condition to $$$\sum_{i,j} (\text{if } c_{0,0,i}-c_{1,0,i} \geq c_{1,1,j} - c_{0,1,j} \text { then } c_{1,0,i}+c_{1,1,j}\text{ else } c_{0,0,i}+c_{0,1,j})$$$.
Now, we define $$$d_i = c_{0,0,i}-c_{1,0,i} \text{ and }e_i = c_{1,1,i} - c_{0,1,i}$$$.
It seems like $$$d$$$ is just $$$prea$$$ and $$$e$$$ is just $$$preb$$$ in the editorial.
After that it's enough to do 2 pointers.
Where can I report cheaters? Does anyone have any idea?
https://cf-cheater-database.vercel.app/
I totally agree with efishel's rating prediction.
In D's editorial, can anyone explain this statement — "For vertices with depth larger than 1 , notice that in every operation, only one of the vertices on the path selected can be a leaf."
I think the author means that the path should include a leaf because it's the best choice. And since it's a simple path originating at the root, it admits a single leaf. Note that the answer would be different if we could choose any vertex on the path as the common target for the operation, as the path would admit two leaves. Hope this makes sense.
Please correct me if I am wrong, but why can't we choose a path starting and ending at a leaf, e.g. the diameter of the tree? Why do we have just one leaf in a path?
Because it wouldn't be the best choice. Consider a star-shaped tree. In this case, if you operated on a path with two leaves, one of them would receive the other as neighbor and the tree diameter would increase. Even if you repeated this process while the diameter is not minimal, you'd waste operations. You may think of it as an "unstable" kind of operation.
I wrote the code below for problem C, but it got TLE on test case 10. My code is linear and I used a Counter. I know hash collisions happen, but are they really that bad for $$$10^5$$$ elements?
Yeah exactly. Even I got the same issue.
I got TLE on test case 12 too
Try my hack test case. On my local environment, it takes ~16 seconds for your solution to run this test (with pypy).
Can anybody tell me why I got TLE on this? 333331424. It was accepted on the first try, but during system testing I got TLE.
Wally West speed editorial
Spent too much time on D
Can someone suggest more problems like D?
I got a time limit exceeded error on problem C although I believe my solution is O(n). Other valid submissions are also O(n). Can anyone tell me what went wrong?
333372793
Never use unordered_map.
Why tho
AFAIK, most problems on CF are protected against misuse of unordered_* containers. You should provide a custom hash function. Here's a working solution using unordered_multiset: 333563568. See also this blog post: https://mirror.codeforces.com/blog/entry/62393
I am getting wrong answer on test 5 in G. This is my submission: 333433429.
Can anyone pls help !!
My code for C worked during the contest, but in system testing it says TLE. Does anyone know the reason or have the same issue?
You probably used unordered_map, which gives TLE due to hash collisions.
Thanks, my mistake. I forgot about its worst-case time complexity; I thought I didn't need the elements sorted, but I forgot about the other issues :( Dropped from 5k to 10k.
How do these hash collisions work? I am really confused; I have solved so many problems with unordered_map under the same constraints but never got TLE.
refer to this blog https://mirror.codeforces.com/blog/entry/62393
Someone please tell me why when I using same code with unordered_map it showing tle but not with map for problem C (333565895)
unordered_map's worst-case complexity is $$$O(n^2)$$$ in total (linear per operation) if I remember correctly, so it doesn't work.
Has anyone's rating been updated yet? It's been 35 hours, and mine still hasn't.
Don't rush, the server seems to be under repair for some technical reasons
Is using a hash table for problem C going to get hacked?
Generally, when you use a hash table with the default hash, you run a near-$$$100\%$$$ risk of being hacked during a Div. 3, Div. 4, or Educational round.
Bonus for E: solve it without the "at most one operation" condition, i.e., the operation of taking the XOR with the next index can be done any number of times. I missed "at most once" and solved the harder version, which ultimately cost me the problem, as there was little time left for the correct solution.
This version is an introductory XOR basis problem.
For problem D, I didn't understand how to test all possible roots and paths in $$$O(n)$$$. I tried to select the node with the maximum edge count, because it would make the paths shorter, but I got wrong answer. Can anyone help me?
I had the same problem. You should choose the node with the most connected leaves, not the most connected edges.
Great I got it, thanks bro I appreciate that
Can anyone tell me what’s wrong with this solution https://mirror.codeforces.com/contest/2131/submission/333322773 compared to this one https://mirror.codeforces.com/contest/2131/submission/333639039 ? The first gave me a runtime error, but I don’t know why.
https://mirror.codeforces.com/contest/2131/submission/333580687
Where did this submission go wrong?
Who gives a Tree Problem in Div3D??????
this editorial is very good
Why is there a Detective Conan ref in the problems bro
I don't see a Detective Conan ref tho
Idk but the last problem there's a subsequence of "Sea, You & Me" inside the problem statement, which spans a few chapters in Detective Conan (idk it might point to sth else tho)
It is a reference to a BGM of Summer Pockets. The checker comment also refers to one of its BGM.
Yeah, I just realized on the way home :( It was "Sea, Tel & I" in Detective Conan (maybe future problem ig idk uwu)
Is the game free?
No
Aw dang
What is the problem with my solution for problem C? Can anyone please tell me why it gives an incorrect result on test 2?
333731906
Can someone give me links to problems like C, and also share tips on how to approach this type of problem? Some kind of general technique or pattern you see in these problems. It would be very helpful, as I am struggling with this type of problem.
Why TLE on the 12th test case? Is it because I am using unordered_map?
submission
Try solving problem E but instead of one operation you get infinite :)
Think of the operation as "bubbling down" an xor from $$$A_{i+1}$$$ to $$$A_i$$$. Because we have infinite operations, we can afford to propagate left any $$$A_{j\, \gt \,i}$$$ to xor with $$$A_{i}$$$.
Read the hint. We can use XOR basis! Iterating in reverse, for each element $$$A_i$$$, all we must do is check if $$$A_i \oplus B_i$$$ is in the XOR-span of the suffix $$$\{ A_{i+1}, \dots, A_n \}$$$, then add $$$A_i$$$ to the basis! Correctness: reachable arrays are $$$b_i = a_i \oplus \bigoplus_{j\, \gt \,i} c_{i,j} \, a_j$$$ where $$$c_{i,j} \in \{0,1\}$$$ indicates whether $$$a_j$$$ is included in the XOR for $$$b_i$$$ (unit upper-triangular ops over $$$\mathbb{F}_2$$$). Time $$$O(n \cdot 30)$$$.
Can someone explain why my recursive dfs function in PyPy3 got RE in problem D? https://mirror.codeforces.com/contest/2131/submission/333368249
You have to simulate the recursion in Python, because it has a default recursion limit, and if you raise the limit to something like 2e5 you may risk MLE.
OK,thanks for your reply!
G and H are probably the most beautiful d3 problems I've ever solved! (H with edi). This contest is definitely miles ahead of regular d3s in terms of quality. Loved it!
Can someone please explain the solution for problem F, like the one used in jiangly's submission?
D is so simple (and nice!) yet I was scared by the need to choose the root myself. I have learned something. Thank you for the contest!
I still can't get how the summation in problem F works,
we are essentially trying to find ∑ |f(x) — g(y) |, over x and y from 1 to n, for both. And f and g are some functions. The way I think this is supposed to work is you first go through all values of f(x) and g(y), and store them into 2 separate vectors fx and gy and then sort them, and then find their prefix sum, and then start with 2 pointers, let's say x and y, from 0, while(x<n && y<n) and then ans = 0; if(fx[x] < gy[y]){ ans+=(prefy[n] — prefy[y]) — ( (n-y) * fx[x++]); } since i can take difference of all values from current y till the end of y, for this current x.And also increment the x.
It would be a real help if someone could review this process.
Dang, I didn't know Codeforces was going to remove the spaces.
I hope this one comes out fine.
We are essentially trying to find $$$\sum |f(x) - g(y)|$$$ over $$$x$$$ and $$$y$$$ from $$$1$$$ to $$$n$$$, for both, where $$$f$$$ and $$$g$$$ are some functions.
The way I think this is supposed to work: first go through all values of $$$f(x)$$$ and $$$g(y)$$$, store them into two separate vectors fx and gy, sort them, and compute their prefix sums. Then start with two pointers, say x and y, from 0:
while (x < n && y < n)
ans = 0;
if (fx[x] < gy[y]) {
    ans += (prefy[n] - prefy[y]) - (n - y) * fx[x++];
}
since for the current x I can take the difference against all values from the current y till the end of gy, and also increment x.
Hello, my handle is sm_10.
My submission 333439059 for problem 2131F was flagged for plagiarism.
I want to clarify that I did not copy from other participants.
I used the prefix-sum + binary search approach after studying this publicly available StackOverflow discussion that predates the contest:
SOF-Link
The similarity with other solutions seems to come from the fact that this is a standard known technique. I did not share my code with anyone.
Why is there a discrepancy between the textual description of the solution to problem F and the code? The code contains nothing related to $$$n \cdot n \cdot (n + 1)$$$. It would be really great if someone could reply to me.
Can someone explain C to me?
My solution TLE'd.
But I don't understand what is going on in the editorial solution with RD (a random 32-bit integer?).
As for problem F, it's really hard to come up with $$$\min(a, b) = \dfrac{a + b}{2} - \dfrac{|a-b|}{2}$$$ for people who haven't met a trick like this (like me)
Can you please tell me what this trick is called?
it's just an equation with no specific name
Ohh thanks :|
A simpler solution/code for problem 2131F - Unjust Binary Life.
Essentially the same concept, but an easier implementation without the minimum-to-average part: 356146228
Another very interesting way to solve G:
368307753. It is a bit slow but much shorter than the editorial solution.
I think this turns the problem into almost a full DP+observation problem