| Predictor | A | B | C | D | E | F | G | H |
|---|---|---|---|---|---|---|---|---|
| AksLolCoding | 800 | 800 | 900 | 1300 | 1500 | 1800 | 1900 | 2300 |
| SpyrosAliv | 800 | 800 | 1100 | 1200 | — | 1600 | 1900 | 2200 |
| wuhudsm | 600 | 800 | 1300 | 1500 | 2100 | 1900 | 2100 | 2200 |
| Dominater069 | 800 | 800 | 1000 | 1200 | 1900 | 1800 | 1800 | 2000 |
| Proof_by_QED | 800 | 800 | 1100 | 1300 | 1500 | 1700 | 1900 | 2400 |
| reirugan | 800 | 800 | 1300 | 1400 | 1600 | 1800 | 2100 | 2400 |
| -firefly- | 800 | 800 | 1200 | 1300 | — | 1700 | 1800 | 2300 |
| Edeeva | 800 | 800 | 1200 | 1400 | 1500 | 1800 | 2000 | — |
| Intellegent | 800 | 800 | 1300 | 1400 | 1500 | 1700 | 1800 | 2400 |
| beaten_by_ai | 800 | 800 | 1200 | 1100 | 1600 | 1800 | 1900 | 2200 |
| macaquedev | 800 | 800 | 1100 | 1400 | 1300 | 1000 | 2100 | — |
| efishel | 800 | 800 | 1200 | 1200 | 1300 | 1700 | 2000 | 2300 |
| __baozii__ | 800 | 800 | 1200 | 1500 | 1500 | 1800 | 1700 | 2300 |
Thanks to reirugan for helping proofread the editorial!
2117A — False Alarm
Author: yse
When is the optimal time to use the button?
It's not necessary to use the button when a door is already open. Therefore, it's always optimal to use the button as soon as we hit a closed door.
Let's call the position of the first closed door $$$l$$$ and the position of the last closed door $$$r$$$. The length of this interval is $$$r - l + 1$$$, so we need to check that $$$x$$$ is greater than or equal to $$$r - l + 1$$$.
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n, x;
cin >> n >> x;
int l = 1e5, r = -1;
for(int i = 0; i < n; i++) {
int door;
cin >> door;
if(door == 1) {
l = min(l, i);
r = max(r, i);
}
}
cout << (x >= r - l + 1 ? "YES" : "NO") << endl;
}
int main() {
int t;
cin >> t;
while(t--) solve();
}
2117B — Shrink
Author: yse
When are we unable to remove the maximum element?
We can always remove the maximum element as long as it exists between two elements. How can we generalize this?
In a permutation, the maximum element can only occur once. Since all other elements are guaranteed to be smaller than the maximum, we can remove the maximum if there exists an element on its left and another on its right.
After we remove the maximum element, a new maximum appears, which we also want to remove. This process can keep going until the length of the permutation becomes $$$2$$$, since both remaining elements are on the ends and cannot be removed.
Therefore, any permutation with $$$1$$$ and $$$2$$$ on the ends is acceptable, since any value greater than $$$2$$$ will eventually be a maximum in between two elements.
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
for(int i = 2; i <= n; i++) cout << i << ' ';
cout << 1 << endl;
}
int main() {
int t;
cin >> t;
while(t--) solve();
}
2117C — Cool Partition
Author: yse
The last segment of a valid partition must contain all distinct elements of the array.
From the first hint, we can say that any segment ending at some position $$$r$$$ must contain all distinct elements of the prefix $$$[1, r]$$$.
Claim: In a valid partition, if a segment ends at some position $$$r$$$, it must contain all distinct elements in the prefix $$$[1, r]$$$.
Call the segments $$$b_1,b_2,\dots,b_k$$$. Notice that an element in some segment $$$b_j$$$ must also appear in $$$b_{j+1}, b_{j+2}, ..., b_k$$$.
Now, let's prove by contradiction. Assume that some segment $$$b_j$$$ doesn't contain all distinct elements in $$$[1, r]$$$, where $$$r$$$ is the end position of $$$b_j$$$. Then, the distinct elements that don't appear in $$$b_j$$$ surely appear in some previous segment(s). This does not make a valid partition since at least one previous segment contains an element which $$$b_j$$$ does not have.
Let $$$d_i$$$ be the number of distinct elements in the range $$$[1, i]$$$. To maximize the number of segments, we make them as small as possible. Let $$$r$$$ be the end position for the current segment, we will iterate from $$$r$$$ to the left until we find the first position $$$l$$$ such that the number of distinct elements in $$$[l, r] = d_r$$$, then increment our answer and set $$$r = l - 1$$$. We repeat until $$$l=1$$$.
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> v(n);
vector<int> distinct(n), freq(n + 1);
int total = 0;
for(int i = 0; i < n; i++) {
cin >> v[i];
freq[v[i]]++;
if(freq[v[i]] == 1) distinct[i] = 1;
distinct[i] += (i ? distinct[i - 1] : 0);
}
fill(freq.begin(), freq.end(), 0);
int ans = 0, end = n - 1;
total = 0;
for(int i = n - 1; i >= 0; i--) {
freq[v[i]]++;
if(freq[v[i]] == 1) total++;
if(total == distinct[end]) {
ans++;
for(int j = i; j <= end; j++) freq[v[j]] = 0;
end = i - 1;
total = 0;
}
}
cout << ans << endl;
}
int main() {
int t;
cin >> t;
while(t--) solve();
}
Similarly to the above solution, we can have a segment $$$[l, r]$$$ if and only if $$$[l, r]$$$ contains all of the same elements as $$$[1, r]$$$. We now build our segments greedily from the front, iterating through each element. Suppose the element we are currently on is $$$a_i$$$. We make the following claim: if we can end the current segment at $$$a_i$$$, we should do so. (There is one exception to this, which is covered after the proof.)
Suppose we are able to end the current segment at $$$a_i$$$. Clearly, if $$$a_i$$$ is the last element of the array, we must end the segment. Otherwise, suppose that instead, it is optimal to end the segment at $$$a_j$$$ for $$$j \gt i$$$. Then we could just instead end the segment at $$$a_i$$$, and merge the elements from $$$a_{i+1}$$$ to $$$a_j$$$ into the next segment. This doesn't cause any issues with legality, because by assumption, it is legal to end the current segment at $$$a_i$$$, and the subsequent segment only gets bigger so it has at least as many distinct values as before. Furthermore, this does not affect the number of segments we have, so this is also an optimal solution.
Now, we just end segments whenever we can. The only "special" case that needs to be taken care of is whether or not the last segment is legal. But if it isn't, we can simply merge it into the last legal segment, which only makes that segment bigger and thus doesn't cause any issues, since it has at least as many distinct values as before. It turns out that this doesn't really affect the implementation, since we just have to count the number of times we can end a segment.
So how do we do this quickly? Let's keep a set of all elements in $$$[1, i]$$$, and a set of all elements in $$$[l, i]$$$. Then if these sets are the same size, we know that $$$[l, i]$$$ contains all of the same elements as $$$[1, i]$$$. If this is the case, we will end the segment at $$$i$$$. Then we will increment our answer, and set $$$l:=i+1$$$ (in the implementation, we will clear the set of elements in $$$[l, i]$$$).
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, ans = 0;
cin >> n;
vector<int> a(n);
for(int i=0; i<n; i++) cin >> a[i];
set<int> cur, seen;
for(int i=0; i<n; i++){
cur.insert(a[i]);
seen.insert(a[i]);
if(cur.size() == seen.size()){
ans++;
seen.clear();
}
}
cout << ans << '\n';
}
int main(){
ios::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while(t--) solve();
}
2117D — Retaliation
Author: yse
What happens when we do $$$1$$$ operation of the first type and $$$1$$$ operation of the second type?
If we perform both types of the operation once, each element $$$a_i$$$ is decreased by $$$n + 1$$$. Suppose we are able to explode the array with $$$x$$$ operations of the first type and $$$y$$$ operations of the second type. Then, let's pair as many operations as we can, allowing us to perform $$$\min(x, y)$$$ pairs of operations.
Now, assume we finished pairing the operations. This leaves us with some operations (possibly none) of only one type, so the array must become an arithmetic sequence; the absolute difference between two consecutive elements is the number of operations that we have to perform.
We don't yet know the number of pairs of operations $$$\min(x, y)$$$ that we have to perform, since we don't know the value of $$$x$$$ and $$$y$$$, but we now know the number of extra operations we perform after pairing the operations. Since we need to have an arithmetic sequence in order to explode the array, we should verify that the difference between adjacent elements are all the same. Let's call the number of extra operations $$$k = |x-y|$$$, which is equal to the difference between adjacent elements. If the array is increasing, the extra operations must be of the first type. Otherwise, they must be of the second type.
Now, let's first perform the $$$k$$$ operations on the array. Since the consecutive differences are all the same, after performing the $$$k$$$ extra operations, all elements should be equal. It remains to check that the elements are non-negative and are divisible by $$$n + 1$$$, so we can now perform pairs of operations to explode the array.
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
vector<long long> v(n);
for(auto &it : v) cin >> it;
long long diff = v[1] - v[0];
bool bad = 0;
for(int i = 2; i < n; i++) if(diff != v[i] - v[i - 1]) bad = 1;
if(bad) {
cout << "NO" << endl;
return;
}
for(int i = 0; i < n; i++)
v[i] = v[i] + (diff < 0 ? diff * (n - i) : -diff * (i + 1));
cout << (v[0] >= 0 && v[0] % (n + 1) == 0 ? "YES" : "NO") << endl;
}
int main() {
int t;
cin >> t;
while(t--) solve();
}
Suppose we did $$$x$$$ operations of the first type and $$$y$$$ operations of the second type to explode the array. What must the value of $$$a_1$$$ be?
From the first hint, we know the first element $$$a_1 = 1 \cdot x + n \cdot y$$$. What about the second element?
Let's consider the first two elements. Suppose we needed $$$x$$$ operations of the first type and $$$y$$$ operations of the second type to explode the array, then we know $$$a_1 = 1 \cdot x + n \cdot y$$$. For the second element, we know $$$a_2 = 2 \cdot x + (n - 1) \cdot y$$$.
Now, let's solve for $$$x$$$ and $$$y$$$. Subtracting $$$a_2$$$ from $$$a_1$$$ gives us
\begin{align*} a_1 - a_2 &= (x + n \cdot y) - (2 \cdot x + (n - 1) \cdot y) \newline &= y - x \newline \Longrightarrow x &= y - a_1 + a_2 \end{align*}
Let's substitute $$$y - a_1 + a_2$$$ for $$$x$$$ in the first formula.
\begin{align*} a_1 &= (y - a_1 + a_2) + n \cdot y \newline \Longrightarrow y &= \frac{2 \cdot a_1 - a_2}{n + 1} \end{align*}
Now, we have the value of $$$y$$$, which means we can easily find the value of $$$x$$$. From the formula $$$x = y - a_1 + a_2$$$, substitute $$$y$$$ with its value to get $$$x$$$. What remains is just to check if the array becomes $$$a_1 = a_2 = \dots = a_n = 0$$$ after all the operations.
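As a quick sanity check of these formulas on a small made-up instance (not one of the samples): take $$$n = 2$$$ and $$$a = (3, 3)$$$.

```latex
y = \frac{2 \cdot a_1 - a_2}{n + 1} = \frac{2 \cdot 3 - 3}{3} = 1,
\qquad
x = y - a_1 + a_2 = 1 - 3 + 3 = 1
```

Indeed, one operation of the first type subtracts $$$(1, 2)$$$ and one operation of the second type subtracts $$$(2, 1)$$$, so $$$a = (3, 3)$$$ becomes $$$(0, 0)$$$ and the array explodes.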
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
void solve() {
ll n;
cin >> n;
vector<ll> v(n);
for(auto &it : v) cin >> it;
ll y = (2 * v[0] - v[1]) / (n + 1);
ll x = v[1] - v[0] + y;
if(y < 0 || x < 0) {
cout << "NO" << endl;
return;
}
for(int i = 0; i < n; i++) {
v[i] -= x * (i + 1);
v[i] -= y * (n - i);
}
for(int i = 0; i < n; i++) {
if(v[i] != 0) {
cout << "NO" << endl;
return;
}
}
cout << "YES" << endl;
}
int main() {
int t;
cin >> t;
while(t--) solve();
}
2117E — Lost Soul
If we can get a match at position $$$i$$$, then we can have at least $$$i$$$ matches. Why is this true?
For a certain position $$$i$$$, given the ability to remove index $$$i + 1$$$, we can pull any value from the range $$$[i + 2, n]$$$.
It's easy to see that if a match exists at position $$$i$$$, then we can repeatedly set $$$a_j := b_{j+1}$$$ and $$$b_j := a_{j+1}$$$ for all $$$j$$$, where $$$j$$$ starts at $$$i - 1$$$ and goes backwards until the start of the array. This allows us to have at least $$$i$$$ matches (including the match at position $$$i$$$).
Therefore, the optimal approach is to maximize the index $$$i$$$ where $$$a_i = b_i$$$. Suppose we can't remove any element. Then, for some $$$a_i$$$, we can set it to any $$$b_j$$$ such that $$$j \pmod 2 \neq i \pmod 2$$$ and $$$j \gt i$$$, or set it to any $$$a_j$$$ such that $$$j \pmod 2 = i \pmod 2$$$ and $$$j \gt i$$$.
However, if we wanted to set $$$a_i$$$ to some $$$b_j$$$ where $$$j \pmod 2 = i \pmod 2$$$, then we can just remove index $$$i + 1$$$ from the array. By similar reasoning, we can set $$$a_i$$$ to any $$$a_j$$$ where $$$j \pmod 2 \neq i \pmod 2$$$ and $$$j \gt i+1$$$. Notice that we removed index $$$i + 1$$$, so we can't set $$$a_i := a_{i+1}$$$. Now, we know that we can set any $$$a_i$$$ to any $$$a_j$$$ or $$$b_j$$$ such that $$$j \gt i + 1$$$. This can be generalized for $$$b_i$$$ as well.
So the solution is that, for each position $$$i$$$, we want to check if any $$$a_j$$$ or $$$b_j$$$, such that $$$j \gt i + 1$$$, matches $$$a_i$$$ or $$$b_i$$$, allowing us to get a match at position $$$i$$$. We also need to check if position $$$i$$$ already contains a match, or if $$$a_i = a_{i + 1}$$$ or $$$b_i = b_{i + 1}$$$.
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> a(n), b(n);
for(auto &it : a) cin >> it;
for(auto &it : b) cin >> it;
vector<bool> seen(n + 1);
if(a.back() == b.back()) {
cout << n << endl;
return;
}
int ans = -1;
for(int i = n - 2; i >= 0; i--) {
if(a[i] == b[i] || a[i] == a[i + 1] || b[i] == b[i + 1] || seen[a[i]] || seen[b[i]]) {
ans = i;
break;
}
seen[a[i + 1]] = seen[b[i + 1]] = true;
}
cout << ans + 1 << endl;
}
int main() {
int t;
cin >> t;
while(t--) solve();
}
2117F — Wildflower
Author: yse
There can only be at most $$$2$$$ leaves. This means the tree can either be a chain or Y-shaped.
If the tree is a chain, subtree sums will always be strictly increasing as you move towards the root, which means that the answer for any chain is $$$2^n$$$.
Claim: For an answer to exist, the tree should have at most $$$2$$$ leaves.
We can use the pigeonhole principle.
Suppose the tree has more than $$$2$$$ leaves. Then for each leaf $$$\ell$$$, its subtree sum is $$$a_\ell \in \lbrace 1,2 \rbrace$$$. By the pigeonhole principle, among any three leaves there must be two with the same value, so the subtree sums cannot all be distinct. Thus, it is impossible to make all values in $$$s$$$ pairwise distinct if there are $$$3$$$ or more leaves.
We only need to consider two cases, the case of $$$1$$$ leaf and the case of $$$2$$$ leaves.
- $$$1$$$ leaf case: Since both possible values $$$\lbrace 1,2 \rbrace$$$ are positive, $$$s$$$ is strictly increasing as you move towards the root. Therefore, any array $$$a$$$ will be valid. The answer for this case would be $$$2^n$$$.
- $$$2$$$ leaves case: Let $$$v$$$ be the lowest common ancestor between leaf $$$x$$$ and leaf $$$y$$$. From $$$v$$$ going up towards the root, we can see that it follows the same idea of the $$$1$$$ leaf case, which means that any vertex in the path between the root and $$$v$$$ can be assigned to either $$$1$$$ or $$$2$$$. Without loss of generality, suppose leaf $$$x$$$ has smaller depth than leaf $$$y$$$. Say we assigned $$$a_x = 1$$$ and $$$a_y = 2$$$; it is easy to see that we are forced to assign vertices to $$$2$$$ as we're going up, until we eventually finish assigning a branch. So the number of vertices that are free to be assigned to either $$$1$$$ or $$$2$$$ in $$$y$$$'s branch is simply $$$\texttt{depth}_y - \texttt{depth}_x$$$. If we assign $$$a_x = 2$$$ and $$$a_y = 1$$$ instead, then $$$y$$$ would have one more vertex forced to be assigned to $$$2$$$, meaning there are only $$$\texttt{depth}_y - \texttt{depth}_x - 1$$$ free vertices in $$$y$$$'s branch. Finally, the last case to be checked is if $$$\texttt{depth}_x = \texttt{depth}_y$$$. In this case, whether we assign $$$x$$$ to $$$1$$$ and $$$y$$$ to $$$2$$$, or we assign $$$x$$$ to $$$2$$$ and $$$y$$$ to $$$1$$$, all vertices in both branches are forced, so there are no free vertices in either branch.
Let the number of leaves be $$$\texttt{cnt}$$$. Then we have that: $$$\texttt{Answer}$$$ = \begin{cases} 0 &\text{if }\texttt{cnt} > 2 \newline 2^n &\text{if }\texttt{cnt} = 1 \newline (2^{\texttt{depth}_y - \texttt{depth}_x} + 2^{\texttt{depth}_y - \texttt{depth}_x - 1}) \cdot 2^{\texttt{depth}_v} &\text{if }\texttt{cnt} = 2\text{ and }\texttt{depth}_x < \texttt{depth}_y \newline 2^{\texttt{depth}_v} + 2^{\texttt{depth}_v} &\text{if }\texttt{cnt} = 2\text{ and }\texttt{depth}_x = \texttt{depth}_y \end{cases}
#include <bits/stdc++.h>
using namespace std;
const int N = 2e5 + 10, MOD = 1e9 + 7;
#define int long long
vector<int> adj[N], lens;
int pw[N];
int lca;
void dfs(int u, int par, int len) {
if(adj[u].size() > 2) lca = len;
bool leaf = true;
for(int v : adj[u]) {
if(v != par) {
dfs(v, u, len + 1);
leaf = false;
}
}
if(leaf) lens.push_back(len);
}
void solve() {
int n;
cin >> n;
for(int i = 1; i <= n; i++) adj[i].clear();
lens.clear();
lca = -1;
for(int i = 0; i < n - 1; i++) {
int u, v;
cin >> u >> v;
adj[u].push_back(v);
adj[v].push_back(u);
}
adj[1].push_back(0); // dummy node
dfs(1, 0, 1);
if(lens.size() > 2) cout << 0 << endl;
else if(lens.size() == 1) cout << pw[n] << endl;
else {
int diff = abs(lens[0] - lens[1]);
int x = diff + lca;
if(diff) cout << (pw[x] + pw[x - 1]) % MOD << endl;
else cout << (2 * pw[x]) % MOD << endl;
}
}
signed main() {
pw[0] = 1;
for(int i = 1; i < N; i++) pw[i] = (pw[i - 1] * 2) % MOD;
int t;
cin >> t;
while(t--) solve();
}
2117G — Omg Graph
Author: Intellegent
Try fixing the minimum edge on the path.
There are two common ways to solve this problem; one uses DSU and the other uses Dijkstra. I will explain the Dijkstra approach here:
The main idea here is to pick some edge $$$e$$$ and pretend that $$$e$$$ is the minimum-weight edge, then find the smallest maximum edge weight across all paths containing $$$e$$$. This method will only ever overestimate the answer, and equality holds when we choose $$$e$$$ to be the minimum-weight edge of the optimal path, so the answer must be the minimum of these values across all possible choices.
Let's define a new cost function. Instead of $$$cost(w_1,...,w_k) = (\min_{i = 1}^{k}{w_i}) + (\max_{i=1}^{k}{w_i})$$$, let's define $$$cost_j(w_1,...,w_k) = w_j + (\max_{i=1}^{k}{w_i})$$$, i.e. the weight of edge $$$j$$$ + the max weight. By the above reasoning, the minimum value of $$$cost_j(w_1,...,w_k)$$$ across all $$$j$$$ is the same as the minimum value of $$$cost(w_1,...,w_k)$$$.
This function is a lot easier to minimize than the initial one. Let's iterate over each edge $$$j$$$ (denote its endpoints by $$$u_j$$$ and $$$v_j$$$, and its weight by $$$w_j$$$) and find the minimum value of $$$cost_j(w_1,...,w_k)$$$. Now we just need to find the minimum possible value of $$$\max_{i=1}^{k}{w_i}$$$ across all paths that contain $$$j$$$. To do this, we want to minimize the max weight of the path from $$$1$$$ to $$$u_j$$$ and from $$$v_j$$$ to $$$n$$$, so we have reduced the problem to finding the minimum max weight on a path from $$$1$$$ to every other node, and the minimum max weight on a path from every node to $$$n$$$. But how do we do this?
The idea is very similar to Dijkstra — recall the proof that Dijkstra finds the minimum sum of weights on a path. The main idea is that whenever we pop a node from the heap, we know that the minimum cost to it has already been found. We can use the same reasoning to prove that if we change path cost from path sum to path max, then the algorithm will still work. So all we have to do is simply run this max Dijkstra variation from $$$1$$$ and from $$$n$$$.
Now we just need to find the minimum value of $$$w_i + \max(\mathrm{cost}(1, u_i), \mathrm{cost}(v_i, n), w_i)$$$ across all edges $$$i$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
void solve(){
int n, m;
cin >> n >> m;
vector<vector<array<int, 2>>> g(n);
for (int i = 0; i < m; i++){
int u, v, w;
cin >> u >> v >> w;
u--;
v--;
g[u].push_back({v, w});
g[v].push_back({u, w});
}
auto uwu = [&](int u) -> vector<ll>{
vector<ll> dis(n, 1e18);
vector<bool> vis(n);
priority_queue<array<ll, 2>> q;
q.push({0, u});
while (q.size()){
auto [d, c] = q.top();
d = -d;
q.pop();
if (vis[c])
continue;
vis[c] = true;
dis[c] = d;
for (auto [x, w] : g[c]){
if (vis[x])
continue;
q.push({-max(d, 1LL * w), x});
}
}
return dis;
};
ll ans = 1e18;
vector<ll> dis_s = uwu(0), dis_t = uwu(n - 1);
for (int u = 0; u < n; u++){
for (auto [v, w] : g[u]){
ans = min(ans, max({dis_s[u], dis_t[v], 1LL * w}) + w);
}
}
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
2117H — Incessant Rain
Author: cry
Try to think of an offline solution.
Rephrase the problem to maximum subarray sums.
The unusual memory limit is meant to cut persistent segment tree solutions. Obviously it isn't perfect (I apologize if it caused issues), but it's the best I could do.
Let's first pretend that $$$x$$$ in the problem is fixed. For example, if $$$x = 2$$$, then we want to find the maximum $$$k$$$ such that $$$2$$$ is a $$$k$$$-majority of some subarray. To solve this problem, let's build an array $$$b_1, b_2, \ldots, b_n$$$ such that $$$b_i = 1$$$ if $$$a_i = x$$$, and $$$b_i = -1$$$ otherwise. Let $$$s$$$ be the maximum subarray sum over all subarrays of $$$b$$$. The answer to the problem is $$$\lfloor \frac{s}{2} \rfloor$$$.
Now let's consider how to handle updates. If we are asked to update index $$$k$$$ with an element $$$y$$$, we have two cases. If $$$a_k \neq x$$$ and $$$y=x$$$, then we update $$$b_k = 1$$$. Otherwise, if $$$a_k = x$$$ and $$$y \neq x$$$, then we set $$$b_k = -1$$$. Then, we can answer the query by finding the maximum subarray sum of $$$b$$$ using a segment tree. This is a standard problem.
Let $$$b_x$$$ denote the array $$$b$$$ built for a fixed value $$$x$$$. Now let's stop fixing $$$x$$$. For each update, notice that at most two of these arrays are modified (precisely, $$$b_{a_k}$$$ and $$$b_y$$$). To answer a query, let $$$\texttt{res}_x$$$ denote the maximum subarray sum of $$$b_x$$$ so far. The answer to the query is just $$$\lfloor \frac{\max(\texttt{res}_1, \texttt{res}_2, \ldots, \texttt{res}_n)}{2} \rfloor$$$. Additionally, because of the update, only $$$\texttt{res}_{a_k}$$$ and $$$\texttt{res}_y$$$ might change. This motivates an approach where we track $$$\texttt{res}_x$$$ over all $$$x$$$ and extract the maximum.
For each $$$x$$$, let's keep track of all updates that modify an index $$$k$$$ where $$$a_k = x$$$ either before or after the update. Notice that in the former case, the update sets the $$$k$$$-th entry of $$$b_x$$$ to $$$-1$$$, and in the latter case it sets it to $$$1$$$. We are tracking at most $$$2q$$$ updates in total, because each update modifies at most two arrays $$$b_x$$$, as explained in the previous paragraph. Let's store all such updates in an array $$$v_x$$$.
Let's try to construct $$$b_x$$$ for each $$$x \in [1, n]$$$ now. To do so, let's first create a global array $$$b$$$, where initially $$$b_i = -1$$$ for all $$$1 \leq i \leq n$$$. Then, we iterate over all positions $$$j$$$ where $$$a_j = x$$$ and set $$$b_j = 1$$$. Now we can iterate over the updates in $$$v_x$$$, maintaining the maximum subarray sum of $$$b$$$ after each one. If an update corresponds to the $$$j$$$'th query overall, we note that after the $$$j$$$'th query, $$$\texttt{res}_x$$$ takes this new value. We can perform this process over all $$$x \in [1, n]$$$ in $$$O((n + q)\log n)$$$ time total.
Now, we know when to update $$$\texttt{res}_x$$$ for each $$$x \in [1, n]$$$. Finally, let's loop through all $$$q$$$ queries one last time and store all $$$\texttt{res}_x$$$ in a multiset. If the $$$j$$$'th query affects $$$\texttt{res}_y$$$, then we will erase the old $$$\texttt{res}_y$$$ from the multiset and replace it with the new one. To answer the query, we can simply extract the maximum element in the multiset.
First, consider the $$$k$$$-majority for a single element $$$y$$$. By definition, for a subarray $$$b$$$, the $$$k$$$-majority is given by $$$\lfloor{\frac{2 \cdot \text{cnt}(y) - |b|}{2}}\rfloor$$$. Instead of computing this directly, we construct a new array $$$c$$$, where $$$c_i = 1$$$ if $$$b_i = y$$$ and $$$c_i = -1$$$ otherwise. Then, the $$$k$$$-majority for the element $$$y$$$ is equivalent to the maximum sum among all subarrays of $$$c$$$, floor-divided by 2. This transformation reduces our problem to finding the maximal subarray sum.
Since we must efficiently handle updates, we require a dynamic data structure capable of both updates and queries. This can be accomplished using a segment tree that maintains the maximum subarray sum, maximum prefix sum, maximum suffix sum, and total sum within each node.
To apply this method to all possible values of $$$y$$$, we process all queries offline. Initially, we set all elements of array $$$c$$$ to $$$-1$$$. For each distinct element $$$y$$$, we first update the array $$$c$$$ by setting $$$c_i = 1$$$ wherever $$$a_i$$$ initially equals $$$y$$$. Next, we sequentially apply all updates involving $$$y$$$, calculating the maximal $$$k$$$-majority for $$$y$$$ after each update. This procedure generates $$$q + n$$$ segments, each holding the maximal $$$k$$$-majority information for an element $$$y$$$. By inserting and removing these maximum values in a multiset, we efficiently retrieve the global maximal $$$k$$$-majority at every step, giving $$$O((n + q)\log n)$$$ overall.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
#define F first
#define S second
#define all(x) x.begin(), x.end()
#define pb push_back
#define FOR(i,a,b) for(int i = (a); i < (b); ++i)
#define trav(a,x) for(auto& a: x)
#define sz(x) (int)x.size()
template<typename T> istream& operator>>(istream& in, vector<T>& a) {for(auto &x : a) in >> x; return in;};
struct Node {
int sum; // total sum of segment
int pref; // max prefix sum
int suff; // max suffix sum
int best; // max subarray sum
Node(): sum(0), pref(0), suff(0), best(0) {}
Node(int x): sum(x), pref(max(0,x)), suff(max(0,x)), best(max(0,x)) {}
};
Node merge(const Node &L, const Node &R) {
Node res;
res.sum = L.sum + R.sum;
res.pref = max(L.pref, L.sum + R.pref);
res.suff = max(R.suff, R.sum + L.suff);
res.best = max({ L.best, R.best, L.suff + R.pref });
return res;
}
struct segtree {
int n;
vector<Node> st;
segtree(int _n) {
n = _n;
st.resize(4*n);
build(1, 0, n-1);
}
void build(int p, int l, int r) {
if (l == r) {
st[p] = Node(-1);
return;
}
int m = (l + r) / 2;
build(p<<1, l, m);
build(p<<1|1, m+1, r);
st[p] = merge(st[p<<1], st[p<<1|1]);
}
void update(int p, int l, int r, int idx, int val) {
if (l == r) {
st[p] = Node(val);
return;
}
int m = (l + r) / 2;
if (idx <= m) update(p<<1, l, m, idx, val);
else update(p<<1|1, m+1, r, idx, val);
st[p] = merge(st[p<<1], st[p<<1|1]);
}
void update(int idx, int val) {
update(1, 0, n-1, idx, val);
}
Node query(int p, int l, int r, int i, int j) {
if (i > r || j < l) {
Node nullnode;
nullnode.sum = 0;
nullnode.pref = nullnode.suff = nullnode.best = INT_MIN;
return nullnode;
}
if (l >= i && r <= j) {
return st[p];
}
int m = (l + r) / 2;
Node L = query(p<<1, l, m, i, j);
Node R = query(p<<1|1, m+1, r, i, j);
if (L.best == INT_MIN) return R;
if (R.best == INT_MIN) return L;
return merge(L, R);
}
int max_subarray() {
Node res = query(1, 0, n-1, 0, n-1);
return res.best;
}
};
void solve() {
int n, q; cin >> n >> q;
vector<int> a(n); cin >> a;
vector<vector<int>> at(n+1);
FOR(i,0,n) at[a[i]].pb(i);
vector<vector<array<int, 3>>> updates(n+1);
FOR(idx,0,q){
int i, x; cin >> i >> x;
--i;
if(x != a[i]){
updates[a[i]].pb({idx, i, -1});
a[i] = x;
updates[x].pb({idx, i, 1});
}
}
vector<vector<int>> final_at(n+1);
FOR(i,0,n) final_at[a[i]].pb(i);
segtree st(n);
vector<int> init(n+1);
vector<vector<pair<int,int>>> change(q);
FOR(x,1,n+1){
trav(i, at[x]){
st.update(i, 1);
}
int cur = st.max_subarray();
init[x] = cur;
trav(i, updates[x]){
st.update(i[1], i[2]);
int new_cur = st.max_subarray();
change[i[0]].pb({cur, new_cur});
cur = new_cur;
}
trav(i, final_at[x]){
st.update(i, -1);
}
}
multiset<int> ms;
FOR(i,1,n+1) ms.insert(init[i]);
FOR(i,0,q){
trav(j, change[i]){
ms.erase(ms.find(j.F));
ms.insert(j.S);
}
cout << *prev(ms.end())/2 << " ";
}
cout << "\n";
}
signed main() {
cin.tie(0) -> sync_with_stdio(0);
int t = 1;
cin >> t;
for(int test = 1; test <= t; test++){
solve();
}
}
omg graph
super fast editorial
i think the people who rated E high solved it using some crazy DP idea
Actually, I solved it using a greedy approach 323534611
same here 323532905
i don't think this implementation is very difficult only the observation took some time
bro there is no "ا" in "الرحمن"
I made it very difficult while thinking and solved it using DSU.
My approach: 323521220
yea man E felt so much easier than D
Thanks for fast editorial
fast editorial!
Hikari!
Found F to be easier than E . Anyone else with the same opinion ?
Found G much easier than F. It's more standard, while F is more unique.
I didn't get the time for E and G :(
Can you point out the name of the standard problem like G, or where I can find that kind of standard problem? I just want to practice more with it, thank you.
I think CSES has some problems like these in the graphs section. Try: CSES — Flight Discount
It reminds me of this one: https://mirror.codeforces.com/contest/1213/problem/G
does using dijkstra make G a standard problem?
I was thinking of DSU (union-find), not Dijkstra.
i felt the same but f is tricky for me
Yep! I got F at the end of the contest by realizing all valid trees belonged into two groups (one leaf or two leaves). When looking at G, I thought it vaguely had something to do with Dijkstra's but didn't think of much else.
Can $$$C$$$ be hacked with collisions?
yeah
No, if you use BST instead of hash table
what is BST?
Binary search tree. If you have sortable elements, you can store them in a tree. If the tree is balanced, then insertions, deletions and membership checks are all $$$O(\log N)$$$. To ensure that the tree stays balanced, different approaches are used: AVL, Red-Black trees and others.
Basically, it is used as a decently fast implementation of a set (as in math).
thanks for detailed explanation
I forgot I named my dijkstra function that.
Can someone explain the case where the tree has 2 leaves, more clearly please
When a tree has 2 leaves we can first ignore the leaves and just find the length from the root to the vertex where the tree splits towards the 2 leaves. From that we get 2^(length from the root to said vertex) good possibilities. Then for the leaves, to make sure none of the sums overlap, and with the only possible values being 1 and 2, the only way to do that is to have one leaf's path carry all the odd sums (1,3,5...), starting with a 1 and then continuing with 2s, and the other carry all the even sums, starting with a 2 instead. If one leaf is longer than the other, we can essentially do the same thing: take the difference between the leaves' lengths and get 2^(difference). The difference-1 comes from the fact that the leaf with the odds will be forced to have another 2, as otherwise it will overlap with the evens.
the difference-1 comes from the fact that the leaf with the odds will be forced to have another 2, as otherwise it will overlap with the evens. More on this?
For instance, say we have a tree with two leaves of lengths 5 and 3 measured from where they branch off the path from the root. Say the one of length 3 carries the evens, having subtree sums 2+x, 4+x, 6+x, where x is the sum from the branching vertex up to the root. This forces the sums of the other leaf to be 1+x, 3+x, 5+x if they were both of length 3. However, with the extended length of the leaf of length 5, we cannot have the 4th node take the value 1, as that would make the leaf's sums 1+x, 3+x, 5+x, 6+x, where 6+x is a duplicate. First let o = 2^(length from the branch-off vertex to the root), as those values can always be either 1 or 2 without causing overlap, and let the difference equal the difference in length of the two leaves. Now the answer equals (o * 2^(difference)) + (o * 2^(difference-1)), as the 'odd' and 'even' roles could go to either leaf, and when the shorter one is the 'even' leaf, there is an extra node that is forced into being 2, which is where the difference-1 comes from.
Great. Thanks
I was slightly confused regarding it, could you pls clarify if my way of thought is correct?
Assume I take this example
Now consider the case for which the plain 2^(y-x) count fails. Assume d = 2 and h = 1. For sure c = 2 and g = 2. Now, applying the 2^(y-x) logic, since I am flexible, let's assume f = 1. Here I notice that the subtree sum of c (c-d, i.e. 2+2 = 4) and the subtree sum of f (f-g-h, i.e. 1+2+1 = 4) are the SAME, which is not allowed. Hence locking another node to 2 in the deeper leaf's path ensures that, until I reach the LCA via the shallower leaf's path (travelling bottom to top), we never find equal subtree sums: the fixed values on f-g-h ensure that the deeper path's subtree sums (or rather, those of some node on that path) are always greater than the maximum subtree sum possible for the shallower path's nodes (here c-d, i.e. the nodes immediately below the LCA), since the LCA has to be reached for the sums to merge.
Also, the subtree of the LCA contains the entire inverted-Y shape (i.e. c, d, e, f, g are all part of one single subtree), so no comparison arises there.
I supposed H has a solution with $$$O\left(n\right)$$$ space. An obvious online solution using $$$n$$$ segment trees requires $$$O\left(n\log n\right)$$$ space, so I simply changed the segment trees to balanced trees and got an online solution with time complexity $$$O\left(n\log n\right)$$$ and space complexity $$$O\left(n\right)$$$.
If you use only one segment tree the space complexity is also $$$O(n)$$$.
Edit: sorry i misunderstood your point.
wrote down so many equations for D but i could only get like half the information needed. wish i was better at maths T-T
I wouldn't say it's about math level. More like observations/ russian-ness.
I tried to solve D using the fact that when a[n-1] = 2*a[n], I will do only operations of type 2, and then the conditions a[n-i] = (n-i+1)*a[n] should hold for all i. So I first calculated how many operations of the first type are required to get a[n-1] = 2*a[n], applied that many type-1 operations to the entire array, and then checked the condition above.
But I am getting a wrong answer on test 3. Can somebody please look into my submission and tell me the probable error? https://mirror.codeforces.com/contest/2117/submission/323482331
Consider the following test case:
for which your solution gives YES. Applying operation one 3 times makes it

-2 -1

which satisfies a[n-1] = 2*a[n] but requires applying operation two -1 times, which can't be done.

Ohh yes, got the error, thank you very much for the failing test case!!
https://mirror.codeforces.com/contest/2117/submission/323528702
Someone please check if this solution for problem E is hackable. Thanks!
https://mirror.codeforces.com/contest/2117/submission/323510631 Why is this code giving WA? In this code I am finding the max index of a particular element, and after that, while iterating, again finding the max index of the elements in the current segment, and so on. Problem C is a nice greedy problem; thank you to the authors.
Am I the only one acoustic enough to think of D involving a system of linear equations
i did too, but after realizing the significance of pairing the operations i quickly abandoned the idea (edit: i was considering some system over all $$$a_i$$$ at first, but the pairing helped me realize the significance of APs)
edit
that was actually the way to think
That was how I did it :)
I used the first and last values given to solve for two variable system of equations. Really satisfying to see that a system of equations is a way to solve that problem.
Even I did the same thing. Then I realized you can just solve for the first and last element and verify if the others satisfy it or not. It was actually satisfying to see the direct application of such a trivial high school math concept. https://mirror.codeforces.com/contest/2117/submission/323479092
the fact that almost all the testers found F easier than E, agreeing with me, is ...
Problem G is using the A* algorithm here: f = h + g (a very basic thing in intro to AI courses). Worth checking out.
Can anyone provide a test case where my code for C will give a wrong answer? I have tried as many as I could but still couldn't find a test case where it goes wrong. Also, I can't check the 1233rd input of the provided test case where it gives a wrong answer. my code
1
10
9 5 5 9 5 5 9 9 5 5
expected output: 4
your output: 3
Greedy E!!! my solution to E: solution. Although the code is not very well written, the idea is simple...
I think B & C are wonderful :)
Thanks for the editorial
when would our rating update, i'm new so i don't know about that
after the hacking phase so in about 8 hours
can u also tell me what hacking phase is used for?
people try to find a submission that passed the pretests but has some wrong edge case. They test this submission using that edge case, and if it actually fails, the submission is considered hacked and the hacker gains some placements; otherwise the hacker loses some placements.
oh thats amazing....THANK YOU SO MUCH
you're WELCOME,:)
After that, there will be system testing.
Good contest
Seems like one of the testers wrote an incorrect Dijkstra :)
:skull: it has been fixed now
Guilty. Apparently IGMs make such mistakes too.
Can anyone tell me what is wrong with this implementation of problem C? I replaced the map implementation with a set, and it got all AC, but this map implementation is messing up on a very specific case (test case 1562) and I am unable to figure out the test case or reason why it fails.
https://mirror.codeforces.com/contest/2117/submission/323466126
1 9 3 1 1 1 3 1 4 3 4
Try this, and thank the tester who didn't let this nearly correct solution pass.
Correction hint: there's something glitchy in the way you update h.
Got it! Thanks much
(problem G) Can anyone hack this/send the test case that it fails on, I can't work out what is wrong. Thanks!
Problem G can also be solved with the lazy Prim algorithm. We first run the algorithm from the root until vertex $$$n$$$ is connected. Then we calculate an answer, but that may not be the optimal one. We just continue running the algorithm until there is no unvisited edge left, and each time we visit an edge, we try to update the answer.
G is very easy with DSU. Maintain the smallest and largest edge weights for each component in the DSU, then update the answer if node 1 and n are in the same set. Code: 323560818
yes, I wanted to solve it using DSU but couldn't prove it.
I think

"Note that the path is not necessarily simple."

this hint is dropped in the problem statement for the same reason.

ur code is a mess bro
hello, can you help me with why I am getting TLE on TC 33? 326081245 I ran Dijkstra twice
solved , found the mistake
can you please tell me what might be wrong? I have literally done the same as you. 358079716
Here is another solution for C. Let us maintain an array that stores, for each $$$a_i$$$, the positions at which it appears, in descending order. Next, we traverse the array and maintain two variables:

- last_max: the maximum next position over all elements in the previous segment.
- next_min: the minimum next position over all scanned elements.

Now, we can make a cut (i.e., create a new segment) at index $$$i$$$ iff last_max is less than or equal to $$$i$$$ and next_min is greater than $$$i$$$. The first condition tells us that all elements in the previous segment are contained in the current segment, whereas the second condition guarantees that if we make a cut at position $$$i$$$, the next segment will contain all elements of the current segment. Here is the implementation code: 323500146
Did anyone solve E using DSU?
I think G is an application of Kruskal's algorithm, perhaps most people think so
yeah I solved it using Kruskal's algorithm (mst) + maximum path query(binary lifting)
The MST already minimizes the maximum edge on the path.
Then I just iterated over all the edges, considering each one (x, y) as the minimum edge: ans = wt[edge] + max of get_max_edge(1, x), get_max_edge(1, y), get_max_edge(x, n), get_max_edge(y, n).
The minimum ans is the global answer.
here is my submission: 323473024
hey, can you please see this code? I have been doing literally the same thing but getting a wrong answer on test 2.
358079716
B was way too easy, even for its position. Personally, I think it was easier than A
Yes, just a constructive problem.
yeah, I didn't participate in the contest, but after solving them, B is easier than A
Can anyone tell me what is wrong with this submission for Problem C? It is failing on test case 2: https://mirror.codeforces.com/contest/2117/submission/323569603
C was way more challenging than D. Maybe that's because I tried brute-forcing C.
Awesome editorial
F deserves an example. I had no idea what was asked of me.
Subtree sum of a node basically means the summation of the weights of all nodes under it (i.e., all nodes which can be reached from that node going down).
You need a placement of weights on the nodes such that every subtree sum in the tree is unique.
Assume
Here we need such a placement of weights 1 and 2 on each node that subtreesum(a), subtreesum(b), ..., subtreesum(h) are all unique values. So we need to find how many such weight distributions are possible such that all the subtree sums are unique.
guys, any tips? I have solved up to D. C took a lot of time; my implementation used map<int,queue> m;
I want to improve.
Dude B was easier than A and D was easier than C (╥﹏╥), Why did you confuse my brain (╥﹏╥)
Can anyone help me with the logic behind problem D?
I don't know about the first solution, but the second one just uses linear equations to find the number of operations of both the first and second type and checks whether that works for all elements.
I thought of C like this: the answer would be the number of times the first number appears in the array. But the solution got rejected :( Can someone explain with some testcase where this might go wrong?
It doesn't work for the testcase 5 1 5 4 5 5 1: by your logic the answer should be 4, but the answer is 3 (5 | 1 5 | 4 5 5 1). Every element in the previous segment must be repeated in the current segment, not only the first one.
oh ok, thanks!!! i misread the problem and made 5 wrong submissions TT
Is anyone's submission stuck in the queue while practicing? I submitted my solutions today for practice, but all the submissions are stuck at "In queue" even after the system tests are complete. Is it okay, or is there an issue, and how much time will it take to work? Edit: some of the submissions got accepted just now; maybe it'll work in a few minutes.

Can anyone help me understand why my solution for problem G got a WA? Here's my code: 323611984 I know there's an easier way to solve it, but I'd really appreciate it if someone could point out why this version fails. Thanks in advance!
Found D to be easier than C. Anyone else with the same opinion?
yeap; also, if we use Python for C we get TLE, but the same solution in C++ gets accepted. Typical cpp behavior :)
Problem G is beautiful when u use DSU.
bro can you please explain your logic
So the prerequisites are DSU & MST.
You sort the edges first, and start merging the vertices of each edge in increasing order of weight.
You maintain a min array where you store the minimum edge weight of each component.
Now check if 1 and n are connected, and update the answer as min(ans, current weight + min edge value of the current component so far).
Try a dry run on some test cases and you will get the picture; try a test case where the minimum edge is still connected to the path but doesn't lie on the actual path from 1 to n.
My code: https://mirror.codeforces.com/contest/2117/submission/323621729
Thanks! With the DSU logic, everything is much clearer. I had tried to solve it using Dijkstra before, but got a WA. Now the DSU logic makes sense.
Also, do I need to compare anything anymore once 1 and n are in the same component? Isn’t that the point where I already have my answer?
No, I was confused on the same point: that the first time 1 and n share a component should give the right answer.
But that isn't the case; the min edge value of node 1's component will change as you keep merging newer edges.
It's the test case I mentioned above, where the min edge is not part of the actual path.
Here the answer is 17 instead of 20. If you break early you will get 20 as the answer.
Yes, I noticed the same after getting a WA submission. I thought that if a component connects to any other component with a smaller minimum value, the answer might decrease. So your initial logic is absolutely necessary and correct.
For problem G, can someone please explain why we can't use binary search + BFS: binary search on the maximum permissible edge value, keep all edges with weight at most the current binary-search value, form a path from 1 to n using them, and then find the minimum value on that path (i.e., among all the edges connecting 1 to n)?
https://mirror.codeforces.com/contest/2117/submission/323689502
The reason is that there might be a better path that uses an edge with a higher value. Take this input for example:
The algorithm will see "Oh, the graph is connected when we consider edges with weight less than or equal to 6, so we use only those edges", which will return an answer of 4 + 6 = 10 by going through 1 — 2 — 3 — 6, even if the path 1 — 4 — 5 — 6 is of weight 2 + 7 = 9.
ohk, got it thank you
MST soln blows my mind bro....., wasted 1 hr thinking Dijkstra during contests :'(
Same, bro. I also tried it with Dijkstra but got a WA. 323611984 at case 4444 :)
Problem C: the same second solution in Python gets TLE.
Can problem H be solved with a dynamic segment tree? I got RE
can someone explain at a high level why this didn't work for G: https://mirror.codeforces.com/contest/2117/submission/323524848
Congratulations yse! You have achieved the record for the fastest and most helpful editorial on Codeforces.
What could possibly be in test cases 25 and 39 for C, everyone? I got TLE in Java on #25 after the rejudge.
Same for me. I used C++ and got TLE on test case 25 after the rejudge. It's basically an array of size 20000 consisting of only 1s.
The output is not 200000, so it's not only ones. Besides, I've tried 200000 1s locally and it's not that slow :(
Why can't I use a persistent segment tree to solve H? I know a persistent segment tree only costs $$$O(n\log n)$$$ memory in total, which seems to fit the 192 MB limit.
can someone suggest some way of solving D with binary search?
I tried it but got a wrong answer on test case 2
In the 2nd solution to problem D: how can we prove that the solution using a1 and a2 will be valid for all ai? Is it simply by induction? Also, why don't we need to check whether the value of y obtained after the division is an integer for it to be accepted as a valid solution (we only check that it is non-negative)?
In the solution the author isn't proving anything; the ($$$x$$$, $$$y$$$) he found is the only pair that is possible (i.e., if there exists a solution then it must be this ($$$x$$$, $$$y$$$), but it's possible that this ($$$x$$$, $$$y$$$) is not actually a solution). That's why he actually performs the operations with this ($$$x$$$, $$$y$$$) to check: it is the solution iff all elements become $$$0$$$.
That's also the same reason he doesn't check whether $$$n+1$$$ divides $$$2a_1-a_2$$$; he just performs integer division and if $$$y$$$ is actually a decimal, then some elements won't become zero (because $$$y$$$ has been "forcefully" rounded).
okay, thanks! that makes sense.
I forgot the lca algorithm :((((
Does anyone know why this solution was hacked for D? I tried to greedily if-else all the possible cases I could think of :(
can anyone please tell me why this Dijkstra doesn't work in some cases: 323677800
E: https://www.youtube.com/watch?v=ow3hHbhQkfM
F: https://www.youtube.com/watch?v=LUZFk6SbHuA
G: https://youtu.be/H6S1v0eFBH8
What platform do you use to create these animations? And how long does it take to create the solution for one video?
I created it. Around 10 mins.
For problem G, can someone please explain why we can't use binary search + BFS to find the maximum permissible value, form a path from 1 to n, and then find the minimum value on that path?
https://mirror.codeforces.com/contest/2117/submission/323689502
https://youtu.be/H6S1v0eFBH8
can you please explain the condition where the above logic fails
In the F problem solution, adding the statement (with d = depth_x - depth_y) "in the case where the longer branch takes odd sum values, the (d + 1)-th node from the leaf will also have to be fixed as 2, to avoid collision with the shorter branch's d-th value" helps make it easier to understand.
First time poster here. Need help debugging my solution.
I used yse's idea on solving problem C, on pypy 3.10. My solution was accepted but, after hacking, it went TLE on test 9. Didn't make sense to me, no O(n log n) solution should TLE with N=2*10^5.
After the rejudging, I attempted a few small optimizations and all went TLE on the same test. I thought it should be some pypy issue on handling sets, until I noticed the same idea implemented by Anni_official had resisted the hacking: 323469107
For comparison, here is my original implementation, the one that failed on rejudge: 323446043
The implementations were way too similar, no obvious reason why they should behave so differently, until I tweaked the input handling, resulting in these submissions: 323651217 323651381
The first TLE'd on case 9, the other was accepted. The only difference was that strings, rather than integers, were handed to the sets.
I was unable to replicate this issue on my IDE+environment (PyCharm, also running pypy 3.10, mac OS, M1 processor). I ran the same code of both submissions, measuring elapsed time (including the input handling), and both run in about 20–40 ms (with the integer version more often on the lower end of this range). I can't see the whole test case but I built a (hopefully) similar test case, containing a 2*10^5-long list of random integers whose second half is equal to the first half shuffled.
Any help (on either what's happening or how to debug this, since "it works on my IDE" and we can't experiment directly with the judge) is appreciated.
That's because you used set. Using set or dict naively carries the risk of getting hacked.

For actual testing, try my hack case. I'm sure your solution is slow on pypy.
Indeed, about 7 seconds. Test case 9 is a similar input sequence. Seems like it would also be easy to break the string version except the hack just wasn't produced during the contest itself.
Hashing method for strings is complicated, and it's hard to craft an input to attack stringified integers. I do not know anti-hash attacks for them.
(Personally I do not recommend using strings, because they are slow.)
c is really good.
Could someone help me with problem C? I know the editorial uses O(n) solution, but during contest i came up with O(n log n) approach using binary search.
I'd like to stick with this idea and try to make it work (for learning purposes). Here is my submission: (https://mirror.codeforces.com/contest/2117/submission/323735379). It gives WA on test 2.
(Also i added a detailed explanation of algorithm (in russian) as a comment in the code)
I solved it using binary search as well. I don't understand Russian, so I won't be able to understand your comments, but you can take a look at my solution:
325357943
Correct me if I'm wrong but the editorial for E, shouldn't it say:
We also need to check if position $$$i$$$ already contains a match, or if $$$a_i = a_{i+1}$$$ or $$$b_i = b_{i+1}$$$.
Instead of:
We also need to check if position $$$i$$$ already contains a match, or if $$$a_i = b_{i+1}$$$ or $$$b_i = a_{i+1}$$$.
My understanding is: if $$$a_i = a_{i+1}$$$, then $$$b_i$$$ can be equal to $$$a_i$$$; or if $$$b_i = b_{i+1}$$$, then $$$a_i$$$ can be equal to $$$b_i$$$, which makes position $$$i$$$ valid.
Thank you. It has been fixed.
love the C (solution 2), same as I thought, but I couldn't realize it in time.
Does anybody know the time complexity for the second code for problem C?
I am looping over all elements, which is O(n), but every now and then I clear seen. Assuming the worst case where seen holds almost all of the elements, clearing it is also O(n), making the worst case O(n^2).
The solution makes absolute sense, but how does it pass? It doesn't seem optimal.
Thanks I think I got it.
I am assuming the idea behind this is that O(n^2) would mean that for every iteration i from 1 to n I also iterate with j from 1 to n, which is not the case here. The worst case would only happen when iteration i equals n-1 (assuming 0-indexing) and the condition (seen.size() == curr.size()) is satisfied; in that case the whole array is cleared only ONCE during the loop, and the iterations before i == n-1 are just O(log n) each (basic if conditions and set inserts), so even calling that the overall worst case is debatable.
In other words, the solution fits the time limit perfectly.
Sorry if I wasn't clear in my editorial. Notice that we only insert each element into seen at most once, so we have at most $$$n$$$ insertions. We certainly do not need to delete more elements than we have inserted, so we also have at most $$$n$$$ deletions. In total, the runtime is $$$O(n\log n)$$$.

Thanks a lot! It was a good contest, keep it up!
orz, I am new to algorithms. During the contest I didn't realize that I could use a set to solve it and was looking for a queue or something. I haven't solved lots of problems, but this is my best, because I could solve such an interesting problem without any particularly complex algorithms.
Very Good Job
For problem F, how do we know that there can be at most 2 leaves?
because let's say there are 3 or more leaf nodes; then all of these leaf nodes are themselves subtrees, and by PHP at least two of them will have the same value
Thank you for your reply. But I do not know what PHP is. Could you elaborate?
Pigeonhole principle
Ok thank you, but I still have no idea how we can be sure we have at most 2 leaf nodes
Look at all of the possibilities:
1 1 1
2 1 1
2 2 1
2 2 2
In all cases, at least two leaves have the same value, contradicting the problem statement.
Thank you, I understand it now. One major point I was missing was that leaf nodes are also, by themselves subtrees. Once I realized this, it all made sense, thank you very much.
nice contest
QUESTION E, JAVA CODE
H can be done online using a treap.
Can anyone give me a test case which would cause this code to fail? It's for problem E: https://mirror.codeforces.com/contest/2117/submission/328590801
In problem F, can anyone prove that in the case where depth[y] > depth[x], assigning all nodes on the path from v to x (excluding v and x) the value 2 will always ensure that the s[] values of all subtrees in branch x are different from the s[] values of all subtrees in branch y, regardless of how we assign the nodes on the path from v to y (excluding v and y)? This property doesn't seem very intuitive to me.
can someone tell me why this is giving TLE on TC 33? question G
In the solution of H:
"Notice that for the former case, we are setting $$$b_{x_k} = -1$$$ and in the latter case, we are setting $$$b_{x_k} = -1$$$."
yse
Please, can anyone help me with my code for problem C? I'm stuck getting wrong answers.
In question F, suppose a tree:
Number of nodes: 5
Edges are: 1-2, 1-3, 2-4, 2-5
Number of leaves in this tree = 3.
According to the tutorial solution, ans = 0.
But, in this testcase: ans = 4.
For node [1], the value can be 1 or 2. For each value of node [1], the value of [2] can be 2, the value of [3] can be 1, and (the value of [4] can be 2 and the value of [5] can be 1) or (the value of [4] can be 1 and the value of [5] can be 2).
As for problem C: is it legal for a segment to be composed of only one number? If so, the first segment should be composed of exactly one number to make it easier for the following segments to be legal, and then we can solve it greedily. But apparently I was wrong according to the test results; can anyone help me please???
for question 3, can someone tell me why my code gives a wrong answer on test case 3
361849565
help is appreciated