Salam!
Here is the editorial for Codeforces Global Round 31. We hope you find these solutions helpful. We have also prepared a special editorial with step-by-step hints, examples, tables, and walkthroughs, which is available here.
2180A - Carnival Wheel
Idea: AmirrzwM, MohammadParsaElahimanesh — Preparation: MohammadParsaElahimanesh
Since constraints are small, we can simulate the spins directly:
- Start from section $$$a$$$.
- Move the pointer by adding $$$b$$$ modulo $$$l$$$ repeatedly.
- Stop when we return to a previously visited section.
- Among all visited sections, select the section with the highest prize.
It runs in $$$\mathcal{O}(l)$$$ as we mark each section at most once.
It's easy to see that the pointer can land on section $$$c$$$ only if $$$c \equiv a \pmod{d}$$$, where $$$d = \gcd(l, b)$$$. By Bézout's lemma, all such $$$c$$$ are indeed reachable. Hence, the maximum attainable section is $$$l - d + (a \bmod d)$$$.
// In the name of God
#include <bits/stdc++.h>
using namespace std;
int main() {
    int l, a, b, t;
    cin >> t;
    while(t--) {
        cin >> l >> a >> b;
        cout << l - __gcd(l, b) + a % __gcd(l, b) << '\n';
    }
    return 0;
}
2180B - Ashmal
Idea: AmirrzwM, MohammadParsaElahimanesh — Preparation: MohammadParsaElahimanesh
Step 1: It can be shown (for instance, via an exchange argument) that, to obtain the lexicographically minimum final string $$$s$$$, it suffices to keep the string lexicographically minimum after every intermediate step: all strings obtainable after step $$$i$$$ have the same length, and for equal-length strings $$$u \le v$$$ implies both $$$u + a \le v + a$$$ and $$$a + u \le a + v$$$.
Step 2: Therefore, at every step $$$i$$$, we choose the lexicographically smaller result between adding $$$a_i$$$ to the beginning or the end of our current string.
Step 3: Initially, $$$s$$$ is the empty string. Suppose before step $$$i$$$, the current string is $$$s_i$$$. Then the best choice for the updated string is $$$s_{i+1} = \min(s_i + a_i,\ a_i + s_i)$$$.
Step 4: By applying this choice at each step, we can compute the final (minimum) string in $$$\mathcal{O}(nS)$$$, where $$$S$$$ is the sum of the string lengths.
#include <bits/stdc++.h>
using namespace std;
int main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    int t;
    cin >> t;
    while(t--) {
        int n;
        cin >> n;
        vector<string> s(n);
        for(int i = 0; i < n; i++)
            cin >> s[i];
        string res;
        for(int i = 0; i < n; i++)
            res = min(res + s[i], s[i] + res);
        cout << res << '\n';
    }
    return 0;
}
2180C - XOR-factorization
Idea: MohammadParsaElahimanesh — Preparation: MohammadParsaElahimanesh
We build $$$a_1,\dots,a_k$$$ bit by bit from the highest bit to $$$0$$$.
Let $$$o(m)$$$ denote the largest odd number not exceeding $$$m$$$, and $$$e(m)$$$ the largest even number not exceeding $$$m$$$.
Maintain two sets of indices:
$$$T$$$ (tight): current prefix of $$$a_i$$$ equals the prefix of $$$n$$$,
$$$L$$$ (loose): current prefix of $$$a_i$$$ is already smaller than the prefix of $$$n$$$.
Initially, $$$T={1,\dots,k}$$$, $$$L=\emptyset$$$, and all $$$a_i=0$$$.
Greedy rule for each bit $$$b$$$ (from high to low).
If the $$$b$$$-th bit of $$$n$$$ is $$$1$$$, then the number of $$$a_i$$$ having bit $$$b$$$ equal to $$$1$$$ must be odd. To maximize the sum we set as many ones as possible, i.e. exactly $$$o(k)$$$. So:
- if $$$k$$$ is odd, set this bit in all $$$k$$$ numbers;
- if $$$k$$$ is even, set it in all but one number. Prefer choosing that one from $$$T$$$ (if possible), because then it becomes loose ($$$T\to L$$$), which can help later.
If the $$$b$$$-th bit of $$$n$$$ is $$$0$$$, then the number of ones in this bit must be even. Tight indices cannot put $$$1$$$ here (otherwise $$$a_i \gt n$$$), so only $$$L$$$ may receive ones. Thus we set this bit in exactly $$$e(|L|)$$$ loose numbers (all of them if $$$|L|$$$ is even, otherwise all but one).
Why it is optimal. At bit $$$b$$$, each extra chosen $$$1$$$ adds $$$2^b$$$ to the total sum, and this gain is larger than anything we can get from lower bits. Also, the XOR condition fixes the parity in every bit, so we can only change the number of $$$1$$$’s by removing them from an even number of elements. So, at every bit we should keep as many $$$1$$$’s as possible, as long as we do not violate $$$a_i\le n$$$. When the $$$b$$$-th bit of $$$n$$$ is $$$1$$$ and $$$k$$$ is even, we must place one $$$0$$$; choosing it from a tight index increases $$$|L|$$$, which can help us place more $$$1$$$’s later when $$$n$$$ has $$$0$$$-bits.
This directly gives a construction with $$$0\le a_i\le n$$$, XOR equal to $$$n$$$, and maximum possible sum in $$$\mathcal{O}(k \log n)$$$.
// In the name of god
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
int main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    int t;
    cin >> t;
    while(t--) {
        int n, k;
        cin >> n >> k;
        int a[k]{}, p = 0;
        if(k&1)
            for(int i = 0; i < k; i++)
                a[i] = n;
        else {
            for(int i = 30; i >= 0; i--) {
                if(n >> i & 1) {
                    for(int j = 0; j < k; j++)
                        if(j != min(p, k-1))
                            a[j] += (1 << i);
                    if(p < k)
                        p++;
                } else
                    for(int j = 0; j < p/2*2; j++)
                        a[j] += (1 << i);
            }
        }
        for(int i = 0; i < k; i++)
            cout << a[i] << (i+1 == k ? '\n' : ' ');
    }
    return 0;
}
2180D - Insolvable Disks
Idea: AmirrzwM, MohammadParsaElahimanesh — Preparation: AmirrzwM
Let $$$x_1, x_2, \ldots, x_n$$$ be the sorted list of points, and let $$$d_i \ (1 \le i \le n-1)$$$ be the distance between consecutive points, i.e. $$$d_i = x_{i+1} - x_i$$$.
We try to find the largest prefix of points that can be made connected.
Consider some valid choice of radii, and assume that the radius of the first disk is $$$r_1 = x$$$. In order for the second disk to be tangent to the first one, its radius must be $$$r_2 = d_1 - x$$$.
Continuing in this way, the third disk must have radius $$$r_3 = d_2 - d_1 + x$$$.
In general, if we connect the first $$$k$$$ disks together, the radius of the $$$i$$$-th disk $$$(1 \le i \le k)$$$ must be $$$r_i = (-1)^{i-1}x + \sum_{j=1}^{i-1} (-1)^{i-1-j} d_j$$$.
For this configuration to be valid, we must have $$$r_i \gt 0$$$ for every $$$1 \le i \le k$$$.
Substituting the expression for $$$r_i$$$, we obtain $$$(-1)^{i-1}x + \sum_{j=1}^{i-1} (-1)^{i-1-j} d_j \gt 0$$$.
For odd $$$i$$$, this inequality becomes $$$x + \sum_{j=1}^{i-1} (-1)^{j} d_j \gt 0$$$, which is equivalent to the lower bound $$$x \gt d_1 - d_2 + d_3 - \cdots - d_{i-1}$$$.
For even $$$i$$$, we obtain $$$-x + \sum_{j=1}^{i-1} (-1)^{j-1} d_j \gt 0$$$, which is equivalent to the upper bound $$$x \lt d_1 - d_2 + d_3 - \cdots + d_{i-1}$$$.
Therefore, for each $$$i$$$ we obtain an interval of valid values for $$$x$$$. By iterating over $$$i$$$ and maintaining the intersection of all these intervals, we can determine the first index where the intersection becomes empty. This index gives the largest prefix of disks that can be made connected.
After removing these disks, we can repeat the same process on the remaining ones. It can be shown that this procedure finds the correct answer and runs in $$$O(n)$$$ time.
Why it is optimal.
For each index $$$i$$$, let $$$f_i$$$ be the rightmost position such that, for every $$$j$$$ with $$$i \le j \lt f_i$$$, we can simultaneously make the $$$j$$$-th disk tangent to the $$$(j+1)$$$-th disk. Thus, $$$f$$$ is a non-decreasing sequence.
With this notation, the problem becomes selecting indices $$$1=i_1 \lt i_2 \lt \cdots \lt i_k$$$ such that $$$i_{t+1} \le f_{i_t}+1$$$ for every $$$t$$$, while minimizing the number $$$k$$$ of selected starting indices.
By induction, we may assume $$$i_{t+1}=f_{i_t}+1$$$, because choosing any smaller $$$i_{t+1}\le f_{i_t}+1$$$ cannot extend the reachable prefix beyond $$$f_{i_t}$$$, while choosing exactly $$$f_{i_t}+1$$$ is the earliest index that can potentially increase the farthest reachable position. Hence this greedy choice is optimal. Also, note that once we set $$$i_{t+1}=f_{i_t}+1$$$, there are no additional constraints on the choice of $$$r_{i_{t+1}}$$$ coming from earlier indices. In particular, any restriction on $$$r_{i_{t+1}}$$$ can only be imposed by $$$i_{t+1}+1$$$ (otherwise $$$f_{i_t}$$$ would not have been maximal by definition).
#include <bits/stdc++.h>
using namespace std;
int main() {
    ios::sync_with_stdio(false), cin.tie(0);
    int tt;
    cin >> tt;
    while(tt--) {
        int n;
        cin >> n;
        vector<int> x(n);
        for(int i = 0; i < n; i++) cin >> x[i];
        int ans = n-1;
        int l = 0, r = 1e9+1;
        int cur = 0;
        int fst = 1;
        for(int i = 1; i < n; i++) {
            cur = (x[i] - x[i-1]) - cur;
            if(i % 2 == fst % 2) {
                r = min(r, cur);
                if(i < n-1) l = max(l, cur - (x[i+1] - x[i]));
            } else {
                l = max(l, -cur);
                if(i < n-1) r = min(r, (x[i+1] - x[i]) - cur);
            }
            if(l >= r) {
                ans--;
                l = 0, r = 1e9+1;
                cur = 0;
                fst = i+1;
            }
        }
        cout << ans << endl;
    }
}
2180E - No Effect XOR
Idea: AmirrzwM — Preparation: MohammadParsaElahimanesh
Step 0. Note that $$$x=0$$$ is always a valid answer, and for $$$l=r$$$ it is the only one, so assume $$$l \lt r$$$. Also, XOR by a fixed $$$x$$$ is a bijection, so if it maps $$$[l,r]$$$ to itself, it also maps the complement of $$$[l,r]$$$ in $$$\mathbb{W}$$$ (the non-negative integers) to itself.
Step 1. Let's count the number of $$$x$$$ such that $$$i \mapsto i \oplus x$$$ maps $$$[0,n)$$$ to itself. If $$$n=2^k$$$, then every $$$0\le x \lt n$$$ works. If $$$2^{k-1} \lt n \lt 2^k$$$, then the $$$(k-1)$$$-th bit of $$$x$$$ must be $$$0$$$: otherwise $$$x$$$ maps $$$(2^k-1)\oplus x \in [0,2^{k-1})$$$ to $$$2^k-1 \ge n$$$. With that bit forced to $$$0$$$, the halves $$$[0,2^{k-1})$$$ and $$$[2^{k-1},n)$$$ are preserved separately, so the count equals the count for $$$[0, n-2^{k-1})$$$. By induction, the answer is simply the largest power of two dividing $$$n$$$.
Step 2. Choose $$$k$$$ with $$$2^{k-1}\le r \lt 2^k$$$. If $$$2^{k-1} \le l \lt r \lt 2^k$$$, then every number in $$$[l,r]$$$ has the $$$(k-1)$$$-th bit equal to $$$1$$$, so subtracting $$$2^{k-1}$$$ from both ends just removes that common bit and does not change the answer; repeating this reduction, we may assume $$$0 \le l \lt 2^{k-1} \le r \lt 2^k$$$.
Step 3. Work in the block $$$[0,2^k-1]$$$ and define $$$A=[0,l-1]$$$, $$$B=[l,r]$$$, $$$C=[r+1,2^k-1]$$$. We need $$$B\oplus x=B$$$. In this crossing case, every element of $$$A$$$ has $$$(k-1)$$$-th bit $$$0$$$ and every element of $$$C$$$ has $$$(k-1)$$$-th bit $$$1$$$, so xor by $$$x$$$ cannot send only part of $$$A$$$ to $$$C$$$. Since xor is a bijection on the whole block, once $$$B$$$ maps to itself we must also map $$$A\cup C$$$ to itself, and there are only two possibilities:
Map $$$A\to A$$$, $$$B\to B$$$, $$$C\to C$$$. This is equivalent to simultaneously keeping the prefix $$$A=[0,l)$$$ invariant and also keeping the prefix $$$A\cup B=[0,r+1)$$$ invariant.
Map $$$A\to C$$$ and $$$C\to A$$$, while still $$$B\to B$$$. This is possible only if $$$|A|=|C|$$$. Define $$$\bar{x}=x\oplus(2^k-1)$$$. Then for every $$$x$$$ that maps $$$A$$$ to $$$A$$$, the value $$$\bar{x}$$$ maps $$$A$$$ to $$$C$$$, and vice versa, since $$$2^k-1=(11\ldots 1)_2$$$ acts like a mirror on $$$[0,2^k)$$$.
Let $$$S$$$ be the set of all valid $$$x$$$ (including $$$0$$$). We claim that $$$S$$$ is closed under XOR. Assume $$$x,y\in S$$$. By validity, for every $$$i\in[l,r]$$$ we have $$$i \oplus x \in [l,r]$$$ and hence also $$$(i\oplus x)\oplus y \in [l,r]$$$.
But $$$(i\oplus x)\oplus y = i\oplus (x\oplus y)$$$, hence for all $$$i\in[l,r]$$$, $$$i \oplus (x \oplus y) \in [l,r]$$$,
so $$$x\oplus y\in S$$$. Therefore $$$S$$$ forms a vector space over $$$\mathbb{F}_2$$$, and it is enough to find a basis.
Consider $$$f_n(x) = x \oplus (2^n - 1)$$$.
For $$$0\le x \lt 2^n$$$ we have $$$f_n(x) = 2^n - 1 - x$$$,
so $$$f_n$$$ acts like a mirror inside each block of length $$$2^n$$$. Motivated by this, we guess that basis elements are of the form $$$(111\ldots1)_2$$$, i.e. $$$2^n-1$$$.
Now we determine for which $$$n$$$ we have $$$2^n-1\in S$$$. Define the blocks $$$b_k = [k \cdot 2^n,\ (k+1) \cdot 2^n)$$$.
If $$$r-l+1 \gt 2^n$$$, then $$$2^n-1$$$ is valid if and only if for every $$$k$$$, the intersection $$$[l,r]\cap b_k$$$ is either empty or the whole block $$$b_k$$$ (equivalently, $$$[l,r]$$$ never cuts a block partially), since $$$f_n$$$ preserves each block and reverses the order inside it.
We must also check one boundary case, where $$$f_n$$$ itself maps $$$[l,r]$$$ to itself, which happens only if $$$l$$$ and $$$r$$$ lie in the same block $$$b_k$$$ and $$$l + r = (2k+1)\cdot 2^n - 1$$$.
In other words, $$$l$$$ and $$$r$$$ must be symmetric with respect to the midpoint of one block under the mirror $$$f_n$$$.
// In the name of god
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
inline ll maxPower2Multiple(const ll &a) {return a & -a;}
inline ll prefixAnswer(const ll &a) {return maxPower2Multiple(a);}
inline bool isPerfect(const ll &a) {return a == maxPower2Multiple(a);}
int main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    int t;
    cin >> t;
    while(t--) {
        ll l, r;
        cin >> l >> r;
        ll both = 1LL << 60;
        while((both & l) == (both & r) && both > 0) {
            if(both & l) l -= both, r -= both;
            both >>= 1;
        }
        if(l and isPerfect(l+r+1))
            cout << 2LL*min(prefixAnswer(l), prefixAnswer(r+1LL))-1LL << '\n';
        else if(l)
            cout << min(prefixAnswer(l), prefixAnswer(r+1LL))-1LL << '\n';
        else
            cout << prefixAnswer(r+1LL)-1LL << '\n';
    }
    return 0;
}
2180F1 - Control Car (Easy Version)
Idea: MohammadParsaElahimanesh — Preparation: MohammadParsaElahimanesh, AmirrzwM
We solve the problem using dynamic programming.
A naive DP definition would be to let $$$dp[i][j]$$$ be the probability that a car, currently at cell $$$(i,j)$$$, eventually reaches the bottom-right corner of the grid. However, this state does not contain enough information to determine valid transitions, since the future movement of the car depends on the direction from which it entered the current cell and on the orientations of certain walls. Therefore, we need to add more information to our DP states.
We define the following DP states:
$$$dp[i][j][0][0]$$$: the probability that the car reaches the bottom-right corner, given that it is currently at cell $$$(i,j)$$$, entered this cell from above, and the wall starting from the top-right corner of this cell is pointing downward.
$$$dp[i][j][0][1]$$$: similar to $$$dp[i][j][0][0]$$$, but the wall starting from the top-right corner of the cell is not pointing downward.
$$$dp[i][j][1][0]$$$: the probability that the car reaches the bottom-right corner, given that it is currently at cell $$$(i,j)$$$, entered this cell from the left, and the wall starting from the bottom-left corner of this cell is pointing to the right.
$$$dp[i][j][1][1]$$$: similar to $$$dp[i][j][1][0]$$$, but the wall starting from the bottom-left corner of the cell is not pointing to the right.
We now describe how to compute the transitions.
Consider the state $$$dp[i][j][0][0]$$$. In this case, the car is at cell $$$(i,j)$$$, it entered from above, and the top-right wall is pointing downward. Therefore, the car cannot move to cell $$$(i,j+1)$$$ and must move to cell $$$(i+1,j)$$$.
After moving to $$$(i+1,j)$$$, there is a probability of $$$\frac{3}{16}$$$ that the top-right wall of the new cell is pointing downward, and a probability of $$$\frac{3}{8}$$$ that it is not. Thus, we obtain $$$dp[i][j][0][0] = \frac{3}{16}\,dp[i+1][j][0][0] + \frac{3}{8}\,dp[i+1][j][0][1]$$$.
Using similar arguments for the remaining states, we get the following transitions:
- $$$dp[i][j][0][1] = \frac{3}{16}\,dp[i+1][j][0][0] + \frac{3}{8}\,dp[i+1][j][0][1] + \frac{1}{16}\,dp[i][j+1][1][0] + \frac{5}{16}\,dp[i][j+1][1][1]$$$,
- $$$dp[i][j][1][0] = \frac{3}{16}\,dp[i][j+1][1][0] + \frac{3}{8}\,dp[i][j+1][1][1]$$$,
- $$$dp[i][j][1][1] = \frac{1}{4}\,dp[i+1][j][0][0] + \frac{1}{2}\,dp[i+1][j][0][1] + \frac{3}{16}\,dp[i][j+1][1][1]$$$.
By computing these DP states in reverse order of $$$i$$$ and $$$j$$$, we can evaluate all states in $$$O(nm)$$$ time and obtain the final answer.
// In the name of god
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
constexpr int T = 10000;
constexpr int N = 5000;
constexpr ll mod = 1000000007;
constexpr ll inv2 = (mod+1)>>1;
constexpr ll inv4 = inv2>>1;
constexpr ll inv8 = inv4>>1;
constexpr ll inv16 = (inv8+mod)>>1;
ll dp[2][N+1][2][2];
vector<pair<int, int>> query[N+1];
ll ans[T];
inline ll Pow(ll a, ll b) {
    ll res = 1LL;
    b %= mod-1;
    while(b) {
        if(b&1) res = (res*a)%mod;
        a = (a*a)%mod;
        b >>= 1;
    }
    return res;
}
int main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    int t;
    cin >> t;
    for(int n, m, i = 0; i < t; i++) {
        cin >> n >> m;
        query[n].push_back({m, i});
    }
    for(int i = 0; i <= N; i++)
        dp[N&1][i][0][0] = dp[N&1][i][0][1] = dp[N&1][i][1][0] = dp[N&1][i][1][1] = 1;
    for(int i = N-1; i >= 0; i--) {
        int q = i&1, p = 1-(i&1);
        dp[q][N][0][0] = dp[q][N][0][1] = dp[q][N][1][0] = dp[q][N][1][1] = 1;
        for(int j = N-1; j >= 0; j--) {
            dp[q][j][0][0] = (dp[p][j][0][0]*3LL*inv16 + dp[p][j][0][1]*3LL*inv8)%mod;
            dp[q][j][0][1] = ((dp[p][j][0][0]*3LL*inv16 + dp[p][j][0][1]*3LL*inv8)%mod +
                             dp[q][j+1][1][0]*inv16 + dp[q][j+1][1][1]*5LL*inv16)%mod;
            dp[q][j][1][0] = (dp[q][j+1][1][0]*3LL*inv16 + dp[q][j+1][1][1]*3LL*inv8)%mod;
            dp[q][j][1][1] = (dp[p][j][0][0]*inv4 + dp[p][j][0][1]*inv2 + dp[q][j+1][1][1]*3LL*inv16)%mod;
        }
        for(auto [m, qu]: query[N-i]) {
            ll P = (dp[q][N-m][0][0]*inv4 + dp[q][N-m][0][1]*3LL*inv4)%mod;
            ans[qu] = (1LL - P + mod)*Pow(4LL, 1LL*(N-i+1)*(m+1))%mod;
        }
    }
    for(int i = 0; i < t; i++)
        cout << ans[i] << '\n';
    return 0;
}
2180F2 - Control Car (Hard Version)
Idea: MohammadParsaElahimanesh — Preparation: MohammadParsaElahimanesh, AmirrzwM
Note that the $$$j$$$-th layer of DP states (that is, $$$dp[i][j]$$$ for all valid $$$i$$$) depends only on the $$$(j-1)$$$-th layer and the $$$j$$$-th layer itself. Moreover, all layers are computed using identical transition rules.
Let $$$dp[j]$$$ denote the vector consisting of all states $$$dp[i][j]$$$ for a fixed $$$j$$$. Then there exists a constant transition matrix $$$M$$$ such that $$$dp[j] = M \cdot dp[j-1]$$$.
By computing $$$M^{\,m-1}$$$ and multiplying it by the initial vector $$$dp[1]$$$, we obtain $$$dp[m]$$$, from which the final answer can be extracted.
This can be done in $$$O(n^3 \log m)$$$ time using binary exponentiation.
// In the name of god
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
constexpr ll N = 50 + 1;
constexpr ll M = 1000000000000000LL;
constexpr ll mod = 1000000007;
constexpr ll inv2 = (mod+1)>>1;
constexpr ll inv4 = inv2>>1;
constexpr ll inv8 = inv4>>1;
constexpr ll inv16 = (inv8+mod)>>1;
ll n, m;
struct DP {
    ll dp[N][2][2][N][2][2] = {};
};
inline DP upd(const DP& l, const DP& r) {
    DP lr;
    for(int li = 0; li <= n; li++)
        for(int lh = 0; lh < 2; lh++)
            for(int lk = 0; lk < 2; lk++)
                for(int ri = li; ri <= n; ri++)
                    for(int rh = 0; rh < 2; rh++)
                        for(int rk = 0; rk < 2; rk++)
                            for(int mi = li; mi <= ri; mi++) {
                                for(int mh = 0; mh < 2; mh++)
                                    for(int mk = 0; mk < 2; mk++)
                                        lr.dp[li][lh][lk][ri][rh][rk] += l.dp[li][lh][lk][mi][mh][mk]*r.dp[mi][mh][mk][ri][rh][rk];
                                lr.dp[li][lh][lk][ri][rh][rk] %= mod;
                            }
    return lr;
}
inline DP Identity() {
    DP I;
    for(int i = 0; i <= n; i++)   // include the terminal row i = n
        for(int j = 0; j < 2; j++)
            for(int k = 0; k < 2; k++)
                I.dp[i][j][k][i][j][k] = 1;
    return I;
}
inline DP Pow(DP a, ll b) {
    DP res = Identity();
    while(b) {
        if(b&1) res = upd(res, a);
        a = upd(a, a);
        b >>= 1;
    }
    return res;
}
inline ll Pow(ll a, ll b) {
    ll res = 1LL;
    while(b) {
        if(b&1) res = res*a%mod;
        a = a*a%mod;
        b >>= 1;
    }
    return res;
}
int main() {
    int t;
    cin >> t;
    while(t--) {
        cin >> n >> m;
        DP col;
        col.dp[n][0][0][n][0][0] = col.dp[n][0][1][n][0][1] = col.dp[n][1][0][n][1][0] = col.dp[n][1][1][n][1][1] = 1;
        for(int i = (int)n-1; i >= 0; i--) {
            for(int j = i+1; j <= n; j++)
                for(int k = 0; k < 2; k++)
                    for(int l = 0; l < 2; l++) {
                        col.dp[i][0][1][j][k][l] = col.dp[i][0][0][j][k][l] = (col.dp[i+1][0][0][j][k][l]*3LL*inv16 + col.dp[i+1][0][1][j][k][l]*3LL*inv8)%mod;
                        col.dp[i][1][1][j][k][l] = (col.dp[i+1][0][0][j][k][l]*inv4 + col.dp[i+1][0][1][j][k][l]*inv2)%mod;
                    }
            col.dp[i][0][1][i][1][0] = inv16;
            col.dp[i][0][1][i][1][1] = 5LL*inv16%mod;
            col.dp[i][1][0][i][1][0] = 3LL*inv16%mod;
            col.dp[i][1][0][i][1][1] = 3LL*inv8%mod;
            col.dp[i][1][1][i][1][1] = 3LL*inv16%mod;
        }
        DP dpnm = Pow(col, m);
        ll ans = 0;
        for(int i = 0; i <= n; i++)
            for(int j = 0; j < 2; j++)
                for(int k = 0; k < 2; k++)
                    ans = (ans + dpnm.dp[0][0][0][i][j][k]*inv4 + dpnm.dp[0][0][1][i][j][k]*3LL*inv4)%mod;
        cout << (1LL-ans+mod)*Pow(4LL, 1LL*(n+1LL)*(m+1LL))%mod << '\n';
    }
    return 0;
}
2180G - Balance
Idea: MohammadParsaElahimanesh, AmirrzwM, Shayan — Preparation: AmirrzwM
First, note that due to the structure of the queries, the array $$$a$$$ is always a palindrome.
Let $$$n$$$ be the current length of the array. Define $$$c_{i,m,k}$$$ as the number of subsequences of $$$a$$$ of length $$$m$$$ in which $$$a_i$$$ appears as the $$$k$$$-th element. Then the total sum of balances over all non-empty subsequences is equal to
By symmetry of the palindrome, we have $$$c_{i,m,k} = c_{\,n+1-i,\,m,\,m+1-k}$$$.
Using this symmetry, we can write
After rearranging terms, this implies that the total balance can be decomposed into two parts:
- the sum of all elements over all non-empty subsequences,
- plus the sum of averages of all non-empty subsequences.
The first part is straightforward. Each element $$$a_i$$$ appears in exactly $$$2^{n-1}$$$ non-empty subsequences, hence this part equals $$$2^{n-1}\sum_{i=1}^{n} a_i$$$.
For the second part, note that $$$a_i$$$ contributes to subsequences of length $$$k$$$ exactly $$$\binom{n-1}{k-1}$$$ times, and in such subsequences its coefficient in the average is $$$\frac{1}{k}$$$. Therefore, its contribution to this part is $$$a_i \sum_{k=1}^{n} \binom{n-1}{k-1}\frac{1}{k}$$$.
It is easy to verify that $$$\sum_{k=1}^{n} \binom{n-1}{k-1}\frac{1}{k} = \frac{2^n-1}{n}$$$, since $$$\binom{n-1}{k-1}\frac{1}{k} = \frac{1}{n}\binom{n}{k}$$$.
Hence the second part equals $$$\frac{2^n-1}{n}\sum_{i=1}^{n} a_i$$$.
Combining both parts, if we denote $$$S = \sum_{i=1}^{n} a_i$$$,
the answer to a type $$$3$$$ query is $$$\frac{S}{2}\left(2^{n-1} + \frac{2^n-1}{n}\right)$$$, exactly as computed in the implementation below.
Thus, it is sufficient to maintain the values of $$$n$$$ and $$$S$$$. Since all computations are done modulo $$$10^9+7$$$, we maintain:
- $$$S \bmod (10^9+7)$$$,
- $$$n \bmod (10^9+7)$$$,
- $$$n \bmod (10^9+6)$$$ (for computing powers of $$$2$$$).
Updating $$$S$$$ and $$$n$$$ for queries of type $$$2$$$ is trivial. Updating $$$n$$$ for queries of type $$$1$$$ is also straightforward. The only non-trivial part is determining which element is removed in a type $$$1$$$ query in order to update $$$S$$$.
To handle this, consider a type $$$2$$$ query inserting a value $$$x$$$. There are two cases.
- If the current length of $$$a$$$ is odd ($$$2n+1$$$), after insertion the array becomes
The first time the middle element is removed, it is the original middle element of $$$a$$$. The next two times, the removed element is $$$x$$$. After that, the situation is equivalent to applying the same process to the array obtained by removing the middle element of $$$a$$$.
- If the current length of $$$a$$$ is even ($$$2n$$$), after insertion the array becomes
The first removed middle element is $$$x$$$, the second one is the original middle element of $$$a$$$ (namely $$$a_{n+1}$$$), and afterwards the situation again reduces to the same process on the array with its middle element removed.
Using this observation, we maintain a list of all values added by type $$$2$$$ queries, in order. For each such value we store an integer state in $$$\{0,1,2,3\}$$$. When a new value $$$x$$$ is inserted:
- if the current length is odd, we assign it state $$$3$$$,
- otherwise, we assign it state $$$1$$$.
When processing a type $$$1$$$ query, we traverse this list from the end. For each element:
- if its state is $$$0$$$ or $$$1$$$, this element is the current middle element; we update its state to $$$(\text{state}+1)\bmod 4$$$ and return its value,
- otherwise, we only update its state to $$$(\text{state}+1)\bmod 4$$$ and continue.
This procedure finds the middle element in $$$O(1)$$$ amortized time per query.
Therefore, the overall time complexity of the solution is $$$O(q \log M)$$$.
#include <bits/stdc++.h>
using namespace std;
long long pw(long long a, long long b, long long mod) {
    long long ret = 1;
    long long mul = a;
    while(b > 0) {
        if(b&1)
            ret = (ret * mul) % mod;
        mul = (mul * mul) % mod;
        b >>= 1;
    }
    return ret;
}
const int mod = 1e9 + 7;
int q;
int sum = 0, lenm = 0, lenm1 = 0, len2 = 0;
vector<pair<int,int>> adds; // (value, state): states 0,1 = current middle, 2,3 = pass over
int get(int id, int del = 1) {
    assert(id >= 0);
    if(adds[id].second < 2) {
        if(del) adds[id].second = (adds[id].second + 1) & 3;
        return adds[id].first;
    } else {
        if(del) adds[id].second = (adds[id].second + 1) & 3;
        return get(id-1, del);
    }
}
int main() {
    ios::sync_with_stdio(false), cin.tie(0);
    int tt;
    cin >> tt;
    while(tt--) {
        cin >> q;
        sum = 0, lenm = 0, lenm1 = 0, len2 = 0;
        adds.clear();
        while(q--) {
            int tp;
            cin >> tp;
            if(tp == 1) {
                sum = (sum - get(adds.size()-1, 1) + mod) % mod;
                lenm = (lenm - 1 + mod) % mod;
                lenm1 = (lenm1 - 1 + (mod-1)) % (mod-1);
                len2 ^= 1;
            }
            else if(tp == 2) {
                int x;
                cin >> x;
                if(len2 == 1) adds.push_back({x, 3});
                else adds.push_back({x, 1});
                sum = (sum + 1ll * (lenm + 1) * x) % mod;
                lenm = (2ll * lenm + 1) % mod;      // long long to avoid int overflow
                lenm1 = (2ll * lenm1 + 1) % (mod-1);
                len2 = 1;
            }
            else if(tp == 3) {
                int ans = (pw(2, (lenm1 - 1 + (mod-1)) % (mod-1), mod) + 1ll * (pw(2, lenm1, mod) - 1) * pw(lenm, mod-2, mod)) % mod;
                ans = (1ll * ans * sum) % mod;
                ans = (1ll * ans * pw(2, mod-2, mod)) % mod;
                cout << ans << endl;
            }
        }
    }
    return 0;
}
2180H1 - Bug Is Feature (Unconditional Version)
Idea: Akulyat — Preparation: MohammadParsaElahimanesh
Step 1: reduce to independent series.
First, we can split the whole match into independent series $$$(a_i,b_i,c_i,[l_i,r_i])$$$. Since moves never mix two different games, by the Sprague–Grundy theorem we can compute the Grundy number contributed by each series and XOR them all together to decide the winner.
Step 2: normalize one game.
Fix one game with upper bound $$$x$$$. Shift everything by $$$a$$$: the triple becomes $$$(0,\ b-a,\ c-a)$$$ with upper bound $$$x-a$$$.
Because $$$a,b,c$$$ are an arithmetic progression, we have $$$c-a=2(b-a)$$$. Put $$$d=b-a$$$ and $$$y=x-a$$$, so the state starts as $$$(0,\ d,\ 2d)$$$ with upper bound $$$y$$$.
Any move keeps the three numbers in an arithmetic progression (after reordering). In this normalized form, every legal move transforms $$$(0,\ d,\ 2d,\ y)$$$ into exactly one of the following forms:
- $$$(d,\ 2d,\ 3d,\ y)$$$,
- $$$(0,\ 2d,\ 4d,\ y)$$$,
- $$$(d,\ \tfrac{3d}{2},\ 2d,\ y)$$$ when $$$d$$$ is even.
Step 3: parameters $$$(t,d)$$$ and then $$$(q,k)$$$.
It is convenient to measure the remaining room under $$$y$$$: let $$$t = y - 2d$$$ (equivalently, $$$t = x - c$$$).
Then the game state is represented by $$$(t,d)$$$.
Write $$$d = D \cdot 2^k$$$ with $$$D$$$ odd,
and define $$$q = \left\lfloor \frac{t}{D} \right\rfloor$$$.
After this, the game depends only on $$$(q,k)$$$. We denote its Grundy number by $$$g(q,k)$$$.
Step 4: large ranges collapse to a few ranges of $$$q$$$.
In a fixed series, $$$d$$$ is fixed, hence $$$k$$$ and $$$D$$$ are fixed. As $$$x$$$ runs through $$$[l_i,r_i]$$$, the value $$$t=x-c_i$$$ runs through an interval $$$[L,R]$$$ where $$$L=l_i-c_i$$$ and $$$R=r_i-c_i$$$, and we look at $$$q = \left\lfloor \frac{t}{D} \right\rfloor$$$.
This $$$q$$$ stays constant on blocks of consecutive $$$D$$$ values, and the full blocks have length $$$D$$$. Since $$$D$$$ is odd, the contribution of the whole interval $$$t\in[L,R]$$$ can be reduced to:
- one consecutive interval of $$$q$$$ values corresponding to full blocks,
- at most two boundary blocks (the beginning and the end), treated separately.
Step 5: recursion for $$$g(q,k)$$$.
From the move description on $$$(q,k)$$$, we can write $$$g(q,k)=\mathrm{mex}\Big(g(q-2^k,\,k),\ g(q,\,k-1),\ g(q-2^{k+1},\,k+1)\Big)$$$, where the three options come from the shift, halving, and doubling moves respectively (an option is absent when its parameters are out of range).
Two facts make it easy to work with:
the third term is never needed, so $$$g(q,k)=\mathrm{mex}\Big(g(q-2^k,k),\ g(q,k-1)\Big)$$$.
for fixed $$$k$$$, the sequence $$$g(0,k),g(1,k),\ldots$$$ is periodic with period $$$2^{k+1}$$$.
Using the first fact, we get a simple split:
- if $$$q \lt 2^k$$$, then $$$g(q-2^k,k)$$$ does not exist, so $$$g(q,k)=\mathrm{mex}\Big({g(q,k-1)}\Big)$$$.
- if $$$q\ge 2^k$$$, then $$$g(q,k)=\mathrm{mex}\Big({g(q,k-1),\ g(q-2^k,k)}\Big)$$$.
Step 6: reduce interval XOR to prefix XOR.
In each series we need the XOR of $$$g(q,k)$$$ over a consecutive interval of $$$q$$$ values, say $$$\bigoplus_{q=L'}^{R'} g(q,k)$$$.
Define the prefix XOR $$$f(u,k)=\bigoplus_{q=0}^{u} g(q,k)$$$.
Then $$$\bigoplus_{q=L'}^{R'} g(q,k) = f(L'-1,k)\oplus f(R',k)$$$, with $$$f(-1,k)=0$$$.
Step 7: compute $$$f(u,k)$$$ as a small tuple.
We save $$$f(u,k)$$$ as a tuple $$$(p_0,p_1,p_2)$$$,
where $$$p_i=1$$$ means that the value $$$i$$$ appears an odd number of times among $$$g(0,k),\ldots,g(u,k)$$$.
We first precompute the XOR over one full period, i.e. $$$f(2^{k+1}-1,\,k)$$$.
Since the period is $$$2^{k+1}$$$, two full consecutive periods cancel in XOR, so the XOR over length $$$2^{k+2}$$$ becomes $$$(0,0,0)$$$. Therefore, for a general $$$u$$$ we reduce it to the remainder inside a single period, and then compute it by recursion on $$$k$$$ from the formulas in Step 5. This makes computing $$$f(u,k)$$$ fast, in $$$O(k)=O(\log u)$$$ time.
// In the name of God
#include <iostream>
using namespace std;
#pragma GCC optimize("O3,unroll-loops,no-stack-protector")
#pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
typedef long long ll;
inline ll lg2(ll x) {return 63 - __builtin_clzll(x);}
constexpr inline ll period(ll k) {return 1LL << (k+1);}
constexpr inline ll half_period(ll k) {return 1LL << k;}
inline ll equivalent(ll x, ll k) {return x & (period(k)-1);}
inline bool count_period_parity(ll x, ll k) {return (x >> (k+1)) & 1;}
struct node {
    int u;
    node(bool zero, bool one, bool two) {
        u = ((int) zero << 0) + ((int) one << 1) + ((int) two << 2);
    }
    node(int x) {
        u = x;
    }
    bool has(int i) const {return u & (1 << i);}
    node Mex() const {
        return node(has(1) ^ has(2), has(0), false);
    }
    node MexMex() const {
        return node(false, has(2), has(0) ^ has(1));
    }
    node operator^(const node other) const {
        return node(u ^ other.u);
    }
    void operator^=(const node other) {
        u ^= other.u;
    }
    int get() const {
        return (u >> 1);
    }
};
const node full_period(true, true, false);
const node full_period_mex = full_period.Mex();
const node zero_node(false, false, false);
const node zero(true, false, false), one(false, true, false), two(false, false, true);
node grundy(ll x, ll k) {
    if(k)
        return equivalent(x, k) < half_period(k) ? grundy(x, k-1).Mex() : grundy(x, k-1).MexMex();
    return x&1 ? one : zero;
}
node prefix_grundy(ll r, ll k) {
    node period_effect = (count_period_parity(r, k) ? full_period : zero_node);
    r = equivalent(r, k);
    if(k == 0)
        return (r ? full_period : zero) ^ period_effect;
    if(r < half_period(k))
        return prefix_grundy(r, k-1).Mex() ^ period_effect;
    return prefix_grundy(r - half_period(k), k-1).MexMex() ^ full_period_mex ^ period_effect;
}
inline node grundy(ll l, ll r, ll k) {
    return (l ? prefix_grundy(l-1, k) : zero_node) ^ prefix_grundy(r, k);
}
int main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    int n;
    int t;
    cin >> t;
    while(t--) {
        cin >> n;
        node ans(0);
        for(ll a, b, c, l, r, lp, rp, d, k, i = 0; i < n; i++) {
            cin >> a >> b >> c >> l >> r;
            b -= a, c -= a, l -= a, r -= a, a = 0;
            l -= c, r -= c;
            k = lg2(b & -b), d = b >> k;
            if(l / d == r / d) {
                if((r - l + 1) & 1)
                    ans ^= grundy(l / d, k);
            } else {
                lp = (l+d-1) / d, rp = r / d;
                if(lp < rp) ans ^= grundy(lp, rp-1, k);
                if((lp*d-l)&1) ans ^= grundy(l / d, k);
                if((r-rp*d+1)&1) ans ^= grundy(r / d, k);
            }
        }
        cout << (ans.get() ? "Bug" : "Feature") << '\n';
    }
    return 0;
}
// Thank God
2180H2 - Bug Is Feature (Conditional Version)
Idea: MohammadParsaElahimanesh, AmirrzwM, Shayan — Preparation: MohammadParsaElahimanesh
Step 1: Game Transformation.
Convert each game with $$$a_i \lt b_i \lt c_i \le x$$$ to the normalized game $$$0 \lt 1 \lt 2 \le x'$$$, where $$$x' = \left\lfloor \frac{x - a_i}{b_i - a_i} \right\rfloor$$$.
The interval $$$[l_i, r_i]$$$ transforms to $$$[L_i, R_i] = \left[\left\lfloor \frac{l_i - a_i}{b_i - a_i} \right\rfloor,\ \left\lfloor \frac{r_i - a_i}{b_i - a_i} \right\rfloor\right]$$$.
Note that each $$$x' \in [L_i, R_i]$$$ corresponds to multiple original $$$x$$$ values (the number of originals can change near the interval endpoints). From state $$$x'$$$, a player can move either to $$$x'-1$$$ or to $$$\left\lfloor \frac{x'}{2} \right\rfloor$$$. The game ends when $$$x'=2$$$, and the player who makes the last move wins.
Step 2: Grundy Numbers.
Define $$$g(x)$$$ for state $$$0 \lt 1 \lt 2 \le x$$$:
- Base cases: $$$g(2) = 0$$$ and $$$g(3) = 1$$$.
- Recursive rule ($$$x \ge 4$$$): $$$g(x) = \mathrm{mex}\big(g(x-1),\ g(\lfloor x/2 \rfloor)\big)$$$.
Key properties (proven by induction):
- $$$g(x) = 0$$$ $$$\iff$$$ $$$x$$$ has an odd number of trailing zeros in binary
- Non-zero values of $$$g(x)$$$ alternate $$$1 \leftrightarrow 2$$$ sequentially
Step 3: Counting Grundy Values.
For any interval $$$[l, r]$$$:
Count numbers with $$$g(x) = 0$$$:
- Define $$$C(y) =$$$ count of $$$\{x \le y \mid g(x) = 0\}$$$
- Loop through bits to compute $$$C(y)$$$ in $$$O(\log y)$$$
- Zeros in $$$[l, r]$$$: $$$C(r) - C(l-1)$$$
Count $$$1$$$'s and $$$2$$$'s similarly using $$$C(y)$$$ and alternation of $$$1$$$ and $$$2$$$.
Step 4: Final Computation.
For each transformed interval $$$[L_i, R_i]$$$:
- Compute XOR of all $$$g(x)$$$ in $$$[L_i, R_i]$$$
- Aggregate XOR across all $$$n$$$ queries
- Total XOR $$$\neq 0 \implies$$$ Bug wins; else Feature wins
// In the name of God
#include <iostream>
using namespace std;
typedef long long ll;
inline ll count_zeros(ll x) {
    int u = (x&2)>>1;
    ll cnt = u;
    while (x >>= 2) {
        u = (x&2)>>1;
        cnt += x+u;
    }
    return cnt;
}
// normal grundy function
/*
inline int grundy(ll x) {
    if((__builtin_ctzll(x)&1) == 1) return 0;
    return 1+((x-count_zeros(x))&1);
}
*/
// magical function
inline int grundy(ll x) {
    if((__builtin_ctzll(x)&1) == 1) return 0;
    return 1+(__builtin_popcountll(x)&1);
}
inline int grundy(ll l, ll r) {
    ll zel = count_zeros(l-1), zer = count_zeros(r);
    ll onl = (l-1-zel)/2, onr = (r-zer)/2;
    ll twl = (l-2-zel)/2, twr = (r-1-zer)/2;
    return ((onr-onl)&1) ^ (((twr-twl)&1)<<1);
}
int main() {
    ios_base::sync_with_stdio(false), cin.tie(0);
    int n, ans;
    int t;
    cin >> t;
    while(t--) {
        cin >> n;
        ans = 0;
        for(ll a, b, c, l, r, lp, rp, i = 0; i < n; i++) {
            cin >> a >> b >> c >> l >> r;
            b -= a, c -= a, l -= a, r -= a, a = 0;
            if(l / b == r / b) {
                if((r - l + 1)&1)
                    ans ^= grundy(l / b);
            } else {
                lp = (l+b-1)/b, rp = r/b;
                if(b&1) ans ^= grundy(lp, rp-1);
                if((lp*b-l)&1) ans ^= grundy(l/b);
                if((r-rp*b+1)&1) ans ^= grundy(rp);
            }
        }
        cout << (ans ? "Bug" : "Feature") << '\n';
    }
    return 0;
}
first.
second
We need more contests like this; I loved today's. I hope the authors also prepare some similar problems for practice.
I'm probably wrong, but among the Div. 2 contests I've followed, this one had the lowest number of people solving problem C. X_X
nice! problem C was hard
C should be in the place of F
C gave wrong example
The way this is laid out looks like it was AI generated (as is every repovive editorial). Please tell me that is not the case...
(really what should i have expected from shayan)...
bro you had one job
ok so i put the edi for C in gptzero
and uh
"We are uncertain about this document. If we had to classify it, it would be considered AI generated Probability breakdown 49% AI generated 2% Mixed 49% Human"
atp youre just ragebaiting bro
WAforces
After removing these disks, we can repeat the same process on the remaining ones. It can be shown that this procedure finds the correct answer and runs in O(n) time.
Then show it.
yea the prefix part is super vague it did work in the contest but I really want the proof.
If there are both a larger one and a smaller one (say the chain breaks twice, once ending at 9 and once at 8), then we see that:
the first one has answer 7 on 1-10, and we can set r_10 -> 0, so it will be 7+(11-n);
the second one has answer 6 on 1-9, so it will be 6+(10-n) <= 6+(10-11)+(11-n) <= 7+(11-n).
Proof ends.
Note that (a-b) means the answer from a to b (e.g.: (1-2)=1, (5-6)=0 or 1, (1-n) = the answer to output).
What kind of crazy problems are these?
354224877 can someone help me figure out why is this wrong ?
Please help in problem C, If n is even then why isn't this solution working?
Not able to find the issue in this
My solution — https://mirror.codeforces.com/contest/2180/submission/354177090
yuhh... but why tho?
I have understood it intuitively like this:
We are building the numbers from the highest bit to the lowest. The restriction that $$$a_i \lt n$$$ is annoying, but basically only matters for $$$a_i$$$'s where the bits don't match with $$$n$$$. So each time we're forced to place a $$$0$$$, we place it for $$$a_i$$$'s where the bits match with $$$n$$$ so far, this is exactly what the editorial is calling as "tight". Once $$$a_i$$$ no longer matches with $$$n$$$, we can place 1s even where $$$n$$$ has the bit as 0.
The issue with OP's solution is the following: you're only placing the extra 0s at the last 2 numbers. The "give 1 to y and 0 to x" is the right idea, so that from now on we can add 1s to both x and y after that point, but what about the remaining $$$k-2$$$ numbers? For cases where the bit is set to 1 in $$$n$$$ but we're forced to add a 0, adding 0 to these $$$k-2$$$ numbers would be useful too. But we're fixing them as 1 in the wrong solution and always giving the 0 to y which is wasteful in a sense. It would be more useful if the 0 is given somewhere else, so that later on we can set 1 even where n is set as 0.
Hence keeping as many numbers as loose is useful.
Also in general, when you have a greedy strategy, your question shouldn't be "why doesn't it work". Do you have a proof, or a clear reasoning as to why it's correct? It's on the solver to prove the greedy idea. Realistically you can't formally prove ideas in a contest, but strong reasoning must be there as to why it's correct.
Here the solution with loose and tight $$$a_i$$$'s works because it is optimal to keep as many bits set as possible when moving from MSB to LSB. If we keep a 0 where we could have kept a 1, the only gain is in the rest of the bits of that number, which is not good enough to be better than the loss we got from a higher bit. And once we follow this strategy, we also want to keep as many numbers "loose" as possible.
This in my opinion is reasoning you can realistically come up with in a contest as to why the editorial's greedy idea is correct. (Not that it's easy to come up with this, I myself didn't lol)
Really helpful , thanks.
what would the answer be for this testcase
15 23 27 29
now i feel bad
Correct me if I'm wrong but isn't [30,30,27,29] gives maximum sum?
29 ^ 27 is not equal to 30
my bad, i'm sorry. i assumed things without checking
Taking k-2 n's is not optimal. Starting from the MSB, for each set bit give k-1 elements a 1 and one element a 0 at that position; for the next set bit, choose a different element to receive the 0. This way, when n has a 0 bit, we can give an even number of 1's to some of the elements, in which case the sum of the numbers will be larger. Basically, you are only allocating two 1's (in x and y) for each unset bit of n, but it is possible that more than two 1's can be allocated.
I also implemented it in a similar way; it is only correct when k = 2. Our assumption of taking k-2 n's is wrong; we just have to extend the logic we implemented for k=2 to general k.
Well, I tried the same approach in the contest, and I was as confused as you about how it didn't work until someone showed me this: (I'm going to give the examples in binary so that it looks immediately correct)
I see that you tried exploiting the second 1 that you encounter in n to obtain a greater sum. That was my idea, too. But why not try to exploit the 1's as much as possible? So, lesson I learned: think a little deeper, and get a stronger reasoning.
thanks buddy that was much needed
try n=6 k=4... You will get the idea.
you know, the most difficult part of this problem is recognizing that taking k-2 n's isn't good.
I wish there was a clearer explanation for Problem C. The notation in this tutorial is quite confusing: what does T = 1,…,k even mean? The definitions of T and L don't match the explanations at all, and the reasoning doesn't correspond to the code either.
I tried to write down my approach to Problem C, it is a bit hard to explain in words.
The odd case is trivial.
For the even case, observe that in order for the XOR-sum to equal $$$N$$$, at every $$$j$$$-th bit where $$$N$$$ is 1, $$$K-1$$$ numbers in the answer must have the $$$j$$$-th bit set to 1, and one must have it as 0. Let's call such bit positions 1-bits.
For the bits where $$$N$$$ is 0, ideally we want to set that bit to 1 in all $$$K$$$ output numbers, but this is not always possible because our output numbers might become larger than $$$N$$$.
So, we can take advantage of the fact that at every 1-bit of $$$N$$$, we get an opportunity to set that bit to 0 in a particular number; from then onward that number is strictly smaller than $$$N$$$, so its further 0-bits can be turned on to maximize the sum. (Note that only an even number of such 0-bits may be flipped on, though.)
The key is to choose a different number each time we turn off a 1-bit, and not only one number, which is the trap.
Time complexity: $$$O(K + (\log N)^2)$$$.
you nailed it
can you explain why
(Note that only even number of such 0 must be flipped on, though)
For the 0-bit positions, we want the final XOR-sum to be equal to $$$0$$$ at those positions. Thus at each such bit position, we should flip an even number of the output numbers (as many as we can without the numbers exceeding $$$N$$$, of course).
Best explanation of the solution to problem C I've seen so far. Thanks bro
Amazing explanation. Thank you!
instead of doing the ChatGPT they should've asked you to make the editorial, such a good explanation compared to the absolute shit of an editorial to such a good problem.
good explanation! thanks!
I would like to share a solution for pE using the segment tree insertion function.
Similar to observations made in other solutions, we can see that for the same high-bit, if the count of 0s does not equal the count of 1s, swapping is definitely not possible. Conversely, if they are equal, swapping has no impact.
We can use a divide-and-conquer approach to maintain the counts of 0s and 1s. Specifically, if the range [l,r] covers the entire interval, the feasibility of this bit remains unaffected, because the counts of 0s and 1s in this interval are identical. Therefore, we can ignore the coverage relationship, much like in a segment tree, and maintain the solution for this problem using a similar method. The code is as follows:
https://mirror.codeforces.com/contest/2180/submission/354220593
Would you mind elaborating more on that part of "count of 0s not being equal to count of 1s"? I haven't found any other solutions, and since there is no editorial yet, I can't understand what is that count. What do you mean by "swapping"?
I think they mean if $$$t$$$ is a valid number (i.e. we can use it), then going over each of its bits, if count of numbers from $$$l$$$ to $$$r$$$ with 0 at this bit is not equal to count to numbers with 1, then $$$t$$$ cannot have 1 at this bit because that will map numbers with 0 to numbers with 1 (at that bit). However, their number is different, so that is wrong.
Oh, ok, I get it now, thanks. Now I will see if I can understand the segment tree part.
That's very good insight. Great solution!
According to CList, problem C was roughly a 1900 rated problem. I solved A-C and my rating only went to 1587 from 1561. Why?
https://mirror.codeforces.com/problemset/problem/632/C
Task B has appeared before in edu round.
That problem says that concatenation can be done in arbitrary order. In this round's problem the order can't be arbitrary. E.g., with strings "a", "b" and "c", you can get "acb" there, but not in this round's problem.
Sorry, I meant it is similar to this task, because it's based on the same principle. My bad. After seeing B I remembered this task.
Waiting for the proof of D
Added.
AmirrzwM Provide the proof for the greedy please, it can't be that hard that you need to omit it.
Added.
"the special editorial"... why is GPT content so much praised?
If you're like me and keep on misreading statements, you might be interested in the variation of problem F where walls have infinite length (in one direction). I believe I found a solution :
We calculate dp[i][j] = number of ways to orient the top-left corner of the grid such that the car is falling in column j on row i. We orient only the top-left corner (first j columns of first i rows) because those are the only points for which there will not be any additional constraint.
The transitions are: the car falls on row i at position k, and falls to row i + 1 at position j >= k. We can figure out how many orientations we get (points on rows < i + 1 and columns between k and j should point up or right, and on row i + 1, the first j points should point left or down, or up for column 0); the j-th point should definitely point left if j != k, and the rest will be fixed by later transitions.
With that, we can get the number of ways to make the car fall off the grid at row n and column j, and multiply by the points that we have not yet fixed.
As for the number of ways to make the car exit the grid to the right, we can count it on each row : if the car is falling on row i at column j, and exits the grid to the right on row i, then all points on rows < i + 1 and columns >= j should point up or right, and there should be no holes on row i from column j onward and no walls going up. So we have $$$4 * 3^m$$$ possibilities, from which we must substract ways that leave a hole : there are $$$3 * 2^m$$$ ways to leave a hole at column j, and $$$3 * 2^{m - 1}$$$ ways to leave the first hole at column $$$k \gt j$$$ (again by looking at constraints), so we can find the answer.
Finally we get an $$$O(nm^2)$$$ dp which can easily be translated to an $$$O(nm)$$$ dp with some prefix sums, since transitions are very similar
I believe this code should work (I checked $$$(n, m) = (1, 1), (1, 2), (2, 1)$$$)
Orz orz orz
How to prove greedy in D?
If a circle with center at i th position is connected to the circle with center at the (i + 1)th position then let it be represented by 1.
So, a binary string of length n - 1 will be formed.
Example 0011100111
So the string's ith character represents whether the circle at center i is connected to the circle at center i + 1.
So our greedy strategy is to start from the ith position and go on until we are able to form a
continuous line of connected circles.
So let the ith character be the first position where the optimal and greedy strategies differ
1111011(Greedy) 1110111(Optimal)
Now, since the character at i + 2 depends only on the (i+1)th character and not on the ith character, we can create an optimal solution starting from the (i+2)th character without depending on the first i characters of the greedy
Sorry for the bad explanation but hope you get the idea.
Brilliant explanation!
Thanks
How did you solve the problem C?
Old-school method: neurons firing, synapses sweating! Do try it once :)
What's the reason for the 5000 limit in problem F1? It doesn't allow declaring an n*n*2*2 array or using recursion. Declaring an n*n array to precalculate powers of 4 already takes 500MB out of 512MB. So frustrating... Why isn't N 1000? The idea doesn't change at all.
I also agree that the memory limit only adds a slight difficulty to the problem, which just requires you to do the queries offline, but I can say that they probably wanted the problem to be a bit harder since it is F1. Btw, you should not use an $$$n^2$$$ array to precalculate the powers of 4; it is fast enough to use $$$O(\log(nm))$$$ fast exponentiation.
Making the problem harder with ridiculous limits is so disgusting. There's absolutely no reason to give N=5000. Look at the solutions that didn't toggle: they use 400MB or 500MB, so just give 1024MB. This has to be the worst limit I've seen on CF after that 120^5 problem. Also, for gevak: you can save 4^0~4^m and 4^(m*i) (i<=n) to make the memory smaller (which I couldn't come up with during the contest, so I was stuck at 600MB).
Interesting idea about memoizing 4^0~4^m, 4^(m*i) (i<=n)!
Finally managed to get AC, storing all n*n*2*2 states and all n*n powers of 4: https://mirror.codeforces.com/contest/2180/submission/354268056
int dp[5001][5001][2][2] stores answer for all states, which takes 400MB. int pow4[5001 * 5001] stores all powers of 4, which takes 100MB.
Thus 500MB in total. And this iterative version takes 3000ms time. Recursive version takes 6000ms, which is close, but just a bit over TL.
Interesting problem, but very frustrating N <= 5000 limit.
NICE! I still think the technique used in the tutorial solution is the intended one, but if they really wanted to go that way, the memory limit should have been much lower (like 128MB or even 64MB) instead of the normal 500MB.
Probably it was a cut point for some other solution.
I had implemented N * M * 32, which does not work.
Btw, declaring an array of $$$4 n^2$$$ is possible, if you use the int type. Together with a precalculated power array, this gives $$$5 n^2 \cdot 4 \text{byte} \approx 400 \text{MB}$$$. Recursion is not a good idea anyway for $$$O(n^2)$$$ with $$$n=5000$$$. (In the end because I needed $$$6n^2$$$ space I did do space-saving style DP so I didn't need the $$$4n^2$$$ of storage). I do agree that adjusting limits to either side would have been better. Lower, such that for example $$$n^2$$$ integers does not fit, makes sure that you have to do some memory optimization. Higher, just makes sure not too many approaches are on the edge of memory limit.
Here is my approach for C
for odd k , just print n 'k' times
for even k ,
My idea was to bring as many numbers (out of those k) as possible, and as soon as possible, into the category 'can have a 1 at a bit where n has a 0'.
Starting from the highest set bit in n, let the bit be 'b'. If b is 1:
we need an odd number of 1's (among those k numbers combined) at the b-th bit, so obviously, to maximize the sum, we want k-1 numbers to have this bit set. So make the 1st number of the list have a 0 bit here and all the rest a 1 bit. The next time we encounter a 1 in n, we make the second number of the list 0 at this bit and all the rest 1. So after encountering k 1's in n, we will have gone through all k numbers. Now all k numbers have been made < n, and from then on, whenever we encounter a 1 in n, you can make any of the k numbers 0, it doesn't matter.
And meanwhile, whenever we encounter a 0 in n, we make the first 2*(num/2) numbers of the list 1 here (where num is the count of numbers which have been made < n so far). Once num hits k, we can make all the numbers 1 here without violating the [0,n] condition.
is there any proof for any of this question? it will be better if you add proof of working or something that can we proof with that the solutions.
To ensure no overlap, $$$r_i + r_{i+1} \le d_i$$$. Thus $$$0 \lt r_{i+1} \le d_i - r_i$$$. Doesn't this mean $$$r_i \lt d_i$$$ (strict, for $$$i \lt n$$$)? In the editorial it is given as a non-strict inequality, so I'm confused.
In D, when we remove the disks from a prefix, the radius of the last center in the prefix will still affect the next set.
How do we guarantee that our choice of the radius of the first center in the second set will not cause it to overlap with the circle from the previous set?
Say the ith circle is the one that wasn't able to connect to the prefix. This means that even if it had its maximum possible radius (i.e., min(x[i+1]-x[i], x[i]-x[i-1]) it couldn't reach the (i-1)th circle. This also means that any radius you choose futurely (when you start a new "prefix component" at i) won't be affected by the (i-1)th radius, because it will never happen that the ith and (i-1)th circles touch.
Yeah, that's what I figured after some time. Thank you so much! Great explanation.
For G, you would end up with something $$$/(L-1)$$$ if you didn't realize that the sequence is palindrome. This is actually invalid when $$$L-1$$$ is a multiple of $$$10^9+7$$$. The authors seem not to have noticed this and it could pass systest in contest, but three out of five solutions in contest got hacked afterwards.
What's funny is that in maroon's code, he wrote something like $$$*(L-1)/(L-1)$$$, which could have caused him some trouble if the hack had existed in contest.
I am wondering how the solution works without realizing it is a palindrome, can you give me a brief explanation? Thanks.
The contribution for a single position $$$i$$$ (counting from backwards) is $$$2^{L-1}-i\times\frac{1+2^{L-1}(L-2)}{L(L-1)}$$$. This can be achieved by finding the pattern with brute force, or by some combinatorics calculation.
In the end we need to maintain the length, sum of $$$a_i$$$, sum of $$$i\times a_i$$$. The difficult part is to find out which element is deleted for every operation $$$1$$$. Break the sequence into two parts from the middle. For each part, we need to insert $$$x$$$ and delete the first (or last) element.
For every insertion operation, every $$$x$$$ inserted will be in positions $$$s,s+d,s+2d,s+3d,\cdots$$$, and a single operation either adds all $$$s$$$ by some value, or multiplies all $$$d$$$ by some value. Use a segment tree to maintain all $$$s,d$$$, and the element deleted every time is the one with minimum $$$s$$$.
I think E,F1,F2 and G are all standard problems with heavy implementation. What's the point of putting those problems in a 2.5h contest???
I completely agree that 3h is the better choice, but let me disagree about E: it’s only 12 lines. Also, I know finding the coefficient of $$$F$$$ may be boring, but the implementation is short. Either way, $$$G$$$ does not count as a heavy-implementation Codeforces problem in my opinion.
G definitely is a heavy-implementation Codeforces problem
Come on, look at the editorial's implementation: would you count it as heavy implementation? Its implementation is more technical than heavy, in my opinion.
Hi Kevin! Could you share your first thoughts on Problem E? Your solution is really elegant and I'm curious about your thought process.

When you first read Problem E, what was your initial intuition, and what was the first thing that came to your mind when you saw the XOR operation with the constraint that all frogs must stay within the range? I'd love to know how you developed this approach of checking each bit position individually. Did you start by working through small examples by hand and notice the pattern, or did you think about how XOR affects numbers at each bit level, like flipping within blocks, or maybe you had some key insight that led you directly to this solution?

The idea of checking whether the range is symmetric around the midpoint or covers complete blocks for each bit position is really clever, so I'm wondering how you came up with the specific conditions like r - mid == mid - l - 1 and r % i == i-1 && l % i == 0. I'm trying to improve my problem-solving skills, and understanding how others go from reading a problem to finding such a clean solution would be super helpful, so thanks for sharing!
Editorial for E?
There is a mistake in the F1 editorial's DP transitions.
For dp[i][j][0][1], the coefficient for dp[i][j+1][1][1] should be 5/16 instead.
Fixed, Thanks.
ok, I just realized that I didn't pass C because I wrote something like this:
memset(arr + 1, 0, sizeof(int) * n);
..........
The question yesterday was terrible. I hope I won't encounter it again in the future
It is super frustrating that I got stuck on problem C. My brain kept telling me that if k is even, just pick k-2 copies of n, and only optimize the rest.
There is a slight typo in F1. The transition equation for $$$dp[i][j][0][1]$$$ and $$$dp[i][j][1][1]$$$ are not what is in the implemented solution. These are the fixed equations:
Fixed, Thanks.
We are like Greece.
Persia won't stop until they invade our ratings.
can anyone tell what is wrong in my submisson for question C
https://mirror.codeforces.com/contest/2180/submission/354264435
In C, instead of maintaining 2 sets, I used mergeable treaps to maintain all numbers in sorted order and perform += on prefix
354170111
awesome contest!! loved it.. need more like it
The problem D matches a lot with https://mirror.codeforces.com/gym/106073/problem/I this problem
I tried solving D using a dp approach as follows:
dp[i][0] means we make a tangent pair between i and i-1
dp[i][1] means we skip this tangent pair. Although I understand why this approach is wrong, I was still wondering whether it can be modified into a valid solution.
Anyone thinking along the same lines, your help will be appreciated. Thanks
see peltorator's solution. I also tried that in the contest but then realised that dp[i][1] will not be possible, so you can't have that as a valid transition.
In problem C, I used a greedy approach that seems to require little thought. Is its correctness guaranteed?
code
Can somebody explain problem E?
In Problem D, I think it should be $$$0 \lt r_i \lt d_i$$$ instead of $$$0 \lt r_i \le d_i$$$?
I think so too. Fixed, thanks!
Can someone explain the bitwise approach for E?
Define $$$f(a,b,x,c,d,y,k)$$$ as when considering the low $$$k$$$ bits of $$$a,b,x,c,d,y$$$, the number of $$$t\in [0, 2^k)$$$ such that $$$\forall i\in[a,b]$$$, $$$i\oplus t\ge x$$$, and $$$\forall j\in[c,d]$$$, $$$j\oplus t\le y$$$.
We can see the answer is exactly $$$f(l,r,l,l,r,r,60)$$$.
Next, consider how to calculate $$$f(a,b,x,c,d,y,k)$$$, enumerating the $$$k$$$-th bit of $$$t$$$ from 0 to 1, and we can express $$$f(a,b,x,c,d,y,k)$$$ as the sum of several (possibly zero) $$$f(.,.,.,.,.,.,k-1)$$$.
Doing this with memoized search gives the correct time complexity, as when you write out all the possible values of the parameters, you will find $$$a,x,c$$$ will either be $$$0$$$ or $$$l$$$, and $$$b,d,y$$$ will either be $$$2^{60}-1$$$ or $$$r$$$.
Time complexity: $$$O(2^{6}\log r)$$$.
Sample code: 354930229
In problem D, don't we have to use the greedy approach from left as well as right, i.e. assuming the leftmost radius a free variable first and then assuming the rightmost radius to be the free variable? The point made in the editorial that each break starts a new fresh problem is still correct though.
The solution of C is confusing
why are editorials so trash lately? Why are you skipping steps of the solution? Why are you explaining in such complex language when you could simplify it without losing the idea?
can some1 explain the formula for A?
is F a problem?
Problem G is actually two completely independent problems. It can be split into two separate ones, with the first operation removed from one and the third operation removed from the other.
Can you explain why this solution doesn't work?
Problem D. "Also, note that once we set $$$i_{t+1}=f_{i_t}+1$$$, there are no additional constraints on the choice of $$$r_{i_{t+1}}$$$ coming from earlier indices. In particular, any restriction on $$$r_{i_{t+1}}$$$ can only be imposed by $$$i_{t+1}+1$$$ (otherwise $$$f_{i_t}$$$ would not have been maximal by definition)."
Could someone prove this??
i shd pack it up ong
trying to upsolve C but I am completely confused as to why my solution fails.
My logic: odd => all the terms will be n.
even => all the terms but the last 2 can be n; the last two will be numbers such that their xor is n and their sum is maximum (a+b = max and a^b = n).
1) I gave the MSB to a. 2) After that, if a bit of n is 0, I ask whether I can have that bit as 1 in a without exceeding n; if yes, I add it.
Then I calculated b = n^a. Can anyone please tell me what is wrong with my approach? https://mirror.codeforces.com/contest/2180/submission/354513423
Here is a counterexample:
Input: 30 4
Yours: 30 30 17 15
Answer: 29 27 23 15
Thanks a ton for this, turns out my assumption of taking k-2 n's was wrong.
What problem with C? Short statement, intresting, but simple solution. Don't understand all hate about it.
In the linear algebra approach of Problem E, when discussing the case $$$2^{n-1} \le r \lt 2^n$$$, I believe that $$$l$$$ and $$$r$$$ are implicitly preprocessed by factoring out their common binary prefix (i.e., shifting the interval to the block $$$b_0$$$). This was a bit confusing to me at first glance, as the preprocessing step is not stated explicitly.
Thanks, fixed.
Can anyone explain more why the basis vectors can all be of the form 2^n-1? I don’t really get the motivation here.
Proof:
Central Idea: We want to show that if a number with msb = m works, then the number with all bits on that are <= m works.
Lets define a block as a group of consecutive numbers such that any number in the group xorred with the xorring number is also in the group. It is clear that under any working xor, the range l to r will be partitioned into blocks.
Consider any number with an msb of m that works. Consider any block in l to r that is minimal (i.e that it is not itself containing more than one block). Note that any minimal block must be self contained in some interval that is 0 to 2^(m+1)-1 under mod 2^(m+1). We want to show that the number 2^(m+1)-1 also works for this block. Since 2^(m+1)-1 just reflects numbers about the midpoint, we just need to show that the block is centered at the midpoint.
Now, we are going to define the coordinate of a number in the modded interval by 0 to 2^(m+1)-1 in this way. [0,2^m-1] -> [-2^m, -1] and [2^m, 2^(m+1)-1] -> [1, 2^m].
Now let us take the time to note this Lemma of Equivalent Action. Define x' as the number with the opposite coordinate as x as defined above. Suppose that x -> x + inc when xorred with a number, then clearly x' -> x'-inc.
Now we show that any block is centered around the midpoint. Suppose that the block is not centered. Let the left end of the block be l and the right end be r (in the new coordinate system). Suppose wlog that |r| > |l| (they can't be equal since the block is not centered). Suppose that r goes to y under the xor. Note that the path from y to r must cross the midpoint. Then note that y' is in the block (since it has a positive coordinate less than r) and will go to -r (by the Lemma of Equivalent Action), so -r must be in the block. But that contradicts |r| > |l|, so the block is centered. If you can't quite see this, please draw a picture.
There are many bases; $$$2^n-1$$$ is only important when having one block.
The other approach published, I think it helps a lot.
yeah its just not clear why this always works without the proof though
Shit C
MohammadParsaElahimanesh Problem E bitwise approach not published till now
published! It was really hard :)
Thanks 😊 😊 😊
In my humble opinion, there's a small mistake in the solution of G: $$$\displaystyle\sum_{k=1}^n\dfrac{\binom{n-1}{k-1}}{k}$$$ doesn't equal $$$\dfrac{2^n}{n}$$$ but rather $$$\dfrac{2^n-1}{n}$$$. Here's a brief proof:
Note that
$$$\dfrac{1}{k}\dbinom{n-1}{k-1}=\dfrac{1}{n}\dbinom{n}{k},$$$
therefore, we have
$$$\displaystyle\sum_{k=1}^n\dfrac{\binom{n-1}{k-1}}{k}=\dfrac{1}{n}\sum_{k=1}^n\dbinom{n}{k}.$$$
Binomial Theorem shows that
$$$\displaystyle\sum_{k=0}^n\dbinom{n}{k}=2^n,$$$
so
$$$\displaystyle\sum_{k=1}^n\dbinom{n}{k}=2^n-1,$$$
therefore,
$$$\displaystyle\sum_{k=1}^n\dfrac{\binom{n-1}{k-1}}{k}=\dfrac{2^n-1}{n}.$$$
(apologies for my poor English)
MohammadParsaElahimanesh
Thanks a lot, I fixed it.
You're welcome.
And would you mind if I ask another question? Why is the complexity of problem G $$$O(q\log q)$$$, not $$$O(q)$$$ or $$$O(q\log V)$$$ (you can regard $$$V$$$ as $$$10^9+7$$$, for the complexity of fast exponentiation)? Thanks.
Fixed, Thanks.
I have a solution for G but can't figure out two corner cases. Can anyone help me?
I found the answer for query 3 is: $$$\sum\limits_{i=1}^na_i\times(i\times\frac{(n-2)\times2^{n-1}+1}{n\times(n-1)}+\frac{(n-2)\times2^{n-1}+1}{n\times(n-1)}+2^{n-1})$$$. So I only need $$$\sum a_i$$$ and $$$\sum i\times a_i$$$. If something is wrong, please point it out.
Then I have coded it and got WA on test 44 and 45. My code is here: 354965006. Only the main function is useful.
I found I was hacked by something like this: when $$$n=k\times\bmod$$$ or $$$n=k\times\bmod+1$$$, $$$\frac1n$$$ or $$$\frac1{n-1}$$$ is undefined. (Line 1231 of the code is wrong.) How to handle these two corner cases?
UPD: oops. The problem ensures $$$n\neq k\times\bmod$$$, but $$$\frac1{n-1}$$$ may still be undefined.
You only need to change $$$n-2$$$ to $$$(n-1) - 1$$$ and then cancel $$$n-1$$$ from numerator and the denominator.
There is a minor mistake in the code for problem F1 - in the input for loop (the first for loop) which is for(int n, m, i = 0; i < T; i++), it should be for(int n, m, i = 0; i < t; i++).
Thanks, fixed. It's interesting that it worked before.
Can't we solve C like this:
if k is odd, repeat the number k times;
if k is even, repeat n k-2 times, then make two numbers, say x and y: where n has a 0 (down to the second bit), make that bit 1 in both x and y; where n has a 1, make x's bit 1 and y's bit 0; for the last bit, make y's bit 1.
I can't get a mathematical proof or good intuition from the bitwise-method explanation of E for how the following mapping possibility was eliminated:
some of A->C and rest of A->A.
some of C->A and rest of C->C
B->B
We mentioned in the editorial why x cannot send one part of A to C.
I can't understand why "the third term is never needed" in step 5 of H1; maybe it can be proven by induction on the order of q?
Honestly, I can't prove it! Our coordinator did that, and I checked with brute force.
Thanks! Practice is the sole criterion for testing truth.
MohammadParsaElahimanesh I found the C problem really pretty, I appreciate your effort <3
Alternate stupid solution for F1:
Code: 360970649
$$$O(n^2)$$$ solution for F2 using Lagrange interpolation: https://mirror.codeforces.com/contest/2180/submission/368065239