You could tell that people didn't seem to like the contest very much. I'm sorry I screwed up again ¯\_(ツ)_/¯
If you're interested, I'd like to share some thoughts I have about this contest.
This time I tried my best to make the problems use more varied algorithms instead of just DP, but sadly 2115C - Gellyfish and Eternal Violet turned out to be much harder than expected.
But it seems that people are complaining more about the tight time limits, centered on 2116C - Gellyfish and Flaming Peony and 2115C - Gellyfish and Eternal Violet.
No one realized this before the contest, because none of the testers' code got TLE except solutions with the wrong time complexity.
I actually don't see a problem with the time limit of 2116C - Gellyfish and Flaming Peony. Some of you may be puzzled, but let me try to explain:
- The intended solution is $$$O(\sum n \max(a) + \max(a)^2)$$$ and does not contain a $$$\log \max(a)$$$ factor. Computing $$$\gcd(x, y)$$$ is not $$$O(1)$$$; if you don't preprocess it, it's not unusual to actually get TLE.
- I believe the vast majority of people who got TLE did so for this reason, and you might argue that it's actually hard to tell whether the constant is too large or the time complexity is wrong. I would say that normally, if the setter expects $$$O(n^2 \log n)$$$ to pass, they would set $$$n \leq 2000$$$ instead of $$$n \leq 5000$$$.
- I'll admit that $$$n, a_i \leq 2000$$$ would have been better. A "Happy AC" isn't a bad thing; this is not a Div. 1 F, and we shouldn't pin the complexity down to such a narrow range. I also overlooked the fact that most people who get TLE will assume the constant is too large, and thus waste a large amount of time on constant optimization instead of reconsidering the complexity.
- If someone's solution is $$$O(\sum n \max(a) + \max(a)^2)$$$ and it gets TLE, I really don't know why. The speed of array accesses on Codeforces is very strange (this also happens on 2115C - Gellyfish and Eternal Violet): the vast majority of $$$O(\sum n \max(a) + \max(a)^2)$$$ code that gets TLE performs quite well locally, and I didn't realize this until after the contest. This situation is extremely rare: I tried various modifications to the std, including swapping the order of the two dimensions of the array and of the enumerations $$$i, j$$$, and none of them got TLE. You can only get TLE if your code is in an extremely unlucky situation; otherwise, going beyond $$$1$$$ second is almost impossible in C++.
For 2115C - Gellyfish and Eternal Violet, I'll admit it was a huge mistake:
- An $$$O(m^2 h)$$$ solution is so fast that it takes less than $$$5$$$ seconds even at today's constraints. Because it runs so fast, I had to ignore the constants of the intended solution in order to prevent it from passing. The speed of the std and the testers' codes gave me too much confidence (although it was ultimately discovered that some testers' codes did not encounter the worst case). I should indeed have guaranteed $$$\sum n \leq 50$$$ instead of $$$\sum n \leq 100$$$.
- Now I deeply realize that, for all but the last few problems, the probability of unintended solutions passing due to extreme constant optimization is far smaller than the probability of intended solutions getting TLE. Very few people master instruction sets and extreme constant-optimization techniques, and their skill level is usually high enough to handle these problems anyway. The effort required to pass these problems through extreme constant optimization often outweighs simply reconsidering the correct time complexity. I hope the failure of this contest will make future problem authors aware of this issue.
- In fact, the original constraints for 2115C - Gellyfish and Eternal Violet were $$$n \leq 100, m \leq 2000, h_i \leq 100$$$, which would have prevented the current situation. If someone passes your problem through extreme constant optimization, think carefully, based on the problem's difficulty, about whether you should increase the constraints or just give everyone a "Happy AC."
In any case, it's all over now. Though this wasn't a perfect ending, 'Life goes on.' I hope Codeforces contests keep improving, and I wish those reading this better results in future competitions.
2116A - Gellyfish and Tricolor Pansy
Idea: Gellyfish Solution: Gellyfish Prepared by: Gellyfish
Please think carefully about what happens after the death of either knight.
While a player who drops to $$$0$$$ HP loses the game outright, when a player's knight dies, she loses the ability to attack; in the following rounds she can only be attacked by her opponent, and thus loses the game.
Thus we find that the player herself is as important as her knight: she loses the game when either of them reaches $$$0$$$ HP.
Therefore, the optimal strategy for both players is to attack whichever of the opponent's two characters has lower HP. So when $$$\min(a, c)$$$ is greater than or equal to $$$\min(b, d)$$$, Gellyfish wins; otherwise, Flower wins.
Time complexity: $$$O(1)$$$ per test case.
Memory complexity: $$$O(1)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
inline void solve(){
    int a = 0, b = 0, c = 0, d = 0;
    scanf("%d %d %d %d", &a, &b, &c, &d);
    if(min(a, c) >= min(b, d)) printf("Gellyfish\n");
    else printf("Flower\n");
}
int T = 0;
int main(){
    scanf("%d", &T);
    for(int i = 0 ; i < T ; i ++) solve();
    return 0;
}
2116B - Gellyfish and Baby's Breath
Idea: Gellyfish Solution: Gellyfish Prepared by: Gellyfish
The key is how to quickly compare $$$2^a + 2^b$$$ and $$$2^c + 2^d$$$ for given integers $$$a, b, c, d$$$.
We are given two permutations $$$p$$$ and $$$q$$$, which means that each element appears exactly once in each of $$$p$$$ and $$$q$$$. What's the point of this?
For given integers $$$a, b, c, d$$$, to compare $$$2^a+2^b$$$ and $$$2^c+2^d$$$ we only need to compare $$$\max(a, b)$$$ with $$$\max(c, d)$$$ first, and $$$\min(a, b)$$$ with $$$\min(c, d)$$$ second. This is due to $$$2^k = 2^{k-1} + 2^{k-1}$$$: when $$$\max(c, d) \lt \max(a, b)$$$, we have $$$2^c + 2^d \leq 2 \times 2^{\max(c, d)} \leq 2^{\max(a, b)} \lt 2^a + 2^b$$$, and the case $$$\max(a, b) \lt \max(c, d)$$$ is symmetric.
So for all $$$i$$$, we only need to find $$$j = \arg \max\limits_{1 \leq l \leq i} p_l, k = \arg \max\limits_{1 \leq l \leq i} q_l$$$, then $$$r_i = \max(2^{p_j} + 2^{q_{i-j}}, 2^{p_{i-k}} + 2^{q_k})$$$. This is easily done in $$$O(n)$$$ time.
Time complexity: $$$O(n)$$$ per test case.
Memory complexity: $$$O(n)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
const int N = 1e5 + 5, Mod = 998244353;
int n = 0, s[N] = {}, p[N] = {}, q[N] = {}, r[N] = {};
inline void solve(){
    scanf("%d", &n);
    for(int i = 0 ; i < n ; i ++) scanf("%d", &p[i]);
    for(int i = 0 ; i < n ; i ++) scanf("%d", &q[i]);
    for(int i = 0, j = 0, k = 0 ; k < n ; k ++){
        if(p[k] > p[i]) i = k;
        if(q[k] > q[j]) j = k;
        if(p[i] != q[j]){
            if(p[i] > q[j]) printf("%d ", (s[p[i]] + s[q[k - i]]) % Mod);
            else printf("%d ", (s[q[j]] + s[p[k - j]]) % Mod);
        }
        else printf("%d ", (s[p[i]] + s[max(q[k - i], p[k - j])]) % Mod);
    }
    printf("\n");
}
int T = 0;
int main(){
    s[0] = 1;
    for(int i = 1 ; i < N ; i ++) s[i] = s[i - 1] * 2 % Mod;
    scanf("%d", &T);
    while(T --) solve();
    return 0;
}
2115A - Gellyfish and Flaming Peony
Idea: Gellyfish Solution: Gellyfish Prepared by: Gellyfish
Try to think about why Gellyfish can always achieve her goal, and ultimately what all the elements will turn into.
When you've figured out Hint 1, try using dynamic programming to reach your goal.
Let $$$g = \gcd(a_1, a_2, \dots, a_n)$$$; it can be shown that eventually all elements become $$$g$$$.
Suppose that eventually all numbers equal $$$x$$$. Then for all $$$i$$$ we have $$$x\ |\ a_i$$$; this is because when $$$a_i := \gcd(a_i, a_j)$$$, the new value of $$$a_i$$$ is always a factor of the original. It further follows that $$$x\ |\ g$$$.
Further analysis reveals that $$$x$$$ cannot be less than $$$g$$$, no matter how the array is manipulated. Thus we have $$$x = g$$$.
Next, suppose that after a certain number of operations there exists some $$$a_k$$$ equal to $$$g$$$. In the following operations, for each element $$$a_i$$$ that is not equal to $$$g$$$, we simply choose $$$j=k$$$ and set $$$a_i := \gcd(a_i, a_k)$$$. After this, all elements become $$$g$$$.
If $$$g$$$ is initially in $$$a$$$, then the problem is simple; we just need to count the number of elements in $$$a$$$ that are not equal to $$$g$$$.
But if $$$g$$$ is not initially in $$$a$$$, we are actually trying to make some element equal to $$$g$$$ using the minimum number of operations.
This is not difficult to achieve through dynamic programming, using $$$f_x$$$ to denote the minimum number of operations needed to make some element equal to $$$x$$$.
The transition is simple: enumerate $$$x$$$ from largest to smallest, and then for all $$$i$$$, use $$$f_{x}+1$$$ to update $$$f_{\gcd(x, a_i)}$$$.
But computing $$$\gcd(x, y)$$$ takes $$$O(\log x)$$$ time, so a direct transition takes $$$O(n \max(a) \log \max(a))$$$ time, and we need to preprocess. Let $$$h_{x, y} = \gcd(x, y)$$$; then obviously $$$h_{x, y} = h_{y, x \bmod y}$$$, so before all test cases, $$$h$$$ can be precomputed in $$$O(\max(a)^2)$$$ time.
Time complexity: $$$O(n \max(a))$$$ per test case and $$$O(\max(a)^2)$$$ for preprocessing.
Memory complexity: $$$O(n+\max(a))$$$ per test case and $$$O(\max(a)^2)$$$ for preprocessing.
Try to solve $$$n, a_i \leq 2 \times 10^5$$$ with only one test case.
#include<bits/stdc++.h>
using namespace std;
const int N = 5000 + 5;
inline void checkmax(int &x, int y){
    if(y > x) x = y;
}
inline void checkmin(int &x, int y){
    if(y < x) x = y;
}
int n = 0, m = 0, k = 0, a[N] = {}, f[N] = {};
int g[N][N] = {}, ans = 0;
inline void solve(){
    scanf("%d", &n); m = k = 0;
    for(int i = 1 ; i <= n ; i ++){
        scanf("%d", &a[i]);
        k = g[k][a[i]];
    }
    memset(f, 0x3f, sizeof(f));
    for(int i = 1 ; i <= n ; i ++) a[i] /= k, checkmax(m, a[i]), f[a[i]] = 0;
    for(int x = m ; x >= 1 ; x --) for(int i = 1 ; i <= n ; i ++){
        int y = a[i];
        checkmin(f[g[x][y]], f[x] + 1);
    }
    ans = max(f[1] - 1, 0);
    for(int i = 1 ; i <= n ; i ++) if(a[i] > 1) ans ++;
    printf("%d\n", ans);
}
int T = 0;
int main(){
    for(int x = 0 ; x < N ; x ++) g[x][0] = g[0][x] = g[x][x] = x;
    for(int x = 1 ; x < N ; x ++) for(int y = 1 ; y < x ; y ++) g[x][y] = g[y][x] = g[y][x % y];
    scanf("%d", &T);
    while(T --) solve();
    return 0;
}
2115B - Gellyfish and Camellia Japonica
Idea: Gellyfish Solution: Gellyfish Prepared by: Gellyfish
Try working backwards from the final sequence to the initial one.
If you're confused about Hint 1, it's probably because the result isn't unique at each step. Think carefully about whether you can just take the "tightest" result possible.
Let's think about the problem in another way: if we only require that, for all $$$i$$$, the final value of $$$c_i$$$ is greater than or equal to $$$b_i$$$, what are the restrictions on the $$$a_i$$$?
It is possible to prove that the restriction takes the form of a sequence $$$l$$$: it is sufficient that $$$a_i \geq l_i$$$ for all $$$i$$$.
To see this, we work backward from the last operation and observe the restrictions on $$$c$$$ at each step. After the last operation we need $$$c_i \geq b_i$$$, which means $$$l = b$$$.
Consider an operation that replaces $$$c_z$$$ with $$$\min(c_x, c_y)$$$. Given the post-operation restriction $$$l$$$, can we recover the pre-operation restriction $$$l'$$$? Let us reason as follows:
- It is not difficult to find that for $$$i \notin \{x, y, z\}$$$, $$$l'_i = l_i$$$.
- $$$l'_z = 0$$$, because the pre-operation $$$c_z$$$ is overwritten, so it carries no restriction at all.
- $$$l'_x = \max(l_x, l_z), l'_y = \max(l_y, l_z)$$$. Since the new $$$c_z$$$ is the original $$$\min(c_x, c_y)$$$, for the original $$$c_x$$$ we have $$$c_x \geq \min(c_x, c_y) = c_z \geq l_z$$$, and the case of $$$y$$$ is symmetric.
We have thus shown that, for the finally obtained $$$l$$$, $$$a_i \geq l_i$$$ for all $$$i$$$ is a sufficient condition for ending up with $$$c_i \geq b_i$$$ for all $$$i$$$.
But we ultimately need all $$$c_i = b_i$$$, so $$$a_i \geq l_i$$$ for all $$$i$$$ is also necessary. In fact, we can just take $$$a=l$$$, perform all the operations in order, and check whether we end up with $$$c=b$$$. This is because as $$$a$$$ decreases, $$$c$$$ decreases as well, and the guarantee from $$$l$$$ ensures that $$$c_i \geq b_i$$$ always holds; in effect we are minimizing all of $$$c$$$, so minimizing all of $$$a$$$ is clearly an optimal decision.
Time complexity: $$$O(n + q)$$$ per test case.
Memory complexity: $$$O(n + q)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
const int N = 3e5 + 5;
int n = 0, q = 0, a[N] = {}, b[N] = {}, c[N] = {};
int x[N] = {}, y[N] = {}, z[N] = {};
inline void init(){
    for(int i = 1 ; i <= n ; i ++) a[i] = b[i] = c[i] = 0;
    for(int i = 1 ; i <= q ; i ++) x[i] = y[i] = z[i] = 0;
    n = q = 0;
}
inline void solve(){
    cin >> n >> q;
    for(int i = 1 ; i <= n ; i ++){
        cin >> b[i];
        c[i] = b[i];
    }
    for(int i = 1 ; i <= q ; i ++) cin >> x[i] >> y[i] >> z[i];
    for(int i = q ; i >= 1 ; i --){
        int v = c[z[i]]; c[z[i]] = 0;
        c[x[i]] = max(c[x[i]], v), c[y[i]] = max(c[y[i]], v);
    }
    for(int i = 1 ; i <= n ; i ++) a[i] = c[i];
    for(int i = 1 ; i <= q ; i ++) c[z[i]] = min(c[x[i]], c[y[i]]);
    for(int i = 1 ; i <= n ; i ++) if(b[i] != c[i]){
        cout << "-1\n";
        return;
    }
    for(int i = 1 ; i <= n ; i ++) cout << a[i] << ' ';
    cout << '\n';
}
int T = 0;
int main(){
    ios :: sync_with_stdio(0);
    cin.tie(0), cout.tie(0);
    cin >> T;
    for(int i = 0 ; i < T ; i ++) init(), solve();
    return 0;
}
2115C - Gellyfish and Eternal Violet
Idea: Gellyfish Solution: Gellyfish Prepared by: Gellyfish
Try to find an $$$O(nm h^2)$$$ solution using dynamic programming.
Re-examining Gellyfish's strategy, there are definitely situations where she chooses to carry out an attack. Can we divide the $$$m$$$ rounds into two phases by some property?
Considering all current monsters, if the lowest HP of the monsters is $$$l$$$, Gellyfish can only make at most $$$l-1$$$ "ranged" attacks.
So for each monster, if its HP is $$$h_i$$$, then it must be subject to at least $$$h_i - l$$$ "pointing" attacks. Further, we can see that we only care about $$$l$$$ and $$$\sum\limits_{i=1}^n (h_i - l)$$$.
Consider directly using $$$f_{i, l, x}$$$ to denote the probability that Gellyfish reaches her goal when there are still $$$i$$$ rounds to go, the lowest HP among the monsters is $$$l$$$, and $$$x = \sum\limits_{j=1}^n (h_j - l)$$$. This solves the problem in $$$O(nmh^2)$$$, but that's not enough.
Consider the initial HP of all monsters, let $$$l = \min\limits_{i=1}^n h_i, s = \sum\limits_{i=1}^n (h_i - l)$$$. For the first $$$s$$$ times the sword doesn't shine, Gellyfish will obviously launch an attack.
So we use $$$g_{i, j}$$$ to denote the probability that the sword did not shine exactly $$$j$$$ times in the first $$$i$$$ rounds, with round $$$i$$$ being the last time the sword did not shine. We then enumerate the round of the $$$s$$$-th time the sword didn't shine, which divides the problem into two parts: the first half can be solved using $$$g$$$, the second half using $$$f$$$, and merging the results of the two parts is not difficult.
When we use $$$f$$$ to solve the second part, it is easy to see that initially all monsters have the same HP, and letting each "pointing" attack directly hit the monster with the highest HP doesn't make the answer worse. So we have $$$h_i - l \leq 1$$$ at any time, and the range of $$$x$$$ that we actually need is compressed to $$$[0, n)$$$.
Time complexity: $$$O(nmh)$$$ per test case.
Memory complexity: $$$O(nmh)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
const int N = 22, K = 4000 + 5, M = 400 + 5, Inf = 0x3f3f3f3f;
inline void checkmin(double &x, double y){
if(y < x) x = y;
}
int n = 0, m = 0, s = 0, k = 0, p0 = 0, h[N] = {};
double p = 0, f[K][K] = {}, g[K][N][M] = {}, ans = 0;
inline void init(){
for(int i = 0 ; i <= k ; i ++){
for(int c = 0 ; c < n ; c ++) for(int x = 0 ; x <= m ; x ++) g[i][c][x] = 0;
for(int x = 0 ; x <= s ; x ++) f[i][x] = 0;
}
m = Inf, s = 0, ans = 0;
}
inline void solve(){
scanf("%d %d %d", &n, &k, &p0);
p = 1.0 * p0 / 100;
for(int i = 1 ; i <= n ; i ++){
scanf("%d", &h[i]); h[i] --;
m = min(m, h[i]);
}
for(int i = 1 ; i <= n ; i ++) s += h[i] - m;
if(s > k){
printf("0.000000\n");
return;
}
g[0][0][0] = 1;
for(int i = 1 ; i <= k ; i ++){
g[i][0][0] = 1;
for(int x = 1 ; x <= m ; x ++) g[i][0][x] = g[i - 1][0][x - 1] * p + max(g[i - 1][0][x], g[i - 1][n - 1][x - 1]) * (1 - p);
for(int c = 1 ; c < n ; c ++){
g[i][c][0] = g[i - 1][c][0] * p + g[i - 1][c - 1][0] * (1 - p);
for(int x = 1 ; x <= m ; x ++) g[i][c][x] = g[i - 1][c][x - 1] * p + g[i - 1][c - 1][x] * (1 - p);
}
}
f[0][0] = 1;
for(int i = 0 ; i < k ; i ++) for(int x = 0 ; x < s ; x ++){
f[i + 1][x] += f[i][x] * p;
f[i + 1][x + 1] += f[i][x] * (1 - p);
}
for(int i = s ; i <= k ; i ++){
double r = 0;
for(int x = 0 ; x <= min(i - s, m) ; x ++) r = max(r, g[k - i][0][m - x]);
ans += r * f[i][s];
}
printf("%.6lf\n", ans);
}
int T = 0;
int main(){
scanf("%d", &T);
while(T --) init(), solve();
return 0;
}
2115D - Gellyfish and Forget-Me-Not
Idea: MagicalFlower Solution: MagicalFlower Prepared by: MagicalFlower
Consider the case where $$$c$$$ consists only of $$$0$$$s: the problem turns into another classic problem, so you at least need to know what that is.
"Linear basis" is the answer to Hint 1. Throughout, please interpret all addition operations as XOR operations.
We can assume that the initial value of $$$x$$$ is $$$\sum a_i$$$. In each step, we can choose to add $$$c_i = a_i + b_i$$$ to $$$x$$$, or do nothing. Each suffix of the sequence can be seen as a subproblem, so we prove inductively that the answer, as a function $$$f(x)$$$ of the initial value of $$$x$$$, is an affine transformation, i.e., $$$f(x) = Ax + b$$$.
When $$$n = 0$$$, this is trivial: $$$f(x) = x$$$.
When $$$n \geq 1$$$, we can choose to add $$$c_i$$$ to $$$x$$$ or not. Let $$$f(x)$$$ denote the answer function from the second operation onward. The two possible outcomes correspond to:
- Not choosing $$$c_i$$$: result is $$$f(x)$$$
- Choosing $$$c_i$$$: result is $$$f(x + c_i) = f(x) + f(c_i) + b$$$.
Although we don't know the exact value of $$$x$$$, $$$f(c_i) + b$$$ is a constant. Suppose the highest set bit in the binary representation of $$$f(c_i) + b$$$ is at position $$$k$$$. Then, the decision of whether to apply this operation depends only on:
- Whether we want to maximize or minimize the final answer
- Whether the $$$k$$$-th bit of $$$f(x)$$$ is $$$0$$$ or $$$1$$$
It is easy to observe that the new function $$$g(x)$$$ (the answer before this decision) remains an affine transformation, satisfying $$$g(x) = g(x + c_i)$$$.
Based on the above process, we can represent the answer function $$$f(x)$$$ using an orthogonal basis. Each element of the orthogonal basis represents a vector in the null space of $$$A$$$. Additionally, we keep a tag for each vector, indicating whether it is used to increase or decrease the value of $$$x$$$.
In each iteration, we try to insert $$$c_i$$$ into the linear basis, and associate it with a tag depending on the index $$$i$$$.
Time complexity: $$$O(n \log \max(a, b, x))$$$ per test case.
Memory complexity: $$$O(n + \log \max(a, b, x))$$$ per test case.
#include <bits/stdc++.h>
using i64 = long long;
constexpr int L = 60;
int main() {
std::ios::sync_with_stdio(false), std::cin.tie(0);
int T;
for(std::cin >> T; T; T --) {
int n;
std::cin >> n;
std::vector<i64> a(n), b(n);
std::string str;
i64 all = 0;
for(auto &x : a) std::cin >> x, all ^= x;
for(int i = 0; i < n; i ++) {
std::cin >> b[i];
b[i] ^= a[i];
}
std::cin >> str;
std::vector<i64> bas(L);
std::vector<int> bel(L, -1);
i64 ans = 0;
for(int i = n - 1; i >= 0; i --) {
i64 x = b[i], col = str[i] - '0';
for(int i = L - 1; i >= 0; i --) if(x >> i & 1) {
if(bas[i]) {
x ^= bas[i];
} else {
for(int j = i - 1; j >= 0; j --) if(x >> j & 1) {
x ^= bas[j];
}
bas[i] = x;
for(int j = L - 1; j > i; j --) if(bas[j] >> i & 1){
bas[j] ^= bas[i];
}
bel[i] = col;
break;
}
}
}
for(int i = L - 1; i >= 0; i --) if(all >> i & 1) {
all ^= bas[i];
}
for(int i = 0; i < L; i ++) if(bel[i] == 1) ans ^= bas[i];
std::cout << (all ^ ans) << '\n';
}
return 0;
}
2115E - Gellyfish and Mayflower
Idea: Gellyfish Solution: Gellyfish Prepared by: Gellyfish
There is an easy way to solve the problem in $$$O(m \max(r) + q)$$$ time complexity. Thus for cases with small $$$r$$$, we can easily solve them, but what about cases with large $$$r$$$?
There is a classic but incorrect greedy where we only buy the item with the largest $$$\frac w c$$$. This is obviously wrong in general, but Hint 1 lets us handle the case where $$$r$$$ is small separately; is there an efficient algorithm that can fix this greedy for larger $$$r$$$?
Let $$$s$$$ be any path from vertex $$$1$$$ to vertex $$$p$$$, and let the vertices it passes through, in order, be $$$s_1, s_2, \dots, s_k$$$.
Let $$$z$$$ be the vertex in $$$s$$$ with the largest $$$\frac {w} {c}$$$, and let $$$C = c_z, W = w_z$$$.
Let's call the cards you don't get from vertex $$$z$$$ special cards.
Lemma 1. There exists an optimal solution in which the number of special cards does not exceed $$$C$$$.
Proof. Let $$$c'_1, c'_2, \dots, c'_{k'}$$$ be the costs of the special cards, and $$$p'_i = \sum_{j=1}^i {c'}_j$$$. If there exist $$$0 \leq l \lt r \leq k'$$$ with $$$p'_l \equiv p'_r \pmod C$$$, then $$$\sum_{i=l+1}^{r} c'_i \equiv 0 \pmod C$$$, and we can replace these cards with $$$\frac {\sum_{i=l+1}^{r} c'_i} C$$$ cards from vertex $$$z$$$ without making the answer worse. Since there are only $$$C$$$ values of $$$x \bmod C$$$ over all non-negative integers $$$x$$$, there are no more than $$$C$$$ special cards.
So it's not hard to see that the total cost of the special cards won't exceed $$$\max(c)^2$$$.
Now we can use dynamic programming to solve this problem:
- $$$dp(u, v, x, 0/1)$$$: we are currently at vertex $$$u$$$, the vertex on the path with the largest $$$\frac {w} {c}$$$ is $$$v$$$, the total cost of the cards is $$$x$$$, and the flag records whether we have reached vertex $$$v$$$ yet; the value is the maximum total power of the cards.
Since the remainder will be filled with cards from $$$v$$$, we only need the values of $$$dp(u, v, x)$$$ with $$$1 \leq x \leq \max(c)^2$$$.
But unfortunately, the time complexity of the algorithm is $$$O(mn\max(c)^2)$$$. It's not fast enough.
At this point you'll find that solving the problem directly becomes incredibly tricky, so we'll try to split it into two problems.
We first consider the following question: is there a better way when $$$r$$$ is sufficiently large?
We broaden the problem by allowing ourselves to buy a negative number of the cards with the largest $$$\frac {w} {c}$$$.
Doing so can only make the answer larger, but when $$$r$$$ is large enough, the answer doesn't change: by Lemma 1, if the total cost of the special cards exceeds $$$\max(c)^2$$$, there is a solution that is at least as good.
Thus when $$$r \gt \max(c)^2$$$, this relaxed problem gives the answer to the original one, and we can solve it with another dynamic programming:
- $$$g(u, v, x, 0/1)$$$: we are currently at vertex $$$u$$$, the vertex on the path with the largest $$$\frac {w} {c}$$$ is $$$v$$$, the total cost of the cards is $$$x$$$, and the flag records whether we have reached vertex $$$v$$$ yet; the value is the maximum total power of the cards.
But unlike the original, whenever $$$x$$$ becomes greater than or equal to $$$c_v$$$, we remove several cards from vertex $$$v$$$ to make $$$0 \leq x \lt c_v$$$.
The time complexity becomes $$$O(mn \max(c))$$$, now it's fast enough.
For each query, we can just enumerate $$$v$$$ in $$$O(n)$$$ time complexity.
As for $$$r \leq \max(c)^2$$$? That's an easy problem:
- $$$f(u, x)$$$: we are currently at vertex $$$u$$$ and the total cost of the cards is $$$x$$$; the value is the maximum total power of the cards.
The time complexity is $$$O(m \max(c)^2)$$$.
For each query, we can read the answer directly from $$$f$$$ in $$$O(1)$$$ time.
Overall, we have solved the problem.
Time complexity: $$$O(mn \max(c) + m \max(c)^2 + qn)$$$
Memory complexity: $$$O(n^2 \max(c) + n \max(c)^2)$$$
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll N = 200 + 5, Inf = 0xcfcfcfcfcfcfcfcf;
inline ll sqr(ll x){
return x * x;
}
inline ll gcd(ll x, ll y){
if(y) return gcd(y, x % y);
else return x;
}
inline void checkmax(ll &x, ll y){
if(y > x) x = y;
}
ll n = 0, m = 0, magic = 0, w[N] = {}, c[N] = {};
ll f[N][N * N] = {}, g[N][N][N][2] = {};
vector<vector<ll> > G(N);
inline void main_min(){
memset(f, 0xcf, sizeof(f));
for(ll x = 0 ; x <= magic ; x ++) f[1][x] = 0;
for(ll u = 1 ; u <= n ; u ++){
for(ll x = 0 ; x + c[u] <= magic ; x ++) checkmax(f[u][x + c[u]], f[u][x] + w[u]);
for(ll v : G[u]) for(ll x = 0 ; x <= magic ; x ++) checkmax(f[v][x], f[u][x]);
}
}
inline void solve_max(ll i){
ll a = c[i], b = w[i];
for(ll x = 0 ; x < a ; x ++) g[i][1][x][0] = 0;
for(ll u = 1 ; u <= n ; u ++){
if(u == i) for(ll x = 0 ; x < a ; x ++) checkmax(g[i][u][x][1], g[i][u][x][0]);
else if(w[u] * a > c[u] * b) memset(g[i][u], 0xcf, sizeof(g[i][u]));
for(ll s = 0, k = gcd(c[u], a) ; s < k ; s ++) for(ll x = s, t = 0 ; t < 2 * (a / k) ; x = (x + c[u]) % a, t ++){
checkmax(g[i][u][(x + c[u]) % a][0], g[i][u][x][0] + w[u] - ((x + c[u]) / a) * b);
checkmax(g[i][u][(x + c[u]) % a][1], g[i][u][x][1] + w[u] - ((x + c[u]) / a) * b);
}
for(ll v : G[u]) for(ll x = 0 ; x < a ; x ++){
checkmax(g[i][v][x][0], g[i][u][x][0]);
checkmax(g[i][v][x][1], g[i][u][x][1]);
}
}
}
inline void main_max(){
memset(g, 0xcf, sizeof(g));
for(ll i = 1 ; i <= n ; i ++) solve_max(i);
}
int main(){
scanf("%lld %lld", &n, &m);
for(ll i = 1 ; i <= n ; i ++){
scanf("%lld %lld", &c[i], &w[i]);
magic = max(magic, sqr(c[i]));
}
for(ll i = 1, u = 0, v = 0 ; i <= m ; i ++){
scanf("%lld %lld", &u, &v);
G[u].push_back(v);
}
main_min(), main_max();
ll q = 0, p = 0, r = 0;
scanf("%lld", &q);
while(q --){
scanf("%lld %lld", &p, &r);
if(r <= magic) printf("%lld\n", f[p][r]);
else{
ll ans = Inf;
for(ll i = 1 ; i <= n ; i ++){
ll a = c[i], b = w[i];
checkmax(ans, g[i][p][r % a][1] + (r / a) * b);
}
printf("%lld\n", ans);
}
}
return 0;
}
2115F1 - Gellyfish and Lycoris Radiata (Easy Version)
Idea: Gellyfish User Solution: JoesSR, zhaohaikun Prepared by: MagicalFlower
We apply block decomposition to the operations, dividing every $$$B$$$ operations into a single round. Within each round, the sequence is partitioned into $$$O(B)$$$ segments. For each segment, we maintain a queue that records all elements added to that segment during the round.
- Type 1 and 2 operations can be handled directly by pushing into the appropriate segment’s queue or toggling a reversal flag.
- Type 3 (deletion) is handled by marking the element $$$x$$$ as deleted without immediately removing it from queues.
At the end of each round, we rebuild the sequence. Specifically, for each position in the sequence, we record which segment it belongs to, and treat the position as containing all elements currently in that segment's queue.
For queries, we process the contribution from each round one by one. In each round:
- Identify the segment that contains position $$$p$$$.
- Iterate through the queue of that segment.
- For each element, if it has been marked as deleted, remove it from the front of the queue and continue; otherwise, consider it as present.
Since each element is inserted and deleted at most once per segment, and each segment has $$$O(1)$$$ amortized processing per round, the total cost per query remains efficient.
We choose $$$B = \sqrt{q}$$$, resulting in $$$O(\sqrt{q})$$$ rounds in total. The total time and space complexity is: $$$O((n + q)\sqrt{q})$$$
#pragma GCC optimize(2)
#pragma GCC optimize("Ofast")
#pragma GCC optimize("inline","fast-math","unroll-loops","no-stack-protector")
#pragma GCC diagnostic error "-fwhole-program"
#pragma GCC diagnostic error "-fcse-skip-blocks"
#pragma GCC diagnostic error "-funsafe-loop-optimizations"
// MagicDark
#include <bits/stdc++.h>
#define debug cerr << "\33[32m[" << __LINE__ << "]\33[m "
#define SZ(x) ((int) x.size() - 1)
#define all(x) x.begin(), x.end()
#define ms(x, y) memset(x, y, sizeof x)
#define F(i, x, y) for (int i = (x); i <= (y); i++)
#define DF(i, x, y) for (int i = (x); i >= (y); i--)
using namespace std;
typedef long long ll;
typedef unsigned long long ull;
template <typename T> T& chkmax(T& x, T y) {return x = max(x, y);}
template <typename T> T& chkmin(T& x, T y) {return x = min(x, y);}
// template <typename T> T& read(T &x) {
// x = 0; int f = 1; char c = getchar();
// for (; !isdigit(c); c = getchar()) if (c == '-') f = -f;
// for (; isdigit(c); c = getchar()) x = (x << 1) + (x << 3) + (c ^ 48);
// return x *= f;
// }
// bool be;
const int N = 1e5 + 1010, B = 500, B1 = N / B + 5, B2 = B + 5;
int n, q, ans, p[N], wp[N], tot, t[N], tl[N], tr[N];
bool rev[N];
bool ed[N];
// bool vv[N];
// struct Q1 {
// int tl = 1, tr;
// int a[B2];
// bool chk() {
// return tl <= tr;
// }
// int front() {
// return a[tl];
// }
// void pop() {
// tl++;
// }
// void push(int x) {
// a[++tr] = x;
// }
// } tq[N];
// struct Q2 {
// int tl = 1, tr;
// int a[B1];
// bool chk() {
// return tl <= tr;
// }
// int front() {
// return a[tl];
// }
// void pop() {
// tl++;
// }
// void push(int x) {
// a[++tr] = x;
// }
// } vq[N];
queue <int> tq[N], vq[N];
vector <int> cur;
int qq(int x) {
while (tq[x].size()) {
if (!ed[tq[x].front()]) return tq[x].front();
tq[x].pop();
}
return 0;
}
int query(int x) {
int s = 0;
for (int i: cur) {
s += tr[i] - tl[i] + 1;
if (s >= x) {
int g = s - x + 1;
int y;
if (rev[i]) {
y = p[tl[i] + g - 1];
} else {
y = p[tr[i] - g + 1];
}
while (vq[y].size()) {
int tmp = qq(vq[y].front());
if (tmp) return tmp;
vq[y].pop();
}
// int tmp = qq(i);
// if (~tmp) return tmp;
return qq(i);
}
}
assert(false);
// return -1;
}
// bool ee;
// int cnt = 0;
signed main() {
ios::sync_with_stdio(0); // don't use puts
cin.tie(0), cout.tie(0);
// debug << abs(&ee - &be) / 1024 / 1024 << endl;
cin >> n >> q;
F(i, 1, n) p[i] = i;
cur.push_back(++tot);
tl[1] = 1, tr[1] = n;
F(i, 1, q) {
int f, x, y; cin >> f >> x >> y;
if (f == 3) {
x = (x + ans - 1) % q + 1;
} else {
x = (x + ans - 1) % n + 1;
}
y = (y + ans - 1) % n + 1;
// auto split = [&] (int x) {
// if (x > n || vv[x]) return;
// // vv[x] = true;
// for (auto [])
// };
if (f == 1) {
int s = 0;
for (int j: cur) {
int w = tr[j] - tl[j] + 1;
s += w;
if (s >= x) {
if (s > x) {
tot++;
tq[tot] = tq[j];
int g = s - x;
if (rev[tot] = rev[j]) {
tl[tot] = tl[j];
tr[tot] = (tl[j] += g) - 1;
} else {
tr[tot] = tr[j];
tl[tot] = (tr[j] -= g) + 1;
}
cur.insert(next(find(all(cur), j)), tot);
}
tq[j].push(i);
break;
}
tq[j].push(i);
}
}
if (f == 2) {
int s = 0, sz = 0;
for (int j: cur) {
sz++;
int w = tr[j] - tl[j] + 1;
s += w;
if (s >= x) {
if (s > x) {
tot++;
tq[tot] = tq[j];
int g = s - x;
if (rev[tot] = rev[j]) {
tl[tot] = tl[j];
tr[tot] = (tl[j] += g) - 1;
} else {
tr[tot] = tr[j];
tl[tot] = (tr[j] -= g) + 1;
}
cur.insert(next(find(all(cur), j)), tot);
}
reverse(cur.begin(), cur.begin() + sz);
F(j, 0, sz - 1) rev[cur[j]] ^= true;
break;
}
}
}
if (f == 3) {
if (x < i) ed[x] = true;
}
cout << (ans = query(y)) << '\n';
if (cur.size() >= B) {
int cnt = 0;
for (int j: cur) {
if (rev[j]) {
DF(k, tr[j], tl[j]) {
vq[wp[++cnt] = p[k]].push(j);
}
} else {
F(k, tl[j], tr[j]) {
vq[wp[++cnt] = p[k]].push(j);
}
}
}
F(j, 1, n) {
p[j] = wp[j];
}
cur.clear();
cur.push_back(++tot);
tl[tot] = 1, tr[tot] = n;
}
// for (int j: cur) {
// debug << tl[j] << " " << tr[j] << " " << rev[j] << endl;
// }
}
return 0;
}
2115F2 - Gellyfish and Lycoris Radiata (Hard Version)
Idea: Gellyfish Full Solution: errorgorn Prepared by: MagicalFlower
We consider using leafy persistent balanced trees to maintain the sequence. At each non-leaf node, we store a set $$$S_u$$$ as a lazy tag, indicating that every set in the subtree rooted at $$$u$$$ contains $$$S_u$$$. However, since $$$|S_u|$$$ can be large, it's difficult to push down the tag efficiently.
To address this, we split each set $$$S_u$$$ into two components:
- $$$T_u$$$: A part of $$$S_u$$$ stored directly at node $$$u$$$
- A collection of child nodes $$$v_{u,1}, v_{u,2}, \cdots, v_{u,k}$$$, each storing a subset of $$$S_u$$$
We maintain the invariant: $$$S_u = T_u + \sum_{i=1}^k S_{v_{u,i}}$$$.
Thus, we make the balanced tree persistent. When we create a new node $$$u$$$ and initialize its children as $$$ls(u)$$$ and $$$rs(u)$$$, we add $$$u$$$ into both $$$v_{ls(u)}$$$ and $$$v_{rs(u)}$$$. During split and merge, we do not modify $$$T_u$$$, which ensures these operations still run in logarithmic time.
- Type 1 (Insert): We split the balanced tree into two parts. Let the root of the first part be $$$rt$$$, and simply add an element into $$$T_{rt}$$$.
- Type 2 (Reverse a segment): Just mark the root of the relevant subtree with a "reverse" flag.
- Type 3 (Delete): Since each element $$$x$$$ appears in only one $$$T$$$, we can directly remove it from that node's $$$T$$$.
For queries, we first locate the leaf node $$$u$$$. To compute $$$S_u$$$, we need to compute all $$$S_{v_{u,i}}$$$. Due to persistence, we can guarantee: $$$\max(S_{v_{u,i}}) \lt \min(S_{v_{u,i+1}})$$$
Thus, we can sequentially check whether each $$$S_{v_{u,i}}$$$ is empty.
We process this recursively. Whenever we encounter a node $$$x$$$ with $$$S_x = \emptyset$$$ that is not part of the latest tree version, $$$S_x$$$ will remain empty forever, so we can safely delete $$$x$$$. During a query, we encounter three types of nodes:
- Nodes with $$$|S_x| \ne 0$$$: We only encounter one such node per query — this is where we find the minimum value.
- Nodes with $$$|S_x| = 0$$$ and $$$x$$$ not in the latest tree: These nodes are deleted. Since the total number of persistent nodes is $$$O(q \log n)$$$, these nodes also appear at most that many times.
- Nodes with $$$|S_x| = 0$$$ and $$$x$$$ in the latest tree: These are ancestors of leaf $$$u$$$, so there are at most $$$O(\log n)$$$ of them per query.
Why do we only need $$$O(\log n)$$$ rounds of recursion? Because we replace all the nodes we pass through on the path, the size of a subtree does not change from the time the node is created to the time it is removed from the tree. Furthermore, if we recurse from node $$$u$$$ to node $$$v$$$, this means that $$$v$$$ was the parent of $$$u$$$ at some point in the WBLT. Since the WBLT is weight-balanced, the size of the subtree of $$$v$$$ is at least a constant multiple of the size of the subtree of $$$u$$$, and this constant, greater than $$$1$$$, depends on the balance factor of your WBLT.
Time complexity: $$$O(q \log n + n)$$$.
Memory complexity: $$$O(q \log n + n)$$$.
#include <bits/stdc++.h>
constexpr int N = 3e5 + 10, S = 1.1e7, SS = 2 * S;
int n, q;
int next[SS], val[SS], cnt;
struct queue {
int head, tail;
void push(int x) {
if(head) {
next[tail] = ++ cnt, val[cnt] = x; tail = cnt;
} else {
head = tail = ++ cnt, val[cnt] = x;
}
assert(cnt < SS - 100);
}
void pop() {head = next[head];}
int front() {return val[head];}
bool check() {return head == tail;}
bool empty() {return head == 0;}
void clear() {head = tail = 0;}
};
struct node {
int ls, rs;
queue fa;
int val, exit;
int size, rev;
}a[S];
int tot;
int id[N];
void pushup(int u) {
a[u].size = a[a[u].ls].size + a[a[u].rs].size;
}
void setR(int u) {
a[u].rev ^= 1;
std::swap(a[u].ls, a[u].rs);
}
void setT(int u, int v) {
a[u].fa.push(v);
}
void pushdown(int u) {
if(a[u].rev) {
setR(a[u].ls);
setR(a[u].rs);
a[u].rev = 0;
}
}
int newnode() {
int u = ++ tot;
a[u].exit = 2;
return u;
}
int newleaf() {
int u = newnode();
a[u].size = 1;
return u;
}
int join(int x, int y) {
int u = newnode();
a[u].ls = x, a[u].rs = y;
a[x].fa.push(u);
a[y].fa.push(u);
pushup(u);
return u;
}
auto cut(int x) {
pushdown(x);
a[x].exit = 1;
return std::make_pair(a[x].ls, a[x].rs);
}
int get_val(int u) {
if(a[u].exit == 0) return 0;
if(a[u].val != 0) return a[u].val;
if(a[u].fa.empty()) return 0;
int ans = 0;
while(1) {
ans = get_val(a[u].fa.front());
if(ans) return ans;
if(a[u].fa.check()) break;
a[u].fa.pop();
}
if(a[u].exit == 1) {
a[u].exit = 0;
a[u].fa.pop();
a[u].fa.clear();
}
return 0;
}
int newtag(int x) {
int u = ++ tot;
a[u].val = x;
a[u].exit = 1;
return u;
}
constexpr double ALPHA = 0.292;
bool too_heavy(int sx, int sy) {
return sy < ALPHA * (sx + sy);
}
int merge(int x, int y) {
if(!x || !y) return x + y;
if(too_heavy(a[x].size, a[y].size)) {
auto [u, v] = cut(x);
if(too_heavy(a[v].size + a[y].size, a[u].size)) {
auto [z, w] = cut(v);
return merge(merge(u, z), merge(w, y));
} else {
return merge(u, merge(v, y));
}
} else if(too_heavy(a[y].size, a[x].size)) {
auto [u, v] = cut(y);
if(too_heavy(a[u].size + a[x].size, a[v].size)) {
auto [z, w] = cut(u);
return merge(merge(x, z), merge(w, v));
} else {
return merge(merge(x, u), v);
}
} else {
return join(x, y);
}
}
std::pair<int, int> split(int x, int k) {
if(!x) return {0, 0};
if(!k) return {0, x};
if(k == a[x].size) return {x, 0};
auto [u, v] = cut(x);
if(k <= a[u].size) {
auto [w, z] = split(u, k);
return {w, merge(z, v)};
} else {
auto [w, z] = split(v, k - a[u].size);
return {merge(u, w), z};
}
}
int find(int u, int k) {
if(a[u].size == 1) return u;
pushdown(u);
if(k <= a[a[u].ls].size) return find(a[u].ls, k);
else return find(a[u].rs, k - a[a[u].ls].size);
}
int build(int n) {
if(n == 1) return newleaf();
int x = build(n / 2);
int y = build(n - n / 2);
return join(x, y);
}
int main() {
std::ios::sync_with_stdio(false), std::cin.tie(0);
std::cin >> n >> q;
int rt = build(n);
int lastans = 0;
for(int i = 1; i <= q; i ++) {
int o;
std::cin >> o;
if(o == 1) {
int p;
std::cin >> p;
p = (p + lastans - 1) % n + 1;
auto [A, B] = split(rt, p);
setT(A, id[i] = newtag(i));
rt = merge(A, B);
} else if(o == 2) {
int p;
std::cin >> p;
p = (p + lastans - 1) % n + 1;
auto [A, B] = split(rt, p);
setR(A);
rt = merge(A, B);
} else if(o == 3) {
int x;
std::cin >> x;
x = (x + lastans - 1) % q + 1;
a[id[x]].exit = 0;
}
int p;
std::cin >> p;
p = (p + lastans - 1) % n + 1;
int u = find(rt, p);
std::cout << (lastans = get_val(u)) << '\n';
}
return 0;
}
Cool! The problems are interesting.
I think you should swap C and D because C is extremely difficult while D is relatively easy. If there were no C, I believe I could solve problems D and E. Additionally, if you had ever solved this problem, you could probably solve E in just a few minutes.
Can anyone tell me what the pipe sign in "x | ai" indicates? Does it mean x is a multiple of ai, or does it have another meaning?
edited: Thanks phongnc
Quite the opposite: $$$a_i$$$ is a multiple of $$$x$$$, or equivalently, $$$x$$$ divides $$$a_i$$$.
I solved only 1 problem in Div.1 T_T
It's too hard
In Div1 C, if I declare the dp array as dp[m][h][n], it gets TLE.
But if the array is declared as dp[m][n][h], it passes in 750 ms.
Can someone explain this?
It's the cache. When you consecutively access memory locations that sit together in the cache, it speeds up your code by decreasing its constant factor. I'm not quite an expert on this; I hope my "blurred" explanation does help a bit(
As far as I know, multi-dimensional arrays are represented as linear arrays in memory, so both arrays should use contiguous blocks of memory.
Take a computer architecture course; the answer to this is in there :)
The course is already over. I slept through the classes.
Yes, but in your second case, dp[m][n][h] and dp[m][n][h+1] are adjacent in memory, while the same states in your original solution (dp[m][h][n] and dp[m][h+1][n]) were not. This can cause speedups because when you access a memory address, the contents of nearby addresses are also brought into the cache.
While I was using the first array, I accessed dp[m][h][n + 1] right after dp[m][h][n]. The actual reason is something else. Actually, most of the time both arrays give almost equal performance. But when n is 1, the second array gives better performance, as n is constrained over all test cases while m and h are not. Let's explain why the second one wins when n is 1.
Let's denote the first dp array as dp1, the second one as dp2, the iterator over m as i, the iterator over h as j, and the iterator over n as k:
dp1[4000][400][20]
dp2[4000][20][400]
The distance between dp1[i][j][k] and dp1[i][j][k + 1] is 1, but this transition never occurs when n = 1. Every transition goes from dp1[i][j][1] to dp1[i][j + 1][1], and the distance between those is 20.
But in the second dp array, the distance between dp2[i][1][j] and dp2[i][1][j + 1] is 1. So in this case the second dp array gives better performance because of the cache speedup.
Watch this: A concept I bet you didn't know
Solution codes would be very appreciated (mostly for Div2D)
I don't think it's very hard, the idea is like 2100-2200 level, but the hardest part of the problem by far is to figure out all the details and implement correctly, which isn't very easy given weak samples (for example all samples had minimum element $$$\le 2$$$, so dp bugs were unlikely to be caught) and only 2 hours. Also edge cases like $$$p = 0, p = 100, n = 1$$$ didn't help, along with $$$O(tm^2)$$$ being disallowed for calculating coefficients. I think it would've been better to just replace this problem or somehow make it less implementation heavy, though I'm not sure how to do it.
Can someone explain the second test case of D1 C? Why is it $$$1-(0.8)^5$$$?
Div 2C / Div 1A could also be done via brute force for finding minimum operations for getting an element = gcd.
AC Submission: 322311151
This works because
ai <= 5000, and this guarantees at most 5 operations are required (since 2 * 3 * 5 * 7 * 11 * 13 > 5000). Also, the number of unique elements possible in a is constrained as the minimum number of operations increases. I have not proved rigorous bounds for this, so someone can attempt hacking the solution if you feel it will TLE.
nice one
I do not think there is any problem with 2116C - Gellyfish and Flaming Peony. I don't preprocess gcd, but my submission passes in about 1 s, which is half of the TL.
322347081
So I think the TL is enough.
I noticed lots of participants who got TLE used long long, so I tried it.
But it still passed: 322428242
Can anyone tell me why? (I guess maybe it's because of the different implementations of the gcd function.)
I'm a little confused about the analysis of the F2 solution. I don't understand why the query time is small. In the get_val function, when we successfully return a value, is there an upper bound on the recursion depth? Can we build a long chain where $$$T_x$$$ is always empty but we always have to keep recursing into $$$S_{v_{x,1}}$$$?
The depth of the recursion is the same as the depth of the balanced tree. Furthermore, if $$$S_u$$$ is empty and not in the latest tree, we remove it.
Do the depths of the tree not change as you split/merge/scapegoat?
We do not use scapegoat trees, but instead a WBLT (weight-balanced leafy tree), a type of weight-balanced binary search tree.
Can someone please tell me why the first code for D1 A gets TLE and the other one passes in < 500ms? They are both $$$O(nA)$$$.
322302212
322302440
In fact it was still possible to get AC in Div2C with an $$$O(n^2 \log \max(a))$$$ solution.
How to solve the bonus of Div2C / Div1A?
I guess we just need to replace the brute-force O(N*V) DP with the technique of CF1043F (an exactly identical subproblem of 1A, which requires O(N log V) complexity).
1C is great, it's quite an interesting question. There's no need to blame yourself. This match is already excellent enough.
How to solve the 1A enhanced version mentioned in the solution?
Try to solve $$$n, a_i \leq 2 \times 10^5$$$ with only one test case.
Try to find the set of numbers with dp[x] = i + 1 from the set of numbers with dp[x] <= i. You can do it in $$$O(A \log A)$$$ with some PIE. Repeat $$$O(\log A)$$$ times. 322329305
The problem with the original constraints on C is not that solutions with worse complexity pass with "extreme constant optimization"; it's that solutions with worse complexity pass way too easily. While testing, I wrote the following very simple $$$O(hm^2)$$$ code that was as fast as the model solution on the original constraints.
It is obvious that this solution requires much less effort to come up with and write compared to the intended one. You don't need crazy constant optimization techniques to make it pass. Therefore I think it is reasonable to change the constraints and TL to separate these two.
Here's a way to do D1D that seems different and easier (I'm not sure if I fully understand the editorial though so it might be the same solution under the hood):
Transform $$$a_i,b_i$$$ into $$$c_i=a_i\oplus b_i$$$ and initialize $$$x$$$ as $$$\sum a_i$$$ as in the editorial. Solve for the 60th bit first, say $$$i_1,\ldots,i_k$$$ are the indices such that $$$c_{i_1},\ldots,c_{i_k}$$$ have the 60th bit set. Then, depending on whether $$$x$$$ is set on the 60th bit and whether Flower or Gellyfish controls $$$c_{i_k}$$$, we have a constraint of the form "we should include an even/odd number of elements among $$$c_{i_1},\ldots,c_{i_k}$$$". The point of this constraint is that the choice of including each of $$$c_{i_1},\ldots,c_{i_{k-1}}$$$ will toggle whether we decide to include $$$c_{i_k}$$$. If we should include an even number of elements among $$$c_{i_1},\ldots,c_{i_k}$$$, we can enforce this by adding $$$c_{i_k}$$$ to $$$c_{i_1},\ldots,c_{i_{k-1}}$$$, and then deleting $$$c_{i_k}$$$ (set it to zero). If we should include an odd number of elements among $$$c_{i_1},\ldots,c_{i_k}$$$, then do the same thing but first set $$$x$$$ to $$$x\oplus c_{i_k}$$$. Now solve for the 59th bit on the modified $$$c$$$ array and so on.
This was my solution for Div1D as well, the code is ~10 lines: https://mirror.codeforces.com/contest/2115/submission/322764422
Basically, the idea is just that whoever last has the ability to flip a digit determines whether that digit is in the final result. This means that every previous modification to that digit will be joined with a flip in the last time that digit can be used.
Div 2. Problem C: Gellyfish and Flaming Peony
For each prime p_k, define a set S_k, which is a subset of the array a. First, divide each element a[i] by their greatest common divisor g, i.e., set a[i] = a[i] / g. Then perform prime factorization on each a[i], expressed as a product of prime powers p_ij^b_ij. If for some j, the exponent b_ij = 0, then a[i] belongs to the set S_j.
This reduces the problem to a Hitting Set problem: the universe is the array a, and each S_k is a subset.
Solving this problem is somewhat complicated. Considering a[i] ≤ 5000 and the number of prime factors j ≤ 5, one can use a bitset c_i to represent the subset status corresponding to a[i]. Then perform one-dimensional dynamic programming (DP) in descending order of c_i. Here, the DP array dp[c_i] represents the minimum number of elements needed to hit the subsets represented by c_i. Alternatively, Integer Linear Programming (ILP) can be used to solve the problem.
Can someone please explain the Div2 D problem statement to me?
for this input
30 30
585394845 663181874 674985388 534304106 534304106 280872037 534304106 118807993 280872037 280872037 460630149 405335374 23982539 584709287 534304106 280872037 534304106 534304106 521918620 884592393 772438410 378525108 663181874 280872037 534304106 585394845 378525108 280872037 180180388 180180388
16 30 21
13 30 3
7 30 28
3 30 23
12 30 7
25 30 1
23 30 22
21 30 15
9 30 7
8 30 4
22 30 24
9 30 15
21 30 2
23 30 15
28 30 3
7 30 10
21 30 16
25 30 20
26 30 27
18 30 17
14 30 29
16 30 30
8 30 10
20 30 4
15 30 4
10 30 10
11 30 6
18 30 1
25 30 24
17 30 28
It says it has the solution 585394845 663181874 674985388 405335374 1 1 405335374 1 1 280872037 460630149 405335374 23982539 584709287 1 534304106 1 1 1 884592393 772438410 180180388 521918620 405335374 585394845 1 378525108 118807993 1 1,
but when I apply the operations to the result I don't get the input.
Shouldn't its answer be -1?
Where did you get this testcase? My accepted code gives -1 as the answer. Maybe you can hack that code (or possibly my XD code).
By the way, learn to insert code blocks like this.
Well, I tried to hack myself with this testcase and failed, so the answer is -1 after all.
Sorry brother, I pasted the wrong testcase. The original testcase is this:
30 30
585394845 663181874 674985388 534304106 534304106 280872037 534304106 118807993 280872037 280872037 460630149 405335374 23982539 584709287 534304106 280872037 534304106 534304106 521918620 884592393 772438410 378525108 663181874 280872037 534304106 585394845 378525108 280872037 180180388 180180388
16 21 15
13 3 19
7 28 6
3 23 29
12 7 8
25 1 26
23 22 30
21 15 18
9 7 9
8 4 22
22 24 17
9 15 23
21 2 23
23 15 25
28 3 9
7 10 16
21 16 28
25 20 4
26 27 22
18 17 12
14 29 19
16 30 29
8 10 16
20 4 7
15 4 17
10 10 24
11 6 8
18 1 5
25 24 9
17 28 6
(It is the 1st testcase of the 3rd set.) Expected output:
585394845 663181874 674985388 405335374 1 1 405335374 1 1 280872037 460630149 405335374 23982539 584709287 1 534304106 1 1 1 884592393 772438410 180180388 521918620 405335374 585394845 1 378525108 118807993 1 1
my solution gives -1 (:
I have a solution for the div2C bonus. In short, the solution is 322468958 (just change the constants).
Basically I use BFS and shortest paths. Let $$$\gcd(a_i, a_j) = g$$$; then $$$g$$$ can only be a divisor of $$$a_i$$$ and $$$a_j$$$, and there aren't that many divisors of a number. So I make a directed graph with edges $$$(x, y)$$$ if there exists $$$i$$$ such that $$$\gcd(a_i, x) = y$$$.
We want to find the smallest subset with $$$\gcd$$$ equal to $$$1$$$, i.e. a subset such that the intersection of the elements' prime sets is empty. That means if $$$a_i = p_1^{a_1}...{p_k}^{a_k}$$$, we can replace it with $$$a_i:=p_1...p_k$$$, as we do not care about the powers of the primes.
So the problem is: for each $$$x$$$, find all $$$y$$$ such that for some $$$i$$$ we have $$$\gcd(a_i, x) = y$$$. I brute-force $$$x$$$ and consider its divisors in decreasing order. Let $$$dp_i$$$ be the number of $$$j$$$ such that $$$\gcd(a_j, x) = i$$$. To find $$$dp_i$$$, we add the number of $$$a_j$$$ that $$$i$$$ divides and subtract the extras, which are $$$dp_k$$$ for all divisors $$$k$$$ of $$$x$$$ that $$$i$$$ properly divides. Math formula: $$$dp_i = p[i] - \sum\limits_{k | x,\; i | k,\; k \ne i} dp_k$$$.
Let's calculate the complexity. For each $$$x$$$ we look at all its divisors, and for each of those at the number of their divisors. A number smaller than $$$2 \times 10^5$$$ has at most $$$6$$$ distinct prime factors. The sum over the divisors of such a number of their divisor counts is at most $$$\sum\limits_{i = 0}^k (\mathrm{C}_{k}^{i})^2 = \mathrm{C}_{2k}^{k} \le 1000$$$. So at most $$$2 \times 10^8$$$ operations, which should just about work in $$$2$$$ seconds. I tried it out in the Codeforces custom invocation and it ran fast, but I used random tests.
Can anyone tell me why this submission 322398512 for div2C passes, as I didn't precompute the gcd values as expected by the author?
I think the author just mentioned the precomputation as a good measure. Even my solution 327302246 passed without precomputation. With precomputation 327302651 it gets faster by 500-600 ms.
Can anyone help me spot what's wrong with my solution for https://mirror.codeforces.com/contest/2116/problem/D?
One can see that if there exists an array a satisfying the properties in the problem, then one can also find an array a' satisfying those properties, but also satisfying set(a') = set(b) (1).
We visualize the problem as a graph of variables where we put arrows x -> z and y -> z whenever there was an operation z = min(x, y). This graph will have n + q nodes. Our goal is to "recover" those variables using greedy "backprop": find the largest value among the b[i]-s and simply fill the variables that b[i] depends on (those variables can't be larger than that b[i] because of (1)).
https://mirror.codeforces.com/contest/2116/submission/322482479
I think C is a good problem, but the time limit is too strict.
Hi, in Div2 B I am not writing an if/else statement and am using: cout << max(power(2,p[i]) + power(2,q[k-i]), power(2,p[k-j]) + power(2,q[j])) % Mod << " ";
But it is failing, while the if/else code below works. Could someone please help me?
Never mind, I got it: I am using pow, and I should compare before taking %mod.
Could you explain, please? I have the same question and do not understand the difference.
You should use binary exponentiation, something like this:
I solved Div2C without using dp. First I found the gcd of all elements, then found the minimum gcd of 2 numbers in O(n^2 log(max(ai))), and then used a greedy algorithm: repeatedly take the minimum over a of gcd(current gcd, a[i]) until it reaches the gcd of all elements, then compute the answer from the number of operations needed.
Here is my submission (it passes with slightly more than a second :D ) 322270354
That would fail testcase 17 322597408. Yours was not tested against this one.
I have a solution for div2C/div1A bonus in $$$O(maxA * logN * logmaxA)$$$ time: 322504352.
Instead of finding minimum number of operations, I find the smallest subset where the $$$gcd$$$ equals $$$g$$$. To achieve this, I used binary search combined with Inclusion — Exclusion principle.
This will actually fail for cases where the count is a multiple of the mod 1e9+7: then your check may return false when it is actually true. There may be weak test cases in the problem.
You can use multiple modulo to make it more accurate
Why are the editorials ('tutorials') not showing up in the problems' 'contest materials' sections?
In problem C (Div 1), these lines:
should be: r = g[k - 1][0][m - min(i - s, m)]. Instead of iterating to find the maximum, it's more efficient to use this directly.
div1 A can be solved using gcd-convolution: submission
In Div1D, the editorial claims that $$$f(x + y) = f(x) + f(y)$$$. But this implies that $$$f(x) = f(0) + f(x) \implies f(0) = 0$$$; this isn't true, right? Suppose that there is a single nonzero vector in $$$c_i$$$, and that it is controlled by Flower. Then Flower will use that vector and the answer will not be $$$0$$$, contradicting that $$$f(0) = 0$$$.
Yeah this seems false. I think it's true that $$$f$$$ is affine though.
fixed, thanks.
I see that some people submitted solutions to F2 that don't appear equivalent to the editorial at first glance, mind giving a brief explanation of what you did? Nachia turmax maspy
My solution appears to follow the same fundamental approach as the F1 editorial. I use a balanced binary search tree and a bitset<B> for block-level simulation. For value-finding queries, I use bitset::_Find_first(), which results in a time complexity of $$$O(n^2/w)$$$.
I have the same as F1 too, with bitsets, with complexity $$$O(n\sqrt{\frac{n}{w}})$$$ (but tbh I implemented it worse, with complexity $$$O(n\sqrt{\frac{n \log n}{w}})$$$).
Upd: No, it is just $$$O((n+q)\sqrt{n+q})$$$.
If I'm understanding correctly, the time complexity of your solution still looks proportional to $$$Q\sqrt Q$$$? In each loop iteration, you have the following two parts, neither of which is reduced by the word size:
- makeotm, which takes $$$B$$$ time
- for(int i1=0;i1<i/B;++i1), which takes $$$i/B$$$ time.
322376847
Yes, you are right
Mine is $$$O(q (\log q)^2)$$$ .
Consider a segment tree of the query sequence. Each node is evaluated as soon as its children are filled. Each node stores a permutation of intervals (of the original sequences) and is-reversed flag for each interval. A node of length $$$k$$$ is evaluated in $$$O(k \log k)$$$ time.
Also, an exist flag is assigned to an interval in a leaf node corresponding to an insert query. These flags are aggregated over the segment tree. An exist flag in a middle node is removed when both corresponding intervals in the children come to have no exist flags. Removing is irreversible, so the cost of delete queries sums up to (number of intervals) = $$$O(q\log q)$$$.
Querying is straightforward thanks to the exist flags: $$$O((\log q)^2)$$$ time because of the segment tree traversal and the binary search over intervals.
My solution to 1D somehow avoids requiring knowledge of XOR basis:
Let's look at the highest bit $$$\text{bit1}$$$, and consider the largest $$$i$$$ such that $$$a[i] \land \text{bit1} \neq b[i] \land \text{bit1}$$$ ($$$\land$$$ means AND because I couldn't figure out how to type an & in math mode). Whoever can make a move on this turn gets to choose whether $$$\text{bit}$$$ is on in the final answer or not.
Now let's do the same for the next highest bit $$$\text{bit2}$$$. Unfortunately, there's an issue if the largest index matches the previous one — we already know whether it flips or not depends on if $$$\text{bit}$$$ is on in $$$x$$$. Therefore, the person playing on this turn will either want $$$(x \land \text{bit1}) \oplus (x \land \text{bit2})$$$ to be $$$0$$$ or $$$1$$$. To "set this up" using a previous column, we'll now care about the largest $$$j \lt i$$$ such that $$$(a[j] \land \text{bit1}) \oplus (a[j] \land \text{bit2}) \neq (b[j] \land \text{bit1}) \oplus (b[j] \land \text{bit2})$$$.
Now let's do the same for the next highest bit $$$\text{bit3}$$$. Unfortunately, now it gets really confusing since the index can collide with the previous two indices...
After some thinking, we can handle everything as follows: let $$$f(x, m)$$$ be (__builtin_popcountll(x & m) & 1). We'll iterate through the bits in descending order and keep track of the set $$$S$$$ of indices that we've picked. For every $$$i \in S$$$, we maintain two values $$$\text{mask}(i)$$$ and $$$\text{target}(i)$$$. This means that on turn $$$i$$$, the player will choose between $$$a[i]$$$ and $$$b[i]$$$ depending on whether or not $$$f(x, \text{mask}(i)) = \text{target}(i)$$$.
For each bit $$$\text{bit}$$$, we'll scan from right to left in the array, and look for the first index $$$i$$$ that satisfies $$$f(a[i], m) \neq f(b[i], m)$$$, where $$$m$$$ is initially $$$\text{bit}$$$. If we ever run into a collision with some index $$$j$$$, we can replace $$$m$$$ with $$$m \oplus \text{mask}(j)$$$, and look for the next largest index $$$ \lt j$$$ that satisfies the new condition and so on. Suppose that we've finally found some $$$i \notin S$$$ that fulfills the condition. The value of $$$\text{mask}(i)$$$ should just be $$$m$$$, but what about $$$\text{target}(i)$$$? This is admittedly really confusing to compute, but thankfully, we can try setting it to both values and simulating the game with the rules we've found in $$$S$$$ so far. We can decide $$$\text{target}(i)$$$ simply based on whose turn it is and which simulation produced a number with $$$\text{bit}$$$ on.
The implementation is surprisingly short: https://mirror.codeforces.com/contest/2115/submission/322602866
Admittedly, at least for me, this seems unusually difficult to come up with in 2 hours, so I'm guessing the XOR basis-dependent solution is the more "proper" one to think of (guessing because I haven't thought about that one yet).
Here's another solution that I don't think requires thinking explicitly about XOR basis (though it ends up being quite related), and I think is easier to understand:
322674146
Basically, let $$$Z_i=A_i\oplus B_i$$$ for all $$$i$$$, and $$$x=A_1\oplus A_2\oplus \dots \oplus A_N$$$. Then iterate over the $$$Z_i$$$ in reverse order and update $$$x$$$ / $$$Z_i$$$. For example,
The number of nonzero $$$Z_i$$$ is bounded by the number of bits.
You can also think of the min/max portion as setting the bit in $$$x$$$ to whatever the bit in $$$C_N$$$ is
Seems like 322284403 by ikaurov kind of combines both of these ideas. (Basically instead of iterating over the $$$Z_i$$$ in reverse order, do them in order of highest bit, prioritizing the rightmost one for each bit. And then to force the parity of $$$Z_i$$$ to be correct, do the same trick of setting $$$x = \min(x, x \oplus Z_i)$$$, and $$$Z_j = \min(Z_j, Z_j \oplus Z_i)$$$ for all $$$j \neq i$$$. Finally, remove $$$Z_i$$$.)
(Congrats on IGM btw! And yes, I stalk your submissions quite often LOL)
Thanks, haha!
Can anyone tell me where my code for Div2 B is going wrong? Initially the logic was exactly the same, even though the official solution implemented it a bit differently. Even changing to that is giving me WA for some reason. This is my solution: https://mirror.codeforces.com/contest/2116/submission/322560858
Thanks in advance.
Why does my $$$O(nmh)$$$ solution get TLE?
submission
The problems were very good (I especially enjoyed B). C was just more difficult than usual, but don't beat yourself over this ! The problems themselves were quite high quality
Hi, for problem C, any idea why the below code is giving TLE?
APPROACH:
Step 1: Calculate GCD of full array.
Step 2: Check if the full GCD is already present in the array; if present, print the answer in O(1) and return.
Step 3: Run a multi-source BFS over all elements to calculate the number of steps for each possible GCD with the array elements. Then print the answer in O(1).
Link: https://mirror.codeforces.com/contest/2116/submission/322759756
I really liked the solution for Div 1B/Div 2D. Is there a special name for this technique?
Can somebody help me find what is wrong with my program for problem B? It gives a wrong answer on test case 3 and I am not able to understand what the problem is. I am sharing the submission link: https://mirror.codeforces.com/contest/2116/submission/322884021. Hoping for help.
like this case
In this case, most probably your answer would come out wrong. You will understand when you work out the example on your own.
Ok, I will go through it and try to understand, btw thankyou
You have implemented it in a correct way, but there is a mistake: you are comparing numbers after taking MOD, so it may be wrong. The example given may be correct, as you're checking it correctly...
Maybe checking which has second-max power may help
Sorry for not checking properly.
Ok, I will try to fix that part and check
It worked, just changed the last if else condition, thankyou, I was stuck for a very long time in this.
In problem 2115 C, what is the meaning of $$$g[i][j][k]$$$? I can't understand it.
Is there anyone available to help me solve problem B? I am getting a wrong answer on test case 3. My link is: https://mirror.codeforces.com/contest/2116/submission/323347026
Hey, first of all, I am sorry if I made any mistake, but in my little brain I found a point where the solutions of many people for this problem fail. Here is the code:
If I execute the commented part, the output is r1 = 6, otherwise 5. Both codes got AC. Please, may anyone clear up my little brain's weak intuition? Thanks in advance :)
p and q are permutations, so we cannot have two 2's. It's a wrong test case.
i see thanks :)
It might be helpful to look at https://atcoder.jp/contests/agc045/tasks/agc045_a before looking at D. Very similar problem but an easier version. Nice problem!
Can anyone help me with my code for B and explain why it is failing on test 3? I am pretty sure it's because of the mod, but I have tried enough and couldn't fix it. Submission: https://mirror.codeforces.com/contest/2116/submission/324570618
In the problem C: Gellyfish and Flaming Peony, I am getting TLE when using long long instead of int.
Accepted: 324649684
TLE: 324649318
The only difference is: Screenshot
Can anyone explain why?
Can anyone help me with problem B?
Why does the upper code work while the commented one does not? What I am thinking is that ultimately, in the commented code, I am getting the max of the two conditions, which is what we want, "the max". Then why does it fail?
Gellyfish and Forget-Me-Not: I don't understand why "it is easy to observe that the new function g(x) (before this decision) remains an affine transformation, satisfying g(x) = g(x + c_i)". What is g(x)?
And why is it an affine transformation, while x is a number, not a vector?
In div 2 C, why it's safe to perform this:
"The transition is simple, just enumerate $$$x$$$ from largest to smallest, and then for all $$$i$$$, use $$$f_{x}+1$$$ to update $$$f_{gcd(x,a_{i})}$$$."
This transition assumes that all the initial $$$a_i$$$ are unchanged and available, but with each action we alter $$$x$$$ to $$$\gcd(x,a_{i})$$$ and "destroy" $$$a_i$$$?
That is a great and valid question. It works because the answer for any sequence of merges is the same as the gcd of all the combined elements.

Your concern: take {a1,a2,a3,a4,a5,a6} and suppose the best answer goes this way:

Step 1: a4 = gcd(a3,a4). Step 2: a2 = gcd(a1,a2). Step 3: a2 = gcd(a2,a4). Total: 3 steps give the minimum for this test case.

The worry arises because the algorithm never compares intermediate values with each other: gcd(a3,a4) and gcd(a1,a2) never mix, since the algorithm only combines x with the original numbers of the array, never with some other intermediate y produced by a different chain of operations.

But it still works, because you can get the job done in the same 3 operations while changing only a single index:

Step 1: a4 = gcd(a3,a4). Step 2: a4 = gcd(a1,a4). Step 3: a4 = gcd(a4,a2). We reach the goal in 3 steps, and no element except the 4th was ever modified :)

And it is fine that the 4th element keeps changing during the process: if any future operation uses it, then since gcd(small,big) == small, replacing the old (bigger) value at index 4 with the new (smaller) one can only help. Even this check is not really needed — you can simply assume that any reachable value u can be produced by changing just one index and never touching any other index, provided creating u was possible in the first place.
Hope it helps :)
Guys, please, can someone tell me where the bug is in my code? The problem is B. Gellyfish and Baby's Breath. This is my code:
~~~~~
#include <bits/stdc++.h>
#include <memory.h>

#define ll long long
#define yes cout << "YES" << endl;
#define no cout << "NO" << endl;
#define br cout << endl;
#define he cout << "here" << endl;
#define pb push_back
#define mod 1000000000 + 7 // if the answer is negative, add mod
#define starto ios_base::sync_with_stdio(0), cin.tie(0);

using namespace std;

ll mul(ll a, ll b, ll m) { return ((a % m) * (b % m)) % m; }
ll add(ll a, ll b, ll m) { return ((a % m) + (b % m)) % m; }

int main() {
    starto;
    ll T = 2;
    cin >> T;
    ll k = 998244353;
    while (T--) {
        ll n;
        cin >> n;
        ll p[n], q[n], super[n];
        for (int i = 0; i < n; i++) {
            cin >> p[i];
            if (i == 0) super[i] = 1;
            else super[i] = super[i - 1] * 2 % k;
        }
        for (int i = 0; i < n; i++) cin >> q[i];
        ll x1 = 0, x2 = 0, ans1 = 0, ans2 = 0;
        for (int i = 0; i < n; i++) {
            if (p[x1] < p[i]) {
                x1 = i;
            }
            if (q[x2] < q[i]) {
                x2 = i;
            }
            ans1 = super[p[x1]] + super[q[i - x1]] % k;
            ans2 = super[q[x2]] + super[p[i - x2]] % k;
            cout << max(ans1, ans2) << " ";
        }
        br;
    }
}
~~~~~
Does anyone know why this does not work for B?
But it works when I uncomment the two lines. Are both not the same logic?
Hey mate, I had a good time working on the bonus problem associated with 2115A, but I am stuck and need a hint. I assume g = gcd of all elements does not belong to the array, and I replace a[i] with a[i]/g for simplicity. Now, say an element requires a minimum of k operations to reach 1; then the final answer is n + K - 1, where K is the minimum of all such k's. Also, each k is bounded logarithmically, since each gcd call that actually changes a[i] at least halves it (easily proven by contradiction). Now I am trying to use this bound to construct some n log n algorithm, but I am facing a crisis. Thanks!