Thank you for participating!
Special thanks to reirugan for writing the analysis for B, D, E and to Proof_by_QED and Friedrich for helping to optimise F1, leading to the creation of F2.
| Problem | A | B | C1 | C2 | D | E | F1 | F2 |
|---|---|---|---|---|---|---|---|---|
| reirugan | 800 | 1100 | 1200 | 1600 | 1700 | 2000 | 2200 | ¯\_(ツ)_/¯ |
| eric899 | 800 | 1000 | 1300 | ¯\_(ツ)_/¯ | 1600 | ¯\_(ツ)_/¯ | ¯\_(ツ)_/¯ | ¯\_(ツ)_/¯ |
| Proof_by_QED | 800 | 1100 | 1500 | 1900 | 1600 | 2100 | 2300 | 2600 |
| _istil | 800 | 1100 | 1400 | 1800 | 1800 | 2100 | 2000 | 2500 |
| anango | 800 | 1100 | 1300 | 1900 | 1800 | 2200 | 2400 | 2700 |
| FetFot | 800 | 1100 | 1400 | 1800 | 1900 | ¯\_(ツ)_/¯ | 2400 | ¯\_(ツ)_/¯ |
| Intellegent | 800 | 1000 | 1300 | 1900 | 1800 | 2300 | 2200 | 2800 |
| nifeshe | 800 | 1000 | 1200 | 1800 | 1800 | 2400 | 2500 | 2800 |
| satyam343 | 800 | 1000 | 1400 | 1900 | 1800 | 2400 | 2500 | 2700 |
| catgirl | 800 | 1000 | 1300 | 1700 | 1500 | 2200 | 2200 | 2700 |
| wuhudsm | 800 | 1100 | 1400 | 2000 | 1600 | 2200 | ¯\_(ツ)_/¯ | ¯\_(ツ)_/¯ |
| Edeeva | 800 | 900 | 1100 | 1800 | 1700 | 2300 | ¯\_(ツ)_/¯ | ¯\_(ツ)_/¯ |
| Actual rating | 800 | 1000 | 1400 | 2000 | 1900 | 2500 | 2700 | 3300 |
If there is a block of $$$k - 1$$$ consecutive zeros, then the element right after it cannot be prevented from being hit. So any time we see $$$k - 1$$$ zeros followed by a $$$\mathtt{1}$$$, that $$$\mathtt{1}$$$ must be protected.
From Hint 1, let's protect every $$$\mathtt{1}$$$ that fits the pattern. Furthermore, we should always protect the first $$$\mathtt{1}$$$ in the string, so let's do that. It turns out that this is all we need to do. To prove it, think inductively: every element that is not protected always has a $$$\mathtt{1}$$$ among the $$$k - 1$$$ elements before it, and so it will be unchanged. So to solve the problem we can check the distance between each pair of consecutive $$$\mathtt{1}$$$s: if the gap is at least $$$k$$$, add $$$1$$$ to the answer (the first $$$\mathtt{1}$$$ is always counted, since there is no earlier $$$\mathtt{1}$$$). Final time complexity $$$O(n)$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
void solve(){
int n, k;
cin >> n >> k;
string s;
cin >> s;
int ans = 0;
int last = -1e9;
for (int i = 0; i < n; i++){
if (s[i] == '1' && i - last >= k)
ans++;
if (s[i] == '1')
last = i;
}
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
You want odd-indexed values to be small, and even-indexed values to be large. In other words, it is beneficial to apply operation 1 on even indices, and operation 2 on odd indices.
Since operation 1 is free, you may as well apply it to every even index. None of the operations can increase a prefix max, so you can't possibly do any better than just applying operation 1 immediately to every even index.
Let's first apply operation 1 to every even index. Now, we can't further increase any even-indexed values, so we just need to decrease odd-indexed values by using operation 2. All of the odd indices are independent from one another, so let's just consider each one separately. For a fixed odd index $$$i$$$, the number of instances of operation 2 needed to make it smaller than $$$a_{i-1}$$$ and $$$a_{i+1}$$$ is given by $$$\max(0, a_i - \min(a_{i-1}, a_{i+1}) + 1)$$$. (Be careful with indices $$$1$$$ and $$$n$$$ as well; you can treat the out of bounds values as being arbitrarily large, or just take care of them separately.) Now, we sum this value, across all odd indices $$$i$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
void solve(){
int n;
cin >> n;
vector<int> a(n);
for (int &x : a) cin >> x;
vector<int> pre(n);
pre[0] = a[0];
for (int i = 1; i < n; i++)
pre[i] = max(pre[i - 1], a[i]);
ll ans = 0;
for (int i = 0; i < n; i += 2){
int dif = -1;
if (i > 0)
dif = max(dif, a[i] - pre[i - 1]);
if (i < n - 1)
dif = max(dif, a[i] - pre[i + 1]);
ans += dif + 1;
}
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
2154C1 - No Cost Too Great (Easy Version):
You can make an element divisible by $$$2$$$ in $$$1$$$ move.
Can you check if the answer is $$$0$$$ or if the answer is $$$1$$$?
By the result of Hint 1, the answer cannot be more than $$$2$$$, as we can always pick two elements and make them both divisible by $$$2$$$ using at most $$$1$$$ move on each. So now we just need to check whether the answer is $$$0$$$ or $$$1$$$.
To check if the answer is $$$0$$$, we need to determine whether some two elements have a common divisor greater than $$$1$$$. Note that it is sufficient to only check for a common prime divisor. So, for each element, iterate through all its prime divisors and check if there is another element with the same prime divisor.
To check if the answer is $$$1$$$, we can do the same as for $$$0$$$: this time, for each $$$i$$$, we check if $$$a_i + 1$$$ shares a common prime divisor with another element.
Prime divisors can be pre-calculated with the Sieve of Eratosthenes in $$$O(N\log\log N)$$$ where $$$N$$$ is the maximum possible element (so $$$2 \cdot 10^5$$$ for this problem). However, the constraints are lenient enough to allow $$$O(n\sqrt{N})$$$ as well.
The final complexity comes out to $$$O(n\omega(N) + N\log\log N)$$$, where $$$\omega(x)$$$ is used to denote the number of distinct prime divisors of $$$x$$$. Note that the editorial implementation is $$$O(n\omega(N)\log(n\omega(N)) + N\log\log N)$$$ due to the use of the C++ std::map for the sake of readability.
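For reference, a minimal sketch of the trial-division alternative mentioned above (this is not the editorial implementation below, which precomputes prime divisors with a sieve; the function name prime_divisors is purely illustrative):
#include <bits/stdc++.h>
using namespace std;

// Distinct prime divisors of x by trial division, O(sqrt(x)) per call.
// Fast enough here, since every value is at most 2e5 + 1.
vector<int> prime_divisors(int x){
    vector<int> res;
    for (int d = 2; 1LL * d * d <= x; d++){
        if (x % d == 0){
            res.push_back(d);
            while (x % d == 0) x /= d;
        }
    }
    if (x > 1) res.push_back(x); // whatever remains is a prime factor
    return res;
}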
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
const int N = 2e5 + 10;
vector<vector<int>> pfac(N + 1);
void solve(){
int n;
cin >> n;
vector<int> a(n), b(n);
for (int &x : a) cin >> x;
for (int &x : b) cin >> x;
int ans = 2;
map<int, int> cnt;
for (int i = 0; i < n; i++){
for (int x : pfac[a[i]]){
if (cnt[x] > 0)
ans = 0;
cnt[x]++;
}
}
for (int i = 0; i < n; i++){
for (int x : pfac[a[i]])
cnt[x]--;
for (int x : pfac[a[i] + 1]){
if (cnt[x] > 0)
ans= min(ans, 1);
}
for (int x : pfac[a[i]])
cnt[x]++;
}
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
for (int i = 2; i <= N; i++){
if (!pfac[i].empty())
continue;
for (int j = i; j <= N; j += i)
pfac[j].push_back(i);
}
int t;
cin >> t;
while (t--) solve();
}
2154C2 - No Cost Too Great (Hard Version):
Solve the easy version.
We can always make two elements $$$i, j$$$ divisible by $$$2$$$ with no more than $$$b_i + b_j$$$ cost.
By Hint 2, we never need to perform more than one operation on more than one element; that is, at most one element ever receives two or more operations.
Let's say we are trying to make the elements at indices $$$i, j$$$ have a common divisor. From Hints 2 and 3 there are only two cases to consider: either we make $$$0$$$ or $$$1$$$ operations on each of $$$i$$$ and $$$j$$$, or we make more than $$$1$$$ operation on a single element. We never need to do more than $$$1$$$ operation on both elements, because we can always do at least as well by making both elements divisible by $$$2$$$.
So now, to solve the problem on the entire array, we should try to solve both of the cases. For convenience, let's reorder the elements so that $$$b$$$ is in sorted order.
The first case can be solved almost identically to the easy version of the problem. We can always do at least as well as $$$b_1 + b_2$$$; the only other possibility is making just one operation. To check this, for each $$$i$$$ we test whether $$$a_i + 1$$$ has a common prime divisor with another element.
The second case is trickier: we need to find the optimal way to make more than one operation on a single element, and it is probably the crux of the problem. The key thing to notice is that we do not need to check every element; in fact it suffices to check only the element with the smallest value of $$$b_i$$$ ($$$b_1$$$, because $$$b$$$ is sorted). To prove this, argue by contradiction: suppose we made $$$k$$$ ($$$k \ge 2$$$) operations on element $$$x$$$ ($$$x \ge 2$$$). The total cost of this is $$$k \cdot b_x$$$, and we know from Hint 2 that we can achieve $$$b_1 + b_x$$$ or better; since $$$b_1$$$ is the smallest element we have $$$b_1 + b_x \le k \cdot b_x$$$, and thus we do not need to consider this case!
So now we need to quickly check what is the optimal way to use multiple operations on the first element. To do this we can iterate through all the prime divisors of the other elements and determine the minimum number of operations to make $$$a_1$$$ divisible by it.
The final time complexity comes out to $$$O(n\omega(N) + N\log\log N)$$$, where $$$\omega(x)$$$ is used to denote the number of distinct prime divisors of $$$x$$$. Note that the editorial implementation is $$$O(n\omega(N)\log(n\omega(N)) + N\log\log N)$$$ due to the use of the C++ std::map for the sake of readability.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
const int N = 2e5 + 10;
vector<vector<int>> pfac(N + 1);
void solve(){
int n;
cin >> n;
vector<int> a(n), b(n);
for (int &x : a) cin >> x;
for (int &x : b) cin >> x;
vector<int> ord(n);
iota(ord.begin(), ord.end(), 0);
sort(ord.begin(), ord.end(), [&](int x, int y) -> bool{
return b[x] < b[y];
});
int ans = b[ord[0]] + b[ord[1]];
map<int, int> cnt;
for (int i = 0; i < n; i++){
for (int x : pfac[a[i]]){
if (cnt[x] > 0)
ans = 0;
cnt[x]++;
}
}
for (int i = 0; i < n; i++){
for (int x : pfac[a[i]])
cnt[x]--;
for (int x : pfac[a[i] + 1]){
if (cnt[x] > 0)
ans= min(ans, b[i]);
}
for (int x : pfac[a[i]])
cnt[x]++;
}
int idx = ord[0];
vector<int> check;
for (int i = 0; i < n; i++){
if (i == idx)
continue;
for (int x : pfac[a[i]])
check.push_back(x);
}
for (int x : check){
int times = x - (a[idx] % x);
if (times == x)
times = 0;
ans = min(1LL * ans, 1LL * times * b[idx]);
}
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
for (int i = 2; i <= N; i++){
if (!pfac[i].empty())
continue;
for (int j = i; j <= N; j += i)
pfac[j].push_back(i);
}
int t;
cin >> t;
while (t--) solve();
}
Destroying a leaf will never split the tree.
Trees are bipartite. In particular, you can root the tree arbitrarily, then partition vertices into those with even depth and those with odd depth. All edges connect an even depth vertex to an odd depth vertex.
We would like to decrease the size of the tree by one vertex using no more than three moves. To do so, we will pick an arbitrary leaf $$$v$$$ and destroy it. However, we must make sure that we do not destroy the vertex we are standing on, and that we do not have two consecutive instances of the second instruction. To do so, let's root the tree arbitrarily, and color vertices with even depth red and vertices with odd depth blue. We maintain the color of the vertex we are standing on, noting that every instance of instruction 1 flips the color. Now, if we are standing on the same color as $$$v$$$, we apply operation 1 once. Otherwise, we apply operation 1 twice. Thus, we are not currently standing on leaf $$$v$$$, so now we can destroy $$$v$$$. Since we applied at least one instance of instruction 1, we do not have two consecutive instances of instruction 2. By iterating this process until the only remaining vertex is vertex $$$n$$$, we guarantee that we have reached vertex $$$n$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
void solve(){
int n;
cin >> n;
vector<vector<int>> g(n);
for (int i = 0; i < n - 1; i++){
int u, v;
cin >> u >> v;
u--;
v--;
g[u].push_back(v);
g[v].push_back(u);
}
vector<int> dep(n), par(n), ch(n);
auto dfs = [&](auto self, int c, int p) -> void{
if (p != -1)
dep[c] = dep[p] + 1;
par[c] = p;
for (int x : g[c]){
if (x == p) continue;
self(self, x, c);
ch[c]++;
}
};
dfs(dfs, n - 1, -1);
array<vector<int>, 2> leaves;
for (int i = 0; i < n; i++){
if (ch[i] == 0)
leaves[dep[i] & 1].push_back(i);
}
vector<array<int, 2>> ans;
int col = dep[0] & 1;
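// col = depth parity (colour) of the vertex the cat currently stands on; each iteration destroys one leaf of the opposite colour, walking an extra step first if no such leaf exists.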
for (int i = 0; i < n - 1; i++){
if (leaves[col ^ 1].empty()){
ans.push_back({1, -1});
col ^= 1;
}
int nxt = leaves[col ^ 1].back();
leaves[col ^ 1].pop_back();
ans.push_back({2, nxt});
int p = par[nxt];
ch[p]--;
if (ch[p] == 0)
leaves[dep[p] & 1].push_back(p);
ans.push_back({1, -1});
col ^= 1;
}
cout << ans.size() << "\n";
for (auto [x, y] : ans){
if (x == 1)
cout << "1\n";
else
cout << "2 " << y + 1 << "\n";
}
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
Can you implement the checker?
Without loss of generality, assume the array is sorted. Let's fix $$$x = 2y+1$$$, and let's also fix the median of a given operation $$$a_i$$$, for some $$$y+1 \leq i\leq n-y$$$. Now, we see that the best subsequence to choose is $$$a_1, a_2, \dots, a_y, a_i, a_{i+1}, \dots, a_{i+y}$$$, because we must choose $$$y$$$ values to the left of $$$a_i$$$ and $$$y$$$ values to the right of $$$a_i$$$, and we would like to change the smallest values possible.
Suppose you have applied one operation with median $$$a_i$$$ and another operation with median $$$a_j$$$, for $$$y+1 \leq i \lt j \leq n-y$$$. Now, you have increased up to $$$\min(2y, j-1)$$$ of the smallest values to either $$$a_i$$$ or $$$a_j$$$, since you can increase up to $$$y$$$ values smaller than the median each operation. However, you have also decreased $$$y$$$ values after $$$a_i$$$, and $$$y$$$ values after $$$a_j$$$. Instead, you might as well have just applied two operations with median $$$a_j$$$, which allows you to increase $$$\min(2y, j-1)$$$ of the smallest values to $$$a_j$$$, and decrease only the $$$y$$$ values after $$$a_j$$$. Therefore, there exists some optimal sequence of operations where we use the same median across all operations.
By the result of Hint 2, there exists some optimal sequence of operations where we use the same median across all operations. Let's fix the median at $$$a_i$$$. Now, if we choose $$$x = 2y+1$$$ for some $$$0\leq y \leq \min(i-1, n-i)$$$, we are able to increase the values of $$$a_1, a_2, \dots, a_{\min(ky, i-1)}$$$ to $$$a_i$$$, but we must also decrease the values of $$$a_{i+1}, a_{i+2}, \dots, a_{i+y}$$$ to $$$a_i$$$.
The benefits of increasing $$$y$$$ are that we can increase more values, but notice that these benefits are getting less and less, because we are increasing values that are already larger. For example, the benefit of increasing $$$y$$$ from $$$0$$$ to $$$1$$$ is that now we can increase the values $$$a_1, a_2, \dots, a_k$$$, but the benefit from increasing $$$y$$$ from $$$1$$$ to $$$2$$$ is that now we can increase the values $$$a_{k+1}, a_{k+2}, \dots, a_{2k}$$$; increasing $$$y$$$ from $$$1$$$ to $$$2$$$ is not as beneficial as increasing $$$y$$$ from $$$0$$$ to $$$1$$$, because the values we are increasing are already relatively big. (There's a small case to think about, when $$$ky$$$ first exceeds $$$i-1$$$, but note that the number of values you can increase is also monotonically decreasing, so this doesn't cause any issues.) The same reasoning goes for the losses. The losses of increasing $$$y$$$ are that we are forced to decrease more values, and these losses are getting greater and greater. If we increase $$$y$$$ from $$$0$$$ to $$$1$$$, we are only forced to decrease $$$a_{i+1}$$$, but if we increase $$$y$$$ from $$$1$$$ to $$$2$$$, we now have to decrease $$$a_{i+2}$$$, which is worse because $$$a_{i+2}$$$ is relatively big. In other words, our net gains (that is, the benefits minus the losses) are monotonically decreasing.
We want to keep incrementing $$$y$$$ as long as our net gains are positive, then stop incrementing $$$y$$$ once our net gains become negative. Since our net gains are monotonically decreasing, we can binary search for this point. (Alternative solutions might use ternary search instead.) So we iterate over all choices of $$$i$$$, then binary search for the optimal choice of $$$y$$$, which in total takes $$$O(n\log{n})$$$ time. Note that all computations can be done quickly with prefix sums.
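Concretely, with the array sorted and $$$1$$$-indexed and the median fixed at $$$a_i$$$, one way to write the condition checked at each binary-search step (a sketch of what the code below evaluates with prefix sums) is: keep increasing $$$y$$$ while $$$\sum_{j=(y-1)k+1}^{\min(yk,\, i-1)} (a_i - a_j) \ge a_{i+y} - a_i$$$, i.e. while the newly gained increases on the small values still outweigh the forced decrease of $$$a_{i+y}$$$.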
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
void solve(){
int n, k;
cin >> n >> k;
vector<int> a(n);
for (int &x : a) cin >> x;
sort(a.begin(), a.end());
vector<ll> pre(n);
pre[0] = a[0];
for (int i = 1; i < n; i++)
pre[i] = pre[i - 1] + a[i];
ll ans = pre[n - 1];
auto query = [&](int l, int r) -> ll{
ll res = pre[r];
if (l > 0)
res -= pre[l - 1];
return res;
};
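// For each candidate median a[i] (0-indexed, a sorted ascending), binary search the largest y whose marginal gain is still non-negative.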
for (int i = 0; i < n; i++){
int l = 1, r = min(i, n - i - 1) + 1;
while (l < r){
int mid = (l + r) / 2;
int left = min(1LL * (mid - 1) * k, 1LL * i);
int right = min(1LL * mid * k - 1, 1LL * i);
ll cost = query(left, right) + a[i + mid];
if (cost <= 1LL * a[i] * (right - left + 1 + 1))
l = mid + 1;
else
r = mid;
}
l--;
if (l == 0)
continue;
int right = min(1LL * l * k - 1, 1LL * i);
ll len = right + 1 + l;
ll cost = len * a[i] - (query(0, right) + query(i + 1, i + l));
ans = max(ans, pre[n - 1] + cost);
}
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
2154F1 - Bombing (Easy Version):
For every valid permutation we can always determine the index $$$k$$$ used as the split in the riffle shuffle operation, unless it is the sorted permutation.
Find the number of riffle shuffles for each split.
Let's prove Hint 1. For a valid permutation $$$q$$$, consider the smallest $$$k$$$ such that $$$k$$$ appears after $$$k + 1$$$ in $$$q$$$. If $$$q$$$ is a riffle shuffle of $$$[1, 2, \ldots, n]$$$, then $$$k$$$ must be the split used. The only permutation where no such $$$k$$$ exists is the one that is already sorted.
Now let's try to find the number of solutions for a specific split point $$$k$$$ in sub-quadratic time. We can think of it like this: colour all the numbers less than or equal to $$$k$$$ blue and the numbers bigger than $$$k$$$ red. After we replace all the $$$-1$$$ in $$$p$$$, the following must hold: for each blue element $$$x$$$, exactly $$$x$$$ elements in its prefix (including itself) are blue, and for each red element $$$x$$$, exactly $$$x - k$$$ elements in its prefix are red. So we just need to count the number of ways to insert red and blue elements so that this condition is satisfied. To do this, iterate over each index that is not $$$-1$$$, insert the number of elements of its colour that it demands into the free cells before it, and fill the remaining free cells in its prefix with the other colour. This runs in $$$O(n)$$$ worst case if we can compute the required binomial coefficients in $$$O(1)$$$, which can be done by pre-calculating factorials.
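In formula form (a restatement of the procedure above, matching the reference code below): suppose a fixed (non-$$$-1$$$) element of value $$$v$$$ comes right after a run of $$$s$$$ cells equal to $$$-1$$$, and let $$$v'$$$ be the largest value of the same colour placed so far ($$$v' = 0$$$ for blue and $$$v' = k$$$ for red if none has been placed). Then exactly $$$v - v' - 1$$$ of those $$$s$$$ cells must receive the missing values of that colour, contributing a factor of $$$\binom{s}{v - v' - 1}$$$, and the remaining cells of the run are forced to hold the next values of the other colour; the cells after the last fixed element are handled the same way.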
So we can just iterate over each split and sum the counts, in $$$O(n^2)$$$ total, making sure to remember that we may overcount the sorted permutation.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
const int MOD = 998244353;
template<ll mod> // template was not stolen from https://mirror.codeforces.com/profile/SharpEdged
struct modnum {
static constexpr bool is_big_mod = mod > numeric_limits<int>::max();
using S = conditional_t<is_big_mod, ll, int>;
using L = conditional_t<is_big_mod, __int128, ll>;
S x;
modnum() : x(0) {}
modnum(ll _x) {
_x %= static_cast<ll>(mod);
if (_x < 0) { _x += mod; }
x = _x;
}
modnum pow(ll n) const {
modnum res = 1;
modnum cur = *this;
while (n > 0) {
if (n & 1) res *= cur;
cur *= cur;
n /= 2;
}
return res;
}
modnum inv() const { return (*this).pow(mod-2); }
modnum& operator+=(const modnum& a){
x += a.x;
if (x >= mod) x -= mod;
return *this;
}
modnum& operator-=(const modnum& a){
if (x < a.x) x += mod;
x -= a.x;
return *this;
}
modnum& operator*=(const modnum& a){
x = static_cast<L>(x) * a.x % mod;
return *this;
}
modnum& operator/=(const modnum& a){ return *this *= a.inv(); }
friend modnum operator+(const modnum& a, const modnum& b){ return modnum(a) += b; }
friend modnum operator-(const modnum& a, const modnum& b){ return modnum(a) -= b; }
friend modnum operator*(const modnum& a, const modnum& b){ return modnum(a) *= b; }
friend modnum operator/(const modnum& a, const modnum& b){ return modnum(a) /= b; }
friend bool operator==(const modnum& a, const modnum& b){ return a.x == b.x; }
friend bool operator!=(const modnum& a, const modnum& b){ return a.x != b.x; }
friend bool operator<(const modnum& a, const modnum& b){ return a.x < b.x; }
friend ostream& operator<<(ostream& os, const modnum& a){ os << a.x; return os; }
friend istream& operator>>(istream& is, modnum& a) { ll x; is >> x; a = modnum(x); return is; }
};
using mint = modnum<MOD>;
struct Combi{
vector<mint> _fac, _ifac;
int n;
Combi() {
n = 1;
_fac.assign(n + 1, 1);
_ifac.assign(n + 1, 1);
}
void check_size(int m){
int need = n;
while (need < m) need *= 2;
m = need;
if (m <= n) return;
_fac.resize(m + 1);
_ifac.resize(m + 1);
for (int i = n + 1; i <= m; i++) _fac[i] = i * _fac[i - 1];
_ifac[m] = _fac[m].inv();
for (int i = m - 1; i > n; i--) _ifac[i] = _ifac[i + 1] * (i + 1);
n = m;
}
mint fac(int m){
check_size(m);
return _fac[m];
}
mint ifac(int m){
check_size(m);
return _ifac[m];
}
mint ncr(int n, int r){
if (n < r || r < 0) return 0;
return fac(n) * ifac(n - r) * ifac(r);
}
mint npr(int n, int r){
if (n < r || r < 0) return 0;
return fac(n) * ifac(n - r);
}
} comb;
void solve(){
int n;
cin >> n;
vector<int> a(n);
for (int &x : a) {
cin >> x;
if (x != -1)
x--;
}
mint ans = 0;
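// Try every split i: values 0..i (0-indexed) form the left block, values i+1..n-1 the right block.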
for (int i = 0; i < n - 1; i++){
mint res = 1;
int spaces = 0;
int less = -1;
int more = i;
for (int j = 0; j < n; j++){
if (a[j] == -1){
spaces++;
continue;
}
if (a[j] <= i){
int need = a[j] - less - 1;
res *= comb.ncr(spaces, need);
more += spaces - need;
less = a[j];
}
if (a[j] > i){
int need = a[j] - more - 1;
res *= comb.ncr(spaces, need);
less += spaces - need;
more = a[j];
}
spaces = 0;
}
res *= comb.ncr(spaces, i - less);
ans += res;
}
bool sorted = true;
for (int i = 0; i < n; i++){
if (a[i] != i && a[i] != -1)
sorted = false;
}
if (sorted)
ans -= n - 2;
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
2154F2 - Bombing (Hard Version):
Solve the easy version.
If $$$p_i \lt i$$$ we know that $$$p_i$$$ must belong to the left side of the split, similarly if $$$p_i \gt i$$$ we know that $$$p_i$$$ must belong to the right side of the split.
If a permutation $$$p$$$ is a riffle shuffle of $$$[1,2,\ldots,n]$$$, then it will have three regions described by two integers $$$l, r$$$ as follows:
a prefix where for all $$$i$$$ ($$$1 \le i \le l$$$) we have $$$p_i = i$$$.
a suffix where for all $$$i$$$ ($$$r \le i \le n$$$) we have $$$p_i = i$$$.
a middle region where for all $$$i$$$ ($$$l \lt i \lt r$$$) we have $$$p_i \ne i$$$.
See that the first region will all belong to the left side of the split, and the last region will all belong to the right side of the split. Additionally, we can deduce the side of the entire middle region from Step 2. There is a single edge case to this where $$$p$$$ is the sorted permutation.
Suppose there is an index $$$i$$$ where $$$a_i \ne i$$$ and $$$a_i \ne -1$$$, or in other words, the sorted permutation is not valid. In this case, from Step 3, we can deduce the side of every element $$$i$$$ where $$$a_i \ne -1$$$.
For the opposite case to Step 4 (i.e, the case where the sorted permutation is valid), can you think of a way to solve this?
Can you find a closed form formula for the case where $$$p_i = -1$$$ for all $$$i$$$?
There is almost a bijection between two-colourings of the array and valid riffle shuffles of $$$[1,2,\ldots,n]$$$; the sole edge case is the sorted permutation, which is counted $$$n - 1$$$ times. Accounting for this, we obtain the formula $$$2^n - n$$$.
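For completeness, here is one way to arrive at this count (assuming, as in the easy version, that the split $$$k$$$ ranges over $$$1 \le k \le n - 1$$$): a riffle shuffle is determined by the split $$$k$$$ together with the set of positions that receive the left block, so there are $$$\sum_{k=1}^{n-1}\binom{n}{k} = 2^n - 2$$$ (split, interleaving) pairs. Every permutation other than the sorted one arises from exactly one pair, while the sorted permutation arises from one pair for each of the $$$n - 1$$$ splits, so the number of distinct valid permutations is $$$2^n - 2 - (n-1) + 1 = 2^n - n$$$.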
Now that we have a formula, from Step 3 we should just find the number of ways to fill each run of consecutive $$$-1$$$, leaving everything outside as the prefix and suffix as discussed in Hint 3. Solving a consecutive run of $$$-1$$$ is equivalent to solving the case in Step 5.1 on a smaller array. So we just solve each of these runs and sum to get the answer for this case.
Now let's solve the same case as Step 4. For each $$$k$$$, consider a consecutive run of $$$-1$$$ and try to find the number of ways to fill it; let's look for some nice properties in a few cases. Let me introduce some notation; I will classify each gap in the following way:
an $$$l \rightarrow l$$$ gap means the element immediately before the gap belongs to the left side of the split and the element after belongs to the left side of the split.
an $$$l \rightarrow r$$$ gap means the element immediately before the gap belongs to the left side of the split and the element after belongs to the right side of the split.
similar notation for $$$r \rightarrow l$$$ and $$$r \rightarrow r$$$.
Now notice that for any $$$k$$$, the number of ways to fill an $$$l \rightarrow l$$$ or $$$r \rightarrow r$$$ gap will always be the same.
For the $$$l \rightarrow r$$$ and $$$r \rightarrow l$$$ gaps, we have to get a bit more creative. The key thing to notice is that each $$$k$$$ will demand a unique number of elements from the left side of the split to be placed inside the gap. Now since the number of left elements must be between $$$0$$$ and the size of the gap, we can restrict the number of $$$k$$$ we need to consider to no more than the size of the gap!
For each gap, calculate this range of valid $$$k$$$, and take the intersection of all the ranges. Now, if we use an implementation very similar to the easy version of the problem but only consider $$$k$$$ within this range, and skip $$$l \rightarrow l$$$ and $$$r \rightarrow r$$$ gaps by calculating their contribution separately, the problem is solved. If, say, the range of valid $$$k$$$ has size $$$x$$$, then the number of $$$l \rightarrow r$$$ and $$$r \rightarrow l$$$ gaps is no more than approximately $$$n/x$$$, so the final time complexity is $$$O(x \cdot n/x)$$$, which is $$$O(n)$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
const int MOD = 998244353;
template<ll mod> // template was not stolen from https://mirror.codeforces.com/profile/SharpEdged
struct modnum {
static constexpr bool is_big_mod = mod > numeric_limits<int>::max();
using S = conditional_t<is_big_mod, ll, int>;
using L = conditional_t<is_big_mod, __int128, ll>;
S x;
modnum() : x(0) {}
modnum(ll _x) {
_x %= static_cast<ll>(mod);
if (_x < 0) { _x += mod; }
x = _x;
}
modnum pow(ll n) const {
modnum res = 1;
modnum cur = *this;
while (n > 0) {
if (n & 1) res *= cur;
cur *= cur;
n /= 2;
}
return res;
}
modnum inv() const { return (*this).pow(mod-2); }
modnum& operator+=(const modnum& a){
x += a.x;
if (x >= mod) x -= mod;
return *this;
}
modnum& operator-=(const modnum& a){
if (x < a.x) x += mod;
x -= a.x;
return *this;
}
modnum& operator*=(const modnum& a){
x = static_cast<L>(x) * a.x % mod;
return *this;
}
modnum& operator/=(const modnum& a){ return *this *= a.inv(); }
friend modnum operator+(const modnum& a, const modnum& b){ return modnum(a) += b; }
friend modnum operator-(const modnum& a, const modnum& b){ return modnum(a) -= b; }
friend modnum operator*(const modnum& a, const modnum& b){ return modnum(a) *= b; }
friend modnum operator/(const modnum& a, const modnum& b){ return modnum(a) /= b; }
friend bool operator==(const modnum& a, const modnum& b){ return a.x == b.x; }
friend bool operator!=(const modnum& a, const modnum& b){ return a.x != b.x; }
friend bool operator<(const modnum& a, const modnum& b){ return a.x < b.x; }
friend ostream& operator<<(ostream& os, const modnum& a){ os << a.x; return os; }
friend istream& operator>>(istream& is, modnum& a) { ll x; is >> x; a = modnum(x); return is; }
};
using mint = modnum<MOD>;
struct Combi{
vector<mint> _fac, _ifac;
int n;
Combi() {
n = 1;
_fac.assign(n + 1, 1);
_ifac.assign(n + 1, 1);
}
void check_size(int m){
int need = n;
while (need < m) need *= 2;
m = need;
if (m <= n) return;
_fac.resize(m + 1);
_ifac.resize(m + 1);
for (int i = n + 1; i <= m; i++) _fac[i] = i * _fac[i - 1];
_ifac[m] = _fac[m].inv();
for (int i = m - 1; i > n; i--) _ifac[i] = _ifac[i + 1] * (i + 1);
n = m;
}
mint fac(int m){
check_size(m);
return _fac[m];
}
mint ifac(int m){
check_size(m);
return _ifac[m];
}
mint ncr(int n, int r){
if (n < r || r < 0) return 0;
return fac(n) * ifac(n - r) * ifac(r);
}
mint npr(int n, int r){
if (n < r || r < 0) return 0;
return fac(n) * ifac(n - r);
}
} comb;
void solve(){
int n;
cin >> n;
vector<int> a(n);
for (int &x : a) {
cin >> x;
if (x != -1)
x--;
}
bool fixed = true;
for (int i = 0; i < n; i++)
if (a[i] != -1 && a[i] != i)
fixed = false;
if (fixed){ //case where all a[i] != -1 has a[i] = i
vector<mint> prod(n + 1);
prod[0] = 1;
for (int i = 1; i <= n; i++)
prod[i] = prod[i - 1] * 2;
int last = -1;
mint ans = 0;
for (int i = 0; i <= n; i++){
if (i == n || a[i] != -1){
int len = i - last - 1;
ans += prod[len] - len - 1;
last = i;
}
}
ans += 1;
cout << ans << "\n";
return;
}
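// General case: some fixed element differs from its index, so the split side of every fixed element can be deduced.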
vector<bool> side(n);
bool found = false;
for (int i = 0; i < n; i++){
if (a[i] == -1)
continue;
if (a[i] < i)
side[i] = false; // left side
if (a[i] > i)
side[i] = true; // right side
if (a[i] == i)
side[i] = found;
if (a[i] != i)
found = true;
}
int last = -1;
vector<array<int, 2>> ranges;
for (int i = 0; i < n; i++){
if (a[i] == -1)
continue;
if (last == -1){
last = i;
continue;
}
if (side[i] && !side[last]){ // left -> right gap
int mn = last - a[last]; // min right already placed
int mx = mn + i - last; // max right already placed
ranges.push_back({a[i] - mx, a[i] - mn + 1});
}
if (!side[i] && side[last]){ // right -> left gap
int mx = i - a[i]; // max right placed by last
int mn = mx - (i - last); // min right placed by last
ranges.push_back({a[last] - mx, a[last] - mn + 1});
}
last = i;
}
int l = 0, r = n - 1;
for (auto [u, v] : ranges){
l = max(l, u);
r = min(r, v);
}
mint coef = 1;
vector<mint> score(n); // score[i] = contribution when splitting at i
for (int i = l; i <= r; i++)
score[i] = 1;
last = -1;
for (int i = 0; i < n; i++){
if (a[i] == -1)
continue;
bool prev = false;
if (last != -1)
prev = side[last];
int spaces = i - last - 1;
if (side[i] && prev || !side[i] && !prev){
int cnt = 0;
if (last == -1)
cnt = a[i];
else
cnt = a[i] - a[last] - 1;
coef *= comb.ncr(spaces, cnt);
}
if (side[i] && !prev){ // left -> right
for (int j = l; j <= r; j++){
int placed = 0;
if (last != -1)
placed = last - a[last]; // how many right i have placed by last
int need = a[i] - (j + placed); // j + placed is the next one i need to place after last
score[j] *= comb.ncr(spaces, need);
}
}
if (!side[i] && prev){ // right -> left
for (int j = l; j <= r; j++){
int big = i - a[i] + j - 1; // last right to be placed by i
int need = big - a[last];
score[j] *= comb.ncr(spaces, need);
}
}
last = i;
}
for (int j = l; j <= r; j++){
if (!side[last]){
score[j] *= comb.ncr(n - last - 1, j - a[last] - 1);
}
else{
score[j] *= comb.ncr(n - last - 1, n - 1 - a[last]);
}
}
mint ans = 0;
for (int i = 0; i < n; i++)
ans += score[i];
ans *= coef;
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
Bonus editorial by Friedrich:
Can you uniquely determine, for each element $$$a[i]$$$, which subsequence it belongs to?
For which values of $$$k$$$ does a solution exist?
Let the subsequence on indices $$$1$$$ to $$$k$$$ be $$$L$$$ and the subsequence on indices $$$k+1$$$ to $$$n$$$ be $$$R$$$.
For every $$$a[i]\in L$$$ we have $$$a[i]-i\le 0$$$, since there must be exactly $$$a[i]-1$$$ elements of $$$L$$$ before position $$$i$$$. Symmetrically, for every $$$a[i]\in R$$$ we have $$$a[i]-i\ge 0$$$.
From the same observation, if $$$a[i]\in L$$$ then there must be exactly $$$i-a[i]$$$ indices $$$j \lt i$$$ with $$$a[j]\in R$$$. This lets us decide the side of each element except when $$$a[i]=i$$$, which is ambiguous. Once there exists some $$$a[j]\ne j$$$, the counts of elements in $$$L$$$ and $$$R$$$ to the left and right are constrained and cannot both decrease, so within any interval between fixed points $$$a[i]=i$$$ both types $$$L$$$ and $$$R$$$ must appear unless the whole interval consists of fixed points.
First case. There are positions with $$$a[i]=i$$$. Consider the gaps between consecutive fixed points. In a gap of size $$$m$$$ between indices $$$i$$$ and $$$j$$$ with $$$a[i]=i$$$ and $$$a[j]=j$$$, we can count the valid assignments of the elements $$$i+1,\dots,j-1$$$ to $$$L$$$ or $$$R$$$. Summing over all possible splits gives $$$\binom{m}{0}+\binom{m}{1}+\cdots+\binom{m}{m-1}=2^{m}-1.$$$ To avoid overcounting the fully ordered arrangement, for each gap of size $$$m$$$ we count $$$2^{m}-m-1$$$, and then add $$$1$$$ at the end for the completely ordered sequence.
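As a quick sanity check (an added example, not part of the original argument): for a gap of size $$$m = 2$$$ holding the values $$$v+1, v+2$$$, the orderings consistent with some split are $$$[v+1, v+2]$$$ and $$$[v+2, v+1]$$$, so there is exactly $$$1 = 2^2 - 2 - 1$$$ non-sorted filling, matching $$$2^m - m - 1$$$.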
Second case. Every element can be assigned unambiguously to $$$L$$$ or $$$R$$$. Add a sentinel $$$a[0]=0$$$ and assign it to $$$L$$$. Track $$$n+1$$$ dynamic states $$$(i,j)$$$ with initial states $$$(1,1),(1,2),\dots,(1,n+1)$$$, where $$$i$$$ is the next index needed in $$$L$$$ and $$$j$$$ is the next index needed in $$$R$$$. Each state stores the number of ways to reach it, initially $$$1$$$.
Updating every state at every step would be too slow, so only update at gap boundaries. There are two subcases for a gap of size $$$m$$$:
The endpoints of the gap belong to the same side. For each state, we know how many of the $$$m$$$ elements, say $$$k$$$, must go to that side, so we multiply the state’s count by $$$\binom{m}{k}$$$ and advance $$$(i,j)$$$ accordingly by $$$k$$$ and $$$m-k$$$.
The endpoints belong to different sides. First apply any deferred updates. Now either $$$i$$$ or $$$j$$$ is identical across all current states because the last element before the gap was forced to $$$L$$$ or $$$R$$$. For a gap of size $$$m$$$ there are at most $$$m$$$ next states, and for each we can compute the number of shuffles since the counts assigned to $$$L$$$ and $$$R$$$ are fixed. This gives $$$O(m)$$$ work per gap, hence $$$O(n)$$$ overall.
This completes the counting in $$$O(n)$$$ time.
import sys
input = sys.stdin.readline
MOD = 998244353
fac_arr = [1]
finv_arr = [1]
def enlarge_fac():
old_size = len(fac_arr)
new_size = old_size * 2
for i in range(old_size, new_size + 1):
fac_arr.append((fac_arr[-1] * i) % MOD)
ifac = [pow(fac_arr[-1], -1, MOD)]
for i in range(new_size, old_size,-1):
ifac.append((ifac[-1] * i) % MOD)
ifac.reverse()
for x in ifac:
finv_arr.append(x)
def fac(n):
while n >= len(fac_arr): enlarge_fac()
return fac_arr[n]
def finv(n):
while n >= len(finv_arr): enlarge_fac()
return finv_arr[n]
def binom(n, k):
if k < 0 or k > n: return 0
return ((fac(n) * finv(k)) % MOD * finv(n - k)) % MOD
for _ in range(int(input())):
n = int(input())
a = list(map(int, input().split())) + [n+1]
if max([0 if a[i] == -1 else abs(a[i]-i-1) for i in range(n)]) == 0:
sol = 1
last = 0
for i in a:
if i >= 0:
sol += pow(2, i - last - 1, MOD) - 1 - (i - last - 1)
last = i
sol %= MOD
print(sol)
else:
S = [(0,i,1) for i in range(1,n)]
last = -1
left = 1
eq = 0
pl = 0
mult = 1
lastv = 0
nd = 0
for idx in range(n+1):
if a[idx] == -1: continue
gap = idx - last - 1
at = 0
if idx + 1 > a[idx] or (idx + 1 == a[idx] and eq == 0):
if idx + 1 > a[idx]: eq = 1
at = 1
else: eq = 1
TMP = []
if at ^ left:
for i,j,k in S:
if left: j += pl; i = nd
else: i += pl; j = nd
if left: rev = a[idx] - j - 1
else: rev = a[idx] - i - 1
if left: i += gap - rev
else: j += gap - rev
if at: i = a[idx]
else: j = a[idx]
f = binom(gap,rev)
if f: TMP.append((i,j,(k*f)%MOD))
left ^= 1
pl = 0
lastv = a[idx]
nd = a[idx]
S = TMP
else:
rev = a[idx] - lastv - 1
pl += gap - rev
nd = a[idx]
lastv = a[idx]
mult *= binom(gap,rev)
mult %= MOD
last = idx
print(((sum([x[-1] for x in S])%MOD)*mult)%MOD)
first.
lol
Wowee C !!!
FAST EDITORIAL! C + TLE :)
fast editorial!
Let's say we sort the array by increasing value of the 'b' array.
then...
"The second case is the more tricky case, we need to check the optimal way to make more than one operations for each element, and it is probably the crux of the problem." The above statement can be optimised further: we should never perform more than 1 operation on elements where b[i] != b[1] (1-indexing for the array), because it is never optimal to perform two operations costing b[i]; we can always perform one b[1] and one b[i] instead, which is at most 2 * b[i] (because the array is already sorted by 'b' values).
I figured out that after sorting, b[0]+b[1] will be the answer if we can't do better with b[0] alone... but I was confused: what if there are multiple elements with the same cost to update? I thought if I check all of them then it will give TLE, because I was using a set to maintain all the prime divisors and checking the closest prime multiple which I can get for the corresponding a[i] of minimum b... but I can't do that for all n numbers, right, if all costs are the same... can you please help me...
actually, if there are multiple elements with the same cost at the beginning... then you will need no more than 2 operations of cost b[1]... think about it... 2 operations are always sufficient...
So, now what remains (with 2 operations we are certain we will get it, as established above) is to find whether we can do it with only one operation, as in the first part... that is simple, you can do it easily by maintaining the prime factors of each number and checking through a map whether the corresponding entry exists or not.
thank you
I was cooked by C2
I was cooked by C1
I was cooked by B.
I was cooked by A
I was cooked by Announcement
I was cooked.
I was
I
It feels like C2 > D
imo, C2 just had some basic mathematical reasoning.
Amazing contest. I think AI can't solve any question after C1; that's why this contest feels fair.
Can't agree with a python coder
skill issue
absolute cinema
C2 is so good!
I agree with you. C2 is interesting
+
C1 was a nice problem. Unfortunately I got a TLE on my submission; I used the same approach as the editorial but didn't precompute the primes. Isn't N * sqrt(N) good enough?
You're making a new array of 2e5 size in each testcase.
Thanks for pointing that out. What is a good enough upper bound for this? Is it maxPrime we encounter while factorising?
I would make sure that the array is global, and that after you're done with it, you make sure that all the values that were set, are set back to 0.
Thank you! I declared the array globally and used fill(all(),0) and it was accepted.
correct submission
You're kind of lucky that it worked, because the fill is so fast. If there were even more tests per file, it could still be too slow. A better way is to do the loop over prime factors twice, once to set, and once to put everything back to 0 again. Something like I did in my code with the cnt array: 344668807
Filling the array with 0 takes O(N) in every test case, so shouldn't it give TLE? Because in every test case we go from 0 to 2e5, and there are 1e4 test cases.
C2 was such a nice problem !!!
Loved implementation of A on Minecraft!!
My C1 FSTd on TC 21 (Runtime Error). Does anybody know what the error might have been? I must be overlooking something fairly obvious.
Same bruh but what could be the reason
You have initialized your smallest_pf_fact only up to 2e5, but what if arr[i] = 2e5? Then, as per your code, val2 = arr[i] + 1, i.e. you will access smallest_pf_fact[2e5+1], which will return 0 as it's uninitialized; then you perform division by that value (i.e. 0), which will obviously give an RTE!
Made the same mistake, failed in the main tests.
It's most likely an out-of-bounds error from cases where $$$a_i = 2 \cdot 10^5$$$; some participants were accessing the element at index $$$2 \cdot 10^5 + 1$$$ in an array of size $$$2 \cdot 10^5$$$.
There are tests with $$$a_i = 2 \cdot 10^5$$$, but unfortunately I forgot to consider that participants might break early upon finding an optimal solution and not get a Runtime Error on these cases.
I apologise to anyone who was affected by this.
I would have got my rank upgraded, but anyway, the contest was actually goated. And I could have avoided checking the evens, as they can be paired up with anything in 1 operation; in that case too this can pass.
First, thank you for such a great contest. Truly enjoyed the questions! Did you mean that test case 21 was added after the contest or during the contest?
Hacks made by participants during the round are added to system tests, but not the pretests. Test case 21 is a result of this, so it was added after the contest.
No worries. It was a great contest anyways!
nah that minecraft implementation is funny and crazy
Last week in the LeetCode biweekly, in problem D, I got to know about the concept of SPF (smallest prime factor), and today I used it in C1. Hoping for a +ve delta.
Man! Sometimes just 1 number can mess up your contest.
Contest Submission: 344688102
Normal Submission: 344750449
Btw these are my friend's submissions; it messed up his contest rank.
They are not the same code, check the value of global 'N'
Same thing happened with me!
Do you know the reason?
PS: nvm, understood
Yes, my smallest prime factor array was working for numbers up to 2e5, but in my code I checked a[i]+1, which in the worst case can be 2e5+1, causing a runtime error. My rank just got doubled because of this silly mistake.
hey in the editorial for C1...
Why do we need to decrease the cnt of prime factors? I mean, two consecutive numbers are always co-prime, so there is no need to do that, right?
My code passes without adding that decrementing and adding logic
in fact, even if you consider all factors instead of just the primes, the code still passes
in fact, even without pre-computing, my code passed
https://mirror.codeforces.com/contest/2154/submission/344805761
Can anyone help with the error? 344753515 I couldn't find the bug; why is the cat not reaching n? (Test Case 1)
Consider the cat moving along this path:
1 -> 5 -> 4 -> 5 -> 4
Clearly doesn't end at n = 5.
Intellegent, the test cases on C2 seem weak; the submission passes C2 but throws TLE on C1. Can you try rejudging C2 with C1's test case 17?
C1 is a nice problem. Is there a way to solve it in Python? I got TLE on the 7th test.
This runs in 600 ms: 344761618
344779355 this runs in 350 ms, using some variation of the sieve of Eratosthenes...
Can someone check if my solution to C2 is hackable or not? I have a suspicion that my solution will probably TLE: 344744359
It's not; I tested with all coprimes, and it runs in around 1400 ms.
Thank you Ashar-Usmani
Bonus on D is funny because the moment I read the problem I asked myself how someone would implement a checker for such a problem! I was 90% sure that instead of directly solving the problem, it's better to implement the checker, and it'll naturally lead to the solution.
Mark all nodes that's on the path from $$$1$$$ to $$$n$$$ as visited.
For operation $$$1$$$, try to move to any unvisited node, as far away from $$$n$$$ as possible. If no such node exists, then move to a visited node as far away from $$$n$$$ as possible.
For operation $$$2$$$, mark this node as visited.
After doing all operations, check if the end node is $$$n$$$ or not. Of course the checker should account for invalid input and such, but this is the overall idea.
I really would like to know how the checker is actually implemented for this problem.
Real fast editorial!
I really loved the problems, but I did leave a little heartbroken.
I made a mistake in C1 of having MAX = 2e5 + 1 instead of MAX = 2e5 + 2 (at least), and that passed the pretests but failed later on. I'm a little heartbroken because I had a good run, especially because I solved C1 rather fast. A lot faster than some people, and friends. In the end I went from +50 to -45 :(
I'm never making that mistake again, but I am still happy as I got the idea and I really enjoyed the problems
My idea for D was: find the leaf nodes and the path from 1 to n. Start from a leaf node and do operation 2, then 1, then 2 on it; this ensures the leaf node is removed safely. Keep doing the same on its adjacent node until you encounter a node which is either on the path from 1 to n or has degree greater than 2; in the latter case decrease the degree of that node by 1 and remove the edge connecting it to the previous node. Repeat similarly for all leaf nodes; this leaves us with only the path from 1 to n. Now do the same as we did for leaf nodes, starting from node 1 all the way to node n. So it's kind of deleting the branches off the main path safely, then shortening the main path; it takes exactly 3n operations.
So, I need your help for C2. I understand that we need to check only the lowest bi, but what if all bi are equal? For example: a: 3 7 11, b: 1 1 1. We can see that to solve this case we need to make a == 3 7 12, and for this we need to check all bi. It is O(n * (number of common divisors)), where the second factor is rather big. So I suppose this solution shouldn't work.
If the cheapest element is not unique, then you don't need to consider incrementing any element more than once, since incrementing the cheapest element by 2 already costs at least the same as making the two cheapest elements both even.
More formally, assume the elements are sorted by $$$b_i$$$ ascending, then we can always get a solution at cost $$$b_1 + b_2$$$ by making the first two elements even (less if one of them is already even), so incrementing the first element more than once only makes sense when $$$b_1×2 \lt b_1 + b_2$$$, and when $$$b_1 = b_2$$$ (i.e., the cheapest element is not unique) then $$$b_1×2 = b_1 + b_2$$$.
You can generalize this to: it only makes sense to increment $$$k$$$ times when $$$b_1×k \lt b_1 + b_2$$$, which implies $$$k \lt (b_1 + b_2)/b_1$$$ but this doesn't improve the worst case complexity so it's not really useful for this problem.
W contest. Learned a lot from problems and solutions
Hey Intellegent, Please make more rounds!
ok
Upsolved D to find the exact same solution as the editorial. Very cool contest.
Can anyone help me with the time complexity analysis of my solution for C2? I know I missed the observation but I cannot figure out why mine TLEs. https://mirror.codeforces.com/contest/2154/submission/344750294
What's the intuition behind using the bipartite nature of trees in D?
Let $$$dist[u]$$$ = distance of vertex u from 1. The cat could be at any vertex $$$u$$$ after $$$m$$$ type 1 operations iff $$$dist[u] \le m$$$ and $$$dist[u]$$$ has the same parity as $$$m$$$. You can reason about why the parity should be the same. Now, considering the loose constraint of $$$3n$$$ operations, we don't need to care about the $$$dist[u] \le m$$$ condition. Any leaf vertex can be destroyed in at most 3 operations. So, we will choose to only destroy leaves.
Suppose the previous operation was a type 2. Now, the next operation must be a type 1. After this, if the vertex we are interested in destroying has the same parity as the no. of type 1 operations so far then we can do another type 1 operation and the parity becomes different. Then a type 2 operation can be made to destroy that vertex.
Edit: The reason why $$$dist[u]$$$ should have the same parity as $$$m$$$ is because if we take a simple path from 1 to $$$u$$$, then the path length will be $$$dist[u]$$$. Now, if the cat wants to reach $$$u$$$ in more than $$$dist[u]$$$ moves then it must wander off from this simple path and then come back to continue the path. Coming back retraces the moves made to wander off. So, this will increase the no. of moves taken to reach $$$u$$$ by some even number. This means the cat can reach from 1 to $$$u$$$ in $$$dist[u] + 2k$$$ type 1 moves, for $$$k \ge 0$$$. This is the same as the condition $$$dist[u] \le m$$$ and $$$dist[u]$$$ has the same parity as $$$m$$$.
Will this method work? Find the leaf nodes and the path from 1 to n. Start from a leaf node and do operation 2, then 1, then 2 on it; this ensures the leaf node is removed safely. Keep doing the same on its adjacent node until you encounter a node which is either on the path from 1 to n or has degree greater than 2; in the latter case decrease the degree of the node by 1 and remove the edge connecting this node to the previous node. Repeat similarly for all leaf nodes; this leaves us with only the path from 1 to n. Now do the same as we did for leaf nodes, starting from node 1 all the way to node n. So it's kind of deleting the branches off the main path safely, then shortening the main path; it takes exactly 3n operations.
You are on the right track.
Edit: 2 1 2 is not feasible as for the next leaf node you cannot start it with a type 2 operation. |2 1 2|2 1 2| breaks the constraint that there cannot be two consecutive type 2 operations.
Ahh, thanks, got it. Parity helps us decide the possibility of the cat being on node i.
The subset of nodes that the cat can be in at any even turn (0, 2, 4, ...) is always the same, and the same holds for the odd turns (1, 3, 5, ...), so in each turn you can destroy any leaf that is in the opposite subset, because the cat can't be in it!
It's that simple!
A small corner case is when, for example, the cat is on an even turn and there are no leaves in the odd subset!
You can simply make the cat move to the odd turn, and delete a leaf from the even turn.
Re-root the tree to make n the root, and run BFS until you destroy all nodes except n.
I tried deleting non-path nodes that may or may not be leaves, while keeping track of their mark (color) and ensuring correct alternation between turns.
However, I’m getting WA (Wrong Answer). Can you suggest a test case where deleting a non-leaf node actually affects the final answer? Because from my reasoning, it shouldn’t — so I’d appreciate an example or insight proving otherwise.
Thanks.
You can't pop back from the ones if the zeros are empty; pop only if you are in turn = 0, and the same for the zeros.
In problem D I have done somewhat the same as in the editorial (destroying leaf nodes while checking, using distances, that you are not standing on them). My code is failing on some test case: 344776961. Can anyone help me with this?
I am quite sure my solution for C2 can be hacked (TLE).
Can someone please verify?
3 19 17 14 7 17 18
In the above test case my output is correct, meanwhile the judge says the answer should be 7 but your output is 24. Can anyone explain why this is happening? my submission [link]
It should be 7 though. Increment 19 once at the cost of 7 and gcd(19+1, 14) becomes 2.
My answer comes out as 7 on my compiler and even on CodeChef's online compiler... can you please check if it is correct on your system too?
Yes, it comes out to be 7 in custom invocation too. Most probably it's a bug due to reusing global variables from the previous test case. The 11th test case on its own gives 7, but when running for the first 11 test cases at once it results in 24.
You are right, it might be caused by the global declaration... btw thanks ^_^
Can someone please help me understand, for C2, why we are not able to do the operation on a number more than once? What if you have a really cheap operation and you just spam it to reach a number, and this is more optimal than any alternative?
I'm really confused, because there is also an official test case that does more than one operation on a single element.
Testcase:
Correct Output:
We do more than one operation only on the element with the smallest b[i]; so you did 5 operations on 2 since its b[i] is 1. We don't do this kind of operation on any other value: say you do 2 or more operations on an a[i] with the second smallest b[i]; instead we could have done 1 or 0 operations on this a[i] and 1 or 0 operations on the value with the smallest b[i], making both even and gcd >= 2.
I think D is a fun problem. But I didn't solve C1, and I spent a lot of time on it. My points for D are even fewer than others' points for C1.
The solution for F2, Step 3, has a very small problem:
$$$(r \le i\le r)$$$ should be modified to $$$(r \le i\le n)$$$
Fixed, thanks.
Alternative solution of D, without coloring.
Root the tree at vertex 1. Then, the following works:
Run DFS from the root: send instruction 1 when going down, and send instruction 1 and then 2 x when going up from x. When running DFS in a node $$$x$$$ that can reach $$$n$$$, DFS the other children that cannot reach $$$n$$$ first, then DFS the child that can reach $$$n$$$ and send instruction 1 and then 2 x. One can actually see that this forms something like an Euler tour, so the steps won't go past $$$3n$$$. Obviously this is correct. 344706147
Your solution is actually the same as the editorial's, and you are using coloring implicitly.
Thanks for sharing! I really like your view!
Also, we can use $$$root = n$$$ (like in the editorial), and with your traversal idea the code simplifies significantly: just one DFS! Essentially, we are removing leaves while keeping the opposite colour for the current position, so the code will be correct.
346889705
For D, can somebody explain why my parity approach (i.e. whether the cat has covered an odd or even distance) doesn't give the right answer when I delete the farthest node (from the root) of that parity, instead of deleting some leaf of that parity (like the tutorial solution)?
The farthest node will obviously be some leaf or other throughout the whole process, unless we are left with the main branch. Can somebody provide a counter case to this statement?
What would be the farthest node in case of a chain tree like 1 -- 2 -- 3 -- 4?
I am not removing the nodes on the main path; I am removing all nodes except those nodes.
You must delete leaves to ensure connectivity; there will always be a leaf node that is not n. One way you can do this is by always deleting the farthest node from n. Any other farthest-from approach is flawed, because you can't delete n, and it may be the farthest node, and then things get a bit icky.
[problem:2154] Cleaner and More Efficient Implementation 344796757
I just think C2 is much more difficult than D, but it's an interesting problem.
Agree. For me, even C1 was about the same difficulty as D.
Might this be an O(n) solution to problem E? 344800205
Technically it's still O(N * logN) because of sorting. But interesting.
Ternary search solution was mentioned in tutorial for problem E. If somebody is interested, there is an implementation (using Fibonacci search, which is valid for integers): https://mirror.codeforces.com/contest/2154/submission/344804780
Solution for problem D bonus (implement checker):
We assume that we've already checked that the input is well-formed and doesn't have consecutive instructions of type $$$2$$$.
Note that whenever a node is destroyed, there are two possible conditions of failure:
Therefore we may root the tree on node $$$n$$$, and when a node is destroyed, we consider its entire subtree destroyed. This ensures that living nodes are always connected, which simplifies the problem of determining whether the cat may have reached a node.
Let $$$d(v)$$$ be the distance from node $$$1$$$ to node $$$v$$$ in the initial tree, which can be found for all $$$v$$$ via breadth-first search. Whenever a node (or its ancestor) is destroyed we must check if the cat may have reached it; this is the case iff the number of instructions of type $$$1$$$ so far is $$$\ge d(v)$$$, and has the same parity as $$$d(v)$$$.
At the end, we must also check that the only living node that the cat can reach is node $$$n$$$, using the same method.
Also note the corner case that we must ignore instructions of type $$$1$$$ when there is only one living node left.
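A minimal sketch of just the reachability test described above (my own code, not the commenter's; reading the real tree, processing the instructions, and marking destroyed subtrees are all omitted, and the names are made up):

~~~~~
#include <bits/stdc++.h>
using namespace std;

// distances from node 1 over the initial tree (1-based adjacency list)
vector<int> bfsFrom1(const vector<vector<int>>& adj) {
    vector<int> d(adj.size(), -1);
    queue<int> q;
    d[1] = 0; q.push(1);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : adj[v])
            if (d[u] == -1) { d[u] = d[v] + 1; q.push(u); }
    }
    return d;
}

// the cat may currently be at v iff it has had enough type-1 moves to get
// there and the parity matches
bool catMayBeAt(int v, long long ones, const vector<int>& d) {
    return ones >= d[v] && (ones - d[v]) % 2 == 0;
}

int main() {
    // tiny hard-coded path 1 - 2 - 3, so d = {-, 0, 1, 2}
    vector<vector<int>> adj = {{}, {2}, {1, 3}, {2}};
    vector<int> d = bfsFrom1(adj);
    cout << catMayBeAt(3, 4, d) << "\n";  // 4 moves, d = 2, same parity -> 1
    cout << catMayBeAt(3, 3, d) << "\n";  // parity mismatch -> 0
}
~~~~~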
Looks correct to me, orz strong person.
345020227 Can you please tell me a case where my code is failing?
344805176
Could someone tell me why it is causing TLE for C2, please?
In 2154C1, what was the need to first remove all the primes of a[i], then check a[i]+1, and then include the primes of a[i] again?
We remove the primes of a[i] to check whether a[i] + 1 shares factors with any other number without interference from the current a[i], and then re-add them so those factors are available again for subsequent checks.
You are right, but note that a[i] can never interfere with a[i] + 1: consecutive integers are coprime, since any common divisor of a[i] and a[i] + 1 would have to divide their difference, which is 1. So we don't actually have to do this removing and re-adding of primes.
My solution was accepted by just checking for primes of a[i]+1
yeah, absolutely right though.
Intellegent, I have questions about D. Can't the cat just go back to its parent (start from root 1, go to 3, and then come back to 1)? Also, it is written that an answer always exists for any tree, but suppose the root itself has 3 adjacent children and we can remove only one, so the cat can go to the second child. How does an answer exist here, or am I missing a lot of things?
You just use operation 1 to come back to the root after going to the second child.
We need a Minecraft implementation for D, like the one we have for A.
I am not sammyuri unfortunately.
An easy explanation for C2.
Case 1: even & even
When there is more than one even number, gcd(even, even) > 1, so the answer is always 0.
Case 2: even & odd (2 options)
1) Increment the odd a[i] with the smallest b[i]: gcd(even, odd + 1) > 1.
2) Increment the only even number in the array multiple times until it shares a prime with one of the odd numbers: gcd(even + x, odd) > 1.
Case 3: odd & odd (3 options)
1) Increment the two odd numbers with the smallest b values: gcd(odd + 1, odd + 1) > 1.
2) Increment every element at most once, and check whether it then shares a prime with one of the others: gcd(odd + 1, odd) > 1.
3) Take the only odd number that has the smallest b among all odds, try incrementing it multiple times, and check whether it then shares a prime with one of the others: gcd(odd + x, odd) > 1. (Note that you try this for only one odd number, not all of them, which is fine.)
A small code sketch of the parity-only candidates (cases 1, 2.1 and 3.1) follows below.
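A minimal C++ sketch of just those parity-based candidates; the prime-sharing options (2.2, 3.2 and 3.3) are deliberately left out, and the function name and the tiny example in main are my own assumptions:

~~~~~
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Candidate answers from the parity-based cases only (1, 2.1 and 3.1 above);
// the options that rely on sharing an odd prime are not handled here.
ll parityCandidates(const vector<ll>& a, const vector<ll>& b) {
    const ll INF = LLONG_MAX;
    vector<ll> oddCosts;              // b[i] for every odd a[i]
    int evenCount = 0;
    for (size_t i = 0; i < a.size(); i++) {
        if (a[i] % 2 == 0) evenCount++;
        else oddCosts.push_back(b[i]);
    }
    if (evenCount >= 2) return 0;     // Case 1: two evens already share the factor 2
    sort(oddCosts.begin(), oddCosts.end());
    if (evenCount == 1)               // Case 2.1: make the cheapest odd number even
        return oddCosts.empty() ? INF : oddCosts[0];
    // Case 3.1: make the two cheapest odd numbers even
    return oddCosts.size() >= 2 ? oddCosts[0] + oddCosts[1] : INF;
}

int main() {
    // example: one even number, the cheapest odd element has b = 3
    cout << parityCandidates({4, 7, 9}, {100, 3, 5}) << "\n";  // prints 3
}
~~~~~

The actual answer would be the minimum over these candidates and the prime-sharing ones.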
Can someone please tell me why my submission for B is failing on pretest 7? The algorithm that I used is the same as that given in the editorial.
The maximum input length in the problem is 2e5, but you declared the input array with only 2e4 elements.
https://pastebin.com/22PxKa89 -- This is my submission for problem C2. Can anyone please tell me what the issue with my code is?
Is this snippet from the C2 model solution

~~~~~
for (int i = 0; i < n; i++){
    for (int x : pfac[a[i]]) cnt[x]--;
    for (int x : pfac[a[i] + 1]){
        if (cnt[x] > 0) ans = min(ans, b[i]);
    }
    for (int x : pfac[a[i]]) cnt[x]++;
}
~~~~~

reasonable? Because gcd(a[i], a[i] + 1) = 1, those cnt[x]-- / cnt[x]++ cycles seem useless.
Yeah you are correct, I only added it for clarity.
Can anyone please tell me why this submission 344840270 for C1 is giving TLE?
C2 is an amazing problem, I really liked it,
combining ad hoc, number theory and some easy data structures and implementation.
Really a nice problem!! Even though it prevented me from becoming Expert :(
Can anyone explain the sample test case output of C1 to me?
The stated reason is that we have to increase both 1's to 2 to make gcd(2, 2) > 1.
Why is the answer not 5? To make gcd(a[i], a[j]) greater than 1 for every 1 <= i, j <= n, we would have to increase all of the numbers by 1, making the array 2 2 728 2 2.
Then, if we choose any (i, j), gcd(a[i], a[j]) > 1.
When we say the answer is 2, which two indices' values will be increased? How is gcd(a[i], a[j]) > 1 for each i, j?
It was a bit too difficult for someone like me.
D is a nice problem.
Yes. It's really a good problem.
Can anyone tell me a case where my code is failing? 345020227
I used square root decomposition to solve problem E. My idea was that, after a certain point, there is a segment where the gain of the previous segment becomes greater than that of the current one, and those two segments (or possibly the last segment) give the maximum gain for that point. 345001346
EDIT: GPT code with comments so people can understand my messy code: 345025272
There may be such points. But you can discover that they only occur when gain[mid] is exactly the maximum.
For C2 I did not come up with the intended solution of checking only the index with the lowest b[i], so I made a solution that checks every pair. Solution: instead of checking how far each number is from every prime, we can reason as follows (a tiny helper illustrating point 1 is sketched after the list):
1. If $$$a_i$$$ is not a prime number, then it must have a prime factor less than or equal to 500. Therefore, for any $$$a_j$$$, there will always exist a value between $$$a_j$$$ and $$$a_j + 500$$$ that shares a factor with $$$a_i$$$.
2. If $$$a_i$$$ is a prime number, we only need to consider $$$a_i$$$ itself rather than its factors. We can sort all numbers in increasing order, and for each prime we iterate by multiplying it with an integer $$$x$$$: if $$$a_j \cdot x < a_i$$$, we remove $$$a_j \cdot x$$$ and insert $$$a_j \cdot (x + 1)$$$. Here, $$$x$$$ will be at most 500.
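A tiny C++ helper illustrating point 1 (my own sketch, with a made-up function name): the number of +1 steps needed to make $$$a_j$$$ divisible by a prime factor $$$p$$$ of $$$a_i$$$ is

~~~~~
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// steps needed to raise aj to the next multiple of p (0 if already a multiple)
ll stepsToMultiple(ll aj, ll p) {
    return (p - aj % p) % p;
}

int main() {
    // if a_i is composite with a prime factor p <= 500, this is always < 500
    cout << stepsToMultiple(997, 7) << "\n";   // 1001 is the next multiple of 7, so 4
}
~~~~~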
Guys, can anyone tell me why this submission to problem C2 is failing :3 (please help)
C2 was really interesting.
In D, was it necessary that only n remains at the end, even if we can reach it beforehand?
For example, for the tree 1-3-2, wouldn't doing operation 1 just once be sufficient?
I'm asking because I wrote a solution with that assumption and got wrong answer on pretest 2, but when I later assumed that only n must remain at the end, it passed :(
Submission (pass) Submission (fail)
The problem is that the checker verifies the depth as well. Your logic works correctly only when the cat and $$$n$$$ are positioned on a straight linear path, for example: 1 -> 4, 4 -> 2, 4 -> 3. However, consider a case where the path is like 1 - 6 - 2, and say node 1 also connects to nodes 3, 4 and 5. At the very least you'd need to remove those, so you have to do something like 2 3, 1, 2 4, 1, 2 5. That means instruction 1 is used at least twice, so the cat can reach node 2. This is why it's giving a wrong answer (WA).
Classic Hollow Knight references :)
Problem D is beautiful.
I want to know why F2 step 8 is correct, since the size of k is x.
Hi, I'm working on C1 and I had the exact same idea as the editorial. However, I always get TLE on testcase 5. Can someone help me figure out what is the issue?
In primes[] there are not only prime numbers. Also, the number of primes is $$$\approx N / \ln N$$$, so for(int x : a) for(int p : primes) takes too long.
Thank you for pointing that out.