👋 Salam Codeforces!
Here are the step-by-step solutions. We hope you enjoy them! 😊
The announcement blog is available here and you can also explore all blogs related to Rayan here!
2034A - King Keykhosrow's Mystery
Idea: ArshiaDadras — Preparation: AmirrzwM
- Step 1: Prove that for the minimum value of $$$m$$$, we must have $$$m \% a = m \% b = 0$$$.
- Step 2: To prove this, show that if $$$m \% a = m \% b = x > 0$$$, then $$$m-1$$$ will also satisfy the problem's requirements.
- Step 3: Since $$$m \ge \min(a , b)$$$, if $$$x > 0$$$, then $$$m > \min(a , b)$$$ must hold. Therefore, $$$m - 1 \ge \min(a , b)$$$ implies that $$$m-1$$$ satisfies the requirements.
- Step 4: Thus, $$$m$$$ must be divisible by both $$$a$$$ and $$$b$$$. The smallest such $$$m$$$ is $$$\operatorname{lcm}(a, b)$$$, which can be calculated in $$$O(\log(\max(a, b)))$$$.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int tt;
    cin >> tt;
    while (tt--) {
        int a, b;
        cin >> a >> b;
        cout << lcm(a, b) << endl;
    }
}
2034B - Rakhsh's Revival
Idea: MohammadParsaElahimanesh and AmirrzwM — Preparation: AmShZ
We will solve the problem using the following approach:
- Start from the leftmost spot and move rightwards.
- Whenever a consecutive segment of $$$m$$$ weak spots (i.e., $$$0$$$'s) is found, apply Timar to a segment of length $$$k$$$, starting from the last index of the weak segment.
- Repeat this process until no segment of $$$m$$$ consecutive weak spots remains.
The key idea behind this solution is that whenever we encounter a block of $$$m$$$ consecutive $$$0$$$'s, we need to strengthen it. Since we can apply Timar to a segment of length $$$k$$$, the optimal strategy is always to apply Timar starting at the last index of the block of $$$m$$$ consecutive $$$0$$$'s.
Correctness Proof:
- For any block of $$$m$$$ consecutive $$$0$$$'s, we must apply Timar to at least one index within this block. Hence, the strengthened segment of length $$$k$$$ must overlap with the block of weak spots.
- Suppose an optimal solution applies Timar to a segment that starts before the last index of the block. Shifting this segment one step to the right (closer to the end of the block) keeps the solution valid and optimal: it still covers every weak spot of the block while reducing unnecessary overlap with spots to the left that are already strong.
By always starting from the last index of a block of $$$m$$$ consecutive $$$0$$$'s, this greedy strategy ensures that Timar is used in the minimum number of applications, making it correct and efficient.
#include <bits/stdc++.h>
using namespace std;

const int xn = 2e5 + 10;
int q, n, m, k, ps[xn];
string s;

int main() {
    cin >> q;
    while (q--) {
        cin >> n >> m >> k >> s;
        fill(ps, ps + n, 0);
        int ans = 0, cnt = 0, sum = 0;
        for (int i = 0; i < n; ++i) {
            sum += ps[i];
            if (sum || s[i] == '1') cnt = 0;
            else {
                cnt++;
                if (cnt == m) {
                    sum++, ans++, cnt = 0;
                    if (i + k < n) ps[i + k]--;
                }
            }
        }
        cout << ans << "\n";
    }
}
2034C - Trapped in the Witch's Labyrinth
Idea: MohammadParsaElahimanesh — Preparation: AmirrzwM
- If a cell has a fixed direction and repeatedly following the directions from it eventually leads outside the maze, then anyone starting there must exit. Such cells cannot be part of any loop.
- Once we identify the cells that lead out of the maze, we can analyze the remaining ones. Any $$$?$$$ cell might either lead to the exit or form part of a loop.
- If all neighboring cells of a $$$?$$$ cell can eventually lead out of the maze, then this $$$?$$$ cell will also lead out of the maze. The state of such $$$?$$$ cells can be determined based on their surroundings.
- For any remaining cells (directed cells that do not lead out of the maze, or other $$$?$$$ cells that cannot be determined to lead to an exit), we can assign directions such that starting from those cells will eventually lead to a loop. These cells will form the loops.
- To find how many cells eventually lead out, we can run a BFS or DFS on the reversed graph, where all directions are reversed. Starting from the cells just outside the maze, every cell reachable in the reversed graph eventually leads out of the maze; all other cells end up in loops.
- Count the number of cells that can reach the exit. Subtract this number from the total number of cells in the maze to determine how many are part of loops (i.e., cells that cannot reach the exit).
#include <bits/stdc++.h>
using namespace std;

int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        int n, m;
        cin >> n >> m;
        string c[n + 1];
        for (int i = 1; i <= n; i++) cin >> c[i], c[i] = "-" + c[i];
        vector<pair<int, int>> jda[n + 2][m + 2];
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                if (c[i][j] == 'U') jda[i - 1][j].push_back({i, j});
                if (c[i][j] == 'R') jda[i][j + 1].push_back({i, j});
                if (c[i][j] == 'D') jda[i + 1][j].push_back({i, j});
                if (c[i][j] == 'L') jda[i][j - 1].push_back({i, j});
            }
        }
        int vis[n + 2][m + 2] = {};
        queue<pair<int, int>> q;
        for (int j = 0; j <= m + 1; j++) vis[0][j] = 1, q.push({0, j});
        for (int i = 1; i <= n + 1; i++) vis[i][0] = 1, q.push({i, 0});
        for (int j = 1; j <= m + 1; j++) vis[n + 1][j] = 1, q.push({n + 1, j});
        for (int i = 1; i <= n; i++) vis[i][m + 1] = 1, q.push({i, m + 1});
        while (q.size()) {
            auto [i, j] = q.front();
            q.pop();
            for (auto [a, b] : jda[i][j]) {
                if (vis[a][b] == 0) {
                    vis[a][b] = 1;
                    q.push({a, b});
                }
            }
        }
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                if (c[i][j] == '?' and vis[i - 1][j] and vis[i][j + 1] and vis[i + 1][j] and vis[i][j - 1]) vis[i][j] = 1;
            }
        }
        int ans = n * m;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                if (vis[i][j] == 1) ans -= 1;
            }
        }
        cout << ans << endl;
    }
    return 0;
}
2034D - Darius' Wisdom
Idea and Preparation: MohammadParsaElahimanesh
- Step 1: Using two moves, we can move an element to any arbitrary position in the array. Thus, we can place all $$$0$$$'s in their correct positions with at most $$$2 \min(count(0), count(1) + count(2))$$$ moves.
- Step 2: After placing all $$$0$$$'s, the rest of the array will contain only $$$1$$$'s and $$$2$$$'s. To sort this part of the array, we need at most $$$\min(count(1), count(2))$$$ moves.
- Step 3: The first step takes at most $$$n$$$ moves, and the second step takes at most $$$\frac{n}{2}$$$ moves. However, it can be proven that the total number of moves is at most $$$\frac{8n}{7}$$$.
- Step 4: We can assume $$$count(0) \leq count(2)$$$ without loss of generality (Why?). So, the maximum number of moves is:
Better approach:
- Step 1: Since we are allowed to perform $$$n$$$ moves, assign each index one "move" as its "specified cost".
- Step 2: While there exists an index with a value of $$$0$$$ or $$$2$$$ that can be fixed with just one move, fix it using its assigned cost.
- Step 3: After fixing all $$$0$$$'s, $$$2$$$'s, and all $$$1$$$'s except one, the remaining array will have the following structure and we are now allowed to use $$$2x+1$$$ moves:
- Step 4: First, swap the $$$1$$$ with a random element (denote it as $$$r$$$). Then, for $$$2x-1$$$ moves, swap the index with the value $$$1$$$ with any index where the correct value must be placed, except $$$r$$$. Finally, swap $$$1$$$ and $$$r$$$.
#include <bits/stdc++.h>
using namespace std;
const int N = 200000;
int n, cnt[3], a[N];
vector<int> vip[3][3]; // Value In Position
vector<pair<int, int>> swaps;
inline int Pos(int index) {
if(index < cnt[0])
return 0;
else if(index < cnt[0]+cnt[1])
return 1;
else
return 2;
}
inline void AddBack(int index) {
vip[a[index]][Pos(index)].push_back(index);
}
inline void RemoveBack(int index) {
vip[a[index]][Pos(index)].pop_back();
}
inline void Swap(int i, int j) {
swaps.push_back({i, j});
RemoveBack(i);
RemoveBack(j);
swap(a[i], a[j]);
AddBack(i);
AddBack(j);
}
inline void Fix(int x) {
while(!vip[1][x].empty() or !vip[2-x][x].empty()) {
if(vip[1][x].empty()) {
if(!vip[1][2-x].empty())
Swap(vip[2-x][x].back(), vip[1][2-x].back());
else
Swap(vip[2-x][x].back(), vip[1][1].back());
}
if(!vip[x][1].empty())
Swap(vip[1][x].back(), vip[x][1].back());
else
Swap(vip[1][x].back(), vip[x][2-x].back());
}
}
int main()
{
ios_base::sync_with_stdio(false), cin.tie(0);
int t;
cin >> t;
while (t--) {
cin >> n;
for(int i = 0; i < n; i++)
cin >> a[i], cnt[a[i]]++;
for(int i = 0; i < n; i++)
AddBack(i);
if(cnt[0] <= cnt[2]) {
Fix(0);
Fix(2);
} else {
Fix(2);
Fix(0);
}
cout << swaps.size() << endl;
for(auto [i, j]: swaps)
cout << i+1 << ' ' << j+1 << endl;
cnt[0] = cnt[1] = cnt[2] = 0;
for(int i = 0; i < 3; i++)
for(int j = 0; j < 3; j++)
vip[i][j].clear();
swaps.clear();
}
return 0;
}
// In the name of god
#include <bits/stdc++.h>
using namespace std;
const int N = 200000;
int n, cnt[3], a[N];
vector<int> vip[3][3]; // Value In Position
vector<pair<int, int>> swaps;
inline int Pos(int index) {
if(index < cnt[0])
return 0;
else if(index < cnt[0]+cnt[1])
return 1;
else
return 2;
}
inline void AddBack(int index) {
vip[a[index]][Pos(index)].push_back(index);
}
inline void RemoveBack(int index) {
vip[a[index]][Pos(index)].pop_back();
}
inline void Swap(int i, int j) {
swaps.push_back({i, j});
RemoveBack(i);
RemoveBack(j);
swap(a[i], a[j]);
AddBack(i);
AddBack(j);
}
inline void Fix() {
bool change;
do {
change = false;
while ((!vip[1][0].empty()) && (!vip[0][1].empty()))
Swap(vip[1][0].back(), vip[0][1].back()), change = true;
while ((!vip[1][0].empty()) && (!vip[0][2].empty()))
Swap(vip[1][0].back(), vip[0][2].back()), change = true;
while ((!vip[1][2].empty()) && (!vip[2][1].empty()))
Swap(vip[1][2].back(), vip[2][1].back()), change = true;
while ((!vip[1][2].empty()) && (!vip[2][0].empty()))
Swap(vip[1][2].back(), vip[2][0].back()), change = true;
} while (change);
}
inline void PingPong() {
if(vip[0][2].empty())
return;
Swap(vip[1][1].back(), vip[0][2].back());
while (true){
Swap(vip[1][2].back(), vip[2][0].back());
if(vip[0][2].empty())
break;
Swap(vip[1][0].back(), vip[0][2].back());
}
Swap(vip[1][0].back(), vip[0][1].back());
}
int main()
{
ios_base::sync_with_stdio(false), cin.tie(0);
int t;
cin >> t;
while (t--) {
cin >> n;
for(int i = 0; i < n; i++)
cin >> a[i], cnt[a[i]]++;
for(int i = 0; i < n; i++)
AddBack(i);
Fix();
PingPong();
cout << swaps.size() << endl;
for(auto [i, j]: swaps)
cout << i+1 << ' ' << j+1 << endl;
// reset
cnt[0] = cnt[1] = cnt[2] = 0;
for(int i = 0; i < 3; i++)
for(int j = 0; j < 3; j++)
vip[i][j].clear();
swaps.clear();
}
return 0;
}
2034E - Permutations Harmony
Idea: AmirrzwM and MohammadParsaElahimanesh — Preparation: MohammadParsaElahimanesh
- Step 1: Each of the $$$n$$$ positions must have the same sum, and the total over all positions is $$$\frac{n \cdot (n+1) \cdot k}{2}$$$. Hence, the sum at each position must be $$$\frac{(n+1) \cdot k}{2}$$$. Additionally, there must be $$$k$$$ distinct permutations, so $$$k \leq n!$$$.
- Step 2: For even $$$k$$$, we can group the $$$n!$$$ permutations into $$$\frac{n!}{2}$$$ complementary pairs, where each pair on its own is a solution for $$$k = 2$$$; then pick any $$$\frac{k}{2}$$$ pairs. The complement of permutation $$$a_1, a_2, \ldots, a_n$$$ is $$$(n+1)-a_1, (n+1)-a_2, \ldots, (n+1)-a_n$$$.
- Step 3: For $$$k = 1$$$, $$$n$$$ must be $$$1$$$. Symmetrically, $$$k = n! - 1$$$ is impossible: the unchosen permutations would then form a valid answer for $$$k = 1$$$. Solutions for the remaining odd $$$k$$$ are constructed below.
- Step 4: To construct an answer for $$$k = 3$$$ and $$$n = 2x + 1$$$, consider the following three permutations, derived using a greedy approach:
$$$p_1$$$ | $$$p_2$$$ | $$$p_3$$$ | $$$p_4$$$ | $$$p_5$$$ | $$$ \ldots $$$ | $$$p_{2x-1} $$$ | $$$ p_{2x} $$$ | $$$ p_{2x+1} $$$ |
---|---|---|---|---|---|---|---|---|
1 | 2 | 3 | 4 | 5 | $$$ \ldots $$$ | $$$ 2x-1 $$$ | $$$ 2x $$$ | $$$ 2x+1 $$$ |
$$$ x+1 $$$ | $$$ 2x+1 $$$ | $$$ x $$$ | $$$ 2x $$$ | $$$ x-1 $$$ | $$$ \ldots $$$ | 2 | $$$ x+2 $$$ | 1 |
$$$ 2x+1 $$$ | $$$ x $$$ | $$$ 2x $$$ | $$$ x-1 $$$ | $$$ 2x-1 $$$ | $$$ \ldots $$$ | $$$ x+2 $$$ | 1 | $$$ x+1 $$$ |
- Step 5: Now, combine the two constructions: for odd $$$k \geq 3$$$, take the three permutations above together with $$$\frac{k-3}{2}$$$ complementary pairs that avoid them.
// In the name of God
#include <bits/stdc++.h>
using namespace std;
int main() {
ios_base::sync_with_stdio(false), cin.tie(0);
int t;
cin >> t;
int f[8] = {1,1,2,6,24,120,720,5040};
while(t--) {
int n, k;
cin >> n >> k;
if(min(n, k) == 1) {
if(n*k == 1) {
cout << "Yes\n1\n";
} else cout << "No\n";
} else if(n < 8 and (f[n] < k or f[n] == k+1)) {
cout << "No\n";
} else if(n % 2 == 0 and k % 2 == 1) {
cout << "No\n";
} else {
vector<vector<int>> base, all;
vector<int> per(n);
for(int i = 0; i < n; i++) per[i] = i+1;
if(k % 2) {
vector<int> p1(n), p2(n);
for(int i = 0; i < n; i += 2) p1[i] = (n+1)/2-i/2, p2[i] = n-i/2;
for(int i = 1; i < n; i += 2) p1[i] = n-i/2, p2[i] = n/2-i/2;
all = base = {per, p1, p2};
k -= 3;
}
do {
if(k == 0)
break;
vector<int> mirror(n);
for(int i = 0; i < n; i++)
mirror[i] = n+1-per[i];
if(per < mirror) {
bool used = false;
for(auto &p: base) used |= (p == per), used |= (p == mirror);
if(not used) {
k -= 2;
all.push_back(per);
all.push_back(mirror);
}
}
} while (next_permutation(per.begin(), per.end()));
cout << "Yes\n";
for(auto p: all) {
for(int i = 0; i < n; i++)
cout << p[i] << (i+1==n?'\n':' ');
}
}
}
return 0;
}
// Thanks God
2034F1 - Khayyam's Royal Decree (Easy Version)
Idea and Preparation: TheScrasse
- Step 1: For simplicity, redefine the special conditions for the number of rubies and sapphires in your satchel (not chest). Add two dummy states, $$$(0,0)$$$ and $$$(n,m)$$$ for convenience (the first one indexed as $$$0$$$ and the second one indexed as $$$k+1$$$). Note that these dummy states won’t involve doubling the value.
- Step 2: Order the redefined conditions $$$(x, y)$$$ in increasing order based on the value of $$$x + y$$$.
- Step 3: Define $$$ways_{i,j}$$$ as the number of ways to move from state $$$i$$$ to state $$$j$$$ without passing through any other special condition. This can be computed using inclusion-exclusion in $$$O(k^3)$$$.
- Step 4: Define $$$cost_{i,j}$$$, the increase in value for moving directly from state $$$i$$$ to state $$$j$$$ without intermediate doubling. Since each ruby is worth $$$2$$$ and each sapphire $$$1$$$, this is $$$cost_{i,j} = 2\,(x_j - x_i) + (y_j - y_i)$$$.
- Step 5: Define $$$dp_i$$$ as the total sum of the value of your satchel across all ways to reach the state defined by the $$$i$$$-th condition. This can be computed recursively as $$$dp_i = 2 \sum_{j < i} ways_{j,i} \left( dp_j + \binom{x_j + y_j}{x_j} \cdot cost_{j,i} \right)$$$, where the leading factor $$$2$$$ accounts for the scroll at state $$$i$$$ doubling the satchel (it is omitted for the final dummy state $$$k+1$$$).
- Step 6: Compute the final answer as the value of $$$dp_{k+1}$$$ divided by the total number of ways to move from $$$(0,0)$$$ to $$$(n,m)$$$, which is $$$\binom{n+m}{n}$$$.
#include <bits/stdc++.h>
using namespace std;
#define nl "\n"
#define nf endl
#define ll long long
#define pb push_back
#define _ << ' ' <<
#define INF (ll)1e18
#define mod 998244353
#define maxn 400010
ll fc[maxn], nv[maxn];
ll fxp(ll b, ll e) {
ll r = 1, k = b;
while (e != 0) {
if (e % 2) r = (r * k) % mod;
k = (k * k) % mod; e /= 2;
}
return r;
}
ll inv(ll x) {
return fxp(x, mod - 2);
}
ll bnm(ll a, ll b) {
if (a < b || b < 0) return 0;
ll r = (fc[a] * nv[b]) % mod;
r = (r * nv[a - b]) % mod;
return r;
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);
fc[0] = 1; nv[0] = 1;
for (ll i = 1; i < maxn; i++) {
fc[i] = (i * fc[i - 1]) % mod; nv[i] = inv(fc[i]);
}
ll t; cin >> t;
while (t--) {
ll n, m, k; cin >> n >> m >> k;
vector<array<ll, 2>> a(k + 2, {0, 0});
for (ll i = 1; i <= k; i++) {
cin >> a[i][0] >> a[i][1];
a[i][0] = n - a[i][0]; a[i][1] = m - a[i][1];
}
a[k + 1] = {n, m}; k++;
sort(a.begin() + 1, a.end());
auto paths = [&](ll i, ll j) {
ll dx = a[j][0] - a[i][0], dy = a[j][1] - a[i][1];
return bnm(dx + dy, dx);
};
auto add = [&](ll &x, ll y) {
x = (x + y) % mod;
x = (x + mod) % mod;
};
vector direct(k + 1, vector<ll>(k + 1, 0));
for (ll i = 1; i <= k; i++) {
for (ll j = i - 1; j >= 0; j--) {
direct[j][i] = paths(j, i);
for (ll l = j + 1; l < i; l++) {
add(direct[j][i], -paths(j, l) * direct[l][i]);
}
}
}
vector<ll> dp(k + 1, 0);
for (ll i = 1; i <= k; i++) {
for (ll j = 0; j < i; j++) {
if (direct[j][i] == 0) continue;
ll partial = dp[j];
ll delta = 2 * (a[i][0] - a[j][0]) + (a[i][1] - a[j][1]);
add(partial, paths(0, j) * delta);
add(dp[i], partial * direct[j][i]);
}
if (i != k) dp[i] = (2 * dp[i]) % mod;
}
ll ans = (dp[k] * inv(bnm(n + m, m))) % mod;
cout << ans << nl;
}
return 0;
}
2034F2 - Khayyam's Royal Decree (Hard Version)
Idea and Preparation: TheScrasse
- Step 1: For simplicity, redefine the special conditions for the number of rubies and sapphires in your satchel (not chest).
- Step 2: Order the redefined conditions $$$(x, y)$$$ in increasing order based on the value of $$$x + y$$$.
- Step 3: Define $$$total_{i,j}$$$ as the total number of ways to move from state $$$i$$$ to state $$$j$$$ (ignoring special condition constraints). This can be computed as $$$total_{i,j} = \binom{(x_j - x_i) + (y_j - y_i)}{x_j - x_i}$$$.
- Step 4: Define $$$weight_i$$$ as the total contribution of all paths passing through condition $$$i$$$ to reach the final state $$$(n, m)$$$. This can be computed recursively as $$$weight_i = \sum_{j > i} total_{i,j} \cdot weight_j$$$, with base case $$$weight_{k+1} = 1$$$ for the final dummy state.
- Step 5: The main insight is to account for the doubling effect of passing through multiple scrolls. If a path passes through a sequence of conditions $$$s_1, \dots, s_c$$$, each gem collected before entering $$$s_1$$$ is counted with multiplicity $$$2^c$$$. Instead of explicitly multiplying by $$$2^c$$$, consider the number of subsets $$$q_1, \dots, q_d$$$ of $$$s_1, \dots, s_c$$$. By summing over all subsets, the correct multiplicity is automatically handled.
- Step 6: Define $$$dp_i$$$ as the total value of all paths passing through condition $$$i$$$, considering the contribution of each state’s rubies and sapphires. This can be computed as $$$dp_i = (2 x_i + y_i) \cdot total_{0,i} \cdot weight_i$$$.
- Step 7: Compute the final answer as $$$\sum dp_i$$$ divided by the total number of ways to move from $$$(0,0)$$$ to $$$(n,m)$$$, which is equal to $$$\binom{n+m}{n}$$$.
Clarification: The approach hinges on the insight that $$$2^i$$$ can be derived from the structure of subsets of scrolls $$$s_1, \dots, s_c$$$. Generalizations to $$$3^i$$$ or other multiplicative factors are possible by appropriately modifying $$$weight_i$$$ and adjusting the factor in Step 5. For example, a factor of 3 can be applied by multiplying path contributions by 2 at the relevant steps.
#include <bits/stdc++.h>
using namespace std;
#define nl "\n"
#define nf endl
#define ll long long
#define pb push_back
#define _ << ' ' <<
#define INF (ll)1e18
#define mod 998244353
#define maxn 400010
ll fc[maxn], nv[maxn];
ll fxp(ll b, ll e) {
ll r = 1, k = b;
while (e != 0) {
if (e % 2) r = (r * k) % mod;
k = (k * k) % mod; e /= 2;
}
return r;
}
ll inv(ll x) {
return fxp(x, mod - 2);
}
ll bnm(ll a, ll b) {
if (a < b || b < 0) return 0;
ll r = (fc[a] * nv[b]) % mod;
r = (r * nv[a - b]) % mod;
return r;
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);
fc[0] = 1; nv[0] = 1;
for (ll i = 1; i < maxn; i++) {
fc[i] = (i * fc[i - 1]) % mod; nv[i] = inv(fc[i]);
}
ll t; cin >> t;
while (t--) {
ll n, m, k; cin >> n >> m >> k;
vector<array<ll, 2>> a(k + 2, {0, 0});
for (ll i = 1; i <= k; i++) {
cin >> a[i][0] >> a[i][1];
a[i][0] = n - a[i][0]; a[i][1] = m - a[i][1];
}
a[k + 1] = {n, m}; k++;
sort(a.begin() + 1, a.end());
auto paths = [&](ll i, ll j) {
ll dx = a[j][0] - a[i][0], dy = a[j][1] - a[i][1];
return bnm(dx + dy, dx);
};
auto add = [&](ll &x, ll y) {
x = (x + y) % mod;
x = (x + mod) % mod;
};
vector<ll> cnt_weighted(k + 1, 0);
cnt_weighted[k] = 1;
for (ll i = k - 1; i >= 1; i--) {
for (ll j = i + 1; j <= k; j++) {
add(cnt_weighted[i], paths(i, j) * cnt_weighted[j]);
}
}
ll ans = 0;
for (ll i = 1; i <= k; i++) {
ll delta = 2 * a[i][0] + a[i][1];
add(ans, delta * paths(0, i) % mod * cnt_weighted[i]);
}
ans = (ans * inv(bnm(n + m, m))) % mod;
cout << ans << nl;
}
return 0;
}
Note: Thanks to peti1234 and _istil for suggesting the idea of including this subtask!
2034G1 - Simurgh's Watch (Easy Version)
Idea: ArshiaDadras — Preparation: AmShZ and ArshiaDadras
- It is easy to check if the solution can be achieved with only one color. For any time point $$$x$$$, there must be at most one interval containing $$$x$$$, since if multiple intervals contain $$$x$$$, they must be colored differently.
- A simple strategy is to solve the problem using three colors. First, we color some intervals with colors 1 and 2, then color others with color 3.
- For each step, we find the leftmost point that has not been colored yet and color the segment that contains this point.
- We always choose the interval with the largest endpoint that contains the current point.
- By coloring the intervals alternately with colors 1 and 2, we ensure that all points are covered by exactly one of these colors.
- Now, we check if we can color the intervals with just two colors using a greedy algorithm:
- We iterate over the intervals sorted by start (increasingly) and then by end (decreasingly).
- At each point, we keep track of how many distinct colors are used by previously seen intervals that are still open. Let this number be $$$I$$$, and suppose we are currently at interval $$$i$$$.
- We color the current interval based on the value of $$$I$$$:
- If $$$I = 0$$$, color interval $$$i$$$ with color 1.
- If $$$I = 1$$$, color interval $$$i$$$ with the opposite of the color currently in use.
- If $$$I = 2$$$, color interval $$$i$$$ with the opposite color of the interval with the greatest endpoint among the currently open intervals.
- If it is impossible to assign a unique color between overlapping intervals at any point, it can be shown that coloring the intervals using only 2 colors is impossible.
Solving G1 using G2:
- It’s sufficient to check the integer points and half-points (e.g., 1.5, 2.5, …) to verify whether the coloring is valid (Why?).
- To handle this, we can multiply all the given points by two, effectively converting the problem into one in which only integer points exist.
- After this transformation, we solve the problem in the integer system of G2, where the intervals and coloring rules are defined using integer boundaries!
Note: A brief explanation of why this greedy algorithm works can be found here.
/* In the name of Allah */
// Welcome to the Soldier Side!
// Where there's no one here, but me...
#include<bits/stdc++.h>
using namespace std;
const int N = 2e5 + 5;
int t, n, l[N], r[N], col[N];
vector<int> st[N << 1], en[N << 1];
void compress_points() {
vector<int> help;
for (int i = 0; i < n; i++) {
help.push_back(l[i]);
help.push_back(r[i]);
}
sort(help.begin(), help.end());
help.resize(unique(help.begin(), help.end()) - help.begin());
for (int i = 0; i < n; i++) {
l[i] = lower_bound(help.begin(), help.end(), l[i]) - help.begin();
r[i] = lower_bound(help.begin(), help.end(), r[i]) - help.begin();
}
}
void record_points() {
for (int i = 0; i < n; i++) {
st[l[i]].push_back(i);
en[r[i] + 1].push_back(i);
}
for (int i = 0; i < 2 * n; i++)
sort(st[i].begin(), st[i].end(), [](int i, int j) {
return r[i] > r[j];
});
}
void try3_points() {
fill(col, col + n, 0);
int cur = -1, nxt = -1, c = 2;
for (int i = 0; i < 2 * n; i++) {
if (st[i].empty())
continue;
if (!~cur || i > r[cur]) {
if (cur ^ nxt && r[nxt] < i) {
col[nxt] = (c ^= 3);
cur = nxt;
}
if (cur ^ nxt)
cur = nxt;
else {
cur = st[i][0];
for (int p: st[i])
if (r[p] > r[cur])
cur = p;
nxt = cur;
}
col[cur] = (c ^= 3);
}
for (int p: st[i])
if (r[p] > r[nxt])
nxt = p;
}
if (cur ^ nxt)
col[nxt] = c ^ 3;
}
bool is_bad(set<pair<int, int>> s[2]) {
int cnt1 = s[0].size(), cnt2 = s[1].size();
return cnt1 + cnt2 && cnt1 ^ 1 && cnt2 ^ 1;
}
void try2_points() {
set<pair<int, int>> s[2];
for (int i = 0; i <= 2 * n; i++) {
for (int p: en[i])
s[col[p]].erase({r[p], p});
if (is_bad(s)) {
try3_points();
return;
}
for (int p: st[i]) {
int cnt1 = s[0].size();
int cnt2 = s[1].size();
if (!cnt1 || !cnt2)
col[p] = cnt1 > 0;
else if (cnt1 ^ cnt2)
col[p] = cnt1 < cnt2;
else
col[p] = s[0].begin()->first > s[1].begin()->first;
s[col[p]].insert({r[p], p});
if (is_bad(s)) {
try3_points();
return;
}
}
}
}
void read_input() {
cin >> n;
for (int i = 0; i < n; i++)
cin >> l[i] >> r[i];
}
void solve() {
compress_points();
record_points();
try2_points();
}
void write_output() {
cout << *max_element(col, col + n) + 1 << endl;
for (int i = 0; i < n; i++)
cout << col[i] + 1 << "\n "[i < n - 1];
}
void reset_variables() {
for (int i = 0; i < n; i++) {
col[i] = 0;
st[l[i]].clear();
en[r[i] + 1].clear();
}
}
int main() {
ios:: sync_with_stdio(0), cin.tie(0), cout.tie(0);
for (cin >> t; t--; reset_variables())
read_input(), solve(), write_output();
return 0;
}
Note: Special thanks to Prof. Mohammad Ali Abam for proposing this problem!
Fun fact: He wasn’t able to solve it himself, but when he presented it to me [Arshia is talking...], he said, “I don’t know how to solve it yet, but I guess its difficulty would be at most around E…”
2034G2 - Simurgh's Watch (Hard Version)
Idea and Preparation: ArshiaDadras
- Step 1: It is easy to check if the solution can be achieved with only one color. For any time point $$$x$$$, there must be at most one interval containing $$$x$$$, since if multiple intervals contain $$$x$$$, they must be colored differently.
- Step 2: A simple strategy is to solve the problem using three colors. First, we color some intervals with colors 1 and 2, then color others with color 3. For each step, we find the leftmost point that has not been colored yet and color the segment that contains this point. We always choose the interval with the largest endpoint that contains the current point. By coloring the intervals alternately with colors 1 and 2, we ensure that all points are covered by exactly one of these colors.
- Step 3: Now, we check if we can color the intervals with just two colors. For some point, $$$x$$$, suppose we have already colored the intervals $$$[l_i, r_i]$$$ with $$$l_i \leq x$$$, such that all points before $$$x$$$ have a unique color. At each step, we only need to determine which of the intervals like $$$p$$$ that $$$l_p \leq x \leq r_p$$$ can have a unique color. The key observation is that if an interval can be uniquely colored at time $$$x$$$, it can also remain uniquely colored for all times $$$t$$$ such that $$$x \leq t \leq r_i$$$.
- Lemma: If an interval $$$[l_i, r_i]$$$ can be uniquely colored at time $$$x$$$, it can also be uniquely colored at all subsequent times $$$x \leq t \leq r_i$$$.
- Proof: Consider coloring the intervals at time $$$x$$$. Intervals starting at $$$x + 1$$$ will be colored with the opposite color to interval $$$i$$$, ensuring that the interval remains uniquely colored at time $$$x+1$$$.
- With this lemma, we can conclude that the changes in the coloring are $$$O(n)$$$. It suffices to track the intervals that are added and removed at each point in time.
- Step 4: To efficiently move from time $$$x$$$ to $$$x + 1$$$, we perform the following steps:
- Remove the intervals that have $$$r_i = x$$$ (since they no longer contain $$$x+1$$$).
- Add the intervals that have $$$l_i = x + 1$$$.
- Update the set of intervals that can be uniquely colored at time $$$x+1$$$.
- Step 5: Finally, we observe that only the following points are important for the coloring:
- $$$l_i$$$ and $$$r_i$$$ for each interval.
- $$$l_i - 1$$$ and $$$r_i + 1$$$, since these points mark the boundaries where intervals start or end.
Thus, we can compress the numbers to reduce the range of values we need to process.
/* In the name of Allah */
// Welcome to the Soldier Side!
// Where there's no one here, but me...
#include<bits/stdc++.h>
using namespace std;
const int N = 2e5 + 5;
vector<int> st[N << 2], en[N << 2];
int t, n, k, l[N], r[N], dp[N], col[N], prv[N];
void compress_numbers() {
vector<int> help;
for (int i = 0; i < n; i++) {
help.push_back(l[i] - 1);
help.push_back(l[i]);
help.push_back(r[i]);
help.push_back(r[i] + 1);
}
sort(help.begin(), help.end());
help.resize(k = unique(help.begin(), help.end()) - help.begin());
for (int i = 0; i < n; i++) {
l[i] = lower_bound(help.begin(), help.end(), l[i]) - help.begin();
r[i] = lower_bound(help.begin(), help.end(), r[i]) - help.begin();
}
}
void save_checkpoints() {
for (int i = 0; i < n; i++) {
st[l[i]].push_back(i);
en[r[i]].push_back(i);
}
}
bool check_one() {
for (int i = 0, open = 0; i < k; i++) {
open += st[i].size();
if (open > 1)
return false;
open -= en[i].size();
}
return true;
}
void color_with_two() {
for (int i = k - 1, cur = -1; ~i; i--) {
if (en[i].empty())
continue;
while (!~cur || i < dp[cur])
if (~cur && ~prv[cur]) {
col[prv[cur]] = col[cur];
if (r[prv[cur]] >= l[cur])
col[prv[cur]] ^= 1;
cur = prv[cur];
}
else
for (int p: en[i])
if (~dp[p] && (!~cur || dp[p] < dp[cur]))
cur = p;
for (int p: en[i])
if (p ^ cur)
col[p] = col[cur] ^ 1;
}
}
bool check_two() {
set<int> goods, bads;
fill(dp, dp + n, -1);
fill(prv, prv + n, -1);
for (int i = 0; i < k; i++) {
int prev = -1;
if (i)
for (int p: en[i - 1]) {
bads.erase(p), goods.erase(p);
if (~dp[p] && (!~prev || dp[p] < dp[prev]))
prev = p;
}
int open = goods.size() + bads.size();
if (open == 1 || (open == 2 && !goods.empty())) {
for (int p: bads) {
if (open == 1)
prv[p] = prev;
else
prv[p] = *goods.begin();
goods.insert(p);
dp[p] = i;
}
bads.clear();
}
if (open == 1)
prev = *goods.begin();
for (int p: st[i])
if (!open || open == 1 || ~prev) {
goods.insert(p);
prv[p] = prev;
dp[p] = i;
}
else
bads.insert(p);
open += st[i].size();
if (open && goods.empty())
return false;
}
color_with_two();
return true;
}
void color_with_three() {
int cur = -1, nxt = -1;
for (int i = 0; i < k; i++) {
if (st[i].empty())
continue;
if (~cur && i > r[cur] && nxt ^ cur) {
col[nxt] = col[cur] ^ 3;
cur = nxt;
}
if (!~cur || i > r[cur]) {
for (int p: st[i])
if (!~cur || r[p] > r[cur])
cur = p;
col[nxt = cur] = 1;
}
for (int p: st[i])
if (r[p] > r[nxt])
nxt = p;
}
if (cur ^ nxt)
col[nxt] = col[cur] ^ 3;
}
void read_input() {
cin >> n;
for (int i = 0; i < n; i++)
cin >> l[i] >> r[i];
}
void solve() {
compress_numbers();
save_checkpoints();
if (check_one())
return;
if (check_two())
return;
color_with_three();
}
void write_output() {
cout << *max_element(col, col + n) + 1 << endl;
for (int i = 0; i < n; i++)
cout << col[i] + 1 << "\n "[i < n - 1];
}
void reset_variables() {
fill(col, col + n, 0);
for (int i = 0; i < k; i++) {
st[i].clear();
en[i].clear();
}
}
int main() {
ios:: sync_with_stdio(0), cin.tie(0), cout.tie(0);
for (cin >> t; t--; reset_variables())
read_input(), solve(), write_output();
return 0;
}
Note: Thanks to _istil and Error_Yuan for suggesting the idea of including this subtask!
2034H - Rayan vs. Rayaneh
Idea and Preparation: MohammadParsaElahimanesh
- Step 1: According to Bézout's Identity, the integers expressible as linear combinations of $$$x_1, x_2, \ldots, x_t$$$ are exactly $$$\gcd(x_1, \ldots, x_t)$$$ and all its multiples.
- Step 2: A set {$$$a_1, \ldots, a_k$$$} is good (integer linearly independent) if for every $$$i$$$, $$$\gcd($$${$$$a_j \mid j \neq i$$$}$$$) \nmid a_i$$$.
- Step 3: A set {$$$a_1, \ldots, a_k$$$} is good if and only if there exists a set {$$${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$$$} such that $$${p_i}^{q_i} \mid a_j$$$ for $$$j \neq i$$$ and $$${p_i}^{q_i} \nmid a_i$$$.
- Step 4: The set {$$$a_1, \ldots, a_k$$$} can be identified by determining {$$${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$$$}. Assume $$$p_1^{q_1} < p_2^{q_2} < \ldots < p_k^{q_k}$$$, where $$$p_i \neq p_j$$$ and $$$p_i$$$ is prime.
- Step 5: Let $$$G = {p_1}^{q_1} \cdot {p_2}^{q_2} \ldots \cdot {p_k}^{q_k}.$$$ Then {$$$a_1, \ldots, a_k$$$} is good if and only if $$$\frac{G}{{p_i}^{q_i}} \mid a_i$$$ and $$$G \nmid a_i$$$ for every $$$i$$$.
- Step 6: The answer is a singleton if, for every pair of numbers $$$x$$$ and $$$y$$$ in the array, $$$x \mid y$$$ or $$$y \mid x$$$. Since the numbers are distinct, a good subset {$$$a_1, a_2$$$} can always be found by searching the first $$$\log M + 2$$$ elements.
- Step 7: Define $$$CM[i]$$$ (count of multiples of $$$i$$$) as the number of indices $$$x$$$ such that $$$i \mid a_x$$$. This can be computed in $$$O(n + M \log M)$$$.
- Step 8: A corresponding set {$$$a_1, \ldots, a_k$$$} exists for a set {$$${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$$$} if and only if $$$CM\left[\frac{G}{{p_i}^{q_i}}\right] > CM[G] \geq 0$$$ for all $$$i$$$.
- Step 9: Iterate over all valid sets of the form {$$${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$$$}, and check if a corresponding {$$$a_1, a_2, \ldots, a_k$$$} exists. Note that $$$k \geq 3$$$ since a good subset {$$$a_1, a_2$$$} is found using another method.
- Step 10: We know $$$\frac{G}{{p_1}^{q_1}} \leq M$$$ and also $$${p_1}^{q_1} \leq \sqrt{M},$$$ as $$${p_1}^{q_1} \leq \sqrt{{p_2}^{q_2} \cdot {p_3}^{q_3}} \leq \sqrt{\frac{G}{{p_1}^{q_1}}} \leq \sqrt{M}.$$$
- Step 11: There are $$$\sum_{i=1}^{\frac{\log M}{2}} P[\lfloor \sqrt[2i]{M} \rfloor]$$$ numbers in the form $$${p_1}^{q_1}$$$, where $$$P[i]$$$ denotes the number of primes in the range $$$[1, i]$$$. This count is $$$O(\frac{\sqrt M}{\log M})$$$.
- Step 12: The value of $$$k$$$ is at most 6 (denoted as $$$K$$$), as $$${p_2}^{q_2} \ldots {p_k}^{q_k} = \frac{G}{{p_1}^{q_1}} \leq M,$$$ and $$$3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \leq M < 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \cdot 17.$$$
- Step 13: We can determine {$$$a_1, \ldots, a_k$$$} from {$$${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$$$} in $$$O(n \cdot K)$$$.
The total time complexity is $$$O\left(T \cdot M \cdot \frac{\sqrt M}{\log M} \cdot K + T \cdot M \cdot \log M + \sum_{i=0}^T n_i \cdot K\right).$$$
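As a small illustration of the Step 2 criterion, here is a standalone check (a sketch of my own, not part of the official solution below) that a set is integer linearly independent, i.e. for every $$$i$$$ the gcd of the other elements does not divide $$$a_i$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Step 2 criterion: {a_1, ..., a_k} is good iff for every i,
// gcd({a_j | j != i}) does NOT divide a_i. Assumes k >= 2.
bool isGood(const vector<long long>& a) {
    int k = a.size();
    for (int i = 0; i < k; i++) {
        long long g = 0;                  // gcd of all elements except a[i]
        for (int j = 0; j < k; j++)
            if (j != i) g = gcd(g, a[j]);
        if (a[i] % g == 0) return false;  // g | a_i breaks independence
    }
    return true;
}
```

For example, {6, 10, 15} is good (the gcds of the complementary pairs are 5, 3, 2, and none divides the remaining element), while {2, 3, 5} is not, since the gcd of any two of them is 1.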
/// In the name of God the most beneficent the most merciful
#pragma GCC optimize("Ofast,no-stack-protector,unroll-loops,fast-math,O3")
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
constexpr int T = 100;
constexpr int M = 100001;
constexpr int SQM = 320;
constexpr int LGM = 20;
vector<pair<int,int>> factor;
int t, n[T], count_multipliers[T][M];
bitset<M> is_composite;
vector<int> ans[T], a[T];
inline void calculate_importants() {
for(int i = 2; i < SQM; i++)
if(!is_composite[i]) {
for(int j = i; j < M; j *= i)
factor.push_back({j,i});
for(int j = i*i; j < M; j += i)
is_composite.set(j);
}
for(int i = SQM; i < M; i++)
if(!is_composite[i])
factor.push_back({i,i});
sort(factor.begin(), factor.end());
}
void check(vector<int> &factors, int G) {
if(factors.size() > 2u) {
for(int i = 0; i < t; i++)
if(ans[i].size() < factors.size()) {
int count_product = (G < M? count_multipliers[i][G] : 0);
bool can = true;
for(auto u: factors)
if(count_multipliers[i][G/factor[u].first] == count_product) {
can = false;
break;
}
if(can)
ans[i] = factors;
}
}
int bound = (factors.size() == 1 ? SQM : M);
if(1LL*G/factor[factors[0]].first*factor[factors.back()].first > bound)
return;
for(int new_factor = factors.back(); G/factor[factors[0]].first*factor[new_factor].first <= bound; new_factor++)
if(G%factor[new_factor].second) {
factors.push_back(new_factor);
check(factors, G*factor[new_factor].first);
factors.pop_back();
}
}
int main() {
ios_base :: sync_with_stdio(false); cin.tie(nullptr);
calculate_importants();
cin >> t;
for(int i = 0; i < t; i++) {
cin >> n[i];
a[i].resize(n[i]);
for(int j = 0; j < n[i]; j++) {
cin >> a[i][j];
count_multipliers[i][a[i][j]]++;
}
ans[i] = {a[i][0]};
sort(a[i].begin(), a[i].begin()+min(n[i], LGM));
for(int c = 0; c+1 < n[i]; c++)
if(a[i][c+1]%a[i][c]) {
ans[i] = {a[i][c], a[i][c+1]};
break;
}
for(int c = 1; c < M; c++)
for(int j = c+c; j < M; j += c)
count_multipliers[i][c] += count_multipliers[i][j];
}
for(int i = 0; factor[i].first < SQM; i++) {
vector<int> starter = {i};
check(starter, factor[i].first);
}
for(int i = 0; i < t; i++) {
int k = ans[i].size();
cout << k << '\n';
if(k == 1u) {
cout << ans[i][0] << '\n';
} else if(k == 2u) {
cout << ans[i][0] << ' ' << ans[i][1] << '\n';
} else {
int subset[k];
for(auto u: a[i]) {
int ls = -1;
for(int j = 0; j < (int)k; j++)
if(u%factor[ans[i][j]].first)
ls = (ls == -1? j: -2);
if(ls >= 0)
subset[ls] = u;
}
for(int j = 0; j < k; j++)
cout << subset[j] << (j+1 == k? '\n' : ' ');
}
}
return 0;
}
/// Thank God . . .
will $$$O(1000*1000*100)$$$ pass for problem A?
Constraints for problem A were a bit misleading: as soon as I saw them, I decided to just write brute force (since lcm $$$< 10^6$$$), and pretests passed.
But in system testing, my solution failed.
This is an unfortunate coincidence.
I'm sure the Polygon guide also specifies to test in Python for easy problems though, do they pass easily too? Also, none of the testers tried it with long long?
We tested brute force and we tested Python, but we forgot to test brute force in Python. And yes, exactly one tester tried long long and got AC in 827 ms, because there was an "almost max test" rather than the max test consisting only of 1000 999.
Yes, this could have been fixed in time. Fortunately we avoided other (more critical) issues, such as both the model solution and the checker being wrong in problem G.
I used long long int and the code is somewhat similar to yours. My code passed the pretests but got TLE on the main tests. If I had gotten the TLE verdict in time, I could've tried to fix my code. BTW congrats on becoming IM :)
Yes, it was actually lucky for me to get TLE on pretests. It could've miserably failed on main tests without it.
Also thanks :) It's surprising that I got 7 more rating than carrot's prediction.
What is the polygon rule?
The contests are prepared on Polygon. Problemsetters should follow some rules there. I don't know if you are allowed to see them.
I think we are. This document was given in this blog.
Unfortunately it cost me becoming master :(
my bruteforce solution passed
Same !
Misleading for sure. Most would assume $$$O(1e8)$$$ time complexity, but forget that the modulo operation isn't exactly fast. One of my friends also used brute force and failed on test 6. Another one tried brute-forcing but implemented it poorly, so he could catch his mistake on pretest 3.
I think that using long long is also a factor. Sometimes changing from long long to int has helped me cut my running time by two-thirds. Anyway, I just tested his solution with t = 100 and 999 1000 for each test case, and it ran for more than 2 seconds on C++11 and C++14. I think I learned something, as I've never considered this a major problem before :)
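To make the discussion above concrete, here is a minimal sketch (my own illustration, not any commenter's actual submission) of why the naive scan is slow: per the editorial, the answer is the first $$$m$$$ divisible by both $$$a$$$ and $$$b$$$, so a linear scan performs $$$\Theta(\mathrm{lcm}(a,b))$$$ iterations with two modulo operations each, while the closed form needs only one gcd:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive scan: Theta(lcm(a, b)) iterations, two modulo ops per iteration.
// With a = 999, b = 1000 that is ~10^6 iterations per test case, and the
// modulo cost (worse with 64-bit operands) is what pushed brute force
// over the limit with t = 100.
long long bruteA(long long a, long long b) {
    for (long long m = 1;; m++)
        if (m % a == 0 && m % b == 0) return m;
}

// Closed form from the editorial: lcm(a, b), computed in O(log(max(a, b))).
long long fastA(long long a, long long b) { return lcm(a, b); }
```

Both agree, of course; only the iteration count differs.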
https://hydro.ac/d/wz1212002/
I don't know about checking all possible solutions. But this simple solution works.
I solved 4 questions with no WA... super happy!!!
Bro, I'm new here; I could only solve 2 problems so far, and I'm sad. What approach did you use for D, greedy? Can you tell me about it?
Yes, iterated from right to left, and if the element can be made bigger I borrowed from the left.
Feeling Bad for tourist.
Can someone tell me if my intuition is right or wrong? I reasoned from the condition mentioned in the problem: $$$m$$$ must be at least $$$\max(a, b)$$$, since $$$m$$$ can't be smaller than $$$a$$$ or $$$b$$$; then I just used a simple while loop, incrementing $$$m$$$ until the condition held. Am I right?
I think your code's okay? I just submitted your exact code and it got an Accepted verdict.
Yeah, I know; I just want to know if my intuition is also right, because I also saw the solution with $$$a \cdot b / \gcd(a, b)$$$. And I still couldn't manage to solve at least two problems in this contest!!
Yeah, that's what I used to solve A too. Don't worry, it's a Div1 + Div2 contest, so just practice more and you'll get used to Div2B problems. Six months ago Div2B problems were still challenging to me :) Have a nice day. Today I got a negative delta too, not a good performance.
Thank you, have a great day too !! ^_^
Hi, Can anyone help me understand how to intuitively come up with the solution for k=3 case in problem E? I thought about the other cases but didn't get the idea about this case.. Thanks
Try writing a brute force and noticing a pattern, or just try random strategies for constructions and find one that works which is what I did but this is prob a lot slower than just noticing the pattern.
Bruh, I managed to get an Idleness Limit Exceeded on a non-interactive problem.
294097379
Figured it out. I was outputting an N x M matrix to the error stream.
Top 60, excluding profiles with no countries.
Thanks
The editorial for E is quite unclear. I understand that if $$$k$$$ is even, we can form $$$k/2$$$ pairs $$$[p_i, \text{reverse}(p_i)]$$$; as each pair sums to a constant row, the table is good. Now let $$$k \bmod 2 = 1$$$; then the solution suggests that we can represent $$$k$$$ as $$$k = 2 + \ldots + 2 + 3$$$.
Also, the solution constructs an example of three permutations {p1, p2, p3} that p1 + p2 + p3 is a constant row.
What I don't understand, is why we can choose the remaining pairs in a way they don't intersect with the set {p1, p2, p3}
Note that $$$k$$$ cannot be $$$n! - 1$$$ (similar as it cannot be $$$1$$$). Therefore, $$$k \leq n! - 3$$$. An answer always exists since there are three forbidden permutations (the reverses of $$$p_1, p_2,$$$ and $$$p_3$$$).
I see it now, thanks!
In problem D, with a little optimization, a "brute force" solution using 3 sets to store the indices of 2s, 1s, and 0s is accepted. Is this intended?
I used the same approach.... got accepted.
I think using one variable to store the index of one "1" is enough.
why do you consider this brute force?
Oh, I guess it's just the nature of my solution lol. I just used a loop while the array is not sorted, swapping 2s with 1s and 1s with 0s. And it turned out to be accepted, without TLE.
I think it depends more on how you determine which swaps to do. The sets on their own gives you log(n) access and removal, which is already better than nothing.
I would think that if you used more than n swaps, your solution wouldn't be accepted; and if you're applying at most n swaps, then as long as determining each swap takes less than O(n) time, it would be accepted, and I'd consider that not brute force.
I fell from around rank 4k to 7k just because of using long long instead of int. Can't believe it.
Just another approach for problem D, which is clearer to me than the editorial's. Iterate over the array and notice that if there is a 2 at position $$$i$$$, then either it is already in place (like 0 1 2) or we need to swap it with some 1 to its right (like 0 2 1 -> 0 1 2), since the difference between the swapped elements must be exactly 1. It is clearly best to swap it with the rightmost 1, which guarantees it ends up sorted (2 1 1 -> 1 1 2). The same logic applies to every 1, except we swap it with the rightmost 0 instead. My solution 294080860 does this by keeping a priority_queue of positions of 1s and another of 0s (maybe we don't even need these and could use just two variables instead). So when I meet a 2, I swap it with the top of the priority_queue (the index of the rightmost 1), decrease my value by 1, increase the rightmost 1 by 1 (2 2 1 0 1 2 -> 1 2 2 0 1 2), and pop the value from the queue (since that 1 becomes a 2 after the swap). I do the same for every 1 I meet, not forgetting to update the queue of 1s, because when we swap a 1 with the rightmost 0, the new 1 can become the rightmost 1 for some 2 to swap with (for example 1 2 1 0 -> 0 2 1 1 -> 0 1 1 2). If you are sure that we don't need these queues and could store only two values, or if this solution has a countertest, please let me know :)
Problem D can also be solved in a simple way by just comparing the original array with its sorted version. Handle the cases greedily while traversing array A from the back (B is the sorted version of A):
AC solution — 294056505
Yes, I used a similar approach 294077012
Can you give an intuitive explanation or proof for why this always works in <= n operations?
Note that for case 3 we have to swap the leftmost 2 in array A.
Since you can do at most n operations, you can "assign" each operation to some index, so that every index has at most 1 operation assigned to it. Then, for cases 1, 2 we assign their operations to index i. The catch is when we try to do the 3rd case. Let's assign swap operation from 0 to 1 to index of 0 and swap operation from 1 to 2 to index of 2. Notice that we would need to perform a swap on index of 2 (let's say x) only if there is some 2 to the left of x (because A[x] will become 1 after swapping it). That means if we swap the leftmost 2 in 3rd case we will be fine.
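To make the rules discussed in this thread concrete, here is a small validator (a hypothetical helper of my own, not from any submission) that replays a proposed swap sequence and checks the constraints quoted above: at most $$$n$$$ operations, each swap exchanges values differing by exactly 1, and the result is sorted:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Replays a swap sequence on a copy of the array and checks the problem's
// constraints: at most n operations, swapped values differ by exactly 1,
// and the final array is sorted.
bool validOps(vector<int> a, const vector<pair<int,int>>& ops) {
    if (ops.size() > a.size()) return false;      // at most n operations
    for (auto [i, j] : ops) {
        if (abs(a[i] - a[j]) != 1) return false;  // only values differing by 1 may swap
        swap(a[i], a[j]);
    }
    return is_sorted(a.begin(), a.end());
}
```

Such a checker is handy for stress-testing any of the greedy solutions above against a brute force.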
Can anyone tell why this code is giving TLE on TC: 7 for problem C: 294109488.
Thanks!
Maybe this part makes it $$$(nm)^2$$$:
I tried adding counters on the number of iterations and it seems it is timing out at BFS loop somehow. See this submission: 294108604
How is the part you mentioned $$$(nm)^2$$$, MohammadParsaElahimanesh?
Latest submission (still TLE on tc7): 294111670
Not sure what will fix this :(
You do BFS $$$n \cdot m$$$ times, and BFS itself is $$$O(N + M)$$$, where $$$N$$$ denotes the number of vertices ($$$n \cdot m$$$ here); the same holds for your DFS part, I think.
to help you with local debugging, you can run your code on this test and see that it is slow
I think that the problem is here (I had to replace the dollar sign with # as it was breaking markup)
So inside bfs you are following some paths, so each "recursive" bfs call is not $$$O(1)$$$ anymore but can be up to $$$O(|path|)$$$, and in bad cases a path can be as long as $$$O(n^2)$$$; thus the overall complexity is $$$O(n^3)$$$.
The only issue with the above code was that visited_nodes was local to the lambda function, causing the complexity to escalate to O((nm)^2): although we mark 'v' as well, if we call bfs from some other node i' and j', it is possible that we again check for loops, which itself has worst-case complexity n*m. In my TLE code the iteration was constrained by the condition
if (visited_nodes.count({nx, ny})) return;
and changing this to
if (v[nx][ny]) return;
fixed the code. Removing visited_nodes and using only the 'v' vector to mark visited nodes resolved the issue. Here is the AC submission: 294113385. Thanks a lot MohammadParsaElahimanesh and polka125, I really appreciate your insights!
I liked the problems and will upsolve C and D for sure. Nice stories too, thanks for the good problems.
In the editorial for F2, where did the upper bound $$$k$$$ come from in the sum $$$i \le j \le k$$$? There is no mention of what $$$k$$$ is there.
Hours later, I reread my comment and realized that $$$k$$$ is the variable from the problem statement.
Could you explain F2 more intuitively? For example, what does $$$weight_i$$$ mean? I don't see anywhere that we process the doubling of the value.
Also it seems that by calculating the sum of $$$dp_i$$$, we're summing up the contribution of the values before reaching the first special point. What about the values we gained after reaching that point?
I understand that these ideas are meant to solve F1 only. However, I just can't figure out how to optimize from F1 to F2. Or is F2 a completely different approach from F1?
The approach to F2 is very different and should not be seen as an optimization of F1 and I think it's not explained clearly in the editorial. We never consider values like "number of ways to go from scroll $$$i$$$ to scroll $$$j$$$ without passing through any other scroll on the way". The main idea uses the fact that $$$2^i$$$ is actually a very specific function — I don't think the approach can be generalized to even $$$3^i$$$ for example.
The key insight is to relate $$$2^i$$$ with the number of subsets of an $$$i$$$-element set. If you consider a path that goes through scrolls $$$s_1, \ldots, s_c$$$ (and no other scrolls) and we consider a gem taken before entering $$$s_1$$$, it is counted with multiplicity $$$2^c$$$. But rather than considering it with the multiplicity $$$2^c$$$, we will count it once per an event "for that gem, there is a path that goes through some set of scrolls $$$q_1, \ldots, q_d$$$ after taking that gem (but possibly some other scrolls as well)". There are $$$2^c$$$ of ways you can choose $$$q_1, \ldots, q_d$$$ out of $$$s_1, \ldots, s_c$$$, so if you just ignore the "disjointness condition" (i.e. the "without passing through any other scroll on the way" part) then multiplicities will magically work out by themselves.
But even after that point, getting the final result out of these is nontrivial as well. I used the identity that $$$2^c = 2^{c-1} + \ldots + 2^1 + 2^0 + 1$$$.
I think you can solve it with $$$3^i$$$, just add a factor of $$$2$$$ here:
add(cnt_weighted[i], 2 * paths(i, j) * cnt_weighted[j]);
We considered giving this version in the contest, would it have made sense?
Right, that's true :). It crossed my mind when writing the previous comment, but I didn't have time to think it through. Nevertheless, it seems that to come up with the $$$3^i$$$ version one has to solve $$$2^i$$$ first as an intermediate step. It would make sense to pose it in the contest, but its score would have to be increased appropriately.
This is very cool, pity that the person who wrote the editorial didn't understand the solution
⏳ This happened because we were in a hurry to publish it, juggling multiple tasks, and dealing with limited time. Apologies for the oversight!
Sorry for the earlier miswriting and for missing parts of the problem in the editorial. I’ve edited it, and I believe the current version is now clear and detailed enough for anyone to understand the solution steps.
I’ve also tried to address the points raised in your comments and those from others as well. Please let me know if you still feel something is missing or needs further clarification.
Apologies again, and thank you for your feedback! 💡
This is not true; mine is. You can optimize it through some coefficient magic.
Elaborating slightly: note that you do inclusion-exclusion on the paths, so for a fixed i, and a fixed j where you are considering transitions from state j to i, we need to subtract ways from i to k, then k to j, for some intermediate state k.
The core observation is whenever k is included the set of numbers, so will i, and thus, we can associate the subtracted coefficient of the path i -> k -> j with k (along with its normal coefficient where you go directly from k -> j).
Dang, if some red is struggling with it, then I have no chance.
I actually think F2 is some kind of obscure magic, and I don't get why it has so many solves...
Similar ideas have appeared before, e.g. https://atcoder.jp/contests/dp/tasks/dp_y
In my opinion, dp_y is the easy part of the problem.
I'll try to explain this the way I solved, which seems similar to the editorial solution but in a more intuitive manner.
Observe that the process is equivalent to making the following change: Instead of multiplying the value of rubies in our satchel by 2 when a special condition is achieved, we spawn one identical ruby for each ruby currently in our satchel.
Let $$$dp_{x,y}$$$ represent the expected value of rubies spawned at state $$$(x, y)$$$, where $$$x$$$ is the number of red rubies and $$$y$$$ is the number of blue rubies in the satchel when a special condition is achieved.
The probability of transitioning from state $$$(x_1, y_1)$$$ to $$$(x_2, y_2)$$$ is given by the hypergeometric distribution:
where $$$x_2 \geq x_1$$$ and $$$y_2 \geq y_1$$$, otherwise, we cannot reach $$$(x_2, y_2)$$$ from $$$(x_1, y_1)$$$.
In the DP transitions, we either spawn some rubies from those drawn from the chest, or we spawn rubies from those spawned at a previous state.
For the first case, we have:
where we multiply by $$$2x + y$$$, which represents the value of rubies spawned for each ruby drawn from the chest.
For the second case, we have:
where we multiply the expected value of rubies from a previous state by the probability of transitioning to the new state.
So, iterate over states in sorted order and calculate the DP values. In the end, the answer will be expected value of spawned rubies $$$+$$$ rubies drawn from chest $$$=$$$ (sum of $$$dp_i$$$ over all $$$i$$$) $$$ + $$$ $$$(2n + m)$$$.
This method can be extended to multiply by any number $$$m$$$ by spawning $$$(m - 1)$$$ rubies for each ruby in the satchel.
Say for the sake of simplicity you have one gem, ever. In the end, if we define phantom conditions $$$(0,0)$$$ and $$$(m,n)$$$, you end up counting expressions of the form $$$p(0,i_1)p(i_1,i_2)\ldots p(i_j,k+1)$$$ where $$$p(a,b)$$$ is the number of ways to get from condition $$$a$$$ to condition $$$b$$$ without restrictions, with the conditions topologically sorted. A path that goes through exactly $$$x$$$ conditions will be counted $$$2^x$$$ times: for example, the path 1 -> 2 -> $$$k+1$$$ is counted twice, in 1 -> k+1 and 1 -> 2 -> k+1. Then, accounting for gems is straightforward because we can treat all the gems as spawning at once when we reach a condition, lazy prop style.
As to how to motivate this: I agree that F1 is kinda a dead end. One scenario I can think of would be as follows:
After doing F1, you stare at the formula for inclusion-exclusion. For getting from condition $$$0$$$ to condition $$$k+1$$$ without passing through anything else, you add expressions of the form $$$p(0,k+1),p(0,i_1)p(i_1,i_2)p(i_2,k+1),\ldots$$$ and $$$-p(0,i_1)p(i_1,k+1),-p(0,i_1)p(i_1,i_2)p(i_2,i_3)p(i_3,k+1)\ldots$$$
You somehow figured that you likely have to do some algebraic manipulations that modify inclusion-exclusion so that you do it once per $$$k$$$ instead of $$$O(k)$$$ times. Let's go back to the basics: consider the 1-gem case with $$$k=1$$$ (which is a subproblem of larger $$$k$$$ anyways) is there a way to count the path that meets the condition twice right away? You can just add $$$p(0,1)p(1,2)$$$ instead of subtracting it, and that can be tracked in the dp — then you basically have the solution.
Can someone help me figure out the reason for MLE despite the space complexity being O(n)?
https://mirror.codeforces.com/contest/2034/submission/294082426
The idea was to first swap all 1's and 2's using 2 pointers and then swap all 0's and 1's to sort them followed by 1's and 2's
I guess you cannot sort it correctly and you have an infinite loop, which leads to many swaps and MLE. You can use an assertion like swaps.size() <= 2n to find the infinite loop. (I cannot test it myself right now, maybe some hours later :/)
294071265: this gives a runtime error on pretest 8 for problem C. Can anyone tell what the reason is? I have not seen similar verdicts in any other solutions, so I am curious what caused this error.
can someone explain F2 in more detail?
Read the editorial again. It was updated!
Why does the F1 solution work with non-equal probabilities of the transitions from (r, b) to (r-1, b) and (r, b-1)? It seems to treat every path as having the same probability.
Let's calculate the probability of going from $$$(0, 0)$$$ to $$$(n, m)$$$ following a specific path.
So the probability is $$$\binom{n+m}{n}^{-1}$$$ regardless of the path.
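This can be sanity-checked numerically. A small sketch of my own (with 'R'/'B' denoting which gem is drawn at each step, an assumed encoding) multiplies the per-step transition probabilities along a given draw order:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Probability of one specific draw order starting with n red and m blue gems:
// at each step a red gem is drawn with probability r/(r+b), blue with b/(r+b).
double pathProb(int n, int m, const string& path) {
    double p = 1.0;
    int r = n, b = m;
    for (char c : path) {
        if (c == 'R') { p *= double(r) / (r + b); r--; }
        else          { p *= double(b) / (r + b); b--; }
    }
    return p;
}
```

For $$$n = 2, m = 1$$$, each of the $$$\binom{3}{2} = 3$$$ orders (RRB, RBR, BRR) comes out to exactly $$$1/3$$$, matching the claim.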
For problem A, can we use binary search on the answer?
I don't think so.
You can give it a try. It works, but it is not the best. The answer lies between 1 and $$$m \cdot n$$$, so you can binary search in $$$O(\log(m \cdot n))$$$ time. But in fact we know that the answer is $$$\mathrm{lcm}(m, n)$$$, which takes $$$O(\log(\max(m, n)))$$$.
There's nothing binary-searchable in this problem. You cannot determine whether the answer is greater or smaller than a specific value.
I thought that if the condition is met for some $$$m$$$, then every larger $$$m$$$ to the right would also meet it, so we could eliminate the right side and check the left for the least value.
But it's not true:
$$$(15 \% 4) = (15 \% 6) = 3$$$, but $$$(16 \% 4) = 0 \neq 4 = (16 \% 6)$$$
Gotcha !
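The counterexample above can also be checked directly. A tiny sketch (my own) showing that the predicate $$$m \% a = m \% b$$$ is not monotone in $$$m$$$, which is why it admits no binary search:

```cpp
#include <bits/stdc++.h>
using namespace std;

// The predicate "m % a == m % b" flips back and forth as m grows,
// so there is no prefix/suffix structure to binary search over.
bool sameRem(int m, int a, int b) { return m % a == m % b; }
```

With $$$a = 4, b = 6$$$: the predicate is true at $$$m = 12$$$ and $$$m = 15$$$, false at $$$m = 16$$$, and true again at $$$m = 24$$$.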
wow, so F1 editorial is just my explanation T_T
Yes, we used almost the same writing format. Your solution is more complete, because you also described how to compute $$$ways_{i,j}$$$ as well!
Does anyone know why this submission has runtime error?
I am trying to greedily solve the problem in parts, first by the groups of 0s and 1s
This div was really discouraging.
At which point? The tourist dramas, or the #define int long long dramas, or something else? =))
A way simpler solution for problem D: https://mirror.codeforces.com/contest/2034/submission/294029509
Nice!
can you help me in problem D here is the link to the comment or you can scroll down
In problem E, is there some proof that there's no solution for $$$k = n! - 1$$$?
It's pretty obvious, because if you use all $$$n!$$$ permutations, you get the same sum $$$S$$$ in each column. Then, if you remove any permutation, the sum in each column will be unique.
For example if you remove something like $$$[1, 3, 4, 2]$$$, then the sum in each column will be $$$[S - 1, S - 3, S - 4, S - 2]$$$.
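The argument above is easy to verify exhaustively for small $$$n$$$. A sketch (my own) checking that all $$$n!$$$ permutations give equal column sums, and that removing any single permutation makes the column sums pairwise distinct:

```cpp
#include <bits/stdc++.h>
using namespace std;

// For permutations of {1..n}: stacking all n! of them gives the same column
// sum S in every column; removing one permutation p leaves column sums
// S - p[i], which are pairwise distinct because p's entries are distinct.
bool removalBreaksConstancy(int n) {
    vector<int> p(n);
    iota(p.begin(), p.end(), 1);
    vector<vector<int>> perms;
    do perms.push_back(p); while (next_permutation(p.begin(), p.end()));
    vector<long long> total(n, 0);
    for (auto& q : perms)
        for (int i = 0; i < n; i++) total[i] += q[i];
    for (int i = 1; i < n; i++)
        if (total[i] != total[0]) return false;   // all columns sum to the same S
    for (auto& removed : perms) {
        set<long long> sums;
        for (int i = 0; i < n; i++) sums.insert(total[i] - removed[i]);
        if ((int)sums.size() != n) return false;  // sums must all differ
    }
    return true;
}
```

So a table of $$$n! - 1$$$ permutations can never have equal column sums, confirming the claim.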
In problems F1 and F2, why is there the constraint
It is guaranteed that the sum of n and the sum of m over all test cases do not exceed $$$2⋅10^5$$$
? It has nothing to do with the approach used. Or is there some other possible way to solve it?
Probably just a general technique to hide intended solution.
I got into this trap and tried to come up with some $$$\mathcal{O}((n+m) \cdot k)$$$ solutions but failed :(
Can anyone provide a proof for the greedy solution in G1? I came up with some similar greedy to check whether the answer is 2, but they all turned out to be incorrect.
Think of it this way:
A solution whose complexity is $$$O(T\cdot M\cdot \log M\cdot K + M\cdot \sqrt{M} + \sum_{i=1}^Tn_i\cdot \log n_i)$$$ cannot easily pass H. I think this solution might not be worse than the tutorial. It may be better if it is guaranteed that the sum of $$$M$$$ doesn't exceed $$$3\cdot 10^6$$$.
$$$M$$$ equals $$$\max a_i = 10^5$$$. Actually, I don't know the exact (Theta) complexity of the solution, but I'm sure it's fast enough, as it appears in the blog too.
Why $$$\log n_i$$$?
I just sort, though I think it's not important to the complexity.
I don't know what is happening, but I am stuck on this weird problem: I get the correct answer on my local system, but after submission I get a totally different answer.
294374002
In the above submission, if you go to the second test case, then for the test case
4
2 0 0 1
I am getting a weird answer: 5 1 4 2 1 2 3 3 4 4 3
but on my local system I get the correct answer; here is the image of the execution.
Need help.
These types of differences arise due to the varying behavior of compilers on different systems. For instance, if you access an out-of-bounds index in an array (e.g., a[-1]), your local compiler might return a default value like 0 due to its specific behavior, while on the judge system, it could return a completely random value from memory (e.g., 1948329449) or even result in a runtime error (RTE) if the compiler detects the invalid access.
In your case, I suspect something similar is happening, which is why you’re seeing different results on your local system compared to the judge output. The good news is that you’ve identified the specific test case causing the issue, and it’s a small test. This makes it easier to debug your code by carefully checking for instances where you’re accessing out-of-bounds elements in your arrays.
Thank you Arshia, I really appreciate your response. I will look into it carefully. Thank you once again!
Can anyone point out the logic error in my code? It gives a wrong answer on test case 7. I am marking inescapable cells as infected and then doing flood fill from them. https://mirror.codeforces.com/contest/2034/submission/294627595
Submission:294661590
Anyone, please help!! My submission uses the same memory for all test cases. It passed the first 6 test cases, but why did it fail on the 7th??
These types of issues occur due to exceeding the stack memory size as a result of making too many recursive function calls (almost infinitely), which ultimately leads to MLE (Memory Limit Exceeded). Even if you manage to fix the MLE issue, I believe you might still encounter TLE (Time Limit Exceeded) because of the sheer number of recursive calls being made.
MLE tends to happen faster than TLE, which is why, in some cases where you’re using a fixed amount of memory for each test, the program runs into MLE first due to excessive recursion before it can hit the time limit.
thanks !!
https://mirror.codeforces.com/contest/2034/submission/294830979
Can someone please explain if doing a BFS would work for problem C? Marking all the exitable cells and the '?' cells which are surrounded by these exitable cells on all 4 directions and then subtracting them from the total.
My solution to F2:
Consider the expectations directly. Let $$$f_i$$$ stand for the expected value of the path from $$$(n, m)$$$ to condition $$$i$$$. Initially $$$f_i = 2(n - r_i) + (m - b_i)$$$. Then we calculate the contribution of each condition separately; this is fine because of the linearity of expectation. If we pass a condition, the value doubles at that moment; in other words, the contribution of this condition is the value of the path before it. Therefore, for each condition $$$j$$$ from which we can go to $$$i$$$, let $$$p$$$ be the probability that a path from $$$(n, m)$$$ to $$$i$$$ passes through $$$j$$$, and set $$$f_i \gets f_i + p f_j$$$.
This works no matter what the coefficient of the conditions is.
I'm posting this because I'm not sure if my idea is correct, or I just accidentally wrote a right code.
shit E :( and good F :)