A — Constructive Problems
Author: valeriu
A key invariant given by the problem is that, no matter how many times we apply adjacent aid to any state of the matrix, the set of rows that contain at least one rebuilt city and the set of columns that contain at least one rebuilt city both remain constant. Therefore, if applying adjacent aid repeatedly is to eventually produce a full matrix, both of these sets must already contain all rows/columns. As such, the answer is bounded below by $$$\max(n, m)$$$.
This bound is tight: we can exhibit an initial configuration with exactly $$$\max(n, m)$$$ cities that satisfies the statement. Assuming without loss of generality that $$$n \le m$$$, the following initial setting works: $$$(1, 1), (2, 2), (3, 3), \ldots, (n, n), (n, n + 1), (n, n + 2), \ldots, (n, m)$$$
I have proposed this div2A to 3 contests, and after a year of waiting I was finally able to use it (mainly because, theoretically, this was supposed to be my round :)).
As the title suggests, this problem was inspired by one day trying to solve some constructive problem that required drawing a weird grid with certain properties. As I was drawing multiple grids to try out multiple strategies, I wondered how to draw these grids more efficiently: actually counting out the height/width of every matrix was pretty annoying, whereas I could just eyeball it by drawing each grid next to an already drawn (and filled out) one. As such, I needed an already drawn grid "below" the current one and another "to the left".
#include <bits/stdc++.h>
#define all(x) (x).begin(),(x).end()
using namespace std;
using ll = long long;
using ld = long double;
//#define int ll
#define sz(x) ((int)(x).size())
using pii = pair<int,int>;
using tii = tuple<int,int,int>;
const int nmax = 1e6 + 5;
const int inf = 1e9 + 5;
int n, k, m, q;
vector<int> g[nmax];
int v[nmax];
static void testcase() {
cin >> n >> m;
cout << max(n, m) << '\n';
return;
}
signed main() {
ios::sync_with_stdio(0);
cin.tie(0);
int t;
cin >> t;
for (int i = 0; i < t; i++)
testcase();
}
/**
This year, a centroid problem will be given.
-- Official sources
*/
B — Beginner's Zelda
Author: valeriu
We can prove by induction that on any tree with $$$K$$$ leaves, the answer is $$$\left\lfloor \frac{K + 1}{2} \right\rfloor$$$, i.e. $$$K/2$$$ rounded up. We will give an overview of what the proof looks like:
- For two leaves, the answer is clearly $$$1$$$.
- For three leaves, the answer is clearly $$$2$$$.
- For four or more leaves, it is always the case that we can find two leaves such that the node created as a result of applying an operation on them has degree greater than $$$1$$$ (i.e. it will not be a leaf), so the operation reduces the number of leaves by exactly $$$2$$$.
The third case holds because a tree with at least four leaves contains either at least two nodes of degree at least $$$3$$$ (and then we can choose two leaves whose chain passes through both of these nodes), or a node of degree at least $$$4$$$.
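For example, in a star with center $$$c$$$ and leaves $$$a, b, d, e$$$ (so $$$K = 4$$$), an operation on $$$(a, b)$$$ merges the path $$$a - c - b$$$ into a single vertex adjacent to $$$d$$$ and $$$e$$$; that vertex has degree $$$2$$$, so the tree now has only the $$$2$$$ leaves $$$d$$$ and $$$e$$$, and one more operation finishes, for a total of $$$2 = \left\lfloor \frac{4 + 1}{2} \right\rfloor$$$ operations.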
#include <bits/stdc++.h>
#define all(x) (x).begin(),(x).end()
using namespace std;
using ll = long long;
using ld = long double;
//#define int ll
#define sz(x) ((int)(x).size())
using pii = pair<int,int>;
using tii = tuple<int,int,int>;
const int nmax = 1e6 + 5;
const int inf = 1e9 + 5;
int n, k, m, q;
int freq[nmax];
static void testcase() {
cin >> n;
for (int i = 1, a, b; i < n; i++) {
cin >> a >> b;
freq[a]++;
freq[b]++;
}
int cnt = 0;
for(int i = 1; i <= n; i++) {
cnt += (freq[i] == 1); // a vertex is a leaf iff its degree is 1
freq[i] = 0; // reset for the next test case
}
cout << (cnt + 1) / 2 << '\n';
return;
}
signed main() {
ios::sync_with_stdio(0);
cin.tie(0);
int t;
cin >> t;
for (int i = 0; i < t; i++)
testcase();
}
/**
This year, a centroid problem will be given.
-- Official sources
*/
C — Largest Subsequence
Author: tibinyte
We can notice that each operation cyclically shifts the lexicographically largest subsequence of the current string one position to the right, and that afterwards the first position of that subsequence (which now holds what used to be its last character) drops out of the new largest subsequence. Consequently, applying operations until nothing changes will ultimately reverse the lexicographically largest subsequence of the initial string, in place.
Thus, we can perform the whole reversal at once and easily check whether the resulting string is sorted; if not, the answer is $$$-1$$$. The process stabilizes once the remaining characters of the subsequence are all equal, so the number of operations is the length of the subsequence minus the length of its longest prefix of equal characters.
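For example, for $$$s = \texttt{accb}$$$ the lexicographically largest subsequence is $$$\texttt{ccb}$$$ (positions $$$2, 3, 4$$$) and its longest prefix of equal characters is $$$\texttt{cc}$$$, so $$$3 - 2 = 1$$$ operation is needed: it turns the string into $$$\texttt{abcc}$$$, which is sorted.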
This solution works in $$$O(n)$$$ time.
#include <bits/stdc++.h>
using namespace std;
int main()
{
cin.tie(nullptr)->sync_with_stdio(false);
int q;
cin >> q;
while (q--)
{
int n;
string s;
cin >> n >> s;
s = '$' + s;
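// the '$' prefix makes the string 1-indexed;
// below we build the lexicographically largest subsequence:
// a stack of indices whose characters are non-increasing from bottom to top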
vector<int> subset;
for (int i = 1; i <= n; ++i)
{
while (!subset.empty() && s[subset.back()] < s[i])
{
subset.pop_back();
}
subset.push_back(i);
}
int ans = 0;
int m = (int)subset.size() - 1;
while (ans <= m && s[subset[ans]] == s[subset[0]])
{
ans++;
}
ans = m - ans + 1;
for (int i = 0; i <= m; ++i)
{
if (i < m - i)
{
swap(s[subset[i]], s[subset[m - i]]);
}
}
for (int i = 2; i <= n; ++i)
{
if (s[i] < s[i - 1])
{
ans = -1;
break;
}
}
cout << ans << '\n';
}
}
D — Cyclic MEX
Author: tibinyte
Let's analyze how the prefix mexes change when we perform a cyclic shift to the left (i.e. move $$$p_1$$$ to the back):
- The first prefix mex is popped.
- Each prefix mex with a value less than $$$p_1$$$ doesn't change.
- Each prefix mex with a value greater than $$$p_1$$$ becomes $$$p_1$$$.
- $$$n$$$ is appended to the back.
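As a concrete check of these rules, take $$$p = (2, 3, 0, 1)$$$, whose prefix mexes are $$$(0, 0, 1, 4)$$$. Shifting left ($$$p_1 = 2$$$), we pop the leading $$$0$$$, keep $$$0$$$ and $$$1$$$ since they are smaller than $$$2$$$, cap $$$4$$$ down to $$$2$$$, and append $$$n = 4$$$, obtaining $$$(0, 1, 2, 4)$$$ — exactly the prefix mexes of $$$(3, 0, 1, 2)$$$.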
Let's keep the prefix mexes compressed (i.e. store each distinct value once, together with its frequency, instead of storing repeated values). After that, we can simulate the above process naively with a deque: each shift pops one value from the front, merges a suffix of blocks into a single block at the back, and appends one more block. Since every shift creates at most two new blocks, only $$$O(n)$$$ blocks are ever created, so all the merges together cost $$$O(n)$$$.
This solution works in $$$O(n)$$$ time.
#include <bits/stdc++.h>
#define int long long
using namespace std;
int32_t main()
{
cin.tie(nullptr)->sync_with_stdio(false);
int q;
cin >> q;
while (q--)
{
int n;
cin >> n;
vector<int> a(n + 1);
for (int i = 1; i <= n; ++i)
{
cin >> a[i];
}
deque<pair<int, int>> dq;
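// each deque entry: (prefix-mex value, number of consecutive prefixes sharing it)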
vector<int> f(n + 1);
int mex = 0;
int sum = 0;
for (int i = 1; i <= n; ++i)
{
f[a[i]]++;
while (f[mex])
{
mex++;
}
dq.push_back({mex, 1});
sum += mex;
}
int ans = sum;
for (int i = 1; i < n; ++i)
{
pair<int, int> me = {a[i], 0};
sum -= dq.front().first;
dq.front().second--;
if (dq.front().second == 0)
{
dq.pop_front();
}
while (!dq.empty() && dq.back().first >= a[i])
{
sum -= dq.back().first * dq.back().second;
me.second += dq.back().second;
dq.pop_back();
}
dq.push_back(me);
sum = sum + me.first * me.second;
dq.push_back({n, 1});
sum += n;
ans = max(ans, sum);
}
cout << ans << '\n';
}
}
E — One-X
Author: tibinyte
Let's try to solve a slightly easier problem first: sum up, over all subsets, only the contribution of the most significant bit of the label of the LCA.
We can note that at each depth every label has the same number of binary digits, and therefore the same most significant bit. Moreover, at each depth there are at most $$$2$$$ distinct interval lengths. Combining the former with the latter, we can solve this case in $$$O(\log N)$$$ time, since the maximum depth is bounded by $$$O(\log N)$$$.
This generalizes easily: the $$$k$$$-th most significant bit of a label is $$$1$$$ exactly for labels in the subtrees we enter by going right from a node at depth $$$k-1$$$. Thus the above solution can be extended to find the contribution of the $$$k$$$-th most significant bit of each label.
Doing this for all bits gives us a time complexity of $$$O(\log^2 N)$$$, which is sufficient to pass the given constraints.
#include <bits/stdc++.h>
#define int long long
using namespace std;
const int mod = 998244353;
struct Mint
{
int val;
Mint(long long x = 0)
{
val = x % mod;
}
Mint operator+(Mint oth)
{
return val + oth.val;
}
Mint operator-(Mint oth)
{
return val - oth.val + mod;
}
Mint operator*(Mint oth)
{
return val * oth.val;
}
};
Mint powmod(int a, int b)
{
if (b == 0)
{
return 1;
}
if (b % 2 == 1)
{
return powmod(a, b - 1) * a;
}
Mint P = powmod(a, b / 2);
return P * P;
}
map<int, vector<vector<tuple<int, int, int>>>> memo;
vector<vector<tuple<int, int, int>>> find_ranges(int lg) // log^2
{
if (memo.find(lg) != memo.end())
{
return memo[lg];
}
if (lg == 1)
{
return {{{1, 1, 1}}};
}
vector<vector<tuple<int, int, int>>> l = find_ranges((lg + 1) / 2);
vector<vector<tuple<int, int, int>>> r = find_ranges(lg / 2);
vector<vector<tuple<int, int, int>>> ans(max(l.size(), r.size()) + 1);
Mint x = (powmod(2, (lg + 1) / 2) - 1) * (powmod(2, lg / 2) - 1);
ans[0].push_back({lg, 1, x.val});
for (int i = 0; i < (int)l.size(); ++i)
{
for (auto j : l[i])
{
ans[i + 1].push_back(j);
}
}
for (int i = 0; i < (int)r.size(); ++i)
{
for (auto j : r[i])
{
bool ok = false;
for (auto &[size, cnt, ways] : ans[i + 1])
{
if (size == get<0>(j))
{
ok = true;
cnt += get<1>(j);
}
}
if (!ok)
{
ans[i + 1].push_back(j);
}
}
}
return memo[lg] = ans;
}
Mint count(int lg, int coef)
{
vector<vector<tuple<int, int, int>>> adam = find_ranges(lg);
Mint ans = 0;
Mint pow_2 = 1;
for (int i = 0; i < (int)adam.size(); ++i)
{
for (auto [size, cnt, ways] : adam[i])
{
ans = ans + pow_2 * coef * cnt * ways;
}
pow_2 = pow_2 * 2;
}
return ans;
}
int32_t main()
{
cin.tie(nullptr)->sync_with_stdio(false);
int q;
cin >> q;
while (q--)
{
int n;
cin >> n;
vector<vector<tuple<int, int, int>>> adam = find_ranges(n);
Mint ans = count(n, 1);
for (int i = 1; i < (int)adam.size(); ++i)
{
for (auto [size, cnt, ways] : adam[i - 1])
{
int lsize = (size + 1) / 2;
int rsize = size / 2;
if (rsize)
{
ans = ans + count(rsize, cnt);
}
}
}
cout << ans.val << '\n';
memo.clear();
}
}
F — Field Should Not Be Empty
Author: tibinyte
The key observation in this problem is that most swaps are useless. In fact, only $$$2n$$$ swaps can increase our initial cost:
- The first type of meaningful swaps is $$$(i,p_i)$$$
- For each $$$1 < i < n$$$, consider $$$k$$$ and $$$l$$$ such that $$$p_k = max(p_1,p_2,...,p_{i-1})$$$ and $$$p_l = min(p_{i+1},p_{i+2},...,p_n)$$$. The second type is $$$(k,l)$$$.
This is true because every good index must be a fixed point: the first type of swap can create a new fixed point (and hence possibly a new good index), while the second type can turn an existing fixed point into a good index by removing the elements that cross it. A swap that does neither clearly cannot increase the current cost.
Calculating $$$f(p)$$$ after each candidate swap can be done with a segment tree. For every $$$i$$$, add one on the range $$$[\min(i, p_i), \max(i, p_i)]$$$. Now an index is good if and only if its value is $$$1$$$, which is also the minimum value an index can have. Thus our segment tree has to count the number of minimums while performing range additions, which is easily maintained with lazy propagation.
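For example, for $$$p = (2, 1, 3)$$$ we add one on $$$[1, 2]$$$ (for $$$i = 1$$$), on $$$[1, 2]$$$ (for $$$i = 2$$$) and on $$$[3, 3]$$$ (for $$$i = 3$$$), obtaining the array $$$(2, 2, 1)$$$: index $$$3$$$ is the only index with value $$$1$$$, and indeed the only good index.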
This solution works in $$$O(n \log n)$$$ time.
#include <bits/stdc++.h>
using namespace std;
struct aint
{
vector<pair<int, int>> a;
vector<int> lazy;
void resize(int n)
{
a = vector<pair<int, int>>(4 * n);
lazy = vector<int>(4 * n);
}
void init(int node, int left, int right)
{
a[node].second = (right - left + 1);
if (left != right)
{
int mid = (left + right) / 2;
init(2 * node, left, mid);
init(2 * node + 1, mid + 1, right);
}
}
void prop(int node, int left, int right)
{
a[node].first += lazy[node];
if (left != right)
{
lazy[2 * node] += lazy[node];
lazy[2 * node + 1] += lazy[node];
}
lazy[node] = 0;
}
pair<int, int> merge(pair<int, int> a, pair<int, int> b)
{
if (a.first == b.first)
{
return pair<int, int>{a.first, a.second + b.second};
}
return min(a, b);
}
void update(int node, int left, int right, int st, int dr, int val)
{
prop(node, left, right);
if (right < st || left > dr)
{
return;
}
if (st <= left && dr >= right)
{
lazy[node] += val;
prop(node, left, right);
return;
}
int mid = (left + right) / 2;
update(2 * node, left, mid, st, dr, val);
update(2 * node + 1, mid + 1, right, st, dr, val);
a[node] = merge(a[2 * node], a[2 * node + 1]);
}
};
struct bit
{
vector<int> a;
void resize(int n)
{
a = vector<int>(n + 1);
}
void update(int pos, int val)
{
int n = (int)a.size() - 1;
for (int i = pos; i <= n; i += i & (-i))
{
a[i] += val;
}
}
int query(int pos)
{
int ans = 0;
for (int i = pos; i; i -= i & (-i))
{
ans += a[i];
}
return ans;
}
int query(int st, int dr)
{
return query(dr) - query(st - 1);
}
};
int32_t main()
{
cin.tie(nullptr)->sync_with_stdio(false);
int q;
cin >> q;
while (q--)
{
int n;
cin >> n;
vector<int> a(n + 1);
bool sortat = true;
for (int i = 1; i <= n; ++i)
{
cin >> a[i];
if (a[i] != i)
{
sortat = false;
}
}
if (sortat)
{
cout << n - 2 << '\n';
continue;
}
vector<pair<int, int>> qui(n + 1);
stack<int> s;
bit tree;
tree.resize(n);
for (int i = 1; i <= n; ++i)
{
while (!s.empty() && a[s.top()] < a[i])
{
s.pop();
}
if (!s.empty())
{
qui[i].first = s.top();
}
if (tree.query(a[i], n) > 1)
{
qui[i].first = 0;
}
tree.update(a[i], 1);
s.push(i);
}
while (!s.empty())
s.pop();
tree.resize(n);
for (int i = n; i >= 1; --i)
{
while (!s.empty() && a[s.top()] > a[i])
{
s.pop();
}
if (!s.empty())
{
qui[i].second = s.top();
}
if (tree.query(1, a[i]) > 1)
{
qui[i].second = 0;
}
tree.update(a[i], 1);
s.push(i);
}
aint lesgo;
lesgo.resize(n);
lesgo.init(1, 1, n);
function<void(int, int)> upd = [&](int ind, int sign)
{
lesgo.update(1, 1, n, min(ind, a[ind]), max(ind, a[ind]), sign);
};
function<int()> count = [&]()
{
if (lesgo.a[1].first == 1)
{
return lesgo.a[1].second;
}
return 0;
};
function<void(int, int)> mySwap = [&](int i, int j)
{
upd(i, -1);
upd(j, -1);
swap(a[i], a[j]);
upd(i, 1);
upd(j, 1);
};
for (int i = 1; i <= n; ++i)
{
upd(i, 1);
}
int ans = 0;
for (int i = 1; i <= n; ++i)
{
if (qui[i].first && qui[i].second)
{
mySwap(qui[i].first, qui[i].second);
ans = max(ans, count());
mySwap(qui[i].first, qui[i].second);
}
}
vector<int> pos(n + 1);
for (int i = 1; i <= n; ++i)
{
pos[a[i]] = i;
}
for (int i = 1; i <= n; ++i)
{
int qui1 = i;
int qui2 = pos[i];
mySwap(qui1, qui2);
ans = max(ans, count());
mySwap(qui1, qui2);
}
cout << ans << '\n';
}
}
Why is the name of problem C "Adam Lying Face" when it is "Largest Subsequence"? Or am I stupid?
upd. fixed, thanks
the same with problem A
Problem names of A, B and C are wrong:
A — Indirect Sort vs 1905A - Constructive Problems
B — Beginner's Zelda vs 1905B - Begginer's Zelda
C — Adam Lying Face vs 1905C - Largest Subsequence
upd: Thanks for fixing it :)
In problem C my method uses O(n) complexity and yet it is giving TLE on the last test (9). Please look upon this: 237829405
v.insert(...) runs in O(n) time. I can offer deeper feedback if you rewrite your code in a more readable style.
yah, i realised that btw thanks... my codes are horrible lol
Problem C did not mention whether the string had to be sorted in ascending or descending order. I got a wrong answer on test case 2 (21st test, dcbbaa) as I considered sorting in descending order too. Please look into this issue.
gray round, that says it all...
dude, Link
omg gray comment...
people when there is no alts
ratism?
Can someone please explain C.
for finding the number of operations we will subtract the length of the largest prefix of equal values of the subset from its length.
Didn't understand this.
If the largest subsequence is "zzzba", for example, the number of operations would be 2: the total length is 5 and the largest prefix of equal values is "zzz". This is because after two operations the largest subsequence will be "zzz", and further operations are no-ops.
In E you can also notice that for a subtree of fixed size the sum of LCAs for all the vertices in this subtree is some linear function of $$$v$$$: $$$f(v) = kv + b$$$, where $$$v$$$ is the root of the subtree. Then it is easy to solve the problem for arbitrary $$$v$$$, thus getting a linear function. Since there are at most $$$O(\log (n)^2)$$$ different subtrees, you can easily solve the problem using recursion with memoization.
This is my implementation: 237520087
Used the same idea, except the only candidates for subtree sizes are $$$ \left\lfloor \frac{n}{2^x} \right\rfloor$$$ and $$$ \left\lceil \frac{n}{2^x} \right\rceil$$$.
So this actually gives $$$O(\log n)$$$ per test case (assuming you merge in constant time).
$$$O(\log n^2)$$$ = $$$O(2\log n)$$$ = $$$O(\log n)$$$
What I've written was $$$O(\log (n)^2) = O((\log (n))^2)$$$, and you can't simplify the way you did. Maybe I should've written $$$O((\log (n))^2)$$$ in the first place, but nevertheless.
oh, I see. but isn't number of different subtrees actually $$$O(2\log n)$$$ ?
According to ToniB that is true, but I don't have the proof yet, so I'm not too keen on believing this. Actually, I might've found counterexamples, but I'm not too sure.
UPD. Tried to test out different values of $$$n$$$, and found that it is, indeed, true, and my counterexamples were wrong.
Actually, if it really is the bound, then my solution also checks at most $$$O(\log (n))$$$ different subtrees per test case.
However, I don't really know how to formally prove that it is really the case. Can you share your proof?
for any $$$n$$$: $$$\left\lfloor\frac{n}{2}\right\rfloor$$$ , $$$\left\lceil\frac{n}{2}\right\rceil$$$, $$$\left\lfloor\frac{n + 1}{2}\right\rfloor$$$ and $$$\left\lceil\frac{n+1}{2}\right\rceil$$$ have at most 2 unique values, and they are also some $$$x$$$ and $$$x + 1$$$
so for every depth of the tree we have at most 2 unique ranges.
so max number of different subtrees is $$$O(2\log n) $$$
For any $$$L \leq n \leq R$$$ you can see $$$\left\lfloor \frac{L}{2} \right\rfloor \leq \left\lfloor \frac{n}{2} \right\rfloor \leq \left\lceil \frac{n}{2} \right\rceil \leq \left\lceil \frac{R}{2} \right\rceil$$$.
Now you prove by induction. Suppose $$$\left\lfloor \frac{n}{2^x} \right\rfloor \leq k \leq \left\lceil \frac{n}{2^x} \right\rceil$$$ where $$$k$$$ is a subtree size on the $$$x$$$-th layer from the top. The sizes on the next layer are $$$\left\lfloor \frac{k}{2} \right\rfloor$$$ and $$$\left\lceil \frac{k}{2} \right\rceil$$$, so using the inequality above you get
$$$\left\lfloor \tfrac{1}{2}\left\lfloor \tfrac{n}{2^x} \right\rfloor \right\rfloor \leq \left\lfloor \tfrac{k}{2} \right\rfloor \leq \left\lceil \tfrac{k}{2} \right\rceil \leq \left\lceil \tfrac{1}{2}\left\lceil \tfrac{n}{2^x} \right\rceil \right\rceil,$$$
which simplifies to
$$$\left\lfloor \frac{n}{2^{x+1}} \right\rfloor \leq \left\lfloor \frac{k}{2} \right\rfloor \leq \left\lceil \frac{k}{2} \right\rceil \leq \left\lceil \frac{n}{2^{x+1}} \right\rceil.$$$
Since these are the only possible sizes of subtrees in the next layer, the next step will also hold. The base case is trivial.
UPD: or just what ibrahimwq said, looks simpler.
Can you explain more clearly how to construct this formula: $$$f(v) = kv + b$$$? Thank you!
Well, firstly, suppose we have some subtree of size $$$n$$$, and by induction, for trees of size smaller than $$$n$$$, we already know their linear functions. Then what is the answer for this subtree (the LCA being some vertex of this subtree)? $$$LCA(S) = v$$$ only for subsets which take at least 1 leaf from the left son, at least 1 leaf from the right son, and no leaves outside this subtree (otherwise the LCA would be some ancestor of $$$v$$$). For a tree of size $$$n$$$ the left son has size $$$\lceil \frac{n}{2} \rceil$$$, and the right son has size $$$\lfloor \frac{n}{2} \rfloor$$$. Then there are $$$(2^{\lceil \frac{n}{2} \rceil} - 1)(2^{\lfloor \frac{n}{2} \rfloor} - 1)$$$ subsets for which the LCA is $$$v$$$.
Now we need to calculate the answer for the case when LCA is equal to some vertex in the left or the right son. Because by induction we already know what formula of the answer for the left and right son is, we have $$$f_l(v)=k_l v + b_l$$$, $$$f_r(v) = k_r v + b_r$$$. Then the answer for the left son is $$$f_l(2v)$$$, and $$$f_r(2v+1)$$$ for the right son. Composition of linear functions is also a linear function (given $$$f(v)=av+b, g(v)=pv+q, $$$ we have $$$f(g(v)) = a(pv+q)+b = (ap)v + (aq+b)$$$)
Combining everything together, the answer for the subtree is $$$f(v) = (2^{\lceil \frac{n}{2} \rceil} - 1)(2^{\lfloor \frac{n}{2} \rfloor} - 1) v + f_l(2v) + f_r(2v+1).$$$
UPD: base for this induction is linear function for subtree of size 1: $$$f(v) = v$$$
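To make this concrete, here is a minimal self-contained sketch of the linear-function idea (this is not the linked submission; the function names solve and pw and the memoization on the range length are mine). Writing $$$f(v) = kv + b$$$ for a subtree whose range has length $$$len$$$, the recurrence above gives $$$k = (2^{\lceil len/2 \rceil} - 1)(2^{\lfloor len/2 \rfloor} - 1) + 2k_l + 2k_r$$$ and $$$b = b_l + k_r + b_r$$$, with base $$$k = 1, b = 0$$$ for a leaf; memoizing on $$$len$$$ keeps the number of states small, as discussed above.
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 998244353;
long long pw(long long a, long long b) { // a^b mod MOD by binary exponentiation
    long long r = 1;
    for (a %= MOD; b; b >>= 1, a = a * a % MOD)
        if (b & 1) r = r * a % MOD;
    return r;
}
map<long long, pair<long long, long long>> memo; // range length -> (k, b)
// Returns (k, b) with f(v) = k*v + b: the sum of LCA labels over all
// nonempty leaf subsets of a segment-tree node with range length len
// and label v. Only O(log n) distinct lengths occur, hence the memo.
pair<long long, long long> solve(long long len) {
    if (len == 1) return {1, 0}; // single leaf: f(v) = v
    auto it = memo.find(len);
    if (it != memo.end()) return it->second;
    long long L = (len + 1) / 2, R = len / 2;
    auto [kl, bl] = solve(L);
    auto [kr, br] = solve(R);
    // subsets picking leaves from both halves have LCA exactly v
    long long cross = (pw(2, L) - 1) * (pw(2, R) - 1) % MOD;
    // f(v) = cross*v + f_left(2v) + f_right(2v+1), both children affine
    long long k = (cross + 2 * kl + 2 * kr) % MOD;
    long long b = (bl + kr + br) % MOD;
    return memo[len] = {k, b};
}
int main() {
    cin.tie(nullptr)->sync_with_stdio(false);
    int t;
    cin >> t;
    while (t--) {
        long long n;
        cin >> n;
        auto [k, b] = solve(n); // answer is f(1) = k + b
        cout << (k + b) % MOD << '\n';
    }
}
For example, for $$$n = 3$$$ this computes $$$f(v) = 15v + 2$$$, giving $$$17$$$, which matches summing the LCA labels of all $$$7$$$ leaf subsets by hand.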
Can someone help me check which test case my code is failing on for problem C :( 237528783
In the last for loop, you have to iterate over lar.size() not n.
sweet. i so fking dumb. cant even see shit. thanks
The author's solution to F is too long; it seems it could have been written more simply. https://mirror.codeforces.com/contest/1905/submission/237540164
SegmentTree W
Another simple solution with just regular segtrees https://mirror.codeforces.com/contest/1905/submission/238286775
Problem B is from graphs :p, though not related to graph algorithms :)
We can rather think of B in this way:
No matter which path we choose, we reduce the total number of leaf nodes by at most 2, and it is clearly best to choose a path between 2 leaf nodes. So, if the total number of leaf nodes is initially x, after 1 operation it will be x-2. How many times do you need to subtract 2 from x? ceil(x/2) times.
You can use this site to visualize the trees and confirm yourself for better understanding: https://csacademy.com/app/graph_editor/
Can anyone tell me the intuition for D?
I might have had a different intuition than the editorial, but I can try. For simplicity, let $$$pm(i)$$$ denote $$$\text{mex}(\{p_1, p_2, \dots, p_i\})$$$, that is, the mex of the prefix of $$$p$$$ of length $$$i$$$.
Lets assume $$$p_n=0$$$. What's the answer? $$$n$$$. Because every mex of a prefix will be equal to $$$0$$$ except the prefix containing the entire permutation.
Now, what happens when $$$p_{n-1}=0$$$?
If you try a few cases you will see that the answer in this case is $$$p_n + n$$$. If $$$p_n=1$$$ then the answer must be $$$1 + n$$$. Because $$$pm(i)=0$$$ for all $$$i<n-1$$$, and because $$$1$$$ is excluded from $$$p_1, ..., p_{n-1}$$$, $$$pm(n-1)=1$$$. $$$pm(n)=n$$$ as usual.
We can see that the mex is $$$0$$$ until we "hit" the index $$$n-1$$$ (where $$$0$$$ sits). When that happens, mex will become equal to $$$p_n$$$, because that is the only remaining number that is excluded.
Using this information, we can build a solution where we consider the positions where $$$0$$$ can be, in decreasing order. In other words, we consider the cyclic shifts in this order (I assume WLOG that $$$p_n=0$$$):
$$$[p_1, ..., 0]$$$
$$$[p_2, ..., 0, p_1]$$$
$$$[p_3, ..., 0, p_1, p_2]$$$
And so on. The nice thing about doing it this way is that we know that the mex is $$$0$$$ for all indices before the index of the number $$$0$$$ as explained above. This way we can think of the operation not as a cyclic shift, but as appending numbers to the end of our list.
So our operations really look like this:
$$$[0]$$$
$$$[0, p_1]$$$
$$$[0, p_1, p_2]$$$
$$$[0, p_1, p_2, p_3]$$$
When appending a new number, we need to consider how it can change our answer. It can be helpful to think about some cases.
If we ever append $$$1$$$ to the end, we know that all mexes will be $$$1$$$ except the last one, which will be $$$n$$$ (assuming our simplified view of 'appending to list'). If we ever append a $$$2$$$ after this, we know we won't change any of mexes before the index of $$$1$$$, but between the index of $$$1$$$ and the index of $$$2$$$ the mex will be $$$2$$$.
If you try this for some different cases on paper, it shouldn't be too hard to see that what we need to do when appending a new number, say $$$p_i$$$, is to find the index $$$j$$$ of the last number that is smaller than $$$p_i$$$. I.e. we must find the largest $$$j$$$ in our list so far such that $$$p_j<p_i$$$, because we will not affect any mexes up to $$$j$$$. All mexes between $$$j$$$ and $$$i$$$ will then be equal to our new value $$$p_i$$$, because by construction $$$p_i$$$ is smaller than all numbers in this range. Thus, we will update our answer with $$$(i - j) \cdot p_i + \text{whatever answer we got when appending }p_j$$$. Our final answer will be the maximum of all of the $$$n$$$ answers from each 'shift'.
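Here is a minimal, self-contained sketch of the appending idea described above (my own rendering, not the commenter's actual code; it assumes the problem's standard input format). The stack keeps, from bottom to top, strictly increasing values together with the length of the list when each was appended and its dp value; the sentinel (0, 0, 0) represents the 0 itself and guarantees the stack never empties.
#include <bits/stdc++.h>
using namespace std;
int main() {
    cin.tie(nullptr)->sync_with_stdio(false);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> p(n);
        for (auto &x : p) cin >> x;
        // rotate so that 0 ends up in the last position
        long long z = find(p.begin(), p.end(), 0LL) - p.begin();
        rotate(p.begin(), p.begin() + (z + 1) % n, p.end());
        // now append p[0..n-2] after the 0, one element at a time;
        // stack entries: {value, appended length so far, dp}
        vector<array<long long, 3>> st;
        st.push_back({0, 0, 0});
        long long ans = n; // the shift that ends in 0 costs exactly n
        for (int i = 0; i + 1 < n; ++i) {
            long long v = p[i];
            while (st.back()[0] > v) st.pop_back(); // sentinel 0 never pops
            long long dp = (i + 1 - st.back()[1]) * v + st.back()[2];
            st.push_back({v, i + 1, dp});
            ans = max(ans, dp + n); // the final prefix always contributes n
        }
        cout << ans << '\n';
    }
}
On $$$p = (2, 3, 0, 1)$$$ this evaluates the four shifts to costs $$$4, 5, 7, 10$$$ and prints $$$10$$$, matching a direct computation.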
How do you find the last element smaller than $$$p_i$$$? Edit: Oh, that's why VectorViking has a monotonic stack. We read the array of $$$p_i$$$ from left to right, and at every $$$p_i$$$, we pop all elements bigger than $$$p_i$$$ before pushing $$$p_i$$$. Thank you very much!
And anyway, in general, if you want to keep track of nextLarger, nextSmaller or prevLarger, prevSmaller, you use a monotonic stack.
It's actually the same as the editorial's solution, just with a different order of computing the answer. But your way is easier to understand, and then we can clearly see that each number is pushed onto and popped from the stack at most once, thus the time complexity is O(n).
Your maths skills are very good; in my case, I don't even know why 2 is greater than 1.
I guess there are some typos after the sentence
Those $$$p_j$$$ here should be $$$p_i$$$
I tried implementing this solution but I don't know where my error is. I get the following verdict: 45478th numbers differ — expected: '31', found: '0', even though at the end of my code I output something like $$$max+n$$$, where always $$$max\geq 0$$$. Here is my code
When n=1 you don't read the whole input
pretty good explanation and the code was clean and easy to understand , got the idea clearly Thanks bro!!
Do you have any intuition for why the zero should start from the head (0, p1, p2, p3, ...)?
My intuition was to have the zero start from the head, and I thought about it for a day, but I could not find a monotonic-stack-like solution that computes the sum in O(1).
EDIT: Or how did you come up with the intuition that the zero goes at the end and not at the front?
EDIT2: I probably found out why the zero can only be put on the right.
Consider the zero moving right, as in 0 1 2 3 -> 3 0 1 2. I can use two pointers from left to right to find out where the integers start changing. However, the number of affected values on the right-hand side is not fixed, because numbers in the sequence may still become bigger; detecting this needs O(N). For example:
number: 0 1 4 5 2 3 6 7 -> 7 6 3 2 0 1 4 5
prefix mex: 1 2 2 2 3 6 7 8 -> 0 0 0 0 1 4 5 8
On the other hand, letting the zero start from the right avoids this problem, because when we move a number from the left to the right we can use a monotonic stack to keep track of how the values change.
The difference is whether you need to traverse the array to check the current number or not.
Let's solve the following example: 2 3 6 7 0 1 4 5
We first place 0 at the end: 1 4 5 2 3 6 7 0
What's the cost? It's 8 ( =n ), because everything before 0 adds +0, and at the end there is always +n (we used all numbers from 0 to n-1).
Now we rotate the sequence to the left.
Why is that? Because now 2 is at the end and we can use it for places where we used 4 and 5 (they are bigger). It's enough to maintain a monotonic stack while iterating, so the solution is linear.
You can see my solution at: 237547878. Hope this helps!
Why is the cost of the 4th iteration not $$$1 + 2 \times 2 + 8$$$?
Because the cost is computed as the sum of mex, for each iteration:
As you can see, when we "rotated" 2 from the start of the sequence to the end, it became the new smallest non-negative integer ( mex ) instead of 4 and 5. The third 2 comes from the rotation itself.
Thank you!
Can someone please explain D. I'm not able to understand the tutorial. Thanks!
These were my ideas during the contest. I hope it helps!
We are working on cyclic shifts so we can concatenate the given permutation to itself. Now a cyclic shift can be represented by a contiguous subsequence of length n.
For any permutation $$$mex([a_1, a_2, \dots, a_i]) = min([a_{i+1}, a_{i+2}, \dots, a_n])$$$
Let's apply the contribution-to-the-sum technique. For every element of the permutation we want to find its contribution to $$$\sum_{i=1}^{n} mex([a_1, a_2, \dots, a_i])$$$. Consider that $$$a_j = min([a_i, a_{i+1}, \dots, a_j])$$$, where $$$i$$$ is minimal, i.e. $$$a_j > min([a_{i-1}, a_i, \dots, a_j])$$$; then the $$$j^{th}$$$ element contributes $$$a_j$$$ to the interval $$$[i, j)$$$.
The data structure part (not the best): for every $$$j$$$ we can calculate $$$i$$$ using a monotonic stack, and for updating the contributions we can use any data structure that supports range updates and range queries: for every $$$j$$$ we add $$$a_j$$$ on the interval $$$[i, j)$$$, and for every $$$k \geq n$$$ we calculate the sum on the interval $$$[k - n + 1, k]$$$. I used a segment tree because: In this sad world full of imperfections, ugly segment trees exist.
Source code: https://mirror.codeforces.com/contest/1905/submission/237526381
lol
can you explain why you do a[i]++ in your code ? Thanks a lot.
In problem B's solution, I think it should be ceil(K/2) instead of ceil((K+1)/2).
does it matter?
I've written about floor((X+1)/2), which is practically ceil(X/2) :)
If you want to write the floor function properly, you can use $$$\left\lfloor\frac{K+1}{2}\right\rfloor$$$ (\left\lfloor\frac{K+1}{2}\right\rfloor) instead of $$$[\frac{K+1}{2}]$$$.
unbalanced round
Can anyone tell why I am getting a runtime error on this submission(C)- 237514134
Type cast d.size()-1 to (int)d.size()-1, now u are getting wrong answer
It worked, thanks, but can you tell what exactly is causing the problem?
Because size() returns an unsigned type, so if it is used in the loop condition it might cause undefined behaviour.
Ohh alright thanks!
This is a proof that D's worst-case time is O(n).
I will use the accounting method (the second method of amortized analysis).
The operations of the algorithm that solves problem D are: popping the front block (or decrementing its count), merging (popping) blocks at the back, and pushing a new block.
So the obvious upper bound of the algorithm is O(n^2). It is correct, but not tight.
Note two important things:
1- we have only n operations
2- we cannot remove (decrement) the frequency of any element unless it was incremented before
Can't we just make the cost of an increment 2 instead of 1? One unit pays for the increment itself, and one is stored as credit to pay for the future remove operation. Then we can take the cost of a merge operation as 0 and the cost of an increment operation as 2.
Hence, the total cost is $$$\le \sum_i^n 2 \le 2n$$$, which is O(n).
Can someone please see why my code for problem C fails on the 1043rd number of test case 2? 237554366
I think the following line is causing the issue: after checking the next letter, you simply set last1 to last2, but you forget the possibility that last2 could be smaller than last1. So the correct code is last1 = max(last1, last2).
Found a test case that your solution will fail
thanks a lot.. It got an AC :)
this contest was great
In C, I don't know what s = '$' + s is for, and what topic I should study to understand it.
It's just so the indices go from 1 to n; if you don't have it, they go from 0.
thanks
Do you know there is a good problem about MEX in luogu, too?
Link
Hi! Friends :)
Hi! lol
Here is my code for problem C, I can't understand why it doesn't work: 237576156
https://mirror.codeforces.com/contest/1905/my
For czddeneeeemigec the count may be 7 but it returns 8.
i think problem F is easier than D for someone like me :(
Video Editorial For Problem A,B,C,D.
Can anyone explain the editorial solution of E?
In problem D, the second cycle may go from 1 to n.
"Thus, we can easily check if the string is sortable". How that's literally the point of an editorial plus for number of operations you're not explaining why which kind of defeats the whole purpose of an editorial.
Is there a simpler O(n) solution for F ?
https://mirror.codeforces.com/contest/1905/submission/237719024 this solution is O(N). I've used hashmap to store the pair of swaps, so that should be O(N) too.
Here comes an $$$O(\log N)$$$ solution for problem E.
We still focus on the fact that for each depth there are at most 2 different interval lengths; let's assume they are $$$(2k, 2k+1)$$$, which become $$$(k, k+1)$$$ one layer deeper. The case $$$(2k-1, 2k)$$$ is handled in a similar way.
We can count the number and the sum of ids of both kinds of intervals, e.g. there are $$$cnt_0$$$ intervals of length $$$2k$$$, and the sum of their ids is $$$sum_0$$$. The same for $$$cnt_1, sum_1$$$ with intervals of length $$$2k+1$$$.
If we already know these values of $$$(2k, 2k+1)$$$, we can calculate those for $$$(k, k+1)$$$.
Then, we count the number of leaves of a segment-tree with a $$$(k,k+1)$$$-length interval root, e.g. there are $$$lf _0$$$ leaves in a segment-tree with a $$$k$$$-length interval root, and $$$lf _1$$$ for $$$k+1$$$.
We can calculate these values of $$$(2k, 2k+1)$$$ from $$$(k, k+1)$$$.
At last, the contribution of this depth is $$$sum _0*(2 ^{lf _0}-1)*(2 ^{lf_0}-1) + sum _1*(2^{lf_0}-1)*(2^{lf_1}-1)$$$.
The time complexity is $$$O(\log N)$$$. You can see my submission for more details.
I solved D with a treap.
can you please explain how did you do it?
Just replace your segment tree with Treap)))
Thank you! I think it was a nice idea. It's much more easier than author's solution!
What is author's solution?
He used deque.
What is deque?
It's a treap but you can use only first and last element.
thank you, petarda
you are welcome, traktor.
Thats enough of internet today
Can someone please help me figure out the error for C in my submission 237655544? It fails on the 2124th test case, test 3.
This is so weird. I rewrote your code in Python 3 and it passed. 237667769
Update:
Since you are storing indices in v1, you should use vector<int> instead of vector<char>.
See 237668463 for more details.
Thanks a lot
Can anyone tell me the intuition for E?
I'm not able to understand the editorial.
Will using set instead of deque in D exceed the time limit ? I used a set to maintain which numbers present in the current prefix mex array are grater than p1 and with a time complexity of O(nlogn) my code is giving tle on submission
https://mirror.codeforces.com/contest/1905/submission/237714597
wdym? it got AC
yeah that was the corrected submission here's the one using a set https://mirror.codeforces.com/contest/1905/submission/237711310
cool
I solved F without a segment tree. Here is my solution:
1) There are only 2 * n possible good swaps, the same as in the official solution: n swaps of type (i, pi) and n swaps that can actually make some other indices good.
2) We calculate the answer for the first type in O(n) time by precalculating, for every i, whether all elements before the i-th index are smaller than i (a boolean array). When we swap a[i] and a[a[i]] we "only" need to check whether they become good now: for pi = i we just check the boolean array, because if it holds, after the swap the index is added to the answer; but we also need to check whether pi = ppi (if we have the situation 1 2 3 5 4, after swapping 5 and 4 we also put 5 in a good spot) and similarly check whether it increases the answer. Yes, after this operation some more indices might become good, but that is covered by the second case :)
3) Second case: which indices can become good after swapping some 2 elements? Only those that have exactly one inverted pair on their left and right. We can find, for every element, whether it can become good, and those 2 positions, in O(nlogn). After doing that, we sort those pairs of indices, and for every pair (i,j) that makes x indices good we add x to the answer without swaps. Note: in this part we also check whether (i, j) also puts i or j in their good position, with the same checks as in the previous part.
Solution: 237847430
Very late to this, but I also found another solution for problem F that avoids segment trees or sets: 238072983
If we think of the permutation $$$p$$$ as a function $$$(i\mapsto p_i)$$$, then an index $$$x$$$ is good if and only if $$$p_x=x$$$ and no arc $$$i\rightarrow p_i$$$ jumps across it (where having the arc jump over $$$x$$$ means $$$i<x<p_i$$$ or $$$p_i<x<i$$$). Then the only helpful swaps are those that split the bounding range of a cycle into two cycles with disjoint bounding ranges (all other swaps merge two cycles into a bigger cycle, which we do not want). The set of new good indices that will result from splitting the cycle are the set of indices such that they are between the new resulting bounding ranges, they satisfy $$$p_x=x$$$, and they have no more arcs jumping over them after doing the swap (i.e. they had exactly 2 arcs jumping over them before doing the swap).
We can create an array where each index $$$i$$$ stores the number of arcs jumping over it, by simulating many range addition updates (this can be done efficiently by first constructing the array of partial differences, then calculating prefix sums). Then we create another array that keeps track of which indices currently satisfy $$$p_x=x$$$ and also have exactly 2 arcs jumping over them, which we perform range sum queries on (and can also be computed efficiently by using prefix sums). Then iterate over each cycle, over each adjacent pair of the cycle's elements in left-to-right order.
The only special case is when $$$p_x=x$$$ for all $$$x$$$, in which case the answer to the original problem is $$$n-2$$$.
Runtime is $$$O(n\log n)$$$ because we have to sort indices of each cycle, although this could easily be modified to run in $$$O(n)$$$ time by marking each index $$$1\dots n$$$ with a number identifying which cycle it is in, then scanning left-to-right.
Hey guys, can someone please help me out here? In problem D, why are we running the last for loop in the code n-1 times and not n times? If we ran it n times, we would just end up back at the original configuration, so the answer should not change. Just curious to know why.
If you have determined that the solution has stabilized, meaning that further iterations do not significantly change the result, there is no need to continue the loop for the full n iterations. This phenomenon is often referred to as convergence, and it is common in iterative algorithms. Stopping the iterations early can lead to faster execution times and more efficient resource utilization.
Thanks!
why am I getting TLE on this submission 238664593 but not on this one 238664527?
1905E
E can be considered in another way: first observe how, in O(logN) time, we can reduce a segment-tree size pair $$$(k, k + 1)$$$ (sizes refer to range lengths) to the pair $$$(k / 2, k / 2 + 1)$$$; then define $$$F_k(x)$$$ as the sum for a segment tree whose range has size $$$k$$$ and whose root node is $$$x$$$.
Then a transition is: $$$F_k(x) = (2^{lsz} - 1)(2^{rsz} - 1) \cdot x + F_{lsz}(2x) + F_{rsz}(2x + 1)$$$, where $$$lsz$$$ and $$$rsz$$$ are the range sizes of the subtrees. Notice that $$$F_k(x)$$$ is an affine transformation $$$\mathbb Z \to \mathbb Z$$$ (or $$$\mathbb R \to \mathbb R$$$): $$$2x$$$ and $$$2x+1$$$ are both affine, and affine transformations are closed under addition, multiplication by a scalar (they form a vector space), and composition ($$$C(Ax + b) + d = ACx + Cb + d$$$), so $$$F_k(x)$$$ is still affine. By induction, $$$F_k(x)$$$ is affine for all $$$k \in \mathbb N$$$.
I think for problem A with n,m >= 2 the minimum number of cities will always be 2.
Then why is our answer the maximum of n and m?
Problem A reminds me of the folklore grid-infection problem (try it if you haven't seen it before!), which is very nice.
Given $$$n \times n$$$ grid, some squares are initially infected. In each subsequent step in time, any square which has two or more infected neighbors also becomes infected. Squares that are infected stay infected forever. What's the minimum number of squares that need to be infected at the start for the infection to eventually spread to every square in the $$$n \times n$$$ grid?
I got TLE on problem D using a lazy segment tree in O(nlogn), maybe due to the big constant factor. Then I used a monotonic stack to solve problem D in O(n) time and finally got AC. It was a really tough experience, but I enjoyed it hahahaha! :(
I do not understand the test case in problem C: 15, czddeneeeemigec, output: 6. Please help me.
An easy implementation of problem C is given here: https://mirror.codeforces.com/contest/1905/submission/268933428 ; it can also be improved.
Solution: https://mirror.codeforces.com/contest/1905/submission/275880393
F can be solved quite simply and without a segment tree. The condition for an index x being good is equivalent to p[x] = x and p[1...x] containing exactly the elements 1...x.
Therefore we can iterate through all prefixes of p, noticing that for each prefix there is either zero or one swap that improves the answer.
These can be found through casework:
Case 1: the index x is already good (zero swaps will improve the answer).
Case 2: p[x] = x and p[1...x] is missing exactly one element -> in this case notice there must be exactly one element greater than x in the prefix; the swap is then to exchange it for the missing element.
Case 3: p[x] > x and we are missing only x -> swap p[x] and x.
Case 4: otherwise -> no swap will improve the answer.
Our answer is the most improving swap (makes the most elements good) + the number of indices already good. If all indices are good, it can be shown that the optimal answer is to swap the two last elements.
Wait! What if a swap makes an index no longer good? We can show that this will never happen. Notice that all our swaps exchange a larger element that occurs earlier in the array with a smaller one that occurs later. Using this fact, let us do some casework (the fact alone is sufficient, but this is easier to understand):
Case 1: ...a...b...x -> ...b...a...x : obviously x remains good, because the order of the elements before x has no effect on whether they form a permutation of 1...x.
Case 2: ...x...a...b -> ...x...b...a : our condition does not care about the elements after x nor about their order.
Case 3: ...a...x...b -> ...b...x...a : b<a, but if x were good we would need a<x (a is before x) and b>x (b is after x), giving b>x>a, which contradicts b<a; so x was not good to begin with.
Case 4: ...a...x... -> ...x...a... : x<a, therefore x cannot be good, thus a contradiction.
Case 5: ...x...a... -> ...a...x... : a<x, thus x cannot be good.
Thus we don't need to worry about invalidating good indices when making swaps that improve other indices. (Of course, in the case of 1, 2, 3, ..., N-1, N there are no good swaps, so our best answer is N-2, because each x can only be good in one location; swapping the last two elements gives us that best case.)
This can be implemented by sweeping left to right with three sets (missing elements, all elements in the prefix, a multiset of all good swaps)
Incidentally, this also proves there are at most N swaps.
Problem C was good; it required a great depth of thought to avoid wrong submissions.
I solved it with the solution given by Apachee, but I don't understand the editorial solution. Can someone explain?
For problem C, is there any method that avoids finding the lexicographically largest subsequence?