1831A — Twin Permutations
Author: Gheal
Note that the condition $$$a_i+b_i \le a_{i+1}+b_{i+1}$$$ allows $$$a_i+b_i$$$ to be equal to $$$a_{i+1}+b_{i+1}$$$.
Building on the idea from the first hint, can we build a permutation $$$b$$$ such that $$$a_1+b_1=a_2+b_2=\ldots=a_n+b_n$$$?
Since the condition only requires $$$a_i+b_i \le a_{i+1}+b_{i+1}$$$, consecutive sums are allowed to be equal.
Therefore, any permutation $$$b$$$ which satisfies $$$a_1+b_1=a_2+b_2=\ldots=a_n+b_n$$$ is a valid answer.
If $$$b_i=n+1-a_i$$$, then:
- $$$b$$$ is a permutation;
- $$$a_1+b_1=a_2+b_2=\ldots=a_n+b_n=n+1$$$
Consequently, $$$b=[n+1-a_1,n+1-a_2,\ldots,n+1-a_i,\ldots,n+1-a_n]$$$ is a valid answer.
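For example, for $$$n=5$$$ and $$$a=[1,5,3,2,4]$$$, the construction gives $$$b=[5,1,3,4,2]$$$, and every sum $$$a_i+b_i$$$ equals $$$6$$$.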
Time complexity per test case: $$$O(N)$$$
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
void tc(){
    ll n;
    cin>>n;
    for(ll i=0;i<n;i++){
        ll x;
        cin>>x;
        cout<<n+1-x<<' ';
    }
    cout<<'\n';
}
int main()
{
    ios_base::sync_with_stdio(false); cin.tie(0);
    ll t; cin>>t; while(t--)
        tc();
    return 0;
}
1831B — Array Merging
Author: tibinyte
When we merge two arrays $$$a$$$ and $$$b$$$, we can force the resulting array to contain $$$[a_{l_1},a_{l_1+1},\ldots,a_{r_1},b_{l_2},b_{l_2+1},\ldots,b_{r_2}]$$$ as a subarray, for some $$$1 \le l_1 \le r_1 \le n$$$ and $$$1 \le l_2 \le r_2 \le n$$$.
If both subarrays consist of one repeated value and $$$a_{l_1}=b_{l_2}$$$, then we can achieve a contiguous sequence of $$$(r_1-l_1+1)+(r_2-l_2+1)$$$ equal elements in the resulting array.
Let $$$max_a(x)$$$ be the length of the longest subarray from $$$a$$$ containing only elements equal to $$$x$$$. If $$$x$$$ doesn't appear in $$$a$$$, then $$$max_a(x)=0$$$.
Similarly, let $$$max_b(x)$$$ be the length of the longest subarray from $$$b$$$ containing only elements equal to $$$x$$$. If $$$x$$$ doesn't appear in $$$b$$$, then $$$max_b(x)=0$$$.
$$$max_a$$$ and $$$max_b$$$ can be computed in $$$O(N)$$$ by scanning the array while keeping track of the current maximal run of equal elements.
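As a sketch of that scan (the helper name and signature are ours, not part of the solution below):

#include <bits/stdc++.h>
using namespace std;
// maxRun[x] = length of the longest contiguous run of x in v (0 if x never appears).
// Assumes all values of v fit in [1, maxVal].
vector<int> longestRuns(const vector<int>& v, int maxVal) {
    vector<int> maxRun(maxVal + 1, 0);
    for (size_t i = 0; i < v.size();) {
        size_t j = i;
        while (j < v.size() && v[j] == v[i]) ++j; // extend the current run
        maxRun[v[i]] = max(maxRun[v[i]], (int)(j - i));
        i = j;
    }
    return maxRun;
}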
When merging two arrays, it is possible to force a particular subarray $$$[a_{l_1},a_{l_1+1},\ldots,a_{r_1}]$$$ to be adjacent to another particular subarray $$$[b_{l_2},b_{l_2+1},\ldots,b_{r_2}]$$$ in the merged array.
We can construct the merged array as follows:
If both subarrays consist of one repeated value and $$$a_{l_1}=b_{l_2}$$$, then the merged array will have a subarray consisting of $$$(r_1-l_1+1)+(r_2-l_2+1)$$$ equal elements.
Therefore, the answer is equal to: $$$\max\limits_{x}\left(max_a(x)+max_b(x)\right)$$$.
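For example, for $$$a=[1,1,2]$$$ and $$$b=[2,1,1]$$$, we have $$$max_a(1)=2$$$ and $$$max_b(1)=2$$$, so the answer is $$$4$$$ (one valid merge is $$$[2,1,1,1,1,2]$$$).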
Time complexity per testcase: $$$O(N)$$$.
#include <bits/stdc++.h>
using namespace std;
int32_t main()
{
    cin.tie(nullptr)->sync_with_stdio(false);
    int q;
    cin >> q;
    while (q--)
    {
        int n;
        cin >> n;
        vector<int> a(n + 1);
        vector<int> b(n + 1);
        for (int i = 1; i <= n; ++i)
        {
            cin >> a[i];
        }
        for (int i = 1; i <= n; ++i)
        {
            cin >> b[i];
        }
        // fa[x] / fb[x] = length of the longest run of x in a / b
        vector<int> fa(n + n + 1);
        vector<int> fb(n + n + 1);
        int p = 1;
        for (int i = 2; i <= n; ++i)
        {
            if (a[i] != a[i - 1])
            {
                fa[a[i - 1]] = max(fa[a[i - 1]], i - p);
                p = i;
            }
        }
        fa[a[n]] = max(fa[a[n]], n - p + 1);
        p = 1;
        for (int i = 2; i <= n; ++i)
        {
            if (b[i] != b[i - 1])
            {
                fb[b[i - 1]] = max(fb[b[i - 1]], i - p);
                p = i;
            }
        }
        fb[b[n]] = max(fb[b[n]], n - p + 1);
        int ans = 0;
        for (int i = 1; i <= n + n; ++i)
        {
            ans = max(ans, fa[i] + fb[i]);
        }
        cout << ans << '\n';
    }
}
1830A — Copil Copac Draws Trees
Author: alecs
What is the answer if $$$n=3$$$?
The previous case can be generalised to find the answer for any tree.
This problem can be solved via dynamic programming. From here on out, step $$$1$$$ from the statement will be called a "scan".
Let $$$dp[i]$$$ be the number of scans needed to activate node $$$i$$$, and $$$id[i]$$$ be the index (in the order from the input) of the edge which activated node $$$i$$$.
Initially, since only node $$$1$$$ is active, $$$dp[1]=1$$$ and $$$id[1]=0$$$.
We will perform a dfs traversal starting from node $$$1$$$. When we process an edge $$$(u,v)$$$, one of the following two cases can happen:
- If $$$index((u,v)) \ge id[u]$$$, we can visit $$$v$$$ in the same scan as $$$u$$$: $$$dp[v]=dp[u]$$$ and $$$id[v]=index((u,v))$$$.
- If $$$index((u,v)) \lt id[u]$$$, $$$v$$$ will be visited in the scan after $$$dp[u]$$$: $$$dp[v]=dp[u]+1$$$ and $$$id[v]=index((u,v))$$$.
The answer is $$$\max\limits_{i=1}^n(dp[i])$$$.
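For example (a hypothetical small input), take $$$n=3$$$ with the edges given in the order $$$(2,3)$$$, $$$(1,2)$$$. Edge $$$(1,2)$$$ has index $$$2 \ge id[1]=0$$$, so $$$dp[2]=1$$$ and $$$id[2]=2$$$; edge $$$(2,3)$$$ has index $$$1 \lt id[2]=2$$$, so $$$dp[3]=dp[2]+1=2$$$. The answer is $$$2$$$.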
Time complexity per test case: $$$O(n)$$$
#include<bits/stdc++.h>
using namespace std;
const int NMAX = 2e5+5;
int dp[NMAX], id[NMAX];
vector<pair<int,int>> edg[NMAX];
void dfs(int u){
    for(auto it : edg[u]){
        if(dp[it.first] == 0){
            dp[it.first] = dp[u] + (it.second <= id[u]);
            id[it.first] = it.second;
            dfs(it.first);
        }
    }
}
void tc(){
    int n;
    cin>>n;
    for(int i=1; i<=n; i++) dp[i] = id[i] = 0, edg[i].clear();
    for(int i=1; i<n; i++){
        int u,v;
        cin>>u>>v;
        edg[u].push_back({v,i});
        edg[v].push_back({u,i});
    }
    dp[1] = 1;
    dfs(1);
    int ans = 0;
    for(int i=1; i<=n; i++) ans=max(ans,dp[i]);
    cout<<ans<<'\n';
}
int main()
{
    ios_base::sync_with_stdio(false);
    int t;
    cin>>t;
    while(t--)
        tc();
    return 0;
}
#include <bits/stdc++.h>
const int NMAX = 3e5 + 5, INF = 1e9;
int n, f[NMAX], d[NMAX];
std::vector<std::pair<int, int>> G[NMAX];
void DFS(int node, int t) {
    f[node] = true;
    for (int i = 0; i < (int)G[node].size(); ++i) {
        int u = G[node][i].first, c = G[node][i].second;
        if (f[u] == false) {
            d[u] = (c < t) + d[node];
            DFS(u, c);
        }
    }
    return;
}
int main() {
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
    int tc;
    std::cin >> tc;
    while (tc--) {
        std::cin >> n;
        for (int i = 1, u, v; i < n; ++i) {
            std::cin >> u >> v;
            G[u].push_back({v, i});
            G[v].push_back({u, i});
        }
        for (int i = 1; i <= n; ++i)
            f[i] = false;
        d[1] = 0;
        DFS(1, n);
        int Max = 0;
        for (int i = 1; i <= n; ++i)
            Max = std::max(Max, d[i]);
        std::cout << Max << "\n";
        for (int i = 1; i <= n; ++i)
            G[i].clear();
    }
    return 0;
}
1830B — The BOSS Can Count Pairs
Author: Gheal
Since $$$b_i \le n$$$ and $$$b_j \le n$$$, $$$b_i+b_j = a_i \cdot a_j \le 2 \cdot n$$$.
Since $$$a_i \cdot a_j \le 2 \cdot n$$$, then $$$\min(a_i,a_j) \le \sqrt{2 \cdot n}$$$.
Since $$$b_i \le n$$$ and $$$b_j \le n$$$, $$$b_i+b_j = a_i \cdot a_j \le 2 \cdot n$$$. Therefore, $$$\min(a_i,a_j) \le \sqrt{2 \cdot n}$$$: if both $$$a_i$$$ and $$$a_j$$$ exceeded $$$\sqrt{2 \cdot n}$$$, their product would exceed $$$2 \cdot n$$$.
Let $$$lim=\sqrt{2 \cdot n}$$$, and let $$$fr[x][y]$$$ be the number of input pairs with $$$(a_i,b_i)=(x,y)$$$, stored only for $$$x \le lim$$$.
A pair $$$(i,j)$$$ is good if it satisfies $$$a_i \cdot a_j=b_i+b_j$$$.
Firstly, we'll count the number of good pairs $$$(i,j)$$$ such that $$$a_i=a_j$$$. Since $$$\min(a_i,a_j) \le lim$$$, we can see that $$$a_i=a_j \le lim$$$.
This sum can be written as: $$$\dfrac{\left(\sum\limits_{i=1,\ a_i \le lim}^{n} fr[a_i][a_i \cdot a_i - b_i]\right) - \#\{i \mid a_i \le lim,\ a_i \cdot a_i = 2 \cdot b_i\}}{2}$$$, where terms with $$$a_i \cdot a_i - b_i$$$ outside $$$[1,n]$$$ are treated as $$$0$$$; the subtracted term discards the self-pairs $$$(i,i)$$$, and the division by $$$2$$$ accounts for each good pair being counted as both $$$(i,j)$$$ and $$$(j,i)$$$.
The remaining good pairs will have $$$a_i \neq a_j$$$; instead of counting the pairs which have $$$i \lt j$$$, we can count the pairs which have $$$a_i \gt a_j$$$.
Since $$$a_j=\min(a_i,a_j)$$$, we can say that $$$a_j \le lim$$$.
Substituting $$$a_j$$$ for $$$j$$$, this second sum can be written as: $$$\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{\min(lim,\,a_i-1)} fr[j][a_i \cdot j - b_i]$$$, again treating out-of-range indices as contributing $$$0$$$.
Since we've already established that $$$j \le lim = \sqrt{2 \cdot n}$$$, calculating this sum takes $$$O(n\sqrt{n})$$$ time.
Be especially careful when calculating these sums, as $$$a_i \cdot a_i - b_i$$$ and $$$a_i \cdot j-b_i$$$ can end up being either negative or greater than $$$n$$$.
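A minimal sketch of that bounds check (hypothetical helper, not from the official code):

#include <bits/stdc++.h>
using namespace std;
// fr[j][v] only makes sense for 1 <= v <= n; the candidate value
// a_i*a_i - b_i (or a_i*j - b_i) may be negative or exceed n.
long long safeLookup(const vector<vector<long long>>& fr, long long j, long long v, long long n) {
    if (v < 1 || v > n) return 0; // out-of-range values cannot appear as some b_j
    return fr[j][v];
}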
Time complexity per testcase: $$$O(n \sqrt n)$$$
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll NMAX=2e5+5,SQRMAX=635,MOD=1e9+7;
int fr[SQRMAX][NMAX];
ll a[NMAX],b[NMAX];
void tc(){
    ll n,ans=0;
    cin>>n;
    ll lim=sqrt(n*2);
    for(ll i=0;i<n;i++){
        cin>>a[i];
    }
    for(ll i=0;i<n;i++){
        cin>>b[i];
        if(a[i]<=lim)
            fr[a[i]][b[i]]++;
    }
    for(ll i=0;i<n;i++){
        if(a[i]<=lim){
            if(a[i]*a[i]-b[i]>=1 && a[i]*a[i]-b[i]<=n)
                ans+=fr[a[i]][a[i]*a[i]-b[i]];
        }
    }
    for(ll i=2;i<=lim;i+=2)
        ans-=fr[i][i*i/2];
    ans/=2;
    for(ll i=0;i<n;i++){
        for(ll j=1;j<=lim && j<a[i] && j*a[i]<=2*n;j++){
            if(j*a[i]-b[i]>=1 && j*a[i]-b[i]<=n)
                ans+=fr[j][j*a[i]-b[i]];
        }
    }
    for(ll i=0;i<n;i++){
        if(a[i]<=lim)
            fr[a[i]][b[i]]=0;
    }
    cout<<ans<<'\n';
}
int main()
{
    ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0);
    ll t;
    cin>>t;
    while(t--)
        tc();
    return 0;
}
for _ in range(int(input())):
    n = int(input())
    ta = list(map(int, input().split()))
    tb = list(map(int, input().split()))
    a = [(x, y) for x, y in zip(ta, tb)]
    a.sort()
    cnt = [0] * (2 * n + 1)
    pr = 0
    ans = 0
    for i in range(n):
        if pr != a[i][0]:
            pr = a[i][0]
            if pr * pr > 2 * n:
                break
            cnt = [0] * (2 * n + 1)
            for j in range(i + 1, n):
                t = a[i][0] * a[j][0] - a[j][1]
                if t >= 0 and t <= 2 * n:
                    cnt[t] += 1
        ans += cnt[a[i][1]]
        if i + 1 < n:
            t = a[i][0] * a[i + 1][0] - a[i + 1][1]
            if t >= 0 and t <= 2 * n:
                cnt[t] -= 1
    print(ans)
1830C — Hyperregular Bracket Strings
Author: Gheal
While not necessarily a hint, this problem cannot be solved without knowing that there are $$$C_n=\frac{1}{n+1}\binom{2n}{n}$$$ Regular Bracket Strings of length $$$2 \cdot n$$$.
What's the answer if $$$q=1$$$?
What's the answer if $$$q=2$$$ and the two intervals partially overlap?
Based on the previous hint, we can get rid of all partially overlapping intervals. The remaining intervals will have a tree-like structure.
Finding the tree is actually unnecessary and also very difficult.
The brackets on the positions covered by the same subset of intervals must form an RBS.
Hashing. Xor hashing specifically.
First and foremost, the number of regular bracket strings of length $$$2 \cdot n$$$ is equal to $$$C_n=\frac{1}{n+1}\binom{2n}{n}$$$.
Secondly, for a bracket string $$$\overline{s_1s_2 \ldots s_k}$$$, let $$$\Delta_i$$$ be the number of opening brackets minus the number of closing brackets among the first $$$i$$$ characters $$$s_1 s_2 \ldots s_i$$$.
A bracket string $$$\overline{s_1s_2 \ldots s_k}$$$ is a regular bracket string if both of the following statements are true:
- $$$\Delta_k=0$$$;
- $$$\Delta_i \ge 0$$$ for all $$$i=\overline{1,k}$$$.
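For example, "(())" has $$$\Delta=[1,2,1,0]$$$, so it is an RBS, while ")(" fails since $$$\Delta_1=-1$$$.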
From now on we'll call a set of indices $$$i_1 < i_2 < \ldots < i_k$$$ a group if $$$\overline{s_{i_1}s_{i_2}\ldots s_{i_k}}$$$ must be an RBS.
There are two main cases to consider, both of which can be proven with the aforementioned conditions for a string to be an RBS:
Let's consider two intervals $$$[l_1,r_1]$$$ and $$$[l_2,r_2]$$$ such that $$$l_1 \le l_2 \le r_2 \le r_1$$$.
The two groups formed by these intervals are:
- $$$[l_2,r_2]$$$
- $$$[l_1,l_2-1] \cup [r_2+1,r_1]$$$
Let's consider two intervals $$$[l_1,r_1]$$$ and $$$[l_2,r_2]$$$ such that $$$l_1 \lt l_2 \le r_1 \lt r_2$$$.
The three groups formed by these two intervals are:
- $$$[l_1,l_2-1]$$$
- $$$[l_2,r_1]$$$
- $$$[r_1+1,r_2]$$$
By taking both of these cases into account, we can conclude that all indices $$$i_k$$$ covered by the same subset of intervals are part of the same group.
Finding the subset of intervals which cover a certain index $$$i$$$ can be implemented using difference arrays and xor hashing.
Each interval $$$[l_i,r_i]$$$ will be assigned a random 64-bit value $$$v_i$$$.
The value of a subset of intervals $$$i_1,i_2,\ldots,i_k$$$ is equal to $$$v_{i_1} \oplus v_{i_2} \oplus \ldots \oplus v_{i_k}$$$.
For each interval $$$[l_i,r_i]$$$, we set $$$\text{diff}[l_i] \gets \text{diff}[l_i] \oplus v_i$$$ and $$$\text{diff}[r_i+1] \gets \text{diff}[r_i+1] \oplus v_i$$$.
The value of the subset of intervals which cover position $$$i$$$ is then equal to the prefix xor $$$\text{diff}[1] \oplus \text{diff}[2] \oplus \ldots \oplus \text{diff}[i]$$$.
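As a small worked example: for $$$n=4$$$ with a single interval $$$[2,3]$$$ (plus the implicit interval $$$[1,4]$$$ for the whole string), assign random values $$$u$$$ to $$$[1,4]$$$ and $$$v$$$ to $$$[2,3]$$$. The prefix xors at positions $$$1,2,3,4$$$ are $$$u,\ u \oplus v,\ u \oplus v,\ u$$$, so positions $$$\{1,4\}$$$ and $$$\{2,3\}$$$ form two groups, and the answer is $$$C_1 \cdot C_1 = 1$$$: the only valid string is "(())".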
Time complexity: $$$O(maxn \cdot \log(MOD))$$$ for precomputing every $$$C_n$$$, and $$$O(k \cdot \log k)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const int NMAX=3e5+5, MOD=998244353;
mt19937_64 gen(chrono::steady_clock::now().time_since_epoch().count());
uniform_int_distribution<ll> rnd(0,LLONG_MAX);
ll fact[NMAX], invfact[NMAX], C[NMAX];
ll binPow(ll x, ll y){
    ll ans=1;
    for(;y ;y>>=1, x = x*x%MOD)
        if(y&1)
            ans = ans*x%MOD;
    return ans;
}
map<ll,ll> diff,freq;
void add_interval(ll l, ll r){
    ll Hash = rnd(gen);
    diff[l] ^= Hash, diff[r+1] ^= Hash;
}
void tc(){
    ll n,k;
    cin>>n>>k;
    diff.clear(), freq.clear();
    add_interval(1,n); /// the initial string must be an RBS
    for(ll i=0;i<k;i++){
        ll l,r;
        cin>>l>>r;
        add_interval(l,r);
    }
    ll Hash = diff[1];
    for(map<ll,ll>::iterator it=next(diff.begin()); it!=diff.end(); it++){
        freq[Hash] += it->first - prev(it)->first;
        Hash ^= it->second;
    }
    ll ans=1;
    for(const auto& it : freq)
        ans=ans*C[it.second]%MOD;
    cout<<ans<<'\n';
}
int main()
{
    fact[0] = invfact[0] = 1;
    for(ll i=1; i<NMAX; i++){
        fact[i] = fact[i-1] * i % MOD;
        invfact[i] = binPow(fact[i], MOD - 2);
    }
    for(ll i=0; i*2<NMAX; i++) C[i*2] = fact[i*2] * invfact[i] % MOD * invfact[i+1] % MOD;
    ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0);
    ll t;
    cin>>t;
    while(t--)
        tc();
    return 0;
}
1830D — Mex Tree
Author: Gheal
Why is bipartite coloring not always optimal?
How good is a bipartite coloring actually?
Disclaimer: I ( tibinyte ) wanted to cut $$$O(n \sqrt{n})$$$ memory because I thought that setting 1024 MB memory limit would spoil the solution. So only blame me for this.
We will solve the complementary problem, which is finding the minimum loss of a coloring. The initial cost is the maximum possible cost, $$$n \cdot (n+1)$$$.
If we analyze how good a bipartite coloring of the given tree is, we can note that $$$loss \le 2 \cdot n$$$.
Now suppose some coloring has a monochromatic connected component of size $$$k$$$. We can note that in such a coloring, $$$loss \ge \frac{k \cdot (k+1)}{2}$$$.
By the two claims above, in an optimal coloring the maximum size $$$k$$$ of a connected component satisfies $$$\frac{k \cdot (k+1)}{2} \le 2 \cdot n$$$. Thus we can safely say $$$k \le \sqrt{4 \cdot n}$$$.
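Concretely, for $$$n=2 \cdot 10^5$$$ the inequality $$$k \cdot (k+1) \le 8 \cdot 10^5$$$ gives $$$k \le 893$$$ (since $$$894 \cdot 895 = 800130 > 800000$$$), which is exactly the constant lim used in the code below.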
Now let $$$dp_{i,j,color}$$$ be the minimum loss if we color the subtree of $$$i$$$ so that the connected component containing vertex $$$i$$$ has size $$$j$$$ and vertex $$$i$$$ has color $$$color$$$. We can calculate this in a knapsack style by adding subtrees successively. The computation takes $$$O(n \cdot \sqrt{n})$$$ if we use the $$$7$$$-th trick from this blog.
Now we are only left with optimizing the memory, which is currently $$$O(n \cdot \sqrt{n})$$$. We can directly apply this to get linear memory.
However, the bound $$$k \le \sqrt{4 \cdot n}$$$ is an overestimate: for $$$n=200000$$$ it can be proven that the worst case for $$$k$$$ is $$$258$$$.
Assume we have a connected component of size $$$k$$$ that we want to be colored $$$0$$$ across all optimal colorings. Then we can attach some subtrees to each node such that, after flipping its whole subtree, the total cost doesn't increase. We will assume all attached subtrees are leaves for simplicity. Doing so, we can get some inequalities about the number of leaves we need to attach to each node.
In a star tree, the number of leaves we should attach to the nodes has the smallest sum.
#include <bits/stdc++.h>
#define int long long
using namespace std;
const int lim = 893;
const int inf = 1e16;
struct dp_state
{
    vector<int> a;
    vector<int> b;
    void init()
    {
        a.resize(lim + 1, inf);
        b.resize(lim + 1, inf);
        a[1] = 1;
        b[1] = 2;
    }
};
int32_t main()
{
    cin.tie(nullptr)->sync_with_stdio(false);
    int q;
    cin >> q;
    while (q--)
    {
        int n;
        cin >> n;
        vector<vector<int>> g(n + 1);
        vector<int> sz(n + 1);
        for (int i = 1; i < n; ++i)
        {
            int x, y;
            cin >> x >> y;
            g[x].push_back(y);
            g[y].push_back(x);
        }
        function<int(int, int)> shuffle_kids = [&](int node, int parent)
        {
            int sz = 1;
            pair<int, int> best;
            for (int i = 0; i < (int)g[node].size(); ++i)
            {
                if (g[node][i] != parent)
                {
                    int sz2 = shuffle_kids(g[node][i], node);
                    best = max(best, {sz2, i});
                    sz += sz2;
                }
            }
            if (!g[node].empty())
            {
                swap(g[node][0], g[node][best.second]);
            }
            return sz;
        };
        shuffle_kids(1, 0);
        vector<vector<int>> merged(2 * lim + 1, vector<int>(2, inf));
        function<dp_state(int, int)> dfs = [&](int node, int parent)
        {
            dp_state dp;
            sz[node] = 1;
            bool hasinit = false;
            for (auto i : g[node])
            {
                if (i != parent)
                {
                    dp_state qui = dfs(i, node);
                    if (!hasinit)
                    {
                        dp.init();
                        hasinit = true;
                    }
                    for (int j = 0; j <= min(sz[node], lim) + min(sz[i], lim); ++j)
                    {
                        merged[j][0] = merged[j][1] = inf;
                    }
                    for (int k = 1; k <= min(sz[node], lim); ++k)
                    {
                        for (int l = 1; l <= min(sz[i], lim); ++l)
                        {
                            merged[k][0] = min(merged[k][0], dp.a[k] + qui.b[l]);
                            merged[k][1] = min(merged[k][1], dp.b[k] + qui.a[l]);
                            merged[k + l][0] = min(merged[k + l][0], dp.a[k] + qui.a[l] + k * l);
                            merged[k + l][1] = min(merged[k + l][1], dp.b[k] + qui.b[l] + k * l * 2);
                        }
                    }
                    sz[node] += sz[i];
                    for (int k = 1; k <= min(sz[node], lim); ++k)
                    {
                        dp.a[k] = merged[k][0];
                        dp.b[k] = merged[k][1];
                    }
                }
            }
            if (!hasinit)
            {
                dp.init();
                hasinit = true;
            }
            return dp;
        };
        dp_state dp = dfs(1, 0);
        int ans = inf;
        for (int i = 1; i <= lim; ++i)
        {
            ans = min(ans, dp.a[i]);
            ans = min(ans, dp.b[i]);
        }
        cout << n * (n + 1) - ans << '\n';
    }
}
1830E — Bully Sort
Author: valeriu
Thanks to errorgorn for the solution!
First of all, we notice that if some element moves left, it will never move right.
Proving this is not hard: imagine $$$S$$$ to be the set of suffix minima. Then if an element is in $$$S$$$, we know that $$$p_x \le x$$$.
Since an element cannot disappear from $$$S$$$ after a bully swap, and after each bully swap the two swapped elements can only get closer to their desired positions, we conclude the proof.
Obviously, if an element moves right, it will never move left since it will continue to move right until it reaches its final position.
By the $$$2$$$ claims above, we conclude that left and right movers are distinct.
Now suppose we swap indices $$$i$$$ and $$$j$$$, and note that such a swap kills $$$2 \cdot (j-i) - 1$$$ inversions while the left mover $$$j$$$ moves $$$j-i$$$ steps.
Now the magic is that if we let $$$s$$$ be the sum of $$$2 \cdot (i-p_i)$$$ over all left movers, we have $$$s - ans = inversions$$$; thus our answer is just $$$s-inversions$$$.
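As a small sanity check: for $$$p=[2,3,1]$$$, the only left mover is the value $$$1$$$ at position $$$3$$$, so $$$s=2 \cdot (3-1)=4$$$; there are $$$2$$$ inversions, and indeed bully sort takes $$$s-inversions=2$$$ swaps: $$$[2,3,1] \to [1,3,2] \to [1,2,3]$$$.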
Now, to handle the data structure part, we just need to be able to count inversions while supporting point updates. There are many ways to do this, for example using a Fenwick tree with a bitwise trie/ordered_set in $$$O(n \log^2 n)$$$.
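As a sketch of the static part only (counting the initial inversions with a plain Fenwick tree; the helper below is ours, and point updates still need the heavier structures from the codes that follow):

#include <bits/stdc++.h>
using namespace std;
// Count inversions of a permutation p[1..n] (p[0] unused) with a Fenwick tree.
long long countInversions(const vector<int>& p) {
    int n = (int)p.size() - 1;
    vector<int> fen(n + 1, 0);
    auto add = [&](int i) { for (; i <= n; i += i & -i) fen[i]++; };
    auto pref = [&](int i) { long long s = 0; for (; i > 0; i -= i & -i) s += fen[i]; return s; };
    long long inv = 0;
    for (int i = 1; i <= n; ++i) {
        inv += pref(n) - pref(p[i]); // previously inserted values greater than p[i]
        add(p[i]);
    }
    return inv;
}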
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;
#define ordered_set tree<int, null_type, less<int>, rb_tree_tag, tree_order_statistics_node_update>
typedef long long ll;
/* .___ __ __ .__
____ ____ __| _/____ _______/ |______ ________/ |_ ______ | |__ ___________ ____
_/ ___\/ _ \ / __ |/ __ \ / ___/\ __\__ \\_ __ \ __\/ ___/ | | \_/ __ \_ __ \_/ __ \
\ \__( <_> ) /_/ \ ___/ \___ \ | | / __ \| | \/| | \___ \ | Y \ ___/| | \/\ ___/
\___ >____/\____ |\___ > /____ > |__| (____ /__| |__| /____ > |___| /\___ >__| \___ >
\/ \/ \/ \/ \/ \/ \/ \/ \/
*/
struct bit
{
    int n;
    vector<ordered_set> lesgo;
    void resize(int _n)
    {
        n = _n;
        lesgo = vector<ordered_set>(n + 1);
    }
    void update(int pos, int val, int sign)
    {
        for (int i = pos; i <= n; i += i & (-i))
        {
            if (sign == -1)
            {
                lesgo[i].erase(val);
            }
            if (sign == 1)
            {
                lesgo[i].insert(val);
            }
        }
    }
    int query_smaller(int pos, int val)
    {
        int ans = 0;
        for (int i = pos; i; i -= i & (-i))
        {
            ans += lesgo[i].order_of_key(val);
        }
        return ans;
    }
    int query_greater(int pos, int val)
    {
        int ans = 0;
        for (int i = pos; i; i -= i & (-i))
        {
            int sz = lesgo[i].size();
            ans += sz - lesgo[i].order_of_key(val + 1);
        }
        return ans;
    }
    int query_smaller(int st, int dr, int val)
    {
        return query_smaller(dr, val) - query_smaller(st - 1, val);
    }
    int query_greater(int st, int dr, int val)
    {
        return query_greater(dr, val) - query_greater(st - 1, val);
    }
};
int32_t main()
{
    cin.tie(nullptr)->sync_with_stdio(false);
    int n, q;
    cin >> n >> q;
    vector<int> a(n + 1);
    for (int i = 1; i <= n; ++i)
    {
        cin >> a[i];
    }
    bit lesgo;
    lesgo.resize(n);
    for (int i = 1; i <= n; ++i)
    {
        lesgo.update(i, a[i], 1);
    }
    ll inv = 0;
    for (int i = 1; i <= n; ++i)
    {
        inv += lesgo.query_greater(1, i, a[i]);
    }
    ll sum = 0;
    for (int i = 1; i <= n; ++i)
    {
        sum += max(a[i] - i, 0);
    }
    while (q--)
    {
        int x, y;
        cin >> x >> y;
        if (x != y)
        {
            lesgo.update(x, a[x], -1);
            lesgo.update(y, a[y], -1);
            sum -= max(a[x] - x, 0);
            sum -= max(a[y] - y, 0);
            if (x + 1 <= y - 1)
            {
                int lesgo1 = lesgo.query_smaller(x + 1, y - 1, a[x]);
                int lesgo2 = lesgo.query_greater(x + 1, y - 1, a[y]);
                inv += 2 * (y - x - 1 - lesgo1 - lesgo2);
            }
            if (a[x] > a[y])
            {
                inv--;
            }
            else
            {
                inv++;
            }
            swap(a[x], a[y]);
            sum += max(a[x] - x, 0);
            sum += max(a[y] - y, 0);
            lesgo.update(x, a[x], 1);
            lesgo.update(y, a[y], 1);
        }
        cout << sum + sum - inv << '\n';
    }
}
#include <bits/stdc++.h>
using namespace std;
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int random(int st, int dr)
{
    uniform_int_distribution<int> dist(st, dr);
    return dist(rng);
}
typedef long long ll;
/* .___ __ __ .__
____ ____ __| _/____ _______/ |______ ________/ |_ ______ | |__ ___________ ____
_/ ___\/ _ \ / __ |/ __ \ / ___/\ __\__ \\_ __ \ __\/ ___/ | | \_/ __ \_ __ \_/ __ \
\ \__( <_> ) /_/ \ ___/ \___ \ | | / __ \| | \/| | \___ \ | Y \ ___/| | \/\ ___/
\___ >____/\____ |\___ > /____ > |__| (____ /__| |__| /____ > |___| /\___ >__| \___ >
\/ \/ \/ \/ \/ \/ \/ \/ \/
*/
struct decomp
{
    vector<int> a;
    vector<pair<int, int>> buckets;
    vector<int> sp;
    vector<int> spbucket;
    vector<int> wh;
    const int sz = 400;
    void resize(int n)
    {
        a = vector<int>(n + 1);
        sp = vector<int>(n + 1);
        wh = vector<int>(n + 1);
        for (int i = 1; i <= n; i += sz)
        {
            int st = i;
            int dr = min(n, st + sz - 1);
            for (int j = st; j <= dr; ++j)
            {
                wh[j] = (int)buckets.size();
            }
            buckets.push_back({st, dr});
            spbucket.push_back(0);
        }
    }
    void update(int pos, int x)
    {
        a[pos] += x;
        int qui = wh[pos];
        sp[buckets[qui].first] = a[buckets[qui].first];
        for (int i = buckets[qui].first + 1; i <= buckets[qui].second; ++i)
        {
            sp[i] = sp[i - 1] + a[i];
        }
        spbucket[0] = sp[buckets[0].second];
        for (int i = 1; i < (int)buckets.size(); ++i)
        {
            spbucket[i] = spbucket[i - 1] + sp[buckets[i].second];
        }
    }
    int query(int pos)
    {
        int qui = wh[pos];
        int ans = sp[pos];
        if (qui)
        {
            ans += spbucket[qui - 1];
        }
        return ans;
    }
};
struct geralt
{
    vector<int> a;
    vector<decomp> lesgo;
    vector<pair<int, int>> buckets;
    vector<int> wh;
    const int sz = 3000;
    void resize(int n)
    {
        a = vector<int>(n + 1);
        wh = vector<int>(n + 1);
        for (int i = 1; i <= n; i += sz)
        {
            int st = i;
            int dr = min(n, st + sz - 1);
            for (int j = st; j <= dr; ++j)
            {
                wh[j] = (int)buckets.size();
            }
            buckets.push_back({st, dr});
            lesgo.push_back({});
            lesgo.back().resize(n);
        }
    }
    void update(int pos, int val)
    {
        int qui = wh[pos];
        if (a[pos] != 0)
        {
            lesgo[qui].update(a[pos], -1);
        }
        a[pos] = val;
        lesgo[qui].update(a[pos], 1);
    }
    int query_brute(int st, int dr, int val)
    {
        int ans = 0;
        for (int i = st; i <= dr; ++i)
        {
            if (a[i] <= val)
            {
                ans++;
            }
        }
        return ans;
    }
    int query_smaller(int st, int dr, int val) // number of values <= val in st...dr
    {
        if (wh[st] == wh[dr])
        {
            return query_brute(st, dr, val);
        }
        int qui1 = wh[st], qui2 = wh[dr];
        int ans = query_brute(st, buckets[qui1].second, val) + query_brute(buckets[qui2].first, dr, val);
        qui1++;
        qui2--;
        for (int i = qui1; i <= qui2; ++i)
        {
            ans += lesgo[i].query(val);
        }
        return ans;
    }
    int query_greater(int st, int dr, int val) // number of values > val in st...dr
    {
        return (dr - st + 1) - query_smaller(st, dr, val);
    }
};
int32_t main()
{
    cin.tie(nullptr)->sync_with_stdio(false);
    int n, q;
    cin >> n >> q;
    vector<int> a(n + 1);
    for (int i = 1; i <= n; ++i)
    {
        cin >> a[i];
    }
    geralt lesgo;
    lesgo.resize(n);
    ll inv = 0;
    for (int i = 1; i <= n; ++i)
    {
        lesgo.update(i, a[i]);
        inv += lesgo.query_greater(1, i, a[i]);
    }
    ll sum = 0;
    for (int i = 1; i <= n; ++i)
    {
        sum += max(a[i] - i, 0);
    }
    while (q--)
    {
        int x, y;
        cin >> x >> y;
        if (x != y)
        {
            sum -= max(a[x] - x, 0);
            sum -= max(a[y] - y, 0);
            if (x + 1 <= y - 1)
            {
                int lesgo1 = lesgo.query_smaller(x + 1, y - 1, a[x]);
                int lesgo2 = lesgo.query_greater(x + 1, y - 1, a[y]);
                inv += 2 * (y - x - 1 - lesgo1 - lesgo2);
            }
            if (a[x] > a[y])
            {
                inv--;
            }
            else
            {
                inv++;
            }
            swap(a[x], a[y]);
            sum += max(a[x] - x, 0);
            sum += max(a[y] - y, 0);
            lesgo.update(x, a[x]);
            lesgo.update(y, a[y]);
        }
        cout << sum + sum - inv << '\n';
    }
}
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ii pair<int,int>
#define i4 tuple<int,int,int,int>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
struct FEN{ //we want to count the reverse
    int arr[500005];
    void update(int i,int k){
        i=500005-i;
        while (i<500005){
            arr[i]+=k;
            i+=i&-i;
        }
    }
    int query(int i){
        i=500005-i;
        int res=0;
        while (i){
            res+=arr[i];
            i-=i&-i;
        }
        return res;
    }
} fen;
int n,q;
int arr[500005];
long long ans[50005];
void dnc(int l,int r,vector<i4> upd,vector<i4> que){
    vector<i4> updl,updr;
    vector<i4> quel,quer;
    int m=l+r>>1;
    for (auto [a,b,c,d]:upd){
        if (c<=m) updl.pub({a,b,c,d});
        else updr.pub({a,b,c,d});
    }
    for (auto [a,b,c,d]:que){
        if (c<=m) quel.pub({a,b,c,d});
        else quer.pub({a,b,c,d});
    }
    int i=0;
    for (auto it:quer){
        while (i<sz(updl) && get<0>(updl[i])<get<0>(it)){
            fen.update(get<1>(updl[i]),get<3>(updl[i]));
            i++;
        }
        ans[get<2>(it)]+=fen.query(get<1>(it))*get<3>(it);
    }
    while (i){
        i--;
        fen.update(get<1>(updl[i]),-get<3>(updl[i]));
    }
    if (l!=m) dnc(l,m,updl,quel);
    if (m+1!=r) dnc(m+1,r,updr,quer);
}
signed main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    cin.exceptions(ios::badbit | ios::failbit);
    cin>>n>>q;
    rep(x,1,n+1) cin>>arr[x];
    vector<i4> upd,que;
    rep(x,1,n+1) upd.pub({x,arr[x],-1,1});
    rep(x,1,n+1) que.pub({x,arr[x],0,-1});
    rep(x,1,n+1) ans[0]+=abs(x-arr[x]);
    rep(x,1,q+1){
        int a,b;
        cin>>a>>b;
        if (a==b) continue;
        if (arr[a]<arr[b]) ans[x]--;
        else ans[x]++;
        upd.pub({a,arr[a],x-1,-1});
        upd.pub({b,arr[b],x-1,-1});
        ans[x]-=abs(a-arr[a])+abs(b-arr[b]);
        swap(arr[a],arr[b]);
        upd.pub({a,arr[a],x,1});
        upd.pub({b,arr[b],x,1});
        ans[x]+=abs(a-arr[a])+abs(b-arr[b]);
        que.pub({b,arr[b],x,-2});
        que.pub({b,arr[a],x,2});
        que.pub({a,arr[b],x,2});
        que.pub({a,arr[a],x,-2});
    }
    sort(all(upd)),sort(all(que));
    dnc(-1,q,upd,que);
    rep(x,1,q+1) ans[x]+=ans[x-1];
    rep(x,1,q+1) cout<<ans[x]<<" "; cout<<endl;
}
1830F — The Third Grace
Author: tibinyte
Thanks to errorgorn for the editorial!
Let $$$dp_i$$$ be the maximum sum of costs of activated points that are $$$< i$$$ over all states that have point $$$i$$$ activated.
When we transition from $$$dp_i$$$ to $$$dp_j$$$, we need to add the cost contributed by point $$$i$$$: the ranges where point $$$i$$$ is the largest activated coordinate are exactly the ranges $$$[l,r]$$$ which satisfy $$$l \leq i \leq r < j$$$. So we have $$$dp_j = \max\limits_{i < j}(dp_i + p_i \cdot S_{i,j})$$$ where $$$S_{i,j}$$$ is the number of ranges $$$[l,r]$$$ which satisfy $$$l \leq i \leq r < j$$$.
With some work, we can calculate this $$$dp$$$ in $$$O(n^2)$$$. But this will be too slow; let's try to speed it up.
Let us define $$$dp_{i,j} = \max\limits_{k \leq i}(dp_k + p_k \cdot S_{k,j})$$$. We have $$$dp_{j-1,j}=dp_j$$$. Our goal is to go from (implicitly) storing $$$dp_{i,i+1},dp_{i,i+2},\ldots,dp_{i,n}$$$ to $$$dp_{i+1,i+2},dp_{i+1,i+3},\ldots,dp_{i+1,n}$$$. As $$$dp_i + p_i \cdot S_{i,j}$$$ looks like a linear function w.r.t. $$$S_{i,j}$$$, we can try to use convex hull data structures to maintain it.
Let's figure out how $$$S_{i+1,j}$$$ relates to $$$S_{i,j}$$$. We need to subtract the number of ranges with $$$r=i$$$ and add the number of ranges with $$$l=i+1$$$ and $$$r < j$$$. This corresponds to $$$S_{i,*}$$$ and $$$S_{i+1,*}$$$ differing by amortized $$$O(1)$$$ suffix increment updates. Also, note that $$$S_{i,*}$$$ is non-decreasing.
So we want to support the following data structure: initially we have arrays $$$A$$$ and $$$X$$$, both filled with $$$0$$$. Handle the following updates:
- 1 m c: $$$A_i \gets \max(A_i,m \cdot X_i + c)$$$ for all $$$i$$$;
- 2 j k: $$$X_i \gets X_i + k$$$ for all $$$j \leq i$$$ (it is guaranteed that $$$X$$$ will always be non-decreasing);
- 3 i: find $$$A_i$$$.
This can be maintained with a Li Chao tree in $$$O(\log^2 n)$$$ time per operation. In each Li Chao node, we need to store $$$s,m,e$$$ (the start, middle and end indices) and their corresponding $$$X$$$ values $$$X_s,X_m,X_e$$$ respectively. This way, we can already support operations $$$1$$$ and $$$3$$$.
To support operation $$$2$$$, note that in a Li Chao tree, you can use $$$O(\log^2 n)$$$ time to push down all lines that cover a certain point (refer to https://mirror.codeforces.com/blog/entry/86731). This way, all lines in the Li Chao tree are in $$$[1,j)$$$ or $$$[j,n]$$$, so you can do lazy updates on both the coordinates of the lines and the $$$X$$$ values.
The time complexity is $$$O(n \log^2 n)$$$.
There is another solution that works in $$$O(m \log^3 m)$$$ that we have not been able to optimize further yet. Let us flip the array so that the condition is on the activated point with the smallest coordinate. Then we have $$$dp_j = \max\limits_{i < j}(dp_i + p_j \cdot S_{i,j})$$$ where $$$S_{i,j}$$$ counts the number of ranges $$$[l,r]$$$ such that $$$i< l \leq j \leq r$$$.
Now, we want to store the linear function $$$dp_i + x \cdot S_{i,j}$$$ in some sort of data structure so that we can evaluate the maximum value with $$$x=p_j$$$. Unfortunately, the value of $$$S_{i,j}$$$ can change by suffix additions, similar to above.
But since $$$S_{i,j}$$$ is non-increasing here, the optimal $$$i$$$ that maximizes $$$dp_i + x \cdot S_{i,j}$$$ decreases when $$$x$$$ increases. That is, for $$$l_1 \leq r_1< l_2 \leq r_2$$$, suppose we have two data structures that can get the maximum of $$$f_j(x)=dp_i+x \cdot S_{i,j}$$$ over $$$l_j \leq i \leq r_j$$$ respectively. We can combine these $$$2$$$ data structures into one for $$$[l_1,r_1] \cup [l_2,r_2]$$$ by binary searching the point $$$X$$$ where for all $$$x \leq X$$$, $$$f_1(x) \leq f_2(x)$$$, and for all $$$X \leq x$$$, $$$f_1(x) \geq f_2(x)$$$. Since querying this data structure takes $$$O(\log n)$$$, it takes $$$O(\log ^2)$$$ time to combine two such data structures. If we use a segment tree, we only need to rebuild $$$O(\log^3)$$$ different data structures (the rest can be handled using lazy tags), giving us a time complexity of $$$O(\log^3)$$$.
// I don't want to get out of my futon anymore,
// because it's too cold outside the futon.
// I don't want to get out of my futon,
// because it's too warm inside the futon.
#include <bits/stdc++.h>
using namespace std;
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
const ll INF=1e18;
struct line{
    ll m,c;
    line (ll _m,ll _c){
        m=_m,c=_c;
    }
    ll get(ll x){
        return m*x+c;
    }
};
struct node{
    int s,e,m;
    int vs=0,ve=0,vm=0;
    line val={0,-INF};
    int lazy=0;
    node *l,*r;
    node(int _s,int _e){
        s=_s,e=_e,m=s+e>>1;
        if (s!=e){
            l=new node(s,m);
            r=new node(m+1,e);
        }
    }
    void propo(){
        if (lazy==0) return;
        val.c-=val.m*lazy;
        vs+=lazy;
        ve+=lazy;
        vm+=lazy;
        if (s!=e){
            l->lazy+=lazy;
            r->lazy+=lazy;
        }
        lazy=0;
    }
    void update(int i){
        if (s==i) lazy++;
        else{
            if (val.c!=-INF){
                l->propo(),r->propo();
                l->update(val),r->update(val);
                val={0,-INF};
            }
            if (m<i) r->update(i);
            else l->update(i),r->update(m+1);
            l->propo(),r->propo();
            vs=l->vs,vm=l->ve,ve=r->ve;
        }
    }
    void update(line i){
        bool lo=i.get(vs)>val.get(vs);
        bool mi=i.get(vm)>val.get(vm);
        bool hi=i.get(ve)>val.get(ve);
        if (mi) swap(i,val);
        if (s==e || i.c==-INF || lo==hi) return;
        l->propo(),r->propo();
        if (lo!=mi) l->update(i);
        else r->update(i);
    }
    ii query(ll i){
        propo();
        if (s==e) return {val.get(vs),vs};
        else if (i<=m){
            auto temp=l->query(i);
            return {max(temp.fi,val.get(temp.se)),temp.se};
        }
        else{
            auto temp=r->query(i);
            return {max(temp.fi,val.get(temp.se)),temp.se};
        }
    }
} *root;
const int maxn = 1e6;
const int maxm = 1e6;
int n,m;
ii arr[maxn+1];
ii brr[maxm+1];
int ans[maxm+1];
signed main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    cin.exceptions(ios::badbit | ios::failbit);
    int TC;
    cin>>TC;
    while (TC--){
        cin>>n>>m;
        rep(x,1,n+1) cin>>arr[x].fi>>arr[x].se;
        rep(x,1,m+1){
            cin>>brr[x].se;
            brr[x].fi = x;
        }
        sort(arr+1,arr+n+1);
        root=new node(0,m+1);
        root->update({0,0});
        int curr=1;
        rep(x,0,m+2){
            while (curr<=n && arr[curr].fi==x) root->update(arr[curr++].se+1);
            auto temp=root->query(x);
            root->update({brr[x].se,temp.fi-brr[x].se*temp.se});
        }
        cout<<root->query(m+1).fi<<endl;
    }
}
#include <bits/stdc++.h>
#define all(x) (x).begin(),(x).end()
using namespace std;
using ll = long long;
using ld = long double;
//#define int ll
#define sz(x) ((int)(x).size())
using pii = pair<ll,ll>;
using tii = tuple<int,int,int>;
const int nmax = 5e5 + 5, qmax = 2e6 + 5;
const ll inf = 1e18 + 5;
struct line {
    ll m = 0, b = -inf;
    ll operator()(const ll& x) const {
        return m * x + b;
    }
    bool operator ==(const line& x) const { return m == x.m && b == x.b; }
};
int M;
namespace AINT {
    struct Node {
        int vl = 0, vmid = 0, vr = 0;
        line val = line{0, -inf};
        Node() {;}
        void operator += (const int &x) { vl += x, vmid += x, vr += x; val.b -= val.m * x; }
    } aint[qmax * 4];
    int lazy[qmax * 4];
    void push(int node) {
        lazy[2 * node] += lazy[node];
        aint[2 * node] += lazy[node];
        lazy[2 * node + 1] += lazy[node];
        aint[2 * node + 1] += lazy[node];
        lazy[node] = 0;
    }
    void pushline(line x, int node = 1, int cl = 0, int cr = M) {
        bool l = x(aint[node].vl) > aint[node].val(aint[node].vl), m = x(aint[node].vmid) > aint[node].val(aint[node].vmid), r = x(aint[node].vr) > aint[node].val(aint[node].vr);
        if(m) swap(x, aint[node].val);
        if(cl == cr || x == line() || l == r) return;
        int mid = cl + cr >> 1;
        push(node);
        if(m != l) pushline(x, 2 * node, cl, mid);
        else pushline(x, 2 * node + 1, mid + 1, cr);
        return;
    }
    void add(int l, int node = 1, int cl = 0, int cr = M) {
        if(cl >= l) {
            lazy[node]++;
            aint[node] += 1;
            return;
        }
        if(cr < l) return;
        int mid = cl + cr >> 1;
        push(node);
        pushline(aint[node].val, 2 * node, cl, mid);
        pushline(aint[node].val, 2 * node + 1, mid + 1, cr);
        aint[node].val = line{0, -inf};
        add(l, 2 * node, cl, mid);
        add(l, 2 * node + 1, mid + 1, cr);
        aint[node].vl = aint[2 * node].vl;
        aint[node].vmid = aint[2 * node].vr;
        aint[node].vr = aint[2 * node + 1].vr;
    }
    pii query(ll p, int node = 1, int cl = 0, int cr = M) {
        if(cl == cr) return pii{aint[node].val(aint[node].vl), aint[node].vl};
        int mid = cl + cr >> 1;
        push(node);
        auto [mxv, atrv] = p <= mid? query(p, 2 * node, cl, mid) : query(p, 2 * node + 1, mid + 1, cr);
        return pii{max(mxv, aint[node].val(atrv)), atrv};
    }
    void init(int node = 1, int cl = 0, int cr = M) {
        aint[node] = Node();
        lazy[node] = 0;
        if(cl == cr) return;
        int mid = cl + cr >> 1;
        init(2 * node, cl, mid);
        init(2 * node + 1, mid + 1, cr);
        return;
    }
};
vector<int> atraddp[qmax];
ll val[qmax];
void testcase() {
    int n, m;
    cin >> n >> m;
    M = m + 2;
    for(int i = 0, l, r; i < n; i++) {
        cin >> l >> r;
        atraddp[l].emplace_back(r + 1);
    }
    for(int i = 1; i <= m; i++)
        cin >> val[i];
    AINT::init();
    AINT::pushline({0, 0});
    for(int i = 0; i < m + 2; i++) {
        for(auto x : atraddp[i])
            AINT::add(x);
        auto [a, b] = AINT::query(i);
        //cerr << a << ' ' << b << '\n';
        AINT::pushline({val[i], a - val[i] * b});
    }
    for(int i = 0; i < m + 2; i++)
        atraddp[i].clear();
    cout << AINT::query(m + 1).first << '\n';
}
signed main() {
    cin.tie(0) -> sync_with_stdio(0);
    int t;
    cin >> t;
    while(t--) testcase();
}
/**
This year, centroid will be given.
-- Official sources
*/
If there is anything wrong or unclear in this editorial, feel free to ping me or any of the other authors in the comments.
"If there is anything wrong or unclear in this editorial, feel free to ping me or any of the other authors in the comments."
It's unclear how you publish these editorials so fast! :D
Can you explain how to do Div2 E (Hyperregular Bracket Strings)? I can't understand the editorial. I understood the logic of splitting the overlapping intervals, but how did you proceed further? Also, if I have 3 intervals (1,20),(3,12),(4,9) -> (1,2),(3,12),(13,20),(4,9) -> (1,2),(3,3),(4,9),(10,12),(13,20) -> this should be impossible, right? As (3,3) will never be a regular interval.
They don't mean to split intervals like that. Imma illustrate their idea. As an example, let's say $$$n=10, k=3, (l_1, r_1) = (1, 8), (l_2, r_2) = (3, 4), (l_3, r_3) = (7, 10)$$$. I'll use the letter $$$x$$$ to denote an empty space. Let's write a sequence of $$$n=10$$$ empty spaces: $$$xxxxxxxxxx$$$. If a position is covered by the $$$1^{st}$$$ interval, put $$$1$$$. If a position is covered by $$$1^{st}$$$ and $$$2^{nd}$$$ intervals, put $$$2$$$. If a position is covered by $$$3^{rd}$$$ interval, put $$$3$$$. If a position is covered by $$$1^{st}$$$ and $$$3^{rd}$$$ intervals, put $$$4$$$. So the sequence of empty spaces turns into this: $$$1122114433$$$. Now look at subsequences containing only $$$1$$$, $$$2$$$, $$$3$$$, $$$4$$$: $$$1111$$$, $$$22$$$, $$$33$$$, $$$44$$$. We count ways to make RBS on these four subsequences and then multiply them together: $$$2 \cdot 1 \cdot 1 \cdot 1 = 2$$$, which is the correct output.
Thanks. Got it. Can you explain the step after that too? Regarding hashing?
The main idea is every time we input an interval, we modify a subarray (the array here is the sequence $$$xxxxxxxxxx$$$) according to the interval so that each position knows which intervals cover it. So like after inputting $$$l_1 = 1$$$ and $$$r_1 = 8$$$, we do something in the subarray $$$[1,8]$$$, so every positions from $$$1$$$ to $$$8$$$ knows they are covered by $$$(l_1, r_1)$$$. The editorial does this by assigning each interval a random number and XOR every element in the subarray $$$[l_i, r_i]$$$ by it, but some solutions I saw use addition instead of XOR.
But why they don't have to fix hash conflict?
There are 2^30000 subsets of intervals in total, hashed into a 64-bit integer. Won't that be lots of conflicts?
I don't really understand.
Why can't we simply sort the array in problem A?
If we sort the array and then add corresponding elements of array a, there is a chance that the sum of an earlier pair is greater than a later one. For example: (1+1) (2+2) (4+3) (5+4) (3+5). Here 5+4 is greater than 3+5.
i love this round
+1
I had almost solved Div-1C, I had the idea of overlapping and included intervals! but I couldn't find a way to implement it :(
same problem! kinda sad that I missed the chance to solve ABCd1
Can Div-1B be solved with smth like CHT or LiChao tree? We can reduce the problem to two operations:
Is there anything that can support these operations?
Tried thinking along that line too. But I don't think LiChao can do operation 2, due to overcounting?
Solved Div2 C in practice using BFS. Or is it also DP in disguise?
https://mirror.codeforces.com/contest/1831/submission/207670475
Yes. It will consider two cases using BFS. It actually corresponds to two cases of dp.
Please add this to the contest material of Div 2 :) currently it can only be seen from Div1.
Can someone try hacking https://mirror.codeforces.com/contest/1830/submission/207655131, or explain why it's good? It is a solution for Div1C, and I am basically keeping a stack to calculate the answer. But when two intervals partially intersect, I delete the latter, but add back a needed interval to my set of intervals.
Deleted
I don't think I understand how it explains the complexity.
Div1B is terrible. The goal of a problem should be figuring out the solution, and not trying to squeeze your code to pass after you know the solution.
The model solution runs in under 1.2s, which I think is fairly reasonable for a problem with a 4s TL.
It's not about the complexity of your solution. It's about the people who had the same idea as the editorial but failed to solve it because they used, for instance, a hashmap instead of a map, or lost some tiny constant factor.
Dear contestant,
Please don't worry if you had to make some formulas shorter in order to pass a time limit that was set as a triple of our solution's running time. In a programming competition, the way you express your ideas in code actually matters. Apologies if this does not fit your view.
Thanks for the feedback.
Why does the model solution use 500MB of memory out of the allowed 512MB?
+1. I resubmitted (with 2 extra WAs which were my fault) because I was afraid 518k KB would MLE systests. I don't think it's a bad problem but I feel allowing 1024MB would've been ok here. Especially since the model solution uses a similar amount of memory.
We also have solution with linear memory, but it's not posted there.
pls share.
Yeah, but why didn't you think it was a good idea to set the ML to 1024MB if one of the authors' solutions uses $$$n \sqrt{n}$$$ memory?
Authors test 1000s of solutions (a bit exaggerated); it's not necessary to allow all of them to pass.
Can someone explain the time complexity of 207670551?
I do not understand why it's fast enough (runs in <350 ms); the worst case I can think of should take $$$O(n \sqrt n)$$$.
.
i felt the same
div1B killed
For me, honestly, finding the solution for Div1B was much easier than understanding the problem statement for Div1A/Div2C.
I spent more than 30 minutes understanding the problem statement and test case 1 of problem C.
I solved problem Div1B/Div2D within 28 minutes. xD
it is not hard to find the solution, but it is hard to code it
Since there is a lot of discussion going on about problem D's tight time limit, I personally feel the limit was enough to pass $$$O(N \log N) + O(N \sqrt N \log N)$$$ comfortably.
I would also like to share my approach, please go ahead and share your thoughts.
You need to use the fact that the values of a[i] and b[i] are less than or equal to N.
So, for each pair (a[i], b[i]), we have to traverse till sqrt(N) at most and see whether the reverse pair exists or not.
First of all, shard the pairs (a[i], b[i]) based on the first value (a[i]). Also sort each shard, for faster searching with binary_search.
We also need to keep track of how many times a particular pair has been encountered in the past; to do so we need a map<pair<int,int>, int> where map[(x,y)] denotes how many times the pair (x,y) has been encountered so far.
The core logic is here: suppose (a[i], b[i]) and (a[j], b[j]) are an expected pair; then we need a[i] * a[j] - b[i] = b[j]. For better understanding I will use (x,y) and (p,q) instead of (a[i], b[i]) and (a[j], b[j]) from now on. So (x*p - y == q) must hold in order to count the pair. If shard[x] has value 'y' in it, that means we have a pair (x, y) somewhere. Now, for any pair (x, y) and (p, q) to be in the answer, we must make sure that x * p <= 2*n, otherwise an answer is never possible (because of the restriction 1 <= y <= n and 1 <= q <= n, their sum satisfies 2 <= y + q <= 2*n); that's why x * p must be at most 2 * n. So, now using the above facts, please go through the code.
I think that the TL was enough, but the real problem was the ML. By the way, your code is very clean. I got AC with this time complexity!! Take a look at my code, which should be $$$O(n\sqrt n)$$$, with a running time of 1123ms: 207646321

Where did shard[k] come from? Shouldn't it be shard[p]?
Thank you for your explanation you are a legend.
https://mirror.codeforces.com/contest/1830/submission/207653094
The code gives a runtime error with GNU G++17, but no runtime error with GNU G++20.
Why is that happening?
Yes the same happened to me...
thanks for the contest, solving D1D was very fun
Yesterday's abc_E tilted me towards a leaves-first idea for div2C, counting up the number of out-of-order edges on the way to the root. It's an approach that can work, but I seemingly stumbled into every possible way to make it break, even after getting a fake AC.
TLE: long path from root into many leaves risks repeatedly traversing that path
WA: attempts to bail early on lack of update worked, but (hindsight) since I was updating and testing on different vertices, I had to leave in the == no-update case... which meant it was still very possible to TLE as before
uphackTLE: in attempting to test vs. that worst case, I neglected to clear out all shuffling/randomizing code in my generator, so the 'far' root 1 was always randomly somewhere convenient and/or the number of passes really did need updating
technical-debt-TLE: (hindsight again) a bug-free version of my per-leaf approach was still vulnerable because there could be a tree that actually did have progressively better passcounts per traversal obviating my early-out flailing. What I really needed was the post/return half of a dfs/bfs, only proceeding rootward after all children were accounted for... OR not getting the leaf-first idea stuck in my head from yesterday.
tl;dr free uphack and maybe some giggles...
The idea of Div1B was fine, but the tests were so weak. Even the system tests could not save it, as most solutions got TLE on tests provided by hackers, and I don't even think my solution is good enough to get AC :D
Good round, I like it.
The Memory limit Div1 B was soo tight, but it was fun to solve tho
This was my first Div. 1 contest. I solved A, but not very fast because I wanted to get accepted with one submission. Then I started to think about problem B. In about 1 hour and 20 minutes I got accepted (I was happy to increase my rating on my first Div. 1 contest). But, unfortunately, I got FST (TLE on test 15) during the system testing. And I want to share with you my solution and a bug, which was tricky to find.
The main idea of my solution is similar to many of the other solutions.
I keep the answer in $$$ans$$$. First, I define an array of vectors $$$c$$$, where I store all $$$b_i$$$-s for every element in $$$a$$$ (i. e. for every pair of $$$(a[i], b[i])$$$, I push $$$b[i]$$$ to $$$c[a[i]]$$$) and then I sort every $$$c[i]$$$.
Then I brute force every $$$i$$$ and $$$j$$$, such that $$$i*j <= 2*n$$$ and $$$i<j$$$. My goal is to find every pair $$$(x, y)$$$: $$$x$$$ from $$$c[i]$$$, $$$y$$$ from $$$c[j]$$$, such that $$$x+y=i*j$$$. Now, for every pair $$$(i, j)$$$, I go through all the elements of $$$c[i]$$$. For element $$$x$$$, I find $$$l$$$ and $$$r$$$ (with binary search): the borders of $$$i*j-x$$$ in $$$c[j]$$$. Then I update the answer with $$$r-l$$$ (or $$$r-l+1$$$, depending on how you choose the borders). After that I update the answer for $$$i=j$$$, and we are done!
But wait, this solution gets TLE! The problem is that if there are many elements in some $$$c[i]$$$, we will brute force through them for every $$$j$$$. To avoid this, we take whichever of $$$c[i]$$$ and $$$c[j]$$$ has fewer elements and go through the elements of that one. And at last we are done.
Time complexity: $$$O(2n\sqrt{2n} + C \cdot \log n)$$$. Here $$$C$$$ is the maximal possible value of $$$\displaystyle\sum_{i<j} \min(size(c[i]), size(c[j]))$$$.
My code.
I hope this was helpful.
I did exactly what you did, but I can't prove the time complexity. Could you please explain more clearly how choosing the smaller of c[i] and c[j] results in that time complexity?
I apologize that I can't prove it formally, but intuitively, if there is one with many elements, there has to be another with few elements. Also, it is obviously better to brute force through the one with fewer elements.
Div2 D has an $$$O(n \log^2 n)$$$ solution. Let cnt(x, y) be the number of indexes i such that a[i] = x and b[i] = y. We have b[i] + b[j] = a[i] * a[j] <= 2 * n, so for a fixed i there are 2 * n / a[i] possible values of a[j]. Thus, iterating over all possible pairs of values (a[i], a[j]) takes $$$O(n \log n)$$$ operations (2n/1 + 2n/2 + ... + 2n/n = 2n * (1 + 1/2 + 1/3 + ... + 1/n) = 2n * log n). For a given pair (x, y) we can iterate over all i such that a[i] = x (totally $$$O(n)$$$ time) and add to the answer cnt(y, x * y - b[i]). cnt(x, y) can be calculated in $$$O(\log n)$$$ using binary search.

I don't think that would work. For example, when iterating $$$a[i] = 1$$$, based on my understanding, you find all $$$i$$$ such that $$$a[i] = 1$$$, then also iterate over all $$$y = 1, 2, \ldots, 2n$$$, and then add $$$cnt(y, 1 \cdot y - b[i])$$$ to the answer. But this step is actually $$$O(n^2)$$$, because there could be $$$O(n)$$$ indices $$$i$$$ such that $$$a[i] = 1$$$.
But what if I apply a little optimization?
For a $$$(a_i, a_j)$$$ calculation, we get two (multi)sets $$$S = \{ b_k | a_k = a_i \}$$$ and $$$T = \{ b_k | a_k = a_j \}$$$.
Then if $$$|S| > |T|$$$, swap them, and iterate $$$S$$$.
It finally got passed (211334263), but I can't figure out its time complexity, or whether it's a truly correct approach. Could anyone tell me?
https://mirror.codeforces.com/contest/1831/submission/207670314
Could someone point out the mistake in my code? I've been trying for very long and don't see the mistake; I'm getting WA on test 2. I did a BFS-like solution.
did you try cleaning the global variables after solve()? Probably it's bugged when you have more testcases
i did that at the end of the code after cout
Sorry, i didn't see that.
Took a while, but I found an actual counterexample:
1
5
2 3
1 2
4 5
2 4
You do some weird stuff with ok. You're looking for the minimum-index edge, but I think you should set it while you do the BFS: something like
oh ok i see the mistake thanks a lot
Problem Div2 D has a good idea IMO. But my $$$n \sqrt{n}$$$ solution did not pass the time limit because I used unordered_map to count the frequency (207642815).
I have used the unordered_map optimization with a custom hash function, referring to this blog. Which got me thinking:
Is my solution wrong? I used a set to maintain unique elements only. The difference with the author's solution is that I didn't skip when $$$a_i > \sqrt{n}$$$, but I did break the loop though.
Assuming my implementation is indeed $$$n \sqrt{n}$$$, what kind of test case breaks the unordered_map constant factor?
Your solution is $$$O(n^2)$$$; here is a sample: if $$$a[1] \dots a[\frac n 2]$$$ are $$$1$$$ and the other $$$\frac n 2$$$ of the $$$a[i]$$$ are large and different from each other, then when you find the answer for $$$a[1] \dots a[\frac n 2]$$$, all of the elements will be scanned, and your solution becomes $$$O(n^2)$$$.
Ah I guess that makes sense. Didn't cover that kind of test case. Thanks!
yeah, I too tried many $$$n\sqrt{n}\log(n)$$$ approaches and then tried hashing (not custom), and I'm of the opinion that either I'm a really poor implementer or the limits are too strict
Asking to remove the log factor of the map is somewhat strict imo. You had to use an array instead of a map to erase the log from the complexity. Usually if I can use a map to make a solution faster to implement, I do it, and for this reason I got a lot of TLEs before realizing that O(1) access takes about 1/4 the time of a map.
if they didnt force to remove log, the problem would definitely be solvable by n^2 (already was)
i think they should have just forced linear memory, so it becomes clear this solution is out instead of being on edge
Fair enough. Effectively n² is close enough to n^(3/2)log n. Actually plugging n it's ~1.5e9, which is obviously too big
I solved it with $$$O(N \log N) + O(N \sqrt N \log N)$$$.
I used $$$O(\log N)$$$ binary search for finding an element in a sorted array, and my solution passes easily.
I don't understand why you guys couldn't get your solutions to pass.
Can you please share the solution that is getting TLE?
I was using a vector s (s[a][b] = count of the pair (a,b) for a < sqrt(2n)). Actually some of my solutions passed the time limit but gave WA due to overflow, and they were very close. One was over the time limit because I had min() in the loop condition; moving it out made the code fast enough to get WA on another case. It was kind of a panic moment actually (I still don't know whether it would get TLE after fixing the overflow too).
207656497, if you wish
Is there a problem in 1C's editorial? If $$$l_1\leq l_2\leq r_1\leq r_2$$$, shouldn't the groups formed by the two intervals be $$$[l_1,l_2-1],[l_2,r_1],[r_1+1,r_2]$$$? But it doesn't matter :D
Want to point out that, for Div1 C, deriving the number of regular bracket strings of length 2n is essentially equivalent to counting simple random walks that start at 0 and end at 0 while never touching -1. The reflection principle is commonly used to calculate it.
Does 1B admit any polylog solution? I guess the answer will be no more than $$$O(n\log n)$$$ or $$$O(n)$$$ if all pairs of $$$a,b$$$ are distinct, but I have no idea how to find them in no more than $$$O(n\log^2 n)$$$ :(
I'm wondering why I got TLE for div2 C.
I pretty much did the same thing as the editorial. I have an array to store the index of each edge, and then I did BFS. At each node, I compare the index of the edge that connects that node and the parent with the index of the edge that connects the parent and the parent of the parent.
nvm I figured out why, apparently I was creating vectors of size 2e5 for every testcase, so for some reason creating a vector of size n is O(n)??
"the sum of n over all testcases is 2e5" But if you have 10,000 testcases and make an array 2e5 long you have that the total complexity is 2e5 *1e4 = 2e9, which is a big overhead considering that to have t=1e4 you have average n = 20
207786285
Is this the reason why my solution gets a TLE too ? (I have tried to maintain an ans value (current.first in the submission) along with the edge-number for each node. And update the ans everytime a node is fully "explored").
It's possible. Just try and change resize(N) to resize(n+1) and see if it works
Holy shit, that worked. Thankyou so much. I would have wasted the entire week trying to find out why if it wasn't for your comment.
The TLE was happening because effectively the complexity was more like O(n*t) where t is the number of tests in one big-test-case ? Or is it that making such a long array for each test case is just inherently very "slow" to do ?
Your complexity is O(maxn*t) and if t = maxt you have the worst case. Btw you can check in the docs the complexity of resize (it's linear in the difference to the current size)
F can be solved with lazy propagation on kinetic segment tree: code
Really liked div 1, C and D. Also, I think there is a simpler way to implement div 1 C using segment tree instead of xor hashing, for the union part of solution. Was close but could not solve it in contest though :(
Would someone be able to check why my solution to Div 2 D TLEs on case 8?
It's basically the same as the official solution and runs in O(n*sqrt(n)), so I'm unsure why it TLEs.
Can somebody debug my problem B code and point out where I'm making a mistake? Approach: I stored the max frequency of each number of a and b in mpa and mpb. After that I calculated the ans as the sum of the frequencies from a and b.

You gotta keep the counts of the longest subarray of similar integers: for every integer in a and b find out the maximum length of all equal elements, do this for both arrays, and in the end just store the sum of the lengths of contiguous equal subarrays for all integers in both arrays. Why are you using stacks?
I used a stack to count the max frequency of each number present in the array. Btw, I figured out the issue, and with minor changes my code got accepted, if you want to have a look.
stacks + maps pretty heavy, I guess the runtime will stay around 500ms anyways
Only if I proved my solutions :(
What is the time complexity of my solution: 207651808?
First, I iterated through the values of a[i]; this part is $$$O(n)$$$.
Then, I iterated through the values of a[i] * a[j]; until now it's $$$O(n \log n)$$$.
After that, I iterate through the indices i that have value a[i] and used binary search (this adds another log) to count the number of satisfied indices j. Here, I optimized it by iterating through the indices j instead if there are fewer of them.
Somehow this passed systests (actually during the systest I saw it TLE on pretest 3, but AC when I checked again this morning).
.
I am having difficulties understanding the editorial, can you help me out a bit?? thanks!
.
yes, that makes sense. I understood the equation you gave. But I am still confused by the equations given in the editorial for $$$a_i=a_j$$$ and $$$a_i>a_j$$$. Can you explain how those two equations work?
.
assuming you understood the fr[a[i]][a[i]^2 - b[i]] part of the equation, notice that we overcount some cases in that expression: let's say we have a[i] = 2 and b[i] = 2 for some i; then a[i] * a[i] = b[i] + b[i], so the pair (i, i) is counted as good, but since the problem only asks for good pairs where i < j, we need to subtract all such cases. What do these cases look like? We have a[i] * a[i] = b[i] + b[i] iff a[i]^2 = 2 * b[i] iff a[i]^2 / 2 = b[i], so we need to subtract fr[x][x^2 / 2] for all 1 <= x <= lim, and that's where the numerator of the equation comes from. Now, the reason we divide the numerator by 2 is that we're counting all good pairs twice: if (i, j) is a good pair then (j, i) is also a good pair, and we need to count only one of these.
.
Can anyone explain what's wrong with my submission for Div2B? https://mirror.codeforces.com/contest/1831/submission/207612809
I was so sad that I could not solve B in contest time. But I just realized that the range of array a and array b is <= 2*n.
that fact of a[i], b[i] <= 2 * n doesn't matter? you can just coordinate compress to that range anyway
In Div2 D, why does this solution give TLE? Let's traverse the unique pairs (a(i), b(i)), and for each a(i) traverse all distinct values of a(j); from the known a(i), b(i) and the fixed value of a(j) we can find the required b(j), and thus easily count the number of pairs formed, using an unordered_map with a custom hash in O(1) average complexity.
Now, for fixing the value of a(j), let's consider two cases for a(i):
1. a(i) > sqrt(2*n): in this case, since a[i]*a[j] should be less than or equal to 2*n (as shown in the editorial), the possible values of a[j] are in [1, sqrt(2*n)]; hence for each a[i] > sqrt(2*n) we can traverse all possible values of a[j] in O(sqrt(n)).
2. a(i) <= sqrt(2*n): in this case the value of a[j] can be anywhere in [1, n], but since all pairs of a[i] with a[j] > sqrt(2*n) are already covered by case 1, we only need to iterate over the values in [1, sqrt(2*n)].
Hence the overall complexity is O(n*sqrt(2*n)).
please help Gheal
Am I the only one who solved Div2 C using a lazy segment tree in $$$O(n \log n)$$$?
In Div1D, the maximum size of a connected component is $$$O(\sqrt n)$$$. But what's the exact upper bound of it? Theoretically, it's $$$\sqrt{3n}\approx 774$$$, but my submission, where I suppose it doesn't exceed 400, got accepted. Can anyone prove it or hack it?
We have proof that it's 258.
I have been wondering for the past two days how the hell my submission passed the tests in Div2 C and didn't get TLE: https://mirror.codeforces.com/contest/1831/submission/207638978
In a test like this:
1
200000
1 2
1 3
1 4
1 5
1 6
1 7
...
...
...
1 199999
1 200000
The only explanation I found is that this corner test case isn't among the test cases, or that there is some hidden info in the constraints I didn't notice. But I don't think so.
That case was just missing. Your solution is now hacked.
Thank you!
BTW how do I uphack a submission after the round is finished?
A "hack" button appears on the submission page, but only if you're 1900+ rated (or 2100+, I can't remember which)
I think that I have trouble understanding the first code of 1F.
The tutorial says that we should subtract the contribution of some lines, but in the code, the only update is adding new contribution.
Maybe I have made some stupid mistakes. Can anyone please explain it? QaQ
You can check my code 207890019
That helps a lot. Thank you!
nice contest
I'm new to hashing. In Div1 C, is there a chance of a wrong answer when using randomly generated numbers? And can we choose some numbers on our own to avoid that situation?
Do you know a solution for Hyperregular Bracket Strings without randomness?
https://mirror.codeforces.com/contest/1830/submission/207644804
Maybe you can check this submission.
You can find all minimal ranges with a sweep line and small-to-large merging (not sure about the name), and the rest can be solved as a bracket sequence. One implementation is 207718096. However, I think it is just a coincidence that a non-random solution exists. I still recommend you learn the randomized solution; it's very general.
https://mirror.codeforces.com/contest/1831/submission/207616298 Can anyone help me with the runtime error in my submission? I am not able to figure out what the reason for it is.
In the solution of Div1 C, for case 2 (partially overlapping intervals), shouldn't the groups formed be [l1,l2-1], [l2,r1] and [r1+1,r2] instead of [l1,l2-1], [l2,r2] and [r2+1,r1]? Gheal
Thanks, fixed. :thumbsup:
In Div1 C, I spent most of my time during the contest trying to use a sweep line to construct a tree from the hierarchy of intervals. I have read the hints in the tutorial; it's mentioned there that it is difficult to construct that tree. Gheal, do you know an algorithm that constructs that tree? Or could anyone provide useful resources?
Sorry for the late reply. If I'm not mistaken, Vladithur had a deterministic solution while testing, I've reuploaded it here.
For the solution code of Div2 E/Div1 C, if I change LLONG_MAX to LONG_MAX in uniform_int_distribution<ll> rnd(0,LLONG_MAX), I get WA on test case 4, which is a big-number test case where n = 300000 and k = 300000 (207948273). I submitted multiple times and it's always WA on test case 4. When I print the hash values, they all seem to be as random and large as in the solution code. Can anyone explain why this happens? Thanks in advance.
Oh nvm, it's due to compiler differences (on some compilers long is 32-bit, so LONG_MAX is only $$$2^{31}-1$$$, which shrinks the hash range and makes collisions far more likely).
Problem C:
vector<pair<int,int>> edg[NMAX]; ... edg[u].push_back({v,i}); edg[v].push_back({u,i});
How does this work? Why do we create a new pair in the vector element?
There is an alternative solution for Div2 C, where you essentially simulate the process. You keep a std::set of all the edges you can currently draw and a pointer to where you are in the edge list. Using lower_bound we can find the next edge we will encounter; when we do, we update the set of drawable edges and move the pointer to the index of the processed edge plus 1. When the largest element in the set is smaller than the pointer, we set the pointer to 0 and add 1 to the answer. We process each edge exactly once, so the complexity is O(n*log(n)). You can see my submission here: https://mirror.codeforces.com/contest/1831/submission/207936976
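A minimal sketch of that simulation, assuming vertex 1 starts drawn and edges are processed in input order (the variable names are mine, not the linked submission's):
#include <bits/stdc++.h>
using namespace std;
int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<pair<int, int>> edges(n - 1);
        vector<vector<pair<int, int>>> adj(n + 1); // (neighbour, edge index)
        for (int i = 0; i < n - 1; ++i) {
            scanf("%d %d", &edges[i].first, &edges[i].second);
            adj[edges[i].first].push_back({edges[i].second, i});
            adj[edges[i].second].push_back({edges[i].first, i});
        }
        vector<bool> drawn(n + 1, false);
        drawn[1] = true;
        set<int> drawable; // indices of edges with exactly one drawn endpoint
        for (auto [v, id] : adj[1]) drawable.insert(id);
        long long scans = 1;
        int ptr = 0; // current position in the edge list
        while (!drawable.empty()) {
            auto it = drawable.lower_bound(ptr);
            if (it == drawable.end()) { // nothing drawable ahead: start a new scan
                ++scans;
                ptr = 0;
                continue;
            }
            int id = *it;
            drawable.erase(it);
            ptr = id + 1;
            // draw edge id: exactly one of its endpoints is new
            int w = drawn[edges[id].first] ? edges[id].second : edges[id].first;
            drawn[w] = true;
            for (auto [v, eid] : adj[w])
                if (!drawn[v]) drawable.insert(eid); // its other edges become drawable
        }
        printf("%lld\n", scans);
    }
    return 0;
}
Each edge is inserted into and erased from the set at most once, which gives the O(n*log(n)) bound.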
For the Div2 C problem, I think the statement about "each edge" is not accurate; I ran the code, and I think it actually checks the edge below this edge. [problem:1831C]
Can someone explain to me why the computation of the dp in Div1 D is $$$O(n\sqrt n)$$$? It's hard for me to prove, even though I know how to prove that the time complexity of the "7-th trick" is $$$O(n^2)$$$.
In the "7-th trick" the time complexity is $$$\sum siz(v) \cdot siz(u) \le n^2$$$ over all dp merges. In our case, we have time complexity $$$\sum \sqrt{siz(v)} \cdot \sqrt{siz(u)}$$$ so we have the sum of square roots of these values. Let's define $$$\sqrt{siz(v) \cdot siz(u)}$$$ as just a value $$$a_i$$$. We know $$$\sum_{i=1}^n a_i^2 \le n^2$$$ (there are actually $$$n-1$$$ summands but it doesn't matter). Dividing both sides by $$$n$$$ and taking a square root we get $$$\sqrt{\frac{\sum_{i=1}^n a_i^2}{n}} \le \sqrt{n}$$$. But on the left-hand side we have the quadratic mean which is at least the arithmetic mean by the means inequality. So $$$\frac{\sum_{i=1}^n a_i}{n} \le \sqrt{\frac{\sum_{i=1}^n a_i^2}{n}} \le \sqrt{n}$$$. $$$\sum_{i=1}^n a_i \le n \sqrt{n}$$$ follows which is exactly what we wanted.
Thanks for your generous help! That helps a lot. The proof is definitely right and I know exactly how to implement it in code. However, the time complexity of the standard method is $$$\sum_{i=1}^n \min(siz[v],lim)\times \min(siz[u],lim)$$$, where $$$lim = \sqrt{3n}$$$. If possible, I hope you can provide a relevant proof. Thank you very much!
Oh, wait, is it? I think in trick 7 it is just the product of sizes. But even if it's not, it is easy to see that just the sum of products of sizes is $$$O(n^2)$$$, because you can think about the product of sizes as uniting two sets and listing all pairs of elements where one element is from the first set and the other element is from the second set. In this way, every pair of elements will be listed at most once in total (exactly at their LCA), so the total sum is at most $$$n \cdot (n-1) / 2$$$.
And well, taking min with $$$lim$$$ can only decrease the sum but it is actually irrelevant.
Is it irrelevant? I believe that if you merge in $$$min(sz_u,lim) \cdot min(sz_v,lim)$$$ the time complexity becomes $$$O(n \cdot lim)$$$. I don't know how to prove it tho but perhaps it's something similar to the $$$lim = \sqrt{n}$$$ case.
We merge in time $$$\sqrt{sz_u} \cdot \sqrt{sz_v}$$$.
Ok, now I understand your question. I guess the solution in the editorial is a bit different from what I read there. Without loss of generality, we can assume that our tree is binary, because we kind of assume it anyway: we merge all the children of a vertex one by one, which is equivalent to having a bamboo in which we add them one by one to the tree.

If you take $$$\sum \min(sz_v,lim) \cdot \min(sz_u, lim)$$$, then you can consider $$$v_1, v_2, \ldots, v_k$$$ to be all the vertices $$$u$$$ in the graph such that all children of $$$u$$$ have size $$$\le lim$$$ but this is not true for the parent $$$pr_u$$$ of $$$u$$$ (so $$$pr_u$$$ has some child, either $$$u$$$ or not, that is large). Then no $$$v_i$$$ can be an ancestor of any $$$v_j$$$ (otherwise its parent would have a big child, and then $$$v_i$$$ would too). It means that all the subtrees of the $$$v_i$$$'s are disjoint, so $$$\sum sz_{v_i} \le n$$$. Inside every subtree of $$$v_i$$$, by trick 7, we work in time $$$sz_{v_i}^2$$$. But both children of $$$v_i$$$ have sizes $$$\le lim$$$, so $$$sz_{v_i} \le 2\,lim+1$$$. Then inside all these subtrees we work in time $$$\sum_{i=1}^k sz_{v_i}^2 \le \sum_{i=1}^k sz_{v_i} \cdot (2\,lim+1) = (2\,lim+1) \sum_{i=1}^k sz_{v_i} = O(lim \cdot n) = O(n \sqrt{n})$$$.

We are left with all the bigger subtrees that do not lie inside the $$$v_i$$$ subtrees (call all these vertices $$$u_i$$$'s). There are two types of vertices $$$u_i$$$: either at least one of their children is one of the $$$v_l$$$'s, or both of their children are some $$$u_j$$$ and $$$u_k$$$. In the first case, we work in time $$$\le sz_{v_l} \cdot lim$$$, which sums up to $$$\sum sz_{v_l} \cdot lim \le lim \cdot \sum sz_{v_l} \le lim \cdot n = O(n \sqrt{n})$$$. In the second case, we work in time $$$lim^2$$$ per vertex, but there are at most $$$O(n / lim)$$$ such vertices, so we work in $$$O(n \cdot lim) = O(n \sqrt{n})$$$ in total for them too.
That is right! Amazing! Thank you!!! :)
Sorry for necroposting, but
Why is this true?
Take a look at this comment; it explains the exact dp formulation used in the solution.
Complexity is $$$O(nk)$$$, where $$$k=\sqrt n$$$. The proof is based on iterating over all pairs such that their preorder numbers differ by at most $$$2k$$$.
You can imagine it like this: when you merge two subtrees, in the left subtree you "keep" only the last $$$k$$$ nodes and in the right subtree you "keep" the first $$$k$$$ nodes, so their preorder numbers differ by at most $$$2k$$$; if a subtree contains fewer than $$$k$$$ nodes, just iterate through all of them. That's why you go through $$$\min(sz[u],lim)$$$.
For each $$$i$$$ we have at most $$$2k$$$ pairs of the form $$$(j,i)$$$ with $$$j<i$$$, so over all $$$1\le i\le n$$$ in total we have $$$O(n \cdot 2k) = O(n\sqrt n)$$$.
Thank you! The method you mentioned for proving this inspired me a lot! :)
I think the editorial of Div2 D / Div1 C was wrong in case 2, when $$$l_1 < l_2 \le r_1 < r_2$$$:
"The three groups formed by these two intervals are:"
Hi Gheal & Vladithur, the Python solution in the editorial for problem Div2 D (1830B — The BOSS Can Count Pairs) throws TLE. Can you please fix it? https://mirror.codeforces.com/contest/1831/submission/208150150
Never use Python 3; use PyPy3 64 instead. It is 5 times faster.
Still cannot understand the description for 1830F. If there is just one activated point covered by two intervals, the cost of each interval should be the coefficient of that point. For the first example, however, the last point (30) appears in two intervals ([2, 8] and [3, 8]), but 30 is only added once to the result. Also, for the second example, why don't we activate the last point? It is covered by the [1, 6] interval.
The "2 8" in the first test case and the "1 6" in the second test case aren't intervals; they're "n m", i.e. the number of intervals and the number of coordinate points for the test case.
Can someone explain what "connected component" means in Div1 D, please? If the graph is a tree, then shouldn't the connected component have size $$$N$$$?
A maximal region with nodes of the same color.
Can you further explain why the connected component has $$$loss \geq \displaystyle\frac{k(k+1)}{2}$$$, please?
It's because every path in a connected component contains exactly one color, so its mex is surely not 2. A component with $$$k$$$ nodes contains $$$\frac{k(k+1)}{2}$$$ paths that lie entirely inside it (counting single-vertex paths), and each of them loses at least $$$1$$$ compared to the maximum of $$$2$$$ per path.
Got it! Thank you so much for the help
Sorry for necroposting.
I tried doing D1 C, and I found an $$$O(k\log^{*}k)$$$ solution and a linear one, so I thought that maybe someone would find it interesting.
If you look at the prefix sums where open parentheses are $$$+1$$$ and closed parentheses are $$$-1$$$, then a restriction $$$[l, r]$$$ says that $$$s_r = s_{l - 1}$$$, and every $$$s$$$ in between must be greater than or equal to $$$s_{l - 1}$$$ (that is, $$$\forall i, l - 1 \leq i \leq r, s_{l - 1} \leq s_i$$$). If we look at how ranges overlap: if they overlap partially, then we get pretty much the same as in the editorial: $$$s_{l_1 - 1} = s_{r_1} = s_{l_2 - 1} = s_{r_2}$$$. The equality $$$s_{l_1 - 1} = s_{l_2 - 1}$$$ is obtained with an $$$a \leq b \land b \leq a \implies a = b$$$ argument.
The idea of the solution is to have the ranges, do a sweep line from left to right, and try to store the equivalence classes, while also trying to compute the answer.
The way that I store the equivalence classes is with a DSU (the linear solution is obtained from the DSU one by figuring out that the DSU is actually useless) and a stack. For each equivalence class I store the following: the position of the last element, the position of the first element, the number of open ranges (used to figure out when you're done with an equivalence class and no longer need to store it) and the number of already processed elements (that is, the total length of the included sub-intervals). The classes in the stack are ordered, from bottom to top, increasingly by the position of their last element. When you open a range, you add it as a standalone equivalence class. When you close an interval, you merge all the equivalence classes from the stack that can be merged by the current interval: you pop from the stack as long as the last element of each class is included in the current range, and merge the popped classes. While popping, you also try to update the answer: suppose the last elements of the popped classes are $$$x_1, x_2, x_3, \ldots, x_n$$$; then you multiply by $$$f(x_2 - x_1 - e_1) \cdot f(x_3 - x_2 - e_2) \cdot \ldots \cdot f(x_n - x_{n - 1} - e_{n - 1})$$$, where $$$f(N)$$$ is the number of bracket sequences of length $$$N$$$ and $$$e_i$$$ is the number of erased elements from each equivalence class.
The count is used to see when you're done with an equivalence class for good. That is, after you have merged all the classes and decreased the count by 1, you decide whether to push the class back onto the stack or not, and then update the erased elements for the top element of the stack.
I think that this solution is correct, but I'm not 100% sure. When it comes to cases like "what if you have multiple ranges that share the same endpoints? Don't you want them sorted by the other end, or by their length, or something similar?", it doesn't matter, because if you insert the smaller element first, then you will merge it with the larger ones and they will get trimmed to the last element, and if you do it the opposite way, then you will add the smaller one to the answer and subtract it from the larger range.
Also, to precompute the function $$$f$$$, you can just compute the factorials, compute the inverse of the largest factorial, then derive all the other inverse factorials from it, everything in $$$O(maxN)$$$.
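For concreteness, a minimal sketch of that precomputation (assuming the modulus is 998244353 and MAXN = 600000; both are my placeholders, adjust to the problem's actual values):
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353; // assumption: the problem's modulus
ll pw(ll b, ll e) { // modular exponentiation
    ll r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}
int main() {
    const int MAXN = 600000; // assumption: large enough for the constraints
    vector<ll> fact(MAXN + 1), inv_fact(MAXN + 1);
    fact[0] = 1;
    for (int i = 1; i <= MAXN; ++i) fact[i] = fact[i - 1] * i % MOD;
    inv_fact[MAXN] = pw(fact[MAXN], MOD - 2); // one modular inverse...
    for (int i = MAXN; i >= 1; --i)           // ...then all the others in O(MAXN)
        inv_fact[i - 1] = inv_fact[i] * i % MOD;
    // f(N) = number of balanced bracket sequences of length N
    //      = Catalan(N/2) = N! / ((N/2)! * (N/2 + 1)!) for even N, 0 otherwise
    auto f = [&](int N) -> ll {
        if (N & 1) return 0;
        int h = N / 2;
        return fact[N] * inv_fact[h] % MOD * inv_fact[h + 1] % MOD;
    };
    printf("%lld\n", f(4)); // prints 2: ()() and (())
}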
I feel like I rambled here about a stack and a DSU because the underlying cases are cumbersome and I'm not entirely sure why my solution works, so if you understood what I was trying to say and have an easier way to explain it, or any other cases that should be mentioned, please let me know.
Here are the two accepted solutions:
In the solution of "1830A — Copil Copac Draws Trees", the spelling of "Intially" at the start of the third line is wrong; it should be "Initially".
Hey everyone, could anyone walk me through why the answer for this test case is expected to be 3 by the jury and not 2?
6
2 2 1 2 2 1
1 2 1 1 2 1
This is the 4th case in hidden testcase 2.