Sorry for the weak pretests on A :(. We hope you enjoyed the round nonetheless.
Idea: Apple_Method
Preparation: Apple_Method/oursaco
Analysis: lunchbox
There are at most $$$8$$$ positions of the knight that can attack a single cell. Therefore, we can find all $$$8$$$ positions that attack the king and the $$$8$$$ positions that attack the queen and count the number of positions that appear in both of these lists.
#include <bits/stdc++.h>
using namespace std;
int dx[4] = {-1, 1, -1, 1}, dy[4] = {-1, -1, 1, 1};
int main(){
int t; cin >> t;
for(int i = 0; i < t; i++){
int a, b; cin >> a >> b;
int x1, y1, x2, y2; cin >> x1 >> y1 >> x2 >> y2;
set<pair<int, int>> st1, st2;
for(int j = 0; j < 4; j++){
st1.insert({x1+dx[j]*a, y1+dy[j]*b});
st2.insert({x2+dx[j]*a, y2+dy[j]*b});
st1.insert({x1+dx[j]*b, y1+dy[j]*a});
st2.insert({x2+dx[j]*b, y2+dy[j]*a});
}
int ans = 0;
for(auto x : st1)
if(st2.find(x) != st2.end())
ans++;
cout << ans << '\n';
}
}
Idea: oursaco
Preparation: oursaco
Analysis: lunchbox
Let's sort array $$$a$$$. The answer for the largest element is $$$n-1$$$ because the score, which is $$$a_n$$$, cannot be smaller than any of the other elements. Now, consider the second largest element. The answer is at least $$$n-2$$$ because every element that is not greater than $$$a_{n-1}$$$ can be taken. Then, we check if the score is at least $$$a_n$$$. This inspires the following solution: first, we find the prefix sum $$$p$$$ of array $$$a$$$. We calculate the answer in decreasing order of $$$a_i$$$. To calculate the answer for an $$$a_i$$$, we find the largest $$$j$$$ such that $$$p_i\geq a_j$$$ and set the answer for $$$i$$$ equal to the answer of $$$j$$$.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
#define pb push_back
#define ff first
#define ss second
typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
typedef pair<ld, ld> pld;
const int INF = 1e9;
const ll LLINF = 1e18;
const int MOD = 1e9 + 7;
template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>;
inline ll ceil0(ll a, ll b) {
return a / b + ((a ^ b) > 0 && a % b);
}
void setIO() {
ios_base::sync_with_stdio(0); cin.tie(0);
}
int main(){
setIO();
int T;
cin >> T;
for(int tt = 1; tt <= T; tt++){
int n;
cin >> n;
pii arr[n + 1];
for(int i = 1; i <= n; i++) cin >> arr[i].ff, arr[i].ss = i;
sort(arr + 1, arr + n + 1);
int nxt[n + 1];
ll sum[n + 1];
int ans[n + 1];
nxt[0] = sum[0] = 0;
for(int i = 1; i <= n; i++){
if(nxt[i - 1] >= i){
nxt[i] = nxt[i - 1];
sum[i] = sum[i - 1];
} else {
sum[i] = sum[i - 1] + arr[i].ff;
nxt[i] = i;
while(nxt[i] + 1 <= n && sum[i] >= arr[nxt[i] + 1].ff){
nxt[i]++;
sum[i] += arr[nxt[i]].ff;
}
}
ans[arr[i].ss] = nxt[i];
}
for(int i = 1; i <= n; i++) cout << ans[i] - 1 << " ";
cout << endl;
}
}
Idea: lunchbox
Preparation: lunchbox
Analysis: lunchbox
If $$$k\geq 3$$$, the answer is equal to $$$0$$$ since after performing an operation on the same pair $$$(i, j)$$$ twice, performing an operation on the two new values (which are the same) results in $$$0$$$.
Therefore, let's consider the case for $$$1\leq k\leq 2$$$.
For $$$k=1$$$, it is sufficient to sort the array and output the minimum over all $$$i$$$ of $$$a_i$$$ and $$$a_{i+1} - a_i$$$.
For $$$k=2$$$, let's brute force the first operation. If the newly created value is $$$v$$$, then it is sufficient to find the smallest $$$a_i$$$ satisfying $$$a_i\geq v$$$ and greatest $$$a_i$$$ satisfying $$$a_i\leq v$$$ and relax the answer on $$$|a_i - v|$$$. Also, remember to consider the cases of performing no operation or one operation. This runs in $$$O(N^2\log N)$$$. There also exists a solution in $$$O(N^2)$$$ using a two pointers approach.
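For reference, here is a minimal sketch of the $$$O(N^2)$$$ two-pointer variant mentioned above (our own sketch, not the official code: it assumes the array is already sorted and that $$$d$$$ already holds the best answer using at most one operation; the official $$$O(N^2\log N)$$$ code follows below).
#include <bits/stdc++.h>
using namespace std;
// For a fixed i, v = a[i] - a[j] only decreases as j grows, so the index of the
// smallest element that is still >= v only moves left; one pointer per i suffices.
long long bestWithTwoOps(const vector<long long>& a, long long d) {
    int n = a.size();
    for (int i = 0; i < n; i++) {
        int p = n - 1; // smallest index with a[p] >= v, maintained as v decreases
        for (int j = 0; j < i; j++) {
            long long v = a[i] - a[j];
            while (p > 0 && a[p - 1] >= v) p--;
            if (a[p] >= v) d = min(d, a[p] - v); // nearest element above or equal to v
            if (p > 0) d = min(d, v - a[p - 1]); // nearest element below v
        }
    }
    return d;
}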
#include <bits/stdc++.h>
using namespace std;
#define int long long
signed main() {
int t;
cin >> t;
while (t--) {
int n, k;
cin >> n >> k;
vector<int> a(n);
for (int i = 0; i < n; i++) cin >> a[i];
if (k >= 3) {
cout << 0 << endl;
continue;
}
sort(begin(a), end(a));
int d = a[0];
for (int i = 0; i < n - 1; i++) d = min(d, a[i + 1] - a[i]);
if (k == 1) {
cout << d << endl;
continue;
}
for (int i = 0; i < n; i++) for (int j = 0; j < i; j++) {
int v = a[i] - a[j];
int p = lower_bound(begin(a), end(a), v) - begin(a);
if (p < n) d = min(d, a[p] - v);
if (p > 0) d = min(d, v - a[p - 1]);
}
cout << d << endl;
}
}
1904D1 - Set To Max (Easy Version) / 1904D2 - Set To Max (Hard Version)
Idea: oursaco
Preparation: oursaco
Analysis: oursaco
Can we reduce the number of intervals we want to apply an operation on?
What is the necessary condition to perform an operation on an interval?
If $$$b_i < a_i$$$ for any $$$i$$$, then it is clearly impossible.
In order for $$$a_i$$$ to become $$$b_i$$$, $$$i$$$ must be contained by an interval that also contains a $$$j$$$ where $$$a_j = b_i$$$. Note that if there is a triple $$$i \le j < k$$$ where $$$a_j = a_k = b_i$$$, then it is never optimal to apply the operation on interval $$$[i, k]$$$, since applying the operation on interval $$$[i, j]$$$ will be sufficient. Thus, for $$$i$$$ we only need to consider the closest $$$a_j = b_i$$$ to the right or left of $$$i$$$.
Let's find the necessary conditions for us to apply an operation on the interval $$$[i, j]$$$. First of all, $$$a_k \le b_i$$$ for all $$$i \le k \le j$$$. Second, $$$b_k \ge b_i$$$ for all $$$i \le k \le j$$$. It turns out these conditions are also sufficient, since we can apply the operations in increasing order of $$$b_i$$$ without them interfering with each other. If we check that for every $$$i$$$ there exists an interval $$$[i, j]$$$ or $$$[j, i]$$$ that satisfies the necessary conditions, then there will exist a sequence of operations to transform $$$a$$$ into $$$b$$$. Checking the conditions can be done with brute force for D1, or using monotonic stacks or segment trees for D2.
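For D1, the brute-force check mentioned above can look like the following sketch (the helper name canTransform and the 0-indexing are ours; the official code below is the D2 monotonic-stack version).
#include <bits/stdc++.h>
using namespace std;
// Position i is fixable iff some j with a[j] == b[i] is reachable from i while
// every position k in between satisfies a[k] <= b[i] and b[k] >= b[i].
bool canTransform(const vector<int>& a, const vector<int>& b) {
    int n = a.size();
    for (int i = 0; i < n; i++) {
        if (a[i] > b[i]) return false;   // b[i] < a[i] is immediately impossible
        bool ok = (a[i] == b[i]);
        for (int j = i; j < n && !ok && a[j] <= b[i] && b[j] >= b[i]; j++) ok = (a[j] == b[i]);
        for (int j = i; j >= 0 && !ok && a[j] <= b[i] && b[j] >= b[i]; j--) ok = (a[j] == b[i]);
        if (!ok) return false;
    }
    return true;
}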
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
#define pb push_back
#define ff first
#define ss second
typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
typedef pair<ld, ld> pld;
const int INF = 1e9;
const ll LLINF = 1e18;
const int MOD = 1e9 + 7;
template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>;
inline ll ceil0(ll a, ll b) {
return a / b + ((a ^ b) > 0 && a % b);
}
void setIO() {
ios_base::sync_with_stdio(0); cin.tie(0);
}
int main(){
setIO();
int T;
cin >> T;
for(int tt = 1; tt <= T; tt++){
int n;
cin >> n;
int a[n + 1], b[n + 1];
for(int i = 1; i <= n; i++) cin >> a[i];
for(int i = 1; i <= n; i++) cin >> b[i];
bool val[n + 1];
memset(val, false, sizeof(val));
for(int t = 0; t < 2; t++){
int prvb[n + 1]; //prev smaller
int nxta[n + 1]; //next greater
stack<pii> s;
s.push({INF, n + 1});
for(int i = n; i >= 1; i--){
while(s.top().ff <= a[i]) s.pop();
nxta[i] = s.top().ss;
s.push({a[i], i});
}
while(!s.empty()) s.pop();
s.push({0, 0});
for(int i = 1; i <= n; i++){
while(s.top().ff >= b[i]) s.pop();
prvb[i] = s.top().ss;
s.push({b[i], i});
}
int m[n + 1];
memset(m, 0, sizeof(m));
for(int i = 1; i <= n; i++){
m[a[i]] = i;
if(a[i] <= b[i] && m[b[i]]) val[i] |= prvb[i] < m[b[i]] && nxta[m[b[i]]] > i;
}
reverse(a + 1, a + n + 1);
reverse(b + 1, b + n + 1);
reverse(val + 1, val + n + 1);
}
bool ans = true;
for(int i = 1; i <= n; i++) ans &= val[i];
cout << (ans ? "YES" : "NO") << endl;
}
}
Idea: Apple_Method
Preparation: oursaco
Editorial 1:
Analysis: oursaco
Thanks to errorgorn for the solution!
The solution doesn't involve virtual trees
What is an easy way to represent the tree?
Consider the Euler tour of the tree where $$$in[u]$$$ is the entry time of each node and $$$out[u]$$$ is the exit time. The interval $$$[in[u], out[u]]$$$ corresponds to the subtree of $$$u$$$.
Removing a node is equivalent to blocking some intervals on the Euler tour. There are two cases for a removed node $$$a$$$ with respect to the query node $$$x$$$. If $$$a$$$ is an ancestor of $$$x$$$, then the set of reachable nodes is reduced to the interval $$$[in[nxt_a], out[nxt_a]]$$$, where $$$nxt_a$$$ is the child of $$$a$$$ on the path from $$$a$$$ to $$$x$$$. This is equivalent to blocking the intervals $$$[0, in[nxt_a])$$$ and $$$(out[nxt_a], n - 1]$$$. If $$$a$$$ is not an ancestor of $$$x$$$, then the interval $$$[in[a], out[a]]$$$ is blocked.
Let's build a lazy segment tree on the Euler tour of the tree. Each leaf will correspond to the depth of a node of the tree. We can re-root the tree from $$$a$$$ to its child $$$b$$$ by subtracting one from all nodes in the range $$$[in[b], out[b]]$$$ and adding one to all other nodes. Thus, we can traverse the tree while re-rooting and process our queries offline. When the query node $$$x$$$ is the root, block all the necessary intervals and then find the maximum value in the segment tree.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
#define pb push_back
#define ff first
#define ss second
typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
typedef pair<ld, ld> pld;
const int INF = 1e9;
const ll LLINF = 1e18;
const int MOD = 1e9 + 7;
template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>;
inline ll ceil0(ll a, ll b) {
return a / b + ((a ^ b) > 0 && a % b);
}
void setIO() {
ios_base::sync_with_stdio(0); cin.tie(0);
}
const int MAXN = 2e5;
const int MAXQ = 2e5;
int seg[4*MAXN + 5];
int tag[4*MAXN + 5];
int tim;
vector<int> g[MAXN + 5];
int in[MAXN + 5], out[MAXN + 5];
void push_down(int cur){
if(!tag[cur]) return;
for(int i = cur*2 + 1; i <= cur*2 + 2; i++){
seg[i] += tag[cur];
tag[i] += tag[cur];
}
tag[cur] = 0;
}
void update(int l, int r, int v, int ul = 0, int ur = tim - 1, int cur = 0){
if(l <= ul && ur <= r){
seg[cur] += v;
tag[cur] += v;
return;
}
push_down(cur);
int mid = (ul + ur)/2;
if(l <= mid) update(l, r, v, ul, mid, cur*2 + 1);
if(r > mid) update(l, r, v, mid + 1, ur, cur*2 + 2);
seg[cur] = max(seg[cur*2 + 1], seg[cur*2 + 2]);
}
int query(int l, int r, int ul = 0, int ur = tim - 1, int cur = 0){
if(l <= ul && ur <= r) return seg[cur];
push_down(cur);
int mid = (ul + ur)/2;
if(r <= mid) return query(l, r, ul, mid, cur*2 + 1);
if(l > mid) return query(l, r, mid + 1, ur, cur*2 + 2);
return max(query(l, r, ul, mid, cur*2 + 1), query(l, r, mid + 1, ur, cur*2 + 2));
}
void dfs1(int x, int p = 0){
in[x] = tim++;
for(int i : g[x]){
if(i == p) continue;
dfs1(i, x);
}
out[x] = tim - 1;
}
vector<pair<int, vector<int>>> que[MAXN + 5];
int nxt[MAXN + 5];
int ans[MAXQ + 5];
int n, q;
void dfs2(int x, int p = 0){
for(auto &i : que[x]){
vector<pii> skip;
bool found = false;
for(int j : i.ss){
if(j == x){
found = true;
break;
}
if(in[j] <= in[x] && in[x] <= out[j]){
skip.pb({0, in[nxt[j]] - 1});
skip.pb({out[nxt[j]] + 1, tim - 1});
} else {
skip.pb({in[j], out[j]});
}
}
if(found) continue;
sort(skip.begin(), skip.end());
int prv = 0;
for(pii j : skip){
if(prv < j.ff) ans[i.ff] = max(ans[i.ff], query(prv, j.ff - 1));
prv = max(prv, j.ss + 1);
}
if(prv <= tim - 1) ans[i.ff] = max(ans[i.ff], query(prv, tim - 1));
}
update(0, tim - 1, 1);
for(int i : g[x]){
if(i == p) continue;
update(in[i], out[i], -2);
nxt[x] = i;
dfs2(i, x);
update(in[i], out[i], 2);
}
update(0, tim - 1, -1);
}
int main(){
setIO();
cin >> n >> q;
for(int i = 0; i < n - 1; i++){
int a, b;
cin >> a >> b;
g[a].pb(b);
g[b].pb(a);
}
tim = 0;
dfs1(1);
for(int i = 0; i < q; i++){
int x, k;
cin >> x >> k;
vector<int> v(k);
for(int j = 0; j < k; j++) cin >> v[j];
que[x].pb({i, v});
}
for(int i = 2; i <= n; i++) update(in[i], out[i], 1);
dfs2(1);
for(int i = 0; i < q; i++){
cout << ans[i] << endl;
}
}
Editorial 2:
Analysis: willy108
In a tree, one of the farthest nodes from some node $$$x$$$ is one of the two endpoints of the diameter.
Let's try to find the diameter of the connected subgraph node $$$x$$$ is in after the nodes $$$a_{1 \dots k}$$$ are removed.
Consider an Euler tour of the tree and order the nodes by their entry times. When $$$k$$$ nodes are removed, the remaining nodes form $$$O(k)$$$ contiguous intervals in the tour.
Let's build a segtree/sparse table where each node stores the diameter (as a pair of nodes) for the nodes with $$$in$$$ values in the range $$$[l, r]$$$. To merge two diameters, we can enumerate all $$$4 \choose 2$$$ ways to pick the new diameter and take the best one.
To answer a query, we can first generate a list of banned intervals (just like in solution 1) and use that list to generate the list of unbanned intervals. Then we can query our segtree for the diameter of each of the ranges. Finally, we can combine the answers of the separate queries to obtain the diameter of the connected subgraph. We know the farthest node from node $$$x$$$ is one of the two endpoints of that diameter, so it suffices to manually check the distances from $$$x$$$ to those two nodes.
Final complexity is $$$O(n \log^2 n + \sum k \log n)$$$.
#include <bits/stdc++.h>
#define sz(x) ((int)(x.size()))
#define all(x) x.begin(), x.end()
#define pb push_back
#define eb emplace_back
const int MX = 2e5 +10, int_max = 0x3f3f3f3f;
using namespace std;
//lca template start
vector<int> dep, sz, par, head, tin, tout, tour;
vector<vector<int>> adj;
int n, ind, q;
void dfs(int x, int p){
sz[x] = 1;
dep[x] = dep[p] + 1;
par[x] = p;
for(auto &i : adj[x]){
if(i == p) continue;
dfs(i, x);
sz[x] += sz[i];
if(adj[x][0] == p || sz[i] > sz[adj[x][0]]) swap(adj[x][0], i);
}
if(p != 0) adj[x].erase(find(all(adj[x]), p));
}
void dfs2(int x, int p){
tour[ind] = x;
tin[x] = ind++;
for(auto &i : adj[x]){
if(i == p) continue;
head[i] = (i == adj[x][0] ? head[x] : i);
dfs2(i, x);
}
tout[x] = ind;
}
int k_up(int u, int k){
if(dep[u] <= k) return -1;
while(k > dep[u] - dep[head[u]]){
k -= dep[u] - dep[head[u]] + 1;
u = par[head[u]];
}
return tour[tin[u] - k];
}
int lca(int a, int b){
while(head[a] != head[b]){
if(dep[head[a]] > dep[head[b]]) swap(a, b);
b = par[head[b]];
}
if(dep[a] > dep[b]) swap(a, b);
return a;
}
int dist(int a, int b){
return dep[a] + dep[b] - 2*dep[lca(a, b)];
}
//lca template end
//segtree template start
#define ff first
#define ss second
int dist(pair<int, int> a){
return dist(a.ff, a.ss);
}
pair<int, int> merge(pair<int, int> a, pair<int, int> b){
auto p = max(pair(dist(a), a), pair(dist(b), b));
for(auto x : {a.ff, a.ss}){
for(auto y : {b.ff, b.ss}){
if(x == 0 || y == 0) continue;
p = max(p, pair(dist(pair(x, y)), pair(x, y)));
}
}
return p.ss;
}
pair<int, int> mx[MX*4];
#define LC(k) (2*k)
#define RC(k) (2*k +1)
void update(int p, int v, int k, int L, int R){
if(L + 1 == R){
mx[k] = {tour[p], tour[p]};
return ;
}
int mid = (L + R)/2;
if(p < mid) update(p, v, LC(k), L, mid);
else update(p, v, RC(k), mid, R);
mx[k] = merge(mx[LC(k)], mx[RC(k)]);
}
void query(int qL, int qR, vector<pair<int, int>>& ret, int k, int L, int R){
if(qR <= L || R <= qL) return ;
if(qL <= L && R <= qR){
ret.push_back(mx[k]);
return ;
}
int mid = (L + R)/2;
query(qL, qR, ret, LC(k), L, mid);
query(qL, qR, ret, RC(k), mid, R);
}
//segtree template end
int query(vector<int> arr, int x){
vector<pair<int, int>> banned, ret;
for(int u : arr){
if(lca(u, x) == u){
u = k_up(x, dep[x] - dep[u] - 1);
banned.push_back({0, tin[u]});
banned.push_back({tout[u], n});
}else{
banned.push_back({tin[u], tout[u]});
}
}
sort(all(banned), [&](pair<int, int> a, pair<int, int> b){
return (a.ff < b.ff) || (a.ff == b.ff && a.ss > b.ss);
});
vector<pair<int, int>> tbanned; //remove nested intervals
int mx = 0;
for(auto [a, b] : banned){
if(b <= mx) continue;
else if(a != b){
tbanned.pb({a, b});
mx = b;
}
}
banned = tbanned;
int tim = 0;
for(auto [a, b] : banned){
if(tim < a)
query(tim, a, ret, 1, 0, n);
tim = b;
}
if(tim < n)
query(tim, n, ret, 1, 0, n);
pair<int, int> dia = pair(x, x);
for(auto p : ret) dia = merge(dia, p);
int ans = max(dist(x, dia.ff), dist(x, dia.ss));
return ans;
}
void solve(){
cin >> n >> q;
dep = sz = par = head = tin = tout = tour = vector<int>(n+1, 0);
adj = vector<vector<int>>(n+1);
for(int i = 1; i<n; i++){
int a, b;
cin >> a >> b;
adj[a].push_back(b);
adj[b].push_back(a);
}
dfs(1, 0);
head[1] = 1;
dfs2(1, 0);
for(int i = 1; i<=n; i++){
update(tin[i], dep[i], 1, 0, n);
}
for(int i = 1; i<=q; i++){
int x, k;
cin >> x >> k;
vector<int> arr(k);
for(int& y : arr) cin >> y;
cout << query(arr, x) << "\n";
}
}
signed main(){
cin.tie(0) -> sync_with_stdio(0);
int T = 1;
//cin >> T;
for(int i = 1; i<=T; i++){
//cout << "Case #" << i << ": ";
solve();
}
return 0;
}
Idea: Apple_Method
Preparation: oursaco
Analysis: Apple_Method
Can we represent the conditions as a graph?
Let's rewrite the condition that node $$$a$$$ must be smaller than node $$$b$$$ as a directed edge from $$$a$$$ to $$$b$$$. Then, we can assign each node a value based on the topological sort of this new directed graph. If this directed graph has a cycle, it is clear that there is no way to order the nodes.
With this in mind, we can try to construct a graph that would have these properties. Once we have the graph, we can topologically sort it to find the answer.
For now, let's consider the problem if it only had type 1 requirements (type 2 requirements can be done very similarly).
Thus, the problem reduces to "given a path and a node, add a directed edge from the node to every node in that path." To do this, we can use binary lifting. For each node $$$a$$$, create $$$O(\log n)$$$ dummy nodes, the $$$i$$$th of which represents the minimum value on the path between node $$$a$$$ and the $$$2^i$$$th parent of $$$a$$$. Now, we can draw a directed edge from the $$$i$$$th dummy node of $$$a$$$ to the $$$(i-1)$$$th dummy node of $$$a$$$ and the $$$(i-1)$$$th dummy node of the $$$2^{i-1}$$$th parent of $$$a$$$.
Now, to add an edge from a node to every node on a vertical path of the tree, we can repeatedly add an edge from that node to the dummy node covering the largest prefix of the remaining path, exactly like a binary-lifting jump. This will add $$$O(\log n)$$$ edges per requirement.
The final complexity is $$$O((n+m)\log n)$$$ time and $$$O((n+m)\log n)$$$ memory.
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx,avx2,fma")
#pragma GCC target("sse4,popcnt,abm,mmx,tune=native")
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
#define pb push_back
#define ff first
#define ss second
typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
typedef pair<ld, ld> pld;
const int INF = 1e9;
const ll LLINF = 1e18;
const int MOD = 1e9 + 7;
template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>;
inline ll ceil0(ll a, ll b) {
return a / b + ((a ^ b) > 0 && a % b);
}
void setIO() {
ios_base::sync_with_stdio(0); cin.tie(0);
}
const int MAXN = 200'000;
const int LG = 18;
const int MAXM = 200'000;
vector<int> g[MAXN + 5];
int sz[MAXN + 5], in[MAXN + 5], par[MAXN + 5], depth[MAXN + 5], head[MAXN + 5], tim;
int n, m;
void dfs1(int x, int p){
sz[x] = 1;
for(int &i : g[x]){
if(i == p) continue;
dfs1(i, x);
sz[x] += sz[i];
if(g[x][0] == p || sz[i] > sz[g[x][0]]) swap(g[x][0], i);
}
}
void dfs2(int x, int p){
in[x] = tim++;
par[x] = p;
depth[x] = depth[p] + 1;
for(int i : g[x]){
if(i == p) continue;
head[i] = (i == g[x][0] ? head[x] : i);
dfs2(i, x);
}
}
const int MAXSZ = MAXN + 2*MAXN*LG;
int down[LG][MAXN + 5];
int up[LG][MAXN + 5];
vector<int> dag[MAXSZ+ 5];
int lg[MAXN + 5];
void upd(int l, int r, int x, int t){
if(l <= in[x] && in[x] <= r){
if(l < in[x]) upd(l, in[x] - 1, x, t);
if(in[x] < r) upd(in[x] + 1, r, x, t);
} else {
int sz = lg[r - l + 1];
if(t == 2){
dag[up[sz][l]].pb(x);
dag[up[sz][r - (1 << sz) + 1]].pb(x);
} else {
dag[x].pb(down[sz][l]);
dag[x].pb(down[sz][r - (1 << sz) + 1]);
}
}
}
//1 is down, 2 is up
void draw(int a, int b, int c, int t){
while(head[a] != head[b]){
if(depth[head[a]] > depth[head[b]]) swap(a, b);
upd(in[head[b]], in[b], c, t);
b = par[head[b]];
}
if(depth[a] > depth[b]) swap(a, b);
upd(in[a], in[b], c, t);
}
bool vis[MAXSZ + 5], stk[MAXSZ + 5];
vector<int> ord;
bool fail;
int ind;
void dfs3(int x){
if(fail) return;
vis[x] = stk[x] = true;
for(int i : dag[x]){
if(i == x) continue;
if(!vis[i]){
dfs3(i);
} else if(stk[i]){
fail = true;
break;
}
}
stk[x] = false;
if(x <= n) ord.pb(x);
}
int main(){
setIO();
cin >> n >> m;
lg[1] = 0;
for(int i = 2; i <= n; i++) lg[i] = lg[i/2] + 1;
for(int i = 0; i < n - 1; i++){
int a, b;
cin >> a >> b;
g[a].pb(b);
g[b].pb(a);
}
tim = 0;
dfs1(1, 1);
head[1] = 1;
dfs2(1, 1);
for(int i = 1; i <= n; i++) down[0][in[i]] = up[0][in[i]] = i;
ind = n + 1;
for(int i = 1; i < LG; i++){
for(int j = 0; j + (1 << i) <= n; j++){
down[i][j] = ind++;
up[i][j] = ind++;
dag[down[i][j]].pb(down[i - 1][j]);
dag[down[i][j]].pb(down[i - 1][j + (1 << (i - 1))]);
dag[up[i - 1][j]].pb(up[i][j]);
dag[up[i - 1][j + (1 << (i - 1))]].pb(up[i][j]);
}
}
for(int i = 0; i < m; i++){
int t, a, b, c;
cin >> t >> a >> b >> c;
draw(a, b, c, t);
}
fail = false;
for(int i = 1; i <= n; i++){
if(!vis[i]){
dfs3(i);
if(fail) break;
}
}
if(fail){
cout << -1 << endl;
return 0;
}
reverse(ord.begin(), ord.end());
int ans[n + 1];
for(int i = 0; i < ord.size(); i++){
ans[ord[i]] = i + 1;
}
for(int i = 1; i <= n; i++) cout << ans[i] << " ";
cout << endl;
}
i love oursaco
i love oursaco
Great Idea to express that in the username xd
after this contest i realised, that chess tasks are not for me)) i solved B easily, mb im stupid for A idk
Lol, I could say the same about math tasks. My suggestion would be for you to go onto chess.com and practice in your free time :)
i don't think the problem is that you are not good at chess enough, this is really just a brain teaser, i might suggest instead to get familiar with those type of problems, a good book about this is "The art and craft of mathematical problem solving", an interesting one full of problems, although it barely talks about math, just skip the math ones since the book structure is designed for so :3
and who can explain why this 236632847 doesn't work?
Try this sample:
Correct answer would be 1 instead of 2 (from your solution).
The reason is that when you use upper_bound, it gives you the first element that is considered greater than your number. You should get the prev of that element for calculating the difference. In the case above, 1006 - 1000 gives you 6 and upper_bound gives you 8, but you should look for 5.
thanks you man !
can you provide and explain your B solution?
i resigned from the chess
Got TLE in B on system testing. Did two wrong submissions also on B. I was hoping to be blue today. But missed the chance.
Downvotes shows hate or what?
What's wrong with this solution for A? 236554300
236585544 is my solution with same approach
Don't do that: checking all the cases by hand makes your code long, and missing cases can lead to WA. Instead, you can make 2 vectors Move_x, Move_y, then loop from 0 to 7 (a knight can move to 8 cells) from both the Queen and the King; if a cell can be reached from both the King and the Queen, increment your Ans. You can refer to my code: 236529329
Yes, solved it like that later but thought casework would work as well. I missed a few cases in brute force case solution which I have to check
Pretests were weak for problem D too.
https://mirror.codeforces.com/blog/entry/123079?#comment-1092107
I liked D2 on how simple its implementation really was. Even though I implemented sparse table and range update segment trees, realizing it only required monotonic stack was pretty cool.
So I think you'll be likely to dislike problems E and F (hard and long code).
Not really. In general, I like problems that have harder logic and lesser implementation.
However even if the implementation is long, if the question is very unique and beautiful, I love it regardless.
Can someone hack my O(n^2) solution?
https://mirror.codeforces.com/contest/1904/submission/236581716
can anyone tell me why did I get TLE here ? https://mirror.codeforces.com/contest/1904/submission/236554358.
I think the complexity here is O(N*N*log(N)). Am I wrong?
You used set and map there. So on the worst case time complexity is O(N*N*log(N)*log(N))
I used them separately.
Quite strange that commenting out the map gets it accepted because the map itself should run in O(N*logN)
That's why I'm asking. I really didn't understand the issue in that part of the code.
Should the time complexity not be O(N^2 logN). As logN is required to find the upper/lower bound in the set for an element. Why there is an additional N factor in TC. Could you please explain?
Oh my bad, I thought he runs the pointer back to the starting point after using lower_bound. Anyway, his code is correct, it just needs a few minor optimizations. Here is the fixed code: 236638928
Thanks
Thanks for contest
Can someone try hacking this solution to D2?
This test will TLE, courtesy of porker2008
porker2008 be like: Your wish, my command mate!!
nice problems
Can Someone explain where my code is wrong for Problem B https://mirror.codeforces.com/contest/1904/submission/236571395
Loved this competition! My solution to problem C first failed on system testing, but then suddenly the verdict changed) Also, my friend’s solution failed on the same test, but it changed the verdict after the final results! Can someone please explain how this could be, because I just don't understand how system testing works on Codeforces. Sorry for my poor language...)
Something like that happened to me as well. Not sure what happened there
can somebody help me with the reason for the TLE on D2 (hard version) on test case 25? Here's my submission — https://mirror.codeforces.com/contest/1904/submission/236593367
that is n^2
Cool Contest!
For D1/D2, I had a solution I'd like to share.
Firstly, if any position exists such that a[i] > b[i], then obviously there's no solution.
Then, I iterated over the numbers from 1 to n, and for each i, I tried to find all the intervals centered around a position j such that a[j] = i. I then tried to extend it as much to the left and right as possible without messing any conditions up. I found that if you did this for all numbers from 1 to n in increasing order, then you'll either construct a valid solution or find that it's impossible to do so. You can optimize this a bit for D2 by binary searching how far to the left and right you'll extend your current position to and using range assignment and range query operations to check if the intervals you're using are okay (You can do that with a lazy segment tree).
In essence, it's similar to the official solution except you actually construct the solution in this case instead of determining if it's possible. Here's my submission: 236596139
It barely passes in time lol
i solved D1/D2 in this way, but my code is shorter than yours
It would be better if F offered stronger samples.
I finished coding and passed the samples in 15 min but got WA on #3, and failed to debug it in the remaining 30 min.
Anyway, A~D are enjoyable.
Nice problems. D2 systests were pretty weak though.
In problem E, how are we handling the case when path is outside of current subtree?
If you reroot the tree to $$$x$$$ correctly, then each leaf in the segment tree will store the distance of a node to $$$x$$$ and this isn't limited to just $$$x$$$'s subtree. Doing a max query on all the non blocked intervals will then find the maximum distance to all reachable nodes.
Was the TL and ML for problem F a little tight in this contest? I see a lot of submissions here which have taken > 3 seconds.
I have roughly the same solution as the editorial, except that my graph is built with edges denoting >= relationships (instead of > relationships), then I find the SCCs of the graph, and the remainder of the construction is roughly the same. Or is there something else extremely slow about my code? I could get it to pass very narrowly with pragmas though.
Maybe to solve the memory problem, you should use tree chain decomposition (heavy-light decomposition).
But the time complexity becomes O(log^2 n), however.
I ran into a similar problem. I tried implementing the editorial's idea and got TLE (I later cloned the contest and checked that it passed in 4788ms), then changed how I did the toposort and barely got AC.
What seems to have happened is that they only had 1 AC solution that passed in 2s (1793ms in my testing) and used the standard "make the TL 2x the model solution", but the way they get the nodes to draw the edges is particularly smart, so it's very easy to make something inefficient compared to it and end up with double or triple the constant factor.
By the way I did some testing with your code, it passed in 5038ms and 567200KB. Changing the bound for g from MXN * MXK * 3 to MXN * MXK * 2+20 makes it pass the memory limit (482900KB) and changing its type from vector to basic_string also makes it easily pass TL (2761ms).
Thanks a lot for looking into this!
I had no idea that there was such a substantial improvement possible with basic_string instead of vectors, thanks for pointing that out too!
E is a great problem, thank you for such a masterpiece!
https://mirror.codeforces.com/contest/1904/submission/236574245
why does my solution give a wrong answer for B?
236621743 Can someone please give me any idea to improve this code, as I am getting time limit exceeded on test 3 of B.
orz
Can someone help me understand the solution for D. Why does there always exist a valid sequence of operations if we can satisfy the conditions for each element independently?
How to solve problem E if the queries given online?
The solution described in "Editorial 2" is online.
Nice Solutions.
Can someone help me in C? The first test gives me MLE and WA, but on my computer it gives the correct answer. 236711122
Maybe you can try resubmit.
I tested your solution and it is working fine (there is a bug in your solution though)
sorry to bother you sir but can you explain why my code is giving wrong answer 281355704 edit :: got it thank you!
In Problem F, can anyone explain how they are achieving this exactly: "Given a path and a node, add a directed edge from the node to every node in that path"? I can't understand how they can add edges to (or from) all the nodes in the path, won't that take linear time?
Also, in this sentence "we can repeatedly add an edge from that node to the largest node we can" what does "largest" node mean?
Imagine creating a binary lifting on the tree. You have $$$bin[i][0] = par[i]$$$, $$$bin[i][1] = par[par[i]]$$$, etc. Here, $$$bin[i][0]$$$ is an auxiliary node that has edges to $$$i$$$ and $$$par[i]$$$. $$$bin[i][1]$$$ is an auxiliary node that has edges to $$$bin[i][0]$$$ and $$$bin[par[i]][0]$$$. Note that this is equivalent to $$$bin[i][1]$$$ having edges to $$$i$$$, $$$par[i]$$$, and $$$par[par[i]]$$$. Similar to how you always jump up the largest power of 2 that fits when you perform binary lifting, you will always add an edge to the auxiliary node with the largest power of 2 that fits. This will always require at most $$$\log N$$$ edge additions.
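For illustration, a rough sketch of how such auxiliary nodes could be wired up (all names here are hypothetical; unlike the overlapping blocks described above, this sketch uses non-overlapping blocks of exactly $$$2^j$$$ nodes so that a vertical path decomposes exactly, and it assumes anc[0][v] = parent of v and dep[v] are precomputed by a DFS, with anc[0][root] = root).
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 200'005, LG = 18;
int anc[LG][MAXN], dep[MAXN];
int aux[LG][MAXN];            // aux[j][v] covers v and its next 2^j - 1 ancestors; aux[0][v] = v
vector<int> dag[MAXN * LG];   // edge u -> w means "u must get a smaller value than w"
void buildAux(int n) {
    for (int v = 1; v <= n; v++) aux[0][v] = v;
    int id = n;
    for (int j = 1; j < LG; j++)
        for (int v = 1; v <= n; v++) {
            anc[j][v] = anc[j - 1][anc[j - 1][v]];
            aux[j][v] = ++id;
            // a constraint on a 2^j block is forwarded to its two 2^(j-1) halves
            dag[aux[j][v]].push_back(aux[j - 1][v]);
            dag[aux[j][v]].push_back(aux[j - 1][anc[j - 1][v]]);
        }
}
// "c must be smaller than every node on the vertical path from a up to its ancestor b":
// greedily take the largest block that still fits, O(log n) added edges in total.
// Blocks that would spill above the root are never referenced by a valid decomposition.
void addPathEdges(int c, int a, int b) {
    int len = dep[a] - dep[b] + 1;
    for (int j = LG - 1; j >= 0; j--)
        if (len >> j & 1) {
            dag[c].push_back(aux[j][a]);
            a = anc[j][a];
        }
}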
There is also an $$$O(n \log n)$$$ online solution for E (although with a high constant).
Take the Euler tour of the tree (the version where a node is added again each time you return to it, exactly like the one described at https://cp-algorithms.com/graph/lca.html).
Lemma: the length of the diameter of a tree is $$$\max(height[a] - 2\cdot height[b] + height[c])$$$ for $$$a \leq b \leq c$$$, being $$$height[i]$$$ the height of the $$$i-th$$$ node in the euler tour. In that case, if $$$v[i]$$$ is the $$$i-th$$$ node in the euler tour, the endpoints of the diameter will be $$$v[a]$$$ and $$$v[c]$$$ and their lca will be $$$v[b]$$$. Proof is an exercise to the reader ;)
Using that lemma, we can ban segments like the solution 1 and if $$$x$$$ is the source node of our query, the answer will be $$$\max(height[a] - 2\cdot height[b] + height[c])$$$ for $$$a \leq b \leq c = x$$$ or for $$$x = a \leq b \leq c$$$, which can be done with a segtree. You just need to remember to rollback your updates after every query. Code: https://mirror.codeforces.com/contest/1904/submission/236721097
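A small sanity check of the lemma (our own toy example, not part of the original comment): for the path $$$1 - 2 - 3$$$ rooted at $$$1$$$, the Euler tour is $$$v = [1, 2, 3, 2, 1]$$$ with $$$height = [0, 1, 2, 1, 0]$$$. Taking positions $$$a = 1 \leq b = 1 \leq c = 3$$$ gives $$$height[a] - 2\cdot height[b] + height[c] = 0 - 0 + 2 = 2$$$, which is exactly the diameter, with endpoints $$$v[1] = 1$$$ and $$$v[3] = 3$$$ and LCA $$$v[1] = 1$$$; no other choice of positions does better.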
Hi can someone help me in figuring out why my two pointer approach for problem C is failing?
https://mirror.codeforces.com/contest/1904/submission/236751570
I've sorted the array and I'm finding the sum which is closest to the ith element, and then the min of all their differences is the answer. Can anyone point out the issue in my logic?
You can try this case.
The correct answer is 2. The first operation is to get 501 from 1000 - 499. The second operation is to get 2 from 501 - 499.
Thanks!! I missed this case.
In problem C should I pick different (i,j) each time?
No, there is no condition on taking a different (i, j) each time.
Or can someone share their two pointer approach for problem c?
Your two pointer approach is mostly correct except for two bugs.
Check out 236828774 for reference.
The direction of the edges in the explanation of the solution for F is backwards.
can someone please explain what the |a| notation means in problem C? Is it the length of the array? If so, why doesn't sorting the array in this problem violate the condition "must pick some (i,j) such that 1≤i<j≤|a|"?
You are correct, |a| means the length of the array a.
Essentially, in each operation you can pick two different positions (their values may be equal) in the array, compute the absolute difference, and put the result back into the array.
So the order of the elements in the array does not matter and you are free to reorder the array (including sorting it).
Got it, thank you very much for the clarification!
Can anyone explain to me, in a simple way, how the edges are created in problem F, and why they are created in that way (thought process)?
Why do we need to find the smallest ai satisfying ai≥v and the greatest ai satisfying ai≤v in C? Why can't we go over all combinations of dif[i]-a[j] (where dif is the difference array of a, of length n-1)?
failed submission : https://mirror.codeforces.com/contest/1904/submission/236901057
Hi thanks for this contest and I hope this is not necro-posting but for $$$B$$$, don't you think it is enough to check if $$$p_i \geq a_{i+1}$$$ and then set the answer of $$$i$$$ equal to the answer of $$${i + 1}$$$? I have an implementation which you can check out 237039030.
My submission to problem F. Getting a TLE, unable to figure out why. Its similar to others', like MridulAhi (link). Can someone help?
good D2 and E
Could someone help to find what's wrong with my code https://mirror.codeforces.com/contest/1904/submission/237279390?
Can someone tell me why we multiplied by 4001 in this solution
https://mirror.codeforces.com/contest/1904/submission/236534459
It's not "multiply $$$4001$$$". As the official solution we need
set<pair<int,int>>
. But in fact we can useset
, because every point with small $$$x, y$$$ can be "hashed" by $$$1000x + y$$$. Why it is true? Ok, let $$$(1000x + y)$$$ = $$$(1000z + t)$$$. Then $$$(y - t) = 0 \bmod 1000$$$. But if $$$y, t \leq 100$$$, then $$$y = t$$$ and so $$$x = z$$$. So in fact we just need to use $$$(x, y) <-> (k * x + y)$$$ for a big $$$k$$$.why does this solution gives TLE on problem C 237622851
Can someone tell me why, in problem 3, this code is not passing test case 35 of test case 2. https://mirror.codeforces.com/contest/1904/submission/237695706
I think it is because your num might not be initialized for some test cases, so it is reading the old value from the last test case.
Removing it will now TLE on test 4: 237886642
This is probably because you need to sort and erase elements from the vector for each iteration of b[]. Slightly changing your code without changing your algorithm can get it accepted: 237887308
Thanks a lot porker2008 for solving my query.
Problem B
Can anybody explain to me why the last sentence of the tutorial is correct? If we have a = [3, 8, 4, 5], then sorted it is going to be [3, 4, 5, 8], and p, the prefix sum, is [3, 7, 12, 20]. Let's calculate the answer for i = 2: pi = 7 and the largest aj (pi >= aj) is 5, so j = 2. But the correct answer is 3 because we can collect all 3 other numbers if we start with 4 -> 3, 5, 8
The tutorial is correct. The answer is not j, but the answer to j.
If you start with 5, you can collect all other three. So if you start with 4, since the prefix sum is 7, and it is greater than 5, you will have the same answer as if you start with 5, which is also three.
So it is better to calculate the answer for larger j first, then smaller j.
.o
Please! Could someone explain to me why this 240992164 gives a wrong answer?
What's wrong with my solution for D1?
https://mirror.codeforces.com/contest/1904/submission/244601635
Take a look at Ticket 17327 from CF Stress for a counter example.
Task E is really cool!
Both of the intended solutions are really cute, the continuous interval is also really cute. Thank you for this task!
I got stuck on problem C for a while. For the case k=2, after sorting A, I looked for the closest elements of D[i] = A[i + 1] - A[i] in the array to check for the second minimum absolute difference. Well.. my mistake was choosing D[i] = A[i + 1] - A[i] (same as k=1) in the first place, thinking that it helps to minimize, but i and j could actually be any pair. Of course, I didn't realize it and checked the editorial to finally understand my stupidity.
For problem C, case k=1: why are we checking for the minimum among a[i]-a[j] and the a[i]s? Shouldn't it be the minimum among the former only, as that's how the operation was defined?
Consider the case: 2 5 10. Here, according to you, the answer would be 3, but the min of the whole array is 2; therefore we need min(a[i], a[i]-a[j]).
Can someone explain the soln to B? I didn't understand the tutorial's soln.
help please, what is wrong? Why tle? https://mirror.codeforces.com/contest/1904/submission/269838822
B has totally messed with my brain.
Can someone tell me where it might be failing 279271278
CodeSniperMan 281079206
void solve() {
    int a, b; cin >> a >> b;
    int x, y; cin >> x >> y;
    int p, q; cin >> p >> q;
}