Thank you for participating, and we hope you enjoyed this round :)
Additionally, we sincerely invite you to join the unofficial round of the TheForces Community tomorrow. Compete alongside many high-rated participants and earn your own rating in our own rating system (TheForces Rating System)!
For more details, please read https://mirror.codeforces.com/blog/entry/131733.
A — Submission Bait
For Alice, what if choosing the maximum value doesn't work?
Consider parity.
Case $$$1$$$: When all values appear an even number of times, Alice will lose. This is because no matter which number Alice chooses, Bob can mimic Alice's move.
Case $$$2$$$: When at least one value appears an odd number of times, Alice will win. Alice only needs to choose the maximum value that appears an odd number of times, which will force Bob into Case $$$1$$$.
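For example, with $$$a=[1,2,3,3]$$$ the values $$$1$$$ and $$$2$$$ appear an odd number of times. Alice picks $$$2$$$ (the maximum value with an odd count); the remaining usable values are $$$\{3,3\}$$$, where every value appears an even number of times, so Bob is now in Case $$$1$$$.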
Time complexity: $$$O(n)$$$.
Sort the array $$$a$$$ in non-increasing order. We can observe that any state can be represented as a prefix of the array $$$a$$$. Consider dynamic programming: let $$$dp_i$$$ denote whether the first player is guaranteed to win when the state is $$$a_{1...i}$$$ ($$$dp_i=1$$$ indicates a guaranteed win for the first player). We have the following formula:
$$$dp_1=1$$$;
$$$dp_i= \sim (dp_{i-1} \ \& \ dp_{id_1-1} \ \& \ dp_{id_2-1} \ldots)$$$, where each $$$id_j$$$ satisfies $$$a_{id_j} \neq a_{id_j+1}$$$ and $$$id_j < i$$$.
Time complexity: $$$O(n^2)$$$.
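A minimal self-contained sketch of this $$$O(n^2)$$$ DP (an illustration of the formula above, not the author's code; it handles a single test case for brevity):
#include <bits/stdc++.h>
using namespace std;
int main()
{
    // dp[i] = 1 iff the player to move wins on the state a[1..i],
    // where a is sorted in non-increasing order; dp[0] = 0 (no move, lose).
    int n;
    cin >> n;
    vector<int> a(n + 1);
    for (int i = 1; i <= n; ++i)
        cin >> a[i];
    sort(a.begin() + 1, a.end(), greater<int>());
    vector<int> dp(n + 1, 0);
    for (int i = 1; i <= n; ++i)
    {
        bool allWin = dp[i - 1]; // take the smallest value a[i] -> state a[1..i-1]
        for (int id = 1; id < i; ++id) // take the value a[id] at a block boundary
            if (a[id] != a[id + 1])
                allWin = allWin && dp[id - 1];
        dp[i] = !allWin; // we win iff some move leads to a losing state
    }
    cout << (dp[n] ? "YES" : "NO") << "\n";
    return 0;
}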
#include <bits/stdc++.h>
using namespace std;
#define m_p make_pair
#define all(x) (x).begin(),(x).end()
#define sz(x) ((int)(x).size())
#define fi first
#define se second
typedef long long ll;
mt19937 rnd(chrono::steady_clock::now().time_since_epoch().count());
mt19937 rnf(2106);
const int N = 55;
int n;
int q[N];
void solv()
{
    cin >> n;
    for (int i = 1; i <= n; ++i)
        q[i] = 0;
    for (int i = 1; i <= n; ++i)
    {
        int x;
        cin >> x;
        ++q[x];
    }
    // Alice wins iff some value appears an odd number of times.
    for (int i = 1; i <= n; ++i)
    {
        if (q[i] % 2 == 1)
        {
            cout << "YES\n";
            return;
        }
    }
    cout << "NO\n";
}
int main()
{
#ifdef SOMETHING
    freopen("input.txt", "r", stdin);
    //freopen("output.txt", "w", stdout);
#endif // SOMETHING
    ios_base::sync_with_stdio(false), cin.tie(0);
    int tt = 1;
    cin >> tt;
    while (tt--)
    {
        solv();
    }
    return 0;
}
B — Array Craft
Start from a trivial construction.
Utilize the condition $$$x > y$$$.
First, we consider making $$$presum_x > presum_i$$$ for all $$$x < i \le n$$$, and similarly for $$$y$$$. A trivial construction comes to mind: $$$a[y, \ldots ,x]=[1, \ldots ,1]$$$, $$$a[1, \ldots ,y-1]=[-1, \ldots, -1]$$$ and $$$a[x+1,\ldots,n]=[-1, \ldots, -1]$$$.
The construction doesn't work when $$$presum_x<0$$$, but we are close to the correct solution. Next, we make a small adjustment: $$$a[y, \ldots, x]=[1,\ldots,1]$$$, $$$a[1, \ldots ,y-1]=[\ldots,1, -1]$$$ (alternating, ending with $$$-1$$$) and $$$a[x+1,\ldots,n]=[-1,1, \ldots]$$$ (alternating, starting with $$$-1$$$).
It is not hard to see that $$$presum_x \ge presum_i$$$ for all $$$x < i \le n$$$, and that for $$$1 \le i < y$$$ we have $$$\max(presum_i)-\min(presum_i)\le 1$$$. Thus, we get $$$presum_x \ge 2+presum_{y-1} \ge 2+\min(presum_i) \ge 1+\max(presum_i)$$$. The same applies to the suffix sums. Therefore, this construction is valid.
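For example, with $$$n=5$$$, $$$x=4$$$, $$$y=2$$$ this gives $$$a=[-1,1,1,1,-1]$$$: the prefix sums are $$$[-1,0,1,2,1]$$$ (maximum only at position $$$4$$$) and the suffix sums are $$$[1,2,1,0,-1]$$$ (maximum only at position $$$2$$$).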
Time complexity: $$$O(n)$$$.
Solve the problem when $$$x \le y$$$.
#include<bits/stdc++.h>
using namespace std;
int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while(t>0){
        t--;
        int n,x,y;
        cin >> n >> x >> y;
        x--; y--;
        vector<int> a(n,1);
        int e;
        e=-1;
        // alternate -1,1 after position x...
        for(int i=x+1;i<n;i++){
            a[i]=e;
            e*=-1;
        }
        e=-1;
        // ...and before position y
        for(int i=y-1;i>=0;i--){
            a[i]=e;
            e*=-1;
        }
        for(int i=0;i<n;i++){
            if(i){cout << " ";}
            cout << a[i];
        }
        cout << "\n";
    }
    return 0;
}
C — Mad MAD Sum
After one operation, the array becomes non-decreasing.
Consider $$$a_1=a_2=\ldots=a_n$$$: the operation seems to shift the array right.
When does the "right shift" pattern happen?
Read hints first.
Let's consider only non-decreasing arrays. Observe a contiguous segment $$$a[l \ldots r]=x$$$ ($$$l<r$$$, $$$x>0$$$): after one operation, $$$a[l+1, \ldots ,\min(r+1,n)]=x$$$ holds. We can conclude that if the lengths of all non-zero contiguous segments in the array (except possibly the last one) are greater than $$$1$$$, then the array follows the "right shift" pattern. Let's call this kind of array "good".
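For example, $$$[0,2,2,3,3]$$$ is "good": one operation turns it into $$$[0,0,2,2,3]$$$, the next into $$$[0,0,0,2,2]$$$, and so on.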
The last question is when the array becomes "good". Assume we get the array $$$b$$$ after one operation on a non-decreasing array $$$a$$$, and suppose $$$b_i<b_{i+1}<b_{i+2}$$$ for some $$$i$$$. We can infer that $$$b_{i+1}=a_{i+1}$$$ and $$$b_{i+2}=a_{i+2}$$$, each having a duplicate at some earlier position; since $$$a$$$ is non-decreasing, this forces $$$a_i=a_{i+1}=a_{i+2}$$$ and hence $$$b_{i+1}=b_{i+2}$$$, a contradiction. In other words, after two operations we always get a "good" array. The remaining calculation is then trivial.
Time complexity: $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define m_p make_pair
#define all(x) (x).begin(),(x).end()
#define sz(x) ((int)(x).size())
#define fi first
#define se second
typedef long long ll;
mt19937 rnd(chrono::steady_clock::now().time_since_epoch().count());
mt19937 rnf(2106);
const int N = 200005;
int n;
int a[N];
bool c[N];
void doit()
{
    // Apply one operation in place: a[i] becomes MAD(a[1..i]).
    for (int i = 1; i <= n; ++i)
        c[i] = false;
    int maxu = 0;
    for (int i = 1; i <= n; ++i)
    {
        if (c[a[i]])
            maxu = max(maxu, a[i]);
        c[a[i]] = true;
        a[i] = maxu;
    }
}
void solv()
{
    cin >> n;
    for (int i = 1; i <= n; ++i)
        cin >> a[i];
    ll ans = 0;
    for (int i = 1; i <= n; ++i)
        ans += a[i];
    doit();
    for (int i = 1; i <= n; ++i)
        ans += a[i];
    doit();
    // After two operations the array only shifts right, so a[i]
    // contributes to exactly (n - i + 1) further operations.
    for (int i = 1; i <= n; ++i)
        ans += (n - i + 1) * 1LL * a[i];
    cout << ans << "\n";
}
int main()
{
#ifdef SOMETHING
    freopen("input.txt", "r", stdin);
    //freopen("output.txt", "w", stdout);
#endif // SOMETHING
    ios_base::sync_with_stdio(false), cin.tie(0);
    int tt = 1;
    cin >> tt;
    while (tt--)
    {
        solv();
    }
    return 0;
}
The title "Mad MAD Sum" is not only a pun on the words Mad and MAD (Maximum Appearing Duplicate), but also implies the solution to the problem :)
D — Grid Puzzle
Obviously, when $$$a_i$$$ is large enough, we will definitely use operation $$$2$$$ on the $$$i$$$-th row. What is its specific value?
It is $$$5$$$ (try to prove it). Now we only need to consider the cases with $$$a_i \le 4$$$.
From left to right, consider a greedy solution or a DP solution.
Read hints first.
For $$$a_i \ge 5$$$, we will definitely use operation $$$2$$$ on the $$$i$$$-th row, because at least three $$$2 \times 2$$$ subgrids would be needed to cover it, which is no better than performing operation $$$2$$$ three times, on rows $$$i-1$$$, $$$i$$$ and $$$i+1$$$.
Now we only need to consider the cases with $$$a_i \le 4$$$. Let's consider the rows one by one.
In the first row, there are $$$3$$$ cases:
there are no black cells: do nothing;
there are $$$\le 2$$$ black cells: we can place one $$$2 \times 2$$$ subgrid greedily;
there are $$$> 2$$$ black cells: we just use operation $$$2$$$ once on this row (try to prove it).
We can summarize that for the $$$i$$$-th row, there are only three situations regarding how it is affected by the $$$(i-1)$$$-th row:
it is not affected;
the cells in the third and fourth columns have been colored white;
the cells in the first and second columns have been colored white.
We can process the rows greedily from left to right, or use DP.
Time complexity: $$$O(n)$$$.
#include<bits/stdc++.h>
using namespace std;
#define MOD 1000000007
#define MODD 998244353
long long int bigmod(long long int a,long long int b){
    long long int ans=1,powa=a;
    while(b>0){
        if(b%2==1){
            ans*=powa;
            ans%=MOD;
        }
        powa=powa*powa;
        powa%=MOD;
        b/=2;
    }
    return ans;
}
void func(){
    int n;
    cin >> n;
    int arr[n];
    for(int i=0;i<n;i++){
        cin >> arr[i];
    }
    // b1: columns 1-2 of the current row were already whitened by a 2x2
    //     subgrid placed on the previous row; b2: same for columns 3-4.
    bool b1=0,b2=0;
    int ans=0;
    for(int i=0;i<n;i++){
        if((!b1)&&(!b2)){
            if(arr[i]==0) continue;
            ans++;
            if(arr[i]<=2) b1=1; // place a 2x2 on columns 1-2, else use operation 2
        }
        else if(b1){
            b1=0;
            if(arr[i]<=2) continue; // already fully white
            ans++;
            if(arr[i]<=4) b2=1; // place a 2x2 on columns 3-4, else use operation 2
        }
        else{
            b2=0;
            if(arr[i]==0) continue;
            ans++;
            if(arr[i]<=4) b1=1; // place a 2x2 on columns 1-2, else use operation 2
        }
    }
    cout << ans << '\n';
    return;
}
int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(NULL); cout.tie(NULL);
    int t;
    cin >> t;
    while(t--){
        func();
    }
}
//i added the bigmod func to my code cause yes
#include <bits/stdc++.h>
using namespace std;
#define m_p make_pair
#define all(x) (x).begin(),(x).end()
#define sz(x) ((int)(x).size())
#define fi first
#define se second
typedef long long ll;
mt19937 rnd(chrono::steady_clock::now().time_since_epoch().count());
mt19937 rnf(2106);
const int N = 200005;
int n;
int a[N];
int dp[N];
void minh(int& x, int y)
{
    x = min(x, y);
}
void solv()
{
    cin >> n;
    for (int i = 1; i <= n; ++i)
        cin >> a[i];
    // dp[i] = minimum number of operations for the first i rows.
    // minu[p] = min of (dp[j-1] - j) over rows j of parity p with a[j] <= 2,
    // i.e. rows that can start a diagonal chain of 2x2 subgrids ending at row i.
    int minu[2] = {N, N};
    for (int i = 1; i <= n; ++i)
    {
        dp[i] = dp[i - 1] + 1;
        if (a[i] == 0)
            minh(dp[i], dp[i - 1]);
        if (a[i] <= 2)
            minh(dp[i], i + minu[1 - i % 2]);
        if (a[i] <= 2)
            minh(minu[i % 2], dp[i - 1] - i);
        else if (a[i] > 4)
            minu[0] = minu[1] = N;
    }
    cout << dp[n] << "\n";
}
int main()
{
#ifdef SOMETHING
    freopen("input.txt", "r", stdin);
    //freopen("output.txt", "w", stdout);
#endif // SOMETHING
    ios_base::sync_with_stdio(false), cin.tie(0);
    int tt = 1;
    cin >> tt;
    while (tt--)
    {
        solv();
    }
    return 0;
}
E — Catch the Mole
Consider querying a subtree. If the jury returns $$$0$$$, we can delete the entire subtree.
We can query any leaf node. If the jury returns $$$1$$$, we are done; otherwise, the mole will move up.
Consider querying a subtree. If the jury returns $$$1$$$, we can "drive" the mole out of this subtree using the method in hint $$$2$$$.
Combine the ideas in hints $$$1$$$ and $$$3$$$.
Read hints first.
For the easy version, we can pick a vertex $$$v$$$ such that $$$maxdep_v-dep_v+1=50$$$:
if the jury returns $$$0$$$, we can delete the entire subtree;
otherwise, make queries until the mole is out of the subtree and answer $$$fa_v$$$ (a special case is $$$v=1$$$);
if there is no such vertex, then make $$$100$$$ queries and answer the root node $$$1$$$.
This costs about $$$200$$$ queries, which cannot pass the hard version. But we can optimize it.
The bottleneck is step $$$2$$$: every time we drive the mole away once, we immediately check whether it is still in the subtree of $$$v$$$, which can be optimized. In fact, we can drive the mole $$$50$$$ times in a row and then perform a binary search on the chain from $$$v$$$ to the root node.
Number of queries: $$$O(2\sqrt{n}+\log(n))$$$
Query any leaf $$$70$$$ times, then delete all nodes $$$x$$$ such that $$$maxdep_x-dep_x+1 \le 70$$$. Then the tree can be split into $$$\le 70$$$ chains. Do binary search on these chains.
Solve the problem in $$$100$$$ queries.
#include<bits/stdc++.h>
using namespace std;
#define all(a) a.begin(),a.end()
#define pb push_back
#define sz(a) ((int)a.size())
using ll=long long;
using u32=unsigned int;
using u64=unsigned long long;
using i128=__int128;
using u128=unsigned __int128;
using f128=__float128;
using pii=pair<int,int>;
using pll=pair<ll,ll>;
template<typename T> using vc=vector<T>;
template<typename T> using vvc=vc<vc<T>>;
template<typename T> using vvvc=vc<vvc<T>>;
using vi=vc<int>;
using vll=vc<ll>;
using vvi=vc<vi>;
using vvll=vc<vll>;
#define vv(type,name,n,...) \
vector<vector<type>> name(n,vector<type>(__VA_ARGS__))
#define vvv(type,name,n,m,...) \
vector<vector<vector<type>>> name(n,vector<vector<type>>(m,vector<type>(__VA_ARGS__)))
template<typename T> using min_heap=priority_queue<T,vector<T>,greater<T>>;
template<typename T> using max_heap=priority_queue<T>;
// https://trap.jp/post/1224/
#define rep1(n) for(ll i=0; i<(ll)(n); ++i)
#define rep2(i,n) for(ll i=0; i<(ll)(n); ++i)
#define rep3(i,a,b) for(ll i=(ll)(a); i<(ll)(b); ++i)
#define rep4(i,a,b,c) for(ll i=(ll)(a); i<(ll)(b); i+=(c))
#define cut4(a,b,c,d,e,...) e
#define rep(...) cut4(__VA_ARGS__,rep4,rep3,rep2,rep1)(__VA_ARGS__)
#define per1(n) for(ll i=((ll)n)-1; i>=0; --i)
#define per2(i,n) for(ll i=((ll)n)-1; i>=0; --i)
#define per3(i,a,b) for(ll i=((ll)a)-1; i>=(ll)(b); --i)
#define per4(i,a,b,c) for(ll i=((ll)a)-1; i>=(ll)(b); i-=(c))
#define per(...) cut4(__VA_ARGS__,per4,per3,per2,per1)(__VA_ARGS__)
#define rep_subset(i,s) for(ll i=(s); i>=0; i=(i==0?-1:(i-1)&(s)))
template<typename T, typename S> constexpr T ifloor(const T a, const S b){return a/b-(a%b&&(a^b)<0);}
template<typename T, typename S> constexpr T iceil(const T a, const S b){return ifloor(a+b-1,b);}
template<typename T>
void sort_unique(vector<T> &vec){
    sort(vec.begin(),vec.end());
    vec.resize(unique(vec.begin(),vec.end())-vec.begin());
}
template<typename T, typename S> constexpr bool chmin(T &a, const S b){if(a>b) return a=b,true; return false;}
template<typename T, typename S> constexpr bool chmax(T &a, const S b){if(a<b) return a=b,true; return false;}
template<typename T, typename S> istream& operator >> (istream& i, pair<T,S> &p){return i >> p.first >> p.second;}
template<typename T, typename S> ostream& operator << (ostream& o, const pair<T,S> &p){return o << p.first << ' ' << p.second;}
#ifdef i_am_noob
#define bug(...) cerr << "#" << __LINE__ << ' ' << #__VA_ARGS__ << "- ", _do(__VA_ARGS__)
template<typename T> void _do(vector<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(set<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(unordered_set<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(T && x) {cerr << x << endl;}
template<typename T, typename ...S> void _do(T && x, S&&...y) {cerr << x << ", "; _do(y...);}
#else
#define bug(...) 777771449
#endif
template<typename T> void print(vector<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(set<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(unordered_set<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(T && x) {cout << x << "\n";}
template<typename T, typename... S> void print(T && x, S&&... y) {cout << x << ' ';print(y...);}
template<typename T> istream& operator >> (istream& i, vector<T> &vec){for(auto &x: vec) i >> x; return i;}
vvi read_graph(int n, int m, int base=1){
    vvi adj(n);
    for(int i=0,u,v; i<m; ++i){
        cin >> u >> v,u-=base,v-=base;
        adj[u].pb(v),adj[v].pb(u);
    }
    return adj;
}
vvi read_tree(int n, int base=1){return read_graph(n,n-1,base);}
template<typename T, typename S> pair<T,S> operator + (const pair<T,S> &a, const pair<T,S> &b){return {a.first+b.first,a.second+b.second};}
template<typename T> constexpr T inf=0;
template<> constexpr int inf<int> = 0x3f3f3f3f;
template<> constexpr ll inf<ll> = 0x3f3f3f3f3f3f3f3f;
template<typename T> vector<T> operator += (vector<T> &a, int val){for(auto &i: a) i+=val; return a;}
template<typename T> T isqrt(const T &x){T y=sqrt(x+2); while(y*y>x) y--; return y;}
#define ykh mt19937 rng(chrono::steady_clock::now().time_since_epoch().count())
//#include<atcoder/all>
//using namespace atcoder;
//using mint=modint998244353;
//using mint=modint1000000007;
struct Tree{
    int n;
    vector<vector<int>> adj,anc;
    vector<int> par,tl,tr,dep,ord,siz,ch,head;
    Tree(int _n=0){
        n=_n;
        adj.resize(n);
    }
    void add_edge(int u, int v){
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    void init(){
        par.resize(n),tl.resize(n),tr.resize(n),dep.resize(n),anc.resize(n),siz.resize(n),ch.resize(n),head.resize(n);
        int cur=-1,m=0;
        while((1<<m)<=n) m++;
        for(int i=0; i<n; ++i) anc[i].resize(m),ch[i]=-1;
        auto dfs1=[&](auto &self, int u, int fa) -> void{
            par[u]=fa;
            siz[u]=1;
            for(int v: adj[u]) if(v!=fa) self(self,v,u),siz[u]+=siz[v];
            for(int v: adj[u]) if(v!=fa&&(ch[u]==-1||siz[v]>siz[ch[u]])) ch[u]=v;
        };
        auto dfs2=[&](auto &self, int u, int fa) -> void{
            ord.push_back(u);
            tl[u]=++cur;
            anc[u][0]=fa;
            if(fa==-1) dep[u]=0;
            else dep[u]=dep[fa]+1;
            for(int i=1; i<m; ++i){
                if(anc[u][i-1]==-1) anc[u][i]=-1;
                else anc[u][i]=anc[anc[u][i-1]][i-1];
            }
            if(ch[u]!=-1) self(self,ch[u],u);
            for(int v: adj[u]) if(v!=fa&&v!=ch[u]) self(self,v,u);
            tr[u]=cur;
        };
        dfs1(dfs1,0,-1);
        dfs2(dfs2,0,-1);
        head[0]=0;
        for(int u: ord){
            for(int v: adj[u]) if(v!=par[u]){
                if(v==ch[u]) head[v]=head[u];
                else head[v]=v;
            }
        }
    }
    bool is_anc(int u, int v){return tl[u]<=tl[v]&&tr[u]>=tr[v];}
    int get_anc(int u, int x){
        for(int i=anc[0].size()-1; i>=0; --i) if(u!=-1&&(x>>i&1)) u=anc[u][i];
        return u;
    }
    int lca(int u, int v){
        if(is_anc(u,v)) return u;
        for(int i=anc[0].size()-1; i>=0; --i) if(anc[u][i]!=-1&&!is_anc(anc[u][i],v)) u=anc[u][i];
        return par[u];
    }
};
void ahcorz(){
    int lft=150;
    auto query=[&](int u){
        assert(lft>0);
        cout << "? " << u+1 << endl;
        lft--;
        int res; cin >> res; return res;
    };
    int n; cin >> n;
    Tree tree(n);
    rep(n-1){
        int u,v; cin >> u >> v; u--,v--;
        tree.add_edge(u,v);
    }
    tree.init();
    vi vis(n);
    auto fill_vis=[&](auto &self, int u){
        if(vis[u]) return;
        vis[u]=1;
        for(int v: tree.adj[u]) if(v!=tree.par[u]) self(self,v);
    };
    vi ord(n); iota(all(ord),0);
    sort(all(ord),[&](int i, int j){return tree.dep[i]<tree.dep[j];});
    per(n){
        int u=ord[i];
        if(vis[u]) continue;
        int v=u;
        rep(_,(lft-1)/2-1){
            if(v==0) break;
            v=tree.par[v];
        }
        if(!query(v)){
            fill_vis(fill_vis,v);
            continue;
        }
        int x=tree.ord.back();
        int m=tree.dep[u]-tree.dep[v]+1;
        rep(_,m){
            if(query(x)){
                cout << "! " << x+1 << endl;
                return;
            }
            if(!query(v)){
                int y=tree.par[v];
                if(y!=-1) y=tree.par[y];
                if(y==-1) y=0;
                cout << "! " << y+1 << endl;
                return;
            }
        }
        cout << "! 1" << endl;
        return;
    }
    assert(0);
}
signed main(){
    ios_base::sync_with_stdio(0),cin.tie(0);
    cout << fixed << setprecision(20);
    int t=1;
    cin >> t;
    while(t--) ahcorz();
}
F — Polygonal Segments
Consider using a segment tree to solve this problem. We need to merge the information of two intervals. First, we take the maximum value of the answers in the two intervals, and then calculate the polygon segment spanning the two intervals.
We notice that a locally maximal polygon segment $$$[l, r]$$$ must satisfy $$$\min(a_{l-1}, a_{r+1}) \ge a_l + \ldots + a_r$$$ (or $$$l=1$$$, $$$r=n$$$).
Define the suffix $$$[i, r]$$$ of the interval $$$[l, r]$$$ as a special suffix if and only if $$$a_i+ \ldots +a_r$$$ $$$\le 2 \cdot a_i$$$. The same principle applies to special prefixes.
We can observe that there are at most $$$O(\log(\max a))$$$ special suffixes, because the sum of one special suffix is at least twice the sum of the previous special suffix. Therefore, we can merge the special suffixes of the left interval with the special prefixes of the right interval (using the two-pointer technique) to obtain the answer spanning the two intervals.
Time complexity: $$$O(n\log^2(n))$$$
Consider the following process:
query(l, r):
    if r - l + 1 < 3: return -1
    mxp = getmaxpos(l, r)
    if 2 * a[mxp] < getsum(l, r): return r - l + 1
    else: return max(query(l, mxp - 1), query(mxp + 1, r))
This looks like $$$O(n^2)$$$ when $$$a=[1,2,1,4,1,2,1,8,1,2,1,4,1,2,1,16,\ldots]$$$. However, if we store all the ranges which have already been calculated, it becomes $$$O(n \log n)$$$.
Why? Let's call a range $$$[l,r]$$$ a "key range" when $$$sum(a[l \ldots r]) \le 2\min(a_{l-1},a_{r+1})$$$. We can see:
1. There are only $$$O(n \log n)$$$ different key ranges;
2. We visit at most $$$2$$$ ranges that are not key ranges in each query.
Assume we have changed $$$a_p$$$. We need to clear the cached answers of all key ranges that pass through $$$p$$$. We can use an interval tree to achieve this.
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1000010;
const int LOGN=28;
const ll TMD=0;
const ll INF=2147483647;
int T,n,q;
ll a[N];
//----------------------------------------------------------
struct INFO
{
    int ans,l,r;
    ll sum;
    vector<int> lpos,rpos;
    vector<ll> lsum,rsum;
};
struct nod
{
    INFO info;
    nod *lc,*rc;
};
struct Segtree
{
    nod *root;
    Segtree()
    {
        build(&root,1,n);
    }
    void newnod(nod **p,int l,int r)
    {
        *p=new nod;
        (*p)->info.l=l;
        (*p)->info.r=r;
        (*p)->info.sum=(*p)->info.ans=0;
        (*p)->lc=(*p)->rc=NULL;
    }
    INFO merge_info(INFO x,INFO y)
    {
        INFO z;
        z.l=x.l;
        z.r=y.r;
        z.sum=x.sum+y.sum;
        z.ans=max(x.ans,y.ans);
        int px=(int)x.lpos.size()-1,py=0;
        ll mx=0;
        while(1) //two pointers over x's special suffixes and y's special prefixes
        {
            int len=0;
            ll sum=0;
            if(px<0)
            {
                sum+=x.sum;
                len+=(x.r-x.l+1);
            }
            else
            {
                sum+=x.lsum[px]-a[x.lpos[px]];
                len+=(x.r-x.lpos[px]);
            }
            if(py<y.rpos.size())
            {
                sum+=y.rsum[py]-a[y.rpos[py]];
                len+=(y.rpos[py]-y.l);
            }
            else
            {
                sum+=y.sum;
                len+=(y.r-y.l+1);
            }
            if(sum>2*mx) z.ans=max(z.ans,len);
            if(px<0&&py>=y.rpos.size()) break;
            else if(px<0) mx=max(mx,a[y.rpos[py]]),py++;
            else if(py>=y.rpos.size()) mx=max(mx,a[x.lpos[px]]),px--;
            else if(a[x.lpos[px]]<a[y.rpos[py]]) mx=max(mx,a[x.lpos[px]]),px--;
            else mx=max(mx,a[y.rpos[py]]),py++;
        }
        if(z.ans<3) z.ans=-1;
        for(int i=0;i<x.lpos.size();i++)
            if(2*a[x.lpos[i]]>=x.lsum[i]+y.sum) z.lpos.push_back(x.lpos[i]),z.lsum.push_back(x.lsum[i]+y.sum);
        for(int i=0;i<y.lpos.size();i++)
            z.lpos.push_back(y.lpos[i]),z.lsum.push_back(y.lsum[i]);
        for(int i=0;i<x.rpos.size();i++)
            z.rpos.push_back(x.rpos[i]),z.rsum.push_back(x.rsum[i]);
        for(int i=0;i<y.rpos.size();i++)
            if(2*a[y.rpos[i]]>=y.rsum[i]+x.sum) z.rpos.push_back(y.rpos[i]),z.rsum.push_back(y.rsum[i]+x.sum);
        return z;
    }
    void build(nod **p,int L,int R)
    {
        newnod(p,L,R);
        if(L==R)
        {
            (*p)->info.sum=a[L];
            (*p)->info.ans=-1;
            (*p)->info.lpos.push_back(L);
            (*p)->info.rpos.push_back(L);
            (*p)->info.lsum.push_back(a[L]);
            (*p)->info.rsum.push_back(a[L]);
            return ;
        }
        int M=(L+R)>>1;
        build(&(*p)->lc,L,M);
        build(&(*p)->rc,M+1,R);
        (*p)->info=merge_info((*p)->lc->info,(*p)->rc->info);
    }
    void modify(int pos,ll val)
    {
        _modify(root,pos,val);
    }
    void _modify(nod *p,int pos,ll val)
    {
        if(p->info.l==p->info.r)
        {
            p->info.sum=val;
            p->info.lpos[0]=pos;
            p->info.rpos[0]=pos;
            p->info.lsum[0]=val;
            p->info.rsum[0]=val;
            return ;
        }
        int M=(p->info.l+p->info.r)>>1;
        if(pos<=M) _modify(p->lc,pos,val);
        else _modify(p->rc,pos,val);
        p->info=merge_info(p->lc->info,p->rc->info);
    }
    ll getans(int L,int R)
    {
        return _getans(root,L,R).ans;
    }
    INFO _getans(nod *p,int L,int R)
    {
        if(p->info.l==L&&p->info.r==R) return p->info;
        int M=(p->info.l+p->info.r)>>1;
        if(R<=M) return _getans(p->lc,L,R);
        else if(L>M) return _getans(p->rc,L,R);
        else return merge_info(_getans(p->lc,L,M),_getans(p->rc,M+1,R));
    }
};
//----------------------------------------------------------
int main()
{
    //freopen("test.in","r",stdin);
    scanf("%d",&T);
    while(T--)
    {
        scanf("%d%d",&n,&q);
        for(int i=1;i<=n;i++) scanf("%I64d",&a[i]);
        Segtree TR;
        for(int i=1;i<=q;i++)
        {
            int t;
            scanf("%d",&t);
            if(t==1)
            {
                int l,r;
                scanf("%d%d",&l,&r);
                printf("%I64d\n",TR.getans(l,r));
            }
            else
            {
                int p;
                ll x;
                scanf("%d%I64d",&p,&x);
                a[p]=x;
                TR.modify(p,x);
            }
        }
    }
    //fclose(stdin);
    return 0;
}
// #pragma GCC optimize("O3,unroll-loops")
// #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
#include "bits/stdc++.h"
using namespace std;
using ll = long long int;
mt19937_64 RNG(chrono::high_resolution_clock::now().time_since_epoch().count());
/**
* Point-update Segment Tree
* Source: kactl
* Description: Iterative point-update segment tree, ranges are half-open i.e [L, R).
* f is any associative function.
* Time: O(logn) update/query
*/
template<class T, T unit = T()>
struct SegTree {
    T f(T a, T b) {
        a[0] += b[0];
        if (a[1] < b[1]) a[1] = b[1], a[2] = b[2];
        return a;
    }
    vector<T> s; int n;
    SegTree(int _n = 0, T def = unit) : s(2*_n, def), n(_n) {}
    void update(int pos, T val) {
        for (s[pos += n] = val; pos /= 2;)
            s[pos] = f(s[pos * 2], s[pos * 2 + 1]);
    }
    T query(int b, int e) {
        T ra = unit, rb = unit;
        for (b += n, e += n; b < e; b /= 2, e /= 2) {
            if (b % 2) ra = f(ra, s[b++]);
            if (e % 2) rb = f(s[--e], rb);
        }
        return f(ra, rb);
    }
};
#include <ext/pb_ds/assoc_container.hpp>
struct splitmix64_hash {
    static uint64_t splitmix64(uint64_t x) {
        // http://xorshift.di.unimi.it/splitmix64.c
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM = std::chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};
template <typename K, typename V, typename Hash = splitmix64_hash>
using hash_map = __gnu_pbds::gp_hash_table<K, V, Hash>;
template <typename K, typename Hash = splitmix64_hash>
using hash_set = hash_map<K, __gnu_pbds::null_type, Hash>;
int main()
{
    ios::sync_with_stdio(false); cin.tie(0);
    int t; cin >> t;
    while (t--) {
        int n, q; cin >> n >> q;
        vector<ll> a(n);
        for (auto &x : a) cin >> x;
        map<ll, int> cache;
        struct Node {
            int L, R, M; // [L, R)
            set<array<int, 2>> by_l, by_r;
            void ins(int l, int r) {
                by_l.insert({l, r});
                by_r.insert({r, l});
            }
            void del(int l, int r) {
                by_l.erase({l, r});
                by_r.erase({r, l});
            }
        };
        vector<Node> ITree(4*n);
        auto build = [&] (const auto &self, int node, int l, int r) -> void {
            int mid = (l + r)/2;
            ITree[node].L = l;
            ITree[node].R = r;
            ITree[node].M = mid;
            if (l+1 == r) return;
            self(self, 2*node + 1, l, mid);
            self(self, 2*node + 2, mid, r);
        };
        build(build, 0, 0, n);
        auto insert = [&] (const auto &self, int node, int l, int r) -> void { // Insert interval [l, r) into the tree
            if (ITree[node].L+1 == ITree[node].R) {
                ITree[node].ins(l, r);
                return;
            }
            if (l >= ITree[node].M) self(self, 2*node+2, l, r);
            else if (r <= ITree[node].M) self(self, 2*node+1, l, r);
            else ITree[node].ins(l, r);
        };
        auto erase = [&] (const auto &self, int node, int x) -> void { // Delete all intervals covering point x
            if (x < ITree[node].M) {
                // Delete some prefix
                auto &st = ITree[node].by_l;
                while (!st.empty()) {
                    auto [l, r] = *begin(st);
                    if (l <= x) {
                        ITree[node].del(l, r);
                        ll id = 1ll*(n+1)*l + r;
                        cache.erase(id);
                    }
                    else break;
                }
            }
            else {
                // Delete some suffix
                auto &st = ITree[node].by_r;
                while (!st.empty()) {
                    auto [r, l] = *rbegin(st);
                    if (r > x) {
                        ITree[node].del(l, r);
                        ll id = 1ll*(n+1)*l + r;
                        cache.erase(id);
                    }
                    else break;
                }
            }
            if (ITree[node].L+1 == ITree[node].R) return;
            if (x < ITree[node].M) self(self, 2*node+1, x);
            else self(self, 2*node+2, x);
        };
        constexpr array<ll, 3> unit = {0, 0, -1};
        SegTree<array<ll, 3>, unit> seg(n);
        for (int i = 0; i < n; ++i) seg.update(i, array<ll, 3>{a[i], a[i], i});
        auto solve = [&] (const auto &self, int L, int R) -> int {
            if (R-L <= 2) return -1;
            ll id = 1ll*L*(n+1) + R;
            if (cache.find(id) != cache.end()) return cache[id];
            auto [sm, mx, pivot] = seg.query(L, R);
            insert(insert, 0, L, R);
            if (mx < sm - mx) {
                return cache[id] = R-L;
            }
            else {
                int ret = self(self, L, pivot);
                ret = max(ret, self(self, pivot+1, R));
                return cache[id] = ret;
            }
        };
        while (q--) {
            int type; cin >> type;
            if (type == 1) {
                int l, r; cin >> l >> r;
                cout << solve(solve, l-1, r) << '\n';
            }
            else {
                ll pos, val; cin >> pos >> val; --pos;
                erase(erase, 0, pos);
                if (pos) erase(erase, 0, pos-1);
                if (pos+1 < n) erase(erase, 0, pos+1);
                seg.update(pos, array<ll, 3>{val, val, pos});
            }
        }
    }
}
first
Can someone please tell me what the bait was for A? Was it that people thought the $$$\ge$$$ was $$$>$$$ so the answer is always yes?
Nvm, I think I got it. Did people only count the frequency of the max number?
I bit the bait. Got a WA in just the first few minutes... (Yes, only counting the largest number)
same here, at first only considered frequency of max number (;
oh wow! This was my first contest ever and I felt stupid after just ensuring that the max appears odd times. Same thing happened with B. Got the initial idea but couldn't combine the prefix and suffix sum construction. Is it down to just lack of practice? Any resources for me to start on :) ?
Practice more and Think more:)
LeetCode should learn to set contests the way CodeForces does. 0 solved by ChatGPT :)
anyone done C ...
Can you tell where 271597372 and 271626656 are wrong??
N<=2e5; your solution repeats the code in the while-loop at most N times, and you have one for-loop inside the while-loop and another for-loop inside the first for-loop, so the time complexity is O(N^3), which is obviously too slow.
but the second one is giving wrong answer, not even TLE, or I would have thought of some other logic. also for the second one I think the TC is good, no?? Can you tell me where the logic is wrong in the second one??
The same numbers don't have to be next to each other, and this one has time complexity O(N^2), so it would get TLE anyway.
Thanks brooo..
You are welcome
okay, i accept defeat
In the case that the sum becomes <= -1 at point x or y, the first time the sum equals the maximum will be at 1 or n instead of x or y. That's why you need to alternate between -1 and 1.
yeah got it, i had the intuition and had implemented it in one of my soln but my way of implementation was just unnecessarily complex and i am dumb so got it WA. anw bad day it was. thank you for your explanation.
Nice brain teasers. Now where are the problems?
buddy, you got -ve contribution, 0 contests, 1 solved problem, wtf? Brain teasers are the ones for you (of course if you got one (brain)).
you are green after 42 contests
Yeah its my original acc.
Thanks for editorial! I really liked problem C :)
same here, C was a good one, but can you please explain simply why we cannot get the answer if we perform the operation on the array only once and then calculate the answer using one for loop and maths.
The main reason is that we can only directly calculate the answer when the MAD operation is performed on a sorted array, since performing it on unsorted arrays leads to things like this:
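For example (one illustrative case), applying the operation to $$$[2,3,1,1,2,3]$$$ gives $$$[0,0,0,1,2,3]$$$, where the values $$$1$$$, $$$2$$$, $$$3$$$ each appear only once.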
(as is evident, it doesn't work out)
The main reason for this is that when we perform the MAD operation on an unsorted array, the resulting array may not follow the "rules" of the calculation (mainly that each value appears at least twice), whereas it will definitely follow the rules with a sorted array:
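(For instance, $$$[1,1,2,2,3,3] \to [0,1,1,2,2,3]$$$: every nonzero value in the result appears twice, except possibly the last one.)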
It intuitively makes sense why this is: in order for a new value to appear in any MAD array, it must appear at least twice in the original array. Since the input array is sorted, a new value can only appear strictly after its first appearance in the original array. So, each number in the MAD array must appear twice, which is guaranteed when the input array is sorted. Only once we know that each value appears twice can we use the trick to calculate the answer.
Hope this helps! :)
Thanks mate, got it now.
For E solution 2, I think it should say $$$\leq 70$$$ chains instead of $$$\geq 70$$$?
Here is another solution for E (actually, I think there are a lot of ways to solve it):
The goal will be to find a path from the root to some leaf such that the mole definitely lies on this path. After that, you can binary search the location of the mole. Also, fix a constant $$$S$$$ and after constructing the path make some dummy queries such that the mole moved up at least $$$S$$$ times. Let's say we have constructed some of the path and the last node we added is $$$u$$$ — we will look at the children of $$$u$$$:
The final number of queries will be $$$\max(S, \frac{N}{S}) + \log_2 n$$$. By choosing $$$S = \sqrt n$$$ it becomes $$$\sqrt n + \log_2 n$$$.
Edit: After some thinking, the above calculation is a bit incorrect in some limit cases (so it is slightly more than 100 :( ), but I think with randomization and choosing $$$S$$$ a bit lower than $$$\sqrt n$$$ it could be better. Anyways, there is still an upper bound of $$$2\sqrt n + \log_2 n$$$.
For D, I have a binary search solution: 271626468
Binary search works for problem E as well
I had a different approach to E which I don't have proof for, but it managed to pass tests with $$$\le 160$$$ queries.
Let all nodes be "alive" or "dead" — initially, all are alive. Start at the root, and while the current node has exactly one alive child, go to that child. If it has no alive children, terminate this process. Otherwise, use dfs to find the subtree of the current node with the fewest alive children and query that subtree. If the mole is there, set the current node to that subtree, otherwise mark that subtree and all leaves in all other subtrees as dead (since the mole cannot be in any of those). I don't have proof that this guarantees a low number of queries (something like $$$\sqrt{n}$$$), but it seemed intuitively correct.
At the end you are left with a path of possible locations for the mole, so it can then be caught with binary search.
Isn't it right since $$$ d \times w \le n $$$ where d = depth, w = width, and your algorithm would be $$$O(\min(d, w))$$$ which is less than $$$\sqrt n$$$?
when you say using dfs to find the subtree of the current node with the fewest alive children you mean the subtree of a direct child right?
For Problem B, Can anyone please help me understand why my construction is wrong for the case 5 2 1
my construction : 1 1 -1 -1 -1. Prefix max is at 2 and suffix max is at 1.
Thank you.
Suffix sum is max at 5 and 1 both so it will consider 5
Thank you so much. :facepalm:
the suffix max is -1
and it occurs at indices 1, 5
y should be the greatest index that satisfies $$$a_i+\ldots +a_n=\max_{j=1}^{n}(a_j+\ldots+a_n)$$$
but the index 5 is the greatest here, that is why your solution is wrong
Yes I got it. Thank you for the explanation.
you're welcome ^_^
Suffix is max at 1 and 5 also, and since we need to return maximum such index, so index 5 will be considered not 1.
Hope you Understands
I had the same doubt, and I think I did the same implementation as you, making the index >x and index < y all -1 and remaining other 1; Idk what was the issue until I saw this comment
Can someone tell me why is this wrong for B? 271591444
For 5 2 1 you print 1 1 -1 -1 -1. The suffix max should be at 1, but here it's at 5.
consider this test case (from the above comment)
n = 5, x = 2, y = 1
your code outputs:
1 1 -1 -1 -1
and this is wrong because the maximum suffix occurs at indices 1 and 5, while it should occur only at index 1
Oh gotcha
I find it interesting that a theforces organizer was 2nd place in div2. What are the odds some of the problems were proposed or seen for some theforces rounds? Not accusing anyone though. Just an observation.
In question A, how is it possible that the maximum number does not get considered?
For example, if the numbers were 1 2 2 2 3 3. Then Alice won because there are three 2s. But what if Alice chose 2 and then Bob chose 3? In the operations mentioned, we can choose a[i] >= what the opponent chose.
Why is it that Bob only chooses a value equal to Alice's?
Alice chose 2, then Bob chose 3, then again Alice chose 3. See?
justice for bob xD.
if every number occurs an even number of times, then whatever number Alice chooses, Bob can mimic her and choose it again
if there is a number that appears an odd number of times, then Alice can choose it, forcing Bob to choose a number that occurs an even number of times; now Alice can mimic Bob's choices and win
Yeah but why should Bob even mimic. I mean its not even mentioned anywhere that Bob plays optimally. There DOES exist a winning strategy for Alice
well, there does not exist a winning strategy for Alice, if she cannot win using this strategy somehow.
let's say that bob won't play optimally, but he still can mimic Alice choices, right ?
while there is a chance to Bob to win in Alice's winning strategy, then it won't be considered as a winning strategy
I don't know why would Bob not to play optimally btw :)
Yeah I mean fair enough to assume that, but a special case IS a possible one too :)
exactly. this thought kept me from trying what actually the question wanted. maybe i'm thinking too much
just another case 1 2 2 2 3.
bob could have won in this but alice will win lmao!!
I'm not trying to oppose anyone, just a curious doubt!!
No bob can't win cause alice will pick 3 directly.
What about this approach for E:
Query the centroid. If you get 1, solve recursively with the centroid as root. If you get 0, remove the subtree of the centroid, and solve recursively from the parent of the current node (because the mole could move up from where you are). You can also remove all current leaves (don't know if necessary).
Won't this use approx log(n) queries?
Consider a star graph, you never make progress
Hmm true. Then I got AC by having bad centroid code
I solved E by deleting half of the tree with each query, so there are $$$O(\log(n))$$$ queries.
In each query, find the subtree with size closest to half of the current tree: from the current root, go to the subtree with maximum size while the current subtree size is greater than a half. Query this subtree.
Response 1. Set this subtree as a current tree.
Response 0. Delete this subtree and all leaves from the current tree. Then add the parent of the root to the tree. This can be done in $$$O(n)$$$, so the total complexity is $$$O(n\log(n))$$$.
CODE
Yes I realized this ended up being my solution, but in my head I thought of the subtree you find as the centroid. I wonder what kind of bound they potentially could have set for the number of queries.
I think the problem is unsolvable in less than $$$O(\sqrt{n})$$$ queries. Consider the tree with the root having $$$\sqrt{n}$$$ chains of length $$$\sqrt{n} + 1$$$ attached to it. You have to at least know in which chain the mole is in, so you have to query each chain at least once, since the mole won't get to the root in $$$\sqrt{n}$$$ queries.
this code works in sqrt(n) queries in the worst case
this case is 4901 nodes
adj[1]={2,3,...,71}
adj[i]={i-70,i+70} for all i in [72,4831]
adj[i]={i-70} for all in [4832,4901]
and at each query you ask for a child between 2 and 71 and the response is 0 at the 70'th query you get the response
you guess the correct answer in the some testcases in the 25-th test at 70 queries
My solution 271646773 for E passes tests in $$$\le 100$$$ queries but I don't know why. I would appreciate it if anyone could prove some upper bound on the number of queries or provide a counter-example.
The idea is to greedily choose a node that results in the minimum number of candidates in the worst case. Specifically, querying $$$x$$$ results in size[x] candidates if the answer is $$$1$$$, and (size[0] - leaf[0]) - (size[x] - leaf[x]) candidates if the answer is $$$0$$$. Here size[x] is the number of candidates in the subtree of $$$x$$$ and leaf[x] is the number of leaf candidates in the subtree of $$$x$$$. It's easy to compute these numbers for all $$$x$$$ with a simple dfs, after which I pick $$$x$$$ with the minimum value of max(size[x], (size[0] - leaf[0]) - (size[x] - leaf[x])). Finally, I repeat this process until only one candidate is remaining.
Lemma 1: if B <= size[x] <= n - B for some $$$B$$$ then querying $$$x$$$ removes at least $$$B$$$ candidates, where $$$n$$$ is the current number of candidates.
Lemma 2: if size[x] < B or size[x] > n - B for all $$$x$$$ then there exists a node $$$p$$$ with size[p] > n - B and size[c] < B for every child $$$c$$$ of $$$p$$$. To prove it, note that the subtree size of the root is $$$n > n - B$$$ and the subtree size of any leaf is $$$1 < B$$$, so there must be a transition point in between.
Lemma 3: if size[x] < B or size[x] > n - B for all $$$x$$$ then the tree has at least $$$\lceil\frac{n}{B}-1\rceil$$$ leaves. To prove it, consider node $$$p$$$ from lemma 2. It has at least $$$\frac{n-B}{B-1}$$$ children, because its subtree size is large but all children have small subtree sizes, so there must be quite a few of them. $$$\frac{n-B}{B-1} > \frac{n-B}{B} = \frac{n}{B} - 1$$$. All numbers are integers so we can take $$$\lceil \cdot \rceil$$$ of the last expression.
Lemma 4: if there are at least $$$C$$$ leaves then querying one of them removes at least $$$C$$$ candidates. If the response is $$$1$$$ then we found the mole, and if the response is $$$0$$$ then all $$$C$$$ leaves are no longer valid candidates.
Choosing $$$B = \sqrt{n}$$$ means that we remove roughly $$$\sqrt{n}$$$ nodes per query (where $$$n$$$ is the current number of candidates), and hence the greedy process results in $$$O(\sqrt{n})$$$ queries overall.
Can anyone explain me how to solve B plzz :(, i might have lost my mind after today's contest
x is the first time we have the max prefix sum from the start, and y is the first time we have the max suffix sum from the end. So after x and before y, keep -1 to ensure the max sum won't increase; if you make the sum 0 or -1 for the indices which are not in [y, x], then you're done.
Oh thanks now i got it
https://mirror.codeforces.com/blog/entry/131716?#comment-1173135
look onto my submission for B
Personal Editorial that focuses mostly on intuition
A: Basically, Alice can always pick the largest element a_i such that count[a_i] % 2 == 1. No matter what Bob picks, Alice still wins. If no such element exists, Alice loses. This was a pretty hard Div 2 A in my opinion.
B: I started by thinking that we want to limit the maximum sum, or in other words we don't want the prefix sum to get any larger after X or the suffix sum to get any larger before Y. The obvious construction is [-1,-1,...,y,1,1,...,x,-1,-1,...]. The problem is that it might be the case that the prefix sum of -1 (the first element) is larger than the prefix sum of the first x elements (the same might be true for the suffix sum). To avoid that, we notice that we can alternate between -1 and 1 at the ends. The construction is as follows: [...,-1,1,-1,Y,1,1,...,1,1,X,-1,1,-1,...]. This solves both problems.
C: We try testing the operation on the array to see what happens. The two key observations are that the first operation makes the array increasing, and the second operation shifts the array to the right by 1. The solution then becomes pretty much just math. We use the operation once, and then we use math to calculate the answer.
D: I didn't have time to write the AC code during the contest, but the approach is actually very simple — too simple in my opinion for a Div 2 D. The key observation is that if a[i] is large, the only operation that makes sense is to fill the whole row: if it takes more than two 2x2 squares to fill a row with white, we might as well just fill both the i-th and the (i+1)-th rows with white. We also notice that there are two cases where using 2x2 squares is beneficial.
Case 1: a[i] <= 2 and a[i+1] <= 2. Case 2: a[i] <= 2 and there is an even number of rows with 3 <= a[i] <= 4 followed by a row with a[i] <= 2. In both cases our answer goes up by one.
Why the negative contribution? I'm genuinely curious. I'm gonna write them either way because it helps me get better at explaining things, an important life and interview skill, and also because it might help some people.
I genuinely thank u for sharing the way you think. I am bad at thinking through constructive problems and your comment really helps me build a certain way to think. Keep contributing! Don't let these downvotes upset u.
In problem B, I actually came up with only [-1,-1,...,y,1,1,...,x,-1,-1,...] and couldn't think further. Btw, now I understand completely, thanks :)
For problem B, if the test is 5 2 1,
is [ 1 1 -1 -1 -1 ] considered to be a wrong answer?
yes, because the maximum suffix position of the array that you have given is 5
Yes, but this test case satisfies the condition for maximum subarray sum, doesn't it?
Thanks it was annoying me
For F solution2, I think the pseudocode should have 2a[mxp] <= instead of 2a[mxp] >=.
I am here to provide yet another solution to E whose query amount I cannot prove.
If you choose a node and the answer is 0, you can ignore that node, its subtree and, after that, all the leaves. If the answer is 1, then in the end the mole will be either in its subtree or on the path to the root, so you can remove all the other nodes.
This idea leads to searching for a centroid-like node that reduces the tree as much as possible. Intuitively, there should always be a node that reduces the tree to about $$$\frac12$$$ of the original size.
I think it comes up to $$$O(\log N)$$$ queries, and $$$O(N)$$$ time if implemented correctly. (edit: After checking other comments I can see why $$$O(\log N)$$$ queries is impossible)
Yay, I had the same solution =)
But I added an additional condition: if you can't find the centroid — 'the node with the smallest subtree size that contains at least half of the remaining nodes of the current node's subtree and is not equal to the current node' — I chose the biggest child of the current node.
current node — the last node that contained the answer in its subtree.
E2 with $$$100$$$ queries
You can cut down (at least) $$$101-i$$$ vertices in the $$$i$$$-th query. Then, $$$\sum^{100}_{k=1} k = 5050 \ge n$$$.
You can cut down a subtree with the depth of $$$100-i$$$ in the $$$i$$$-th query.
Choose a subtree with depth $$$100-i$$$ (at least $$$101-i$$$ vertices in there) in the $$$i$$$-th query. If you found one, ask the same root until not found. If not, cut down the subtree. If there are no such trees, you must choose the root and ask it. Finally, the mole reaches the root before the queries are used up.
How few queries can we get down to?
i solved the problem in at most 70 queries (sqrt(n))
if we denote by size[node] the number of possible locations in the subtree of "node", and by size_valid the total number of possible locations, then we ask about the node that minimizes abs(size[node] - size_valid/2)
in the worst case the children of the lca of the possible nodes all have the same size size[node]; in the worst case size[node] will be sqrt(size_valid) for sqrt(size_valid) nodes, and size_valid after each query will decrease by at least 2*sqrt(size_valid)-1 (sqrt(size_valid) for the subtree of the chosen node and sqrt(size_valid)-1 for the nodes that don't have any child that can be a possible location; otherwise it will decrease by x+size_valid/x-1 by the same logic). so if n can be solved in sqrt(n) queries, then n-2*sqrt(n)+1 can be solved in sqrt(n-2*sqrt(n)+1)=sqrt(n)-1, so for n=5000 only 70 queries can be enough
How do you do this? If the answer to a query is $$$1$$$ then the mole doesn't move
sorry, please consider subtree as strictly subtree
UPD: I was wrong; the fixed solution is in the reply tree below.
Could you explain in detail that how we can 'ask the same root until not found'? Thanks! My question is the mole doesn't just stop at the root you pick, so we have to use binary search for the real position.
While asking the same node, if the result changes from 1 to 0, the mole will be in the parent of the node, so answer there.
But the mole doesn't move if the result is 1. So how can the result become 0 if we keep asking the same node?
Sorry, I answered as the inverted version (originally the outcome was inverted, but during testing the outcome is the current style).
If the answer is 0, cutting down the subtree is the same. (In this step, we can also cut all the leaves.) If the answer is 1, we now have a tree of depth at most $$$100-i$$$, rooted at $$$v$$$.
Now, ask the sons in order of decreasing depth. If we receive 1, do the same thing. Note that if we receive 0, all leaves disappear, so we shouldn't care about small subtrees, and we must look at at most $$$O(\sqrt{n})$$$ sons.
Finally, if we get lost (no such subtree found), we can identify the path the mole is on (from $$$v$$$ to the parents), so do binary search.
I think this fits in $$$100$$$ queries, but sorry if I'm wrong.
I'm still traumatized from A
Could anyone explain why 271582281 fails on the 3rd test case?
Never mind, I got it.
I am really confused. I have used the even and odd approaches. If the n is odd then it is clear that Alice will win.
example: 1 2 3 4 5 5 6 A B A B A B A
If n is even, then there would be 2 cases:
Case 1: The frequency of the maximum number is odd. example: 1 2 3 5 5 5 A B A Hence, Alice will win.
Case 2: The frequency of the maximum number is even. example: 1 2 3 4 5 5 A B Hence, Alice will lose.
This is the approach I applied but I got the wrong answer. Could you guys please tell me, what is wrong with my approach?
Your reasoning for case 1 is correct.
Your reasoning is faulty in case 2.
Take the test case:
1 2 3 3 3 3
You would say that Alice would lose because N is even and the frequency of the maximum element is even. However, if Alice chooses 2 in her first move she can win because then Bob goes first and they alternate picking 3s.
Alice loses iff the frequency of all elements is even because as explained in the editorial Bob can mirror all her moves.
In any other case, if there is at least one element with odd frequency, Alice can force Bob into this situation by selecting the largest such element as her first move.
Thank you!
I should have practiced more.
orz round
C, D is brilliant!
Got baited by A, thought that only the parity of frequency of maximum element mattered.
Can anyone please explain why this code 271651820 for problem C gives the wrong answer? I tried to do it with one operation; after this, if the mad value appears more than once, it will shift right in further operations; otherwise, if the mad value appears only once, it will be added to the answer once.
If a mad value appears only once, it will not only be added to the answer once but will also turn into its neighbor on the left during the subsequent operations.
hey, i'm facing the same problem as him, can you provide a testcase for this "mad value appears only once" case please? I didn't understand.
Try this testcase : 4 3 1 1 1 3 4 5
yeah, i got it. Thank you very much!
I have another solution for E1. my solution in chinese :)
In problem F, solution 1:
shouldn't it be:
Is the extra "2" a typo?
Can you please explain this formula? How can we transform: a(l) + a(l + 1) + .. + a(r) > 2 * max(a[l..r]) (original condition to form a polygon I guess?) to that? Thanks
I am so confused about this: is it $$$\min(a_l-1,a_r+1)$$$ and not $$$\min(a_{l-1},a_{r+1})$$$? The author writes the latter. However, I still think both of them are confusing. I think $$$(a_l+\cdots +a_r)>2\max(a_l,\cdots,a_r)$$$ works.
The condition in the editorial is necessary but not sufficient. However, since there are only $$$\log V$$$ "key points", it is quite enough for us to use.
I think so. The necessary and sufficient condition for judging whether $$$\{a_1,\ldots,a_n\}$$$ is a "polygonal segment" is $$$\forall 1\le i\le n, a_i< \left( \sum_{j=1}^{n}a_j \right) -a_i$$$. We can obtain the necessary and sufficient condition for the "polygonal segment" $$$[l, r]$$$ to not be further expanded as $$$\left[\left(l=1\right) \vee \left(a_{l-1}\ge \sum_{i=l}^{r} a_i\right)\right] \wedge \left[\left(r=n\right) \vee \left(a_{r+1}\ge \sum_{i=l}^{r} a_i\right)\right]$$$.
In problem B, I don't understand why the first construction doesn't work. i.e. 1 in the common range, and -1 in the remaining positions.
The condition was simply that the prefix sum should be maximum at position x and not any position greater than x, and the suffix sum should be maximum at position y and not any position smaller than y. This construction satisfies it perfectly. What is the need for alternating -1 and 1?
Ah nevermind, the sum might not ever become positive, in which case the condition isn't satisfied.
I need help in C, What am i missing 271697251
done
can someone explain the fun fact on C?
Maybe it's referring to doing the naive 'mad' simulation twice, and then just summing up once.
can someone please tell me why this code is giving TLE for problem C? link
How to solve B if
I think I've solved E in 100 queries, here's my submission: submission.
Amazing Fun Fact about Task3
Can anyone explain solution 1 of F, which stores some information at each node of a segment tree? I don't really understand what we have to store, and how we use it to get the answer.
Hello! I just finished upsolving it, here's my code: 274583614. Search for prefix_t to skip the template.
Let's call an element a prefix peak if it is greater than the corresponding prefix sum. Define suffix peaks similarly. Pad the subarray with positive infinities on both ends; they are also peaks. Store a tuple (prefix length, sum, maximum, peak) for each peak. Let's call a vector of such tuples prefix info. Define suffix info similarly.
You need a few operations:
extend prefix info of the left segment with prefix info of the right segment to form prefix info of the combined segment;
extend suffix info of the right segment with suffix info of the left segment to form suffix info of the combined segment (same as 1);
combine suffix info of the left segment with prefix info of the right segment to update the answer.
Operation 3 can be tricky to implement in linear time ($$$log^2$$$ overall) with two pointers, so I implemented it naively in quadratic time ($$$\log^3$$$ overall) to stress test but it passed comfortably in 3 seconds :| Disappointing preparation from the setter considering that most solutions are within a factor of 4 of my execution time whereas $$$\log = 40$$$.
For Problem B:
The statement says: "It can be proven that such an array always exists under the given conditions." How do we prove that the answer always exists? Can someone help me understand this? Thanks in advance.
Can someone please explain the alternate solution for A using dp? Also, what does the author mean when he writes =∼? Thanks!
For dp[i], Alice is faced with the array a[1:i], where a is non-increasing.
Hope I didn't complicate it too much. And, =~ just means we are taking negation of RHS, and assigning it to the LHS.
Brilliant!
can someone please answer this... for problem D, in Shayan's video solution, dp[i][j], j={0,1,2,3}: what is the j here? Is it {00,01,10,11}, where each boolean value tells us about the use of the 1st and 2nd 2x2 block in the i-th row?
I was a bit confused about C: I did one operation and used the same contribution idea (n-i)*a[i]. I am not sure why it fails there but gets accepted if we do 2 operations, while one operation already suffices to make the array non-decreasing.
can anyone help me to understand? Thank You
While the array may be increasing, some of the elements might have a frequency of only 1, which means they will be erased in subsequent operations. 2 operations ensure that the count of every element is >= 2.
can you give an example.. i tried hard for finding a case where this assumption fails
Sure.
[2,3,1,1,2,3]
With 1 operation it is [0,0,0,1,2,3]
with 2 operations it is [0,0,0,0,0,0].
Thank you mate. I got it now: only after the second operation does it become a right shift for the next iterations, until everything becomes zero. I missed that and only considered that one operation might be enough.
What do we mean by "left most row" in the editorial of problem D? Rows run horizontally, shouldn't they be top or bottom? Can anyone help, I am confused.
"left most row" --> means absolutely left of any row (or from beginning of any row) example: if [a1,a2,.....an] representing a row , then a1 is leftmost, a2 is next leftmost,....
For the bonus in B, does the array exist for the y <= x case or not??
brainlet solution on problem D. 271773223 no dp, no extra variable. proof: AC
In problem D, on which testcase is my code failing? Please help! 271787469
C > B > D gg contest the most stupid D problem of all contest
271787469 Can you please help me in which testcase my code is failing :(
Oh thanks man :)
can someone explain the dp transition in the dp solution for D? plz
I have a few doubts about solution 1 of the editorial for E. Firstly, why do we choose 50? It seems that we should be taking $$$\sqrt{n}$$$, so 70 should be ideal, right? Is there something special about 50? I saw another comment below which says that using a number slightly smaller than $$$\sqrt{n}$$$ is better, and I didn't understand the reason behind that either.
Additionally, in the 3rd case, when there's no vertex with $$$maxdep_v - dep_v = 50$$$, we only need at most 50 moves, right? Why does it say 100 moves then?
I figured taking sqrt(n) is ideal for solving E2, because you'll need N / 70 + 70 + logN operations (though due to weird constants you still need to optimize the binary search by cutting down to r = mid - 2). Then I believe it is optimal to choose K=50 as a constant only in E1, because choosing 70 exceeds the allowed operations: N / 70 + 2*70 > 200
As to the 3rd case, yeah, you only need up to 50 moves after the optimization
When we drive away this mole once, we need to immediately check if it is still in the subtree v.
So it is $$$50 \times 2$$$ moves. Can someone explain this part of sol2 for E?
Then the tree can be split into ≤70 chains.
We query any leaf $$$70$$$ times, then delete all nodes $$$x$$$ such that $$$maxdep_x-dep_x+1 \le 70$$$. So if a chain is still alive after the deletion, there must be at least 70 deleted nodes underneath it.
An approach for bonus of E
Consider all the nodes where the mole can be. These lie on a single path from a leaf to the initial node of the mole. If we can find this path then we can find the mole in O(log n) operations. To find this path consider the following: 1. We know that the root node (node 1) lies on this path. 2. For each node that lies on this path, exactly one of its children lies on the path.
Let numJumps be the number of operations where jury has returned 0, that is number of times mole has jumped up.
Now start tracing the path from the root. Let's call the set of all children where maxDepth - depth >= numJumps candidates for our path extension. If candidates is empty, we are at the end of our path. Otherwise, while candidates has more than one node, remove the node with minimum maxDepth - depth value and query it. If the judge returns 1, we move to this node and proceed tracing our path recursively. If the judge returns 0, we discard this node, increment numJumps and remove all nodes that have maxDepth - depth < numJumps.
At the end if we are left with exactly one node in candidates we move to this node.
To calculate the number of operations required, notice that first time we query at least 1 node is discarded, second time atleast 2, and so on.
If the judge returns 0, then the chain is discarded and numJumps increases, meaning the next chain to be discarded must have maxDepth - depth at least 1 more than the current node's maxDepth - depth
If the judge returns 1, notice that there must be at least one more node with maxDepth - depth at least equal to this node's maxDepth - depth, meaning at least half of the nodes under consideration were discarded, considering single chains with no branching.
Thus the nodes under consideration either reduce as 1, 2, 3, ... or halve. Either way, after k moves at least k * (k + 1) / 2 nodes will be removed from consideration, meaning after at most 71 moves we will have completed our path.
Binary search on the path will take at most log n moves, that is, around 13.
Thus we take at most 71 + 13 = 84 moves.
Submission: Submission for the above approach
For solution 2 of F, I'm pretty sure you can store a max segtree over the last time each position was updated, and in the map also store when the range was added. Then when you visit a range, just check in log N whether you need to recompute it, and recompute if you have to. I think this doesn't increase the complexity because an interval tree is already n log^2 n to update...?
This solution of mine is getting WA for problem D. Can anyone point out the reason? 271964225
I'll explain my approach. I followed a greedy method. I initialized the minimum number of operations to 'n', since that is always achievable.
Now, iterating over the rows, I check the number of black cells in each row: if it is > 4, continue; else if it is == 0, the number of operations required decreases by one.
My approach is that we can decrease the operations required only when the count of black cells encountered is <= 2 && != 0. If so, and the next row is <= 2 && != 0, decrease the operations required by one and move to i+2; if not, then the minimum of the next 2 rows should be > 2 && <= 4 and the row after that (i+3 here) should be >= 2 && != 0; then we can decrease the operations required by one, as we can use only the 1st option in these rows to make them all white, and move to i+4.
Uneven problems
I had a general question about interactive problems: I've seen a lot of problems mention whether the interactor is adaptive or not, and I wanted to understand whether this makes any difference to the solution. In this contest, for example, E1 and E2 said that the interactor is not adaptive, but from what I understand, even if it was adaptive, that wouldn't have changed the solution in any way, right? What's the purpose of mentioning this then?
Can anyone explain, please, why we can't solve problem C by simply implementing the algorithm?
My observation was that the number of 0's after each operation grows exponentially, so there will be no more than log(n) iterations. Recalculating the next iteration of the array is O(n), times the std::map operation to find the most frequent element, which is O(log(n)); in total this results in O(n*log^2(n)) operations, which should pass the TL.
Also for some reason my implementation gives WA 272048767. Why so?
I read the problem wrong. Never mind.
can anyone tell me why this code doesn't work for E1? https://mirror.codeforces.com/contest/1990/submission/273122031
I can't tell whether it is an implementation issue, or my algorithm is incorrect. Thanks (see find() function for the main logic)
can someone tell me what's wrong with this solution for C?
273115595
For the problem 1990B - Array Craft, I am setting all the elements in the segment $$$a[y,...,x]$$$ equal to $$$1$$$ and all the remaining elements as $$$-1$$$. In this way, the $$$presum_{y-1}$$$ is $$$negative$$$ as well as the $$$postsum_{x+1}$$$ is also $$$negative$$$. Therefore, the $$$\textbf{maximum presum}$$$ will be at the index $$$x$$$, because all the elements after it are $$$-1$$$, which will decrease the presum.
Similarly, the $$$\textbf{maximum postsum}$$$ will be at the index $$$y$$$.
wuhudsm What is wrong with this approach?
For problem E, I just query the subtree that has the most possible nodes less than half of the amount every time. https://mirror.codeforces.com/contest/1990/submission/273813620
It was really helpful thnx.
My submission for problem C shows TLE on TC 15. I basically tried to simulate the process given in the problem statement using maps, but am unable to come up with a mathematical equation in order to avoid running an unnecessary "while(mp.size()!=0)". Here's my submission: 274692750. Can someone please help me out? TIA
can someone please explain where my solution for C went wrong? My_sollution
what I did was: after the first pass, if there are some consecutive numbers in the array with the same value, of length greater than 1, they are going to stay in the array until they get completely shifted out of it.
so I added their value to the sum as length*(number of rounds this length will be sustained-1)*value + ((length)*(length+1)/2)*value (because after reaching the end the length will start to decrease by 1).
and for length == 1 I added the value to the sum only once, because it will be removed after the next pass.
please explain solution one
for the submission bait: what if I take the xor of all the elements when n is even, and if the result of the XOR is not equal to 0 then my answer is "yes"; else, in all cases my answer will be "no". What is the issue with this approach? (I am getting wrong answer on the 3rd testcase) 278515326
can anyone tell me why the prefix sum up to x should be > 0 in B
I solved E2 in q=log(n) queries (optimally) and O(n) time complexity. The idea is to find a node that divides the nodes of the tree in two. I cannot prove that the worst number of queries is less than 160, though. Can someone hack it or explain its mechanisms in a mathematical/logical manner? Thank you very much!