Thank you for participating, and we hope you enjoyed this round :)
D2A Submission is All You Need
Do you know the source of the title?
Assuming that choosing $$$\text{mex(S')}$$$ is optimal, what conditions does $$$S'$$$ need to satisfy?
Are there duplicate numbers in $$$S'$$$? Is there a number greater than $$$\text{mex(S')}$$$ in $$$S'$$$?
Assuming that choosing $$$\text{mex(S')}$$$ is optimal, there are no duplicate numbers in $$$S'$$$, and there is no number greater than $$$\text{mex(S')}$$$ in $$$S'$$$. Otherwise, we can easily construct a better answer.
Consider $$$S'=\{0,1,\ldots,x\}$$$. We can see $$$\text{mex}(S') \gt \text{sum}(S')$$$ only when $$$x$$$ is $$$0$$$ or $$$1$$$, and $$$\text{mex}(\{0,1\})=\text{mex}(\{0\})+\text{sum}(\{1\})$$$. Thus, it suffices to apply the $$$\text{mex}$$$ operation to each single $$$\{0\}$$$.
The answer is $$$\text{sum}(S)$$$ plus the number of $$$0$$$s in $$$S$$$.
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, tmp, ans = 0;
cin >> n;
for(int i=0; i<n; i++){cin >> tmp; ans += tmp + (tmp == 0);}
cout << ans << '\n';
}
int main(){
ios::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while(t--) solve();
}
D2B Pathless
Casework. Consider two easy cases first: $$$\Sigma a \gt s$$$ and $$$\Sigma a=s$$$.
What if there is an index $$$i$$$ such that $$$a_i+a_{i+1}=1$$$?
To "block" $$$a_i+a_{i+1}=1$$$, what should Bob do?
Consider cases of $$$\Sigma a=s+1$$$ and $$$\Sigma a \gt s+1$$$.
The minimum path sum is the sum of the simple path from $$$1$$$ to $$$n$$$ (i.e., $$$1 \rightarrow 2 \rightarrow \dots \rightarrow n$$$), denoted as $$$\text{base_sum} = \sum_{i=1}^{n} a_i$$$.
The path can include back-and-forth movements (e.g., from $$$i$$$ to $$$i+1$$$ and back to $$$i$$$), which increases the path length and the path sum. Specifically, each back-and-forth movement between adjacent positions $$$(i, i+1)$$$ adds $$$a_i + a_{i+1}$$$ to the path sum.
Thus, the total path sum can be expressed as $$$\text{base_sum} + d$$$, where $$$d$$$ is the additional sum contributed by zero or more back-and-forth movements. Here, $$$d$$$ can be represented as a non-negative integer linear combination of the sums of adjacent pairs.
Casework:
If $$$s \lt \text{base_sum}$$$, no path sum can equal $$$s$$$ (since the base sum already exceeds $$$s$$$, and any additional movements will further increase the sum).
If $$$s = \text{base_sum}$$$, the simple path ($$$1 \rightarrow 2 \rightarrow \dots \rightarrow n$$$) already satisfies the condition, so Alice cannot be stopped.
If $$$s \gt \text{base_sum}$$$, let $$$d = s - \text{base_sum}$$$:
- If $$$d = 1$$$, we can rearrange the array to avoid adjacent pairs summing to $$$1$$$ (i.e., avoid $$$0$$$ and $$$1$$$ being adjacent). Specifically, place all $$$0$$$s first, followed by all $$$2$$$s, and then all $$$1$$$s (e.g., $$$[0, 0, \dots, 0, 2, 2, \dots, 2, 1, 1, \dots, 1]$$$). This ensures no adjacent pair sums to $$$1$$$, making $$$d = 1$$$ unachievable.
- If $$$d \neq 1$$$, no rearrangement can prevent Alice from achieving the path sum $$$s$$$ (since non-negative combinations of adjacent-pair sums can produce every value $$$d \geq 2$$$).
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, k, tmp;
cin >> n >> k;
vector<int> a(3);
for(int i=0; i<n; i++){cin >> tmp; a[tmp]++;}
if(k == a[1] + 2*a[2] || k >= a[1] + 2*a[2] + 2){cout << -1 << '\n'; return;}
for(int i=0; i<a[0]; i++) cout << 0 << ' ';
for(int i=0; i<a[2]; i++) cout << 2 << ' ';
for(int i=0; i<a[1]; i++) cout << 1 << ' ';
cout << '\n';
}
int main(){
ios::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while(t--) solve();
}
D1A Double Perspective
Assuming that choosing $$$S$$$ is optimal, what is the value of $$$g(S)$$$?
In the case of $$$g(S)=0$$$, make $$$f(S)$$$ equal to the length of the union of all segments.
Assuming that choosing $$$S$$$ is optimal, $$$g(S)$$$ must be $$$0$$$ because removing an edge on the cycle will not change $$$f(S)$$$.
In the case of $$$g(S)=0$$$, we need to make $$$f(S)$$$ equal to the length of the union of all segments. There are many approaches.
The simplest $$$O(n^2)$$$ approach is to remove all segments contained in other segments. Afterwards, it can be proven that there are no cycles of length greater than or equal to $$$3$$$ in the graph. If there is a cycle, denote it as $$$x_1 \rightarrow x_2 \rightarrow x_3 \rightarrow \ldots \rightarrow x_1$$$; we can always find $$$x_{i-1} \lt x_i \gt x_{i+1}$$$, where either $$$[x_{i-1},x_i] \subseteq [x_i,x_{i+1}]$$$ or $$$[x_i,x_{i+1}] \subseteq [x_{i-1},x_i]$$$. At this point, we have derived a contradiction, so there are no cycles of length greater than or equal to $$$3$$$ in the graph.
Another approach is to use DSU to maintain a spanning forest.
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
#include <unordered_map>
#define fastio ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1000010;
int T,n;
int a[N],b[N];
void init()
{
cin>>n;
for(int i=1;i<=n;i++)
cin>>a[i]>>b[i];
}
void solve()
{
vector<int> tag(n+4,1),ans;
for(int i=1;i<=n;i++)
{
for(int j=1;j<=n;j++)
{
if(i==j) continue;
if(a[j]<=a[i]&&b[i]<=b[j]) tag[i]=0;
}
if(tag[i]) ans.push_back(i);
}
cout<<ans.size()<<"\n";
for(int i=0;i<ans.size();i++)
cout<<ans[i]<<(i+1==ans.size()?"\n":" ");
}
int main()
{
fastio;
cin>>T;
while(T--)
{
init();
solve();
}
return 0;
}
D1B Stay or Mirror
Consider $$$p_i=1$$$. If $$$a_i=1$$$, what are inversions containing $$$i$$$? If $$$a_i=2n-1$$$, what are inversions containing $$$i$$$?
Attempt to generalize the conclusion to any pair $$$(i, j)$$$.
Consider $$$p_i=1$$$ and the inversions containing $$$i$$$. If $$$a_i=1$$$, it is the minimum value, and the inversions containing $$$i$$$ are $$$(1, i), (2, i), \ldots, (i-1, i)$$$; if $$$a_i=2n-1$$$, it is the maximum value, and the inversions containing $$$i$$$ are $$$(i, i+1), \ldots, (i, n)$$$. The key observation is that the number of inversions containing $$$i$$$ depends only on $$$a_i$$$, and is completely unrelated to the other elements.
At this point, the problem has essentially been solved. We can greedily add $$$\min(i-1,n-i)$$$ to the answer and remove the value $$$1$$$ from $$$p$$$; the remaining part is an identical subproblem of size $$$n-1$$$.
It can be implemented as follows: for each index $$$i$$$, calculate the number of elements to the left that are bigger than $$$p_i$$$ (note it as $$$lbigger$$$), and the number of elements to the right that are bigger than $$$p_i$$$ (note it as $$$rbigger$$$). Then greedily add $$$\min(lbigger,rbigger)$$$ to the answer.
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, ans = 0;
cin >> n;
vector<int> p(n);
for(int i=0; i<n; i++) cin >> p[i];
for(int i=0; i<n; i++){
int left = 0, right = 0;
for(int j=0; j<i; j++) if(p[j] > p[i]) left++;
for(int j=i+1; j<n; j++) if(p[j] > p[i]) right++;
ans += min(left, right);
}
cout << ans << '\n';
}
int main(){
ios::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while(t--) solve();
}
D1C1 Interactive RBS (Easy Version), D1C2 Interactive RBS (Medium Version) and D1C3 Interactive RBS (Hard Version)
Try to perform a binary search to find a $$$\texttt{'('}$$$ and a $$$\texttt{')'}$$$ using $$$O(\log n)$$$ queries. It is best to complete this within $$$10$$$ queries (to pass the hard version).
Consider the easy version. The constraint implies that each query must determine $$$2$$$ characters.
Construct a string $$$x$$$, insert $$$s_i$$$ and $$$s_j$$$ into certain positions of $$$x$$$ to obtain string $$$y$$$, so that $$$f(y)$$$ can produce $$$4$$$ different values.
Consider the medium version. How can we effectively utilize the condition $$$k \le 1000$$$?
Consider binary encoding.
Consider the hard version. The number of substrings in a string of length $$$l$$$ is $$$O(l ^ 2)$$$. How to utilize this?
At first, we need to find $$$s_x=\texttt{'('}$$$ and $$$s_y=\texttt{')'}$$$. Do a binary search to find the smallest $$$i$$$ such that $$$f(s_1s_2...s_i) \gt 0$$$. Then we get $$$s_{i-1}=\texttt{'('}$$$ and $$$s_i=\texttt{')'}$$$. A corner case is $$$f(s)=0$$$; in this case, $$$s_n=\texttt{'('}$$$ and $$$s_1=\texttt{')'}$$$.
Ask $$$f(\texttt{"s1)s2)()"})$$$:
If $$$f(\texttt{"s1)s2)()"})=6$$$, $$$s_1=$$$ $$$\texttt{'('}$$$ and $$$s_2=$$$ $$$\texttt{'('}$$$.
If $$$f(\texttt{"s1)s2)()"})=2$$$, $$$s_1=$$$ $$$\texttt{'('}$$$ and $$$s_2=$$$ $$$\texttt{')'}$$$.
If $$$f(\texttt{"s1)s2)()"})=3$$$, $$$s_1=$$$ $$$\texttt{')'}$$$ and $$$s_2=$$$ $$$\texttt{'('}$$$.
If $$$f(\texttt{"s1)s2)()"})=1$$$, $$$s_1=$$$ $$$\texttt{')'}$$$ and $$$s_2=$$$ $$$\texttt{')'}$$$.
In this way, we can determine two characters with one query.
Read the tutorial in easy version first.
Note $$$f_0$$$ as the $$$f$$$ value of the following string:
$$$\underbrace{()()\ldots()}_{128\text{ pairs}}$$$ $$$)$$$ $$$\underbrace{()()\ldots()}_{64\text{ pairs}}$$$ $$$)$$$ $$$\dots$$$ $$$)()$$$
Then ask for the $$$f$$$ value of the following string (note it as $$$f_1$$$):
$$$\underbrace{s_8)()\ldots()}_{128\text{ pairs}}$$$ $$$)$$$ $$$\underbrace{s_7)()\ldots()}_{64\text{ pairs}}$$$ $$$)$$$ $$$\dots$$$ $$$)s_1)$$$
By checking the binary bits of $$$f_0-f_1$$$, we can determine $$$8$$$ positions with one query. For example, if $$$f_0-f_1=(10000101)_2$$$, we can get $$$s_1=s_3=s_8=')'$$$.
Read the tutorial of the medium version first.
We can optimize
$$$\underbrace{s_i)()\ldots()}_{2^k\text{ pairs}}$$$
to
$$$\underbrace{()()\ldots()}_{2^{\lfloor \frac{k}{2} \rfloor}-1\text{ pairs}}$$$ $$$s_i)$$$ $$$\underbrace{()()\ldots()}_{2^{\lceil \frac{k}{2} \rceil}-1\text{ pairs}}$$$
With this optimization, we can determine $$$12$$$ locations in one query.
Another construction also works:
$$$\underbrace{s_1)s_1)\ldots s_1)}_{g(1)\text{ pairs}}$$$ $$$)$$$ $$$\underbrace{s_2)s_2)\ldots s_2)}_{g(2)\text{ pairs}}$$$ $$$)$$$ $$$\underbrace{s_3)s_3)\ldots s_3)}_{g(3)\text{ pairs}}$$$ $$$)$$$ $$$\ldots$$$ $$$\underbrace{s_{12})s_{12})\ldots s_{12})}_{g(12)\text{ pairs}}$$$
where $$$g(1)=1$$$ and $$$\frac{g(i+1) \cdot (g(i+1)+1)}{2} \gt \frac{g(1) \cdot (g(1)+1)}{2}+\frac{g(2) \cdot (g(2)+1)}{2}+\ldots+\frac{g(i) \cdot (g(i)+1)}{2}$$$.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef long double ld;
typedef pair<ll, ll> pll;
typedef pair<int, int> pii;
typedef vector<int> vi;
typedef vector<ll> vll;
const long long kk = 1000;
const long long ml = kk * kk;
const long long mod = ml * kk + 7;
const long long inf = ml * ml * ml + 7;
#define T(x, i) get<i>(x)
#define F first
#define S second
#define all(v) v.begin(), v.end()
#define forn(i, n) for (int i = 0; i < n; i++)
template<class T, class U> inline istream& operator>>(istream& str, pair<T, U> &p) { return str >> p.F >> p.S; }
template<class T, class U> inline ostream& operator<<(ostream& str, const pair<T, U> &p) { str << p.F << ' ' << p.S << ' '; return str; }
template<class T> inline istream& operator>>(istream& str, vector<T> &a) { for (auto &i : a) str >> i; return str; }
template<class T> inline ostream& operator<<(ostream& str, const vector<T> &a) { for (auto &i : a) str << i << ' '; return str; }
void flush() { cout << flush; }
void flushln() { cout << endl; }
template<class T> void read(T &x) { cin >> x; }
template<class T, class ...U> void read(T &x, U&... u) { read(x); read(u...); }
template<class T> void print(const T &x) { cout << x; }
template<class T, class ...U> void print(const T &x, const U&... u) { print(x); print(u...); }
template<class ...T> void println(const T&... u) { print(u..., '\n'); }
template<class T> void eprint(const T &x) { cerr << x; }
template<class T, class ...U> void eprint(const T &x, const U&... u) { eprint(x); eprint(u...); }
template<class ...T> void eprintln(const T&... u) { eprint(u..., '\n'); }
#ifdef DEBUG
mt19937 rng(1033);
#define var(x) cerr << #x << ": " << x << '\n';
#define range(a, b) cerr << #a << ", " << #b << ": "; for (auto _it = a; _it != b; ++_it) cerr << *_it << ' '; cerr << '\n';
#else
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
#define cerr if (false) cerr
#define var(x)
#define range(a, b)
#define eprintln if(false)eprintln
#define eprint if(false)eprint
#endif
int rnd(int mod) { return uniform_int_distribution<int>(0, mod - 1)(rng); }
int n;
map<vector<int>, int> mem;
int ask(vector<int> ind) {
if (!mem.count(ind)) {
cout << "? ";
cout << ind.size() << ' ';
for (auto i : ind)
cout << i + 1 << ' ';
cout << endl;
int res;
cin >> res;
mem[ind] = res;
}
return mem[ind];
}
bool same(vector<int> ind) {
if (ask(ind))
return false;
ind.push_back(ind[0]);
ind.erase(ind.begin());
if (ask(ind))
return false;
return true;
}
pii search(vector<int> ind) {
if (ind.size() == 2) {
if (ask(ind))
return {ind[0], ind[1]};
return {ind[1], ind[0]};
}
vector<int> inda, indb;
int sz = (ind.size() + 1) / 2;
for (int i = 0; i < sz; i++) {
inda.push_back(ind[i]);
indb.push_back(ind[ind.size() - 1 - i]);
}
if (!same(inda))
return search(inda);
if (!same(indb))
return search(indb);
inda.push_back(indb[0]);
return search(inda);
}
pii find_reg2() {
vector<int> ind(n);
iota(all(ind), 0);
shuffle(all(ind), rng);
return search(ind);
}
void solve() {
mem.clear();
auto [po, pc] = find_reg2();
vector<int> to_ask(n);
iota(all(to_ask), 0);
if (to_ask.size() & 1)
to_ask.push_back(0);
string ans = string(n, '.');
ans[po] = '(';
ans[pc] = ')';
for (int i = 0; i < to_ask.size(); i += 2) {
int a = to_ask[i], b = to_ask[i + 1];
vector<int> q = {po, pc, po, a, b, pc};
int res = ask(q);
// ()(ab) - x ab
// ()(()) - 4 ()
// ()((() - 2 ((
// ()())) - 3 ))
// ()()() - 6 )(
map<int, pair<char, char>> results = {
{4, {'(', ')'}},
{2, {'(', '('}},
{3, {')', ')'}},
{6, {')', '('}},
};
assert(results.count(res));
auto [resa, resb] = results[res];
ans[a] = resa;
ans[b] = resb;
}
cout << "! " << ans << endl;
}
signed main() {
#ifdef DEBUG
// freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
ios_base::sync_with_stdio(0); cin.tie(0);
cout << fixed << setprecision(20);
int t = 1;
cin >> t;
for (int i = 0; i < t; i++) {
cin >> n;
solve();
}
#ifdef DEBUG
cerr << "Runtime is: " << clock() * 1.0 / CLOCKS_PER_SEC << endl;
#endif
}
/*
()(ab)
()(()) - 4 ()
()((() - 2 ((
()())) - 3 ))
()()() - 6 )(
2
3
)((
2
()
*/
#include <vector>
#include <list>
#include <map>
#include <set>
#include <deque>
#include <queue>
#include <stack>
#include <bitset>
#include <algorithm>
#include <functional>
#include <numeric>
#include <utility>
#include <sstream>
#include <iostream>
#include <iomanip>
#include <cstdio>
#include <cmath>
#include <cstdlib>
#include <cctype>
#include <string>
#include <cstring>
#include <ctime>
#include <random>
#include <chrono>
using namespace std;
#define _int64 long long
#define mo 998244353
char tar[110000];
int cnt;
int doask(vector<int>& a)
{
int i,ret;
/*
{
int v,tmp,j;
string ss;
cnt++;
ss="";
for (i=0;i<a.size();i++)
{
ss.push_back(tar[a[i]]);
}
tmp=0;
for (i=0;i<ss.length();i++)
{
v=0;
for (j=i;j<ss.length();j++)
{
if (ss[j]=='(') v++;
else v--;
if (v<0) break;
if (v==0) tmp++;
}
}
return tmp;
}
*/
printf("? %d",(int)a.size());
for (i=0;i<a.size();i++)
printf(" %d",a[i]+1);
printf("\n");
fflush(stdout);
scanf("%d",&ret);
return ret;
}
char ans[1100];
void doanswer()
{
printf("! %s\n",ans);
cerr<<"T "<<tar<<endl;
cerr<<"cnt:"<<cnt<<endl;
if (string(ans)==string(tar))
{
cerr<<"good"<<endl;
}
else cerr<<"bad"<<endl;
fflush(stdout);
}
int main()
{
int i,j,k,n,l,t,m,x,y,o,b1,indl,indr,ll,rr,mid,tmp,v,st;
vector<int> a;
scanf("%d",&t);
for (l=0;l<t;l++)
{
scanf("%d",&n);
for (i=0;i<n;i++)
{
x=rand()%2;
if (x==0) tar[i]='(';
else tar[i]=')';
}
tar[n]='\0';
cnt=0;
st=0;ans[n]='\0';
indl=-1;
indr=-1;
// while (st<n)
// {
ll=st;rr=n;
while (rr-ll>1)
{
mid=(ll+rr)/2;
a.clear();
for (i=st;i<=mid;i++)
a.push_back(i);
tmp=doask(a);
if (tmp==0) ll=mid;
else rr=mid;
}
if (rr==n)
{
indr=st;
/*
if ((indl==-1)&&(indr==-1))
{
indr=st;
}
ll=st-1;rr=n;
while (rr-ll>1)
{
mid=(ll+rr)/2;
a.clear();
a.push_back(mid);
a.push_back(indr);
tmp=doask(a);
if (tmp==0) ll=mid;
else rr=mid;
}
for (i=st;i<=ll;i++)
ans[i]=')';
for (i=rr;i<n;i++)
ans[i]='(';
st=n;
break;
*/
}
else
{
indr=rr;
/*
v=rr;
ans[v]=')';
ll=st-1;rr=v-1;
while (rr-ll>1)
{
mid=(ll+rr)/2;
a.clear();
a.push_back(mid);
a.push_back(v);
tmp=doask(a);
if (tmp==0) ll=mid;
else rr=mid;
}
for (i=st;i<=ll;i++)
ans[i]=')';
for (i=rr;i<v;i++)
ans[i]='(';
st=v+1;
indl=v-1;
indr=v;
*/
}
// }
//cerr<<"indr:"<<indr<<endl;
for (i=0;i<n;i+=8)
{
a.clear();
for (j=0;j<8;j++)
{
if (i+j>=n) break;
for (k=0;k<(1<<j);k++)
{
a.push_back(i+j);
a.push_back(indr);
a.push_back(indr);
}
}
tmp=doask(a);
for (j=0;j<8;j++)
{
if (i+j>=n) break;
if (((1<<j)&tmp)!=0) ans[i+j]='(';
else ans[i+j]=')';
}
}
doanswer();
}
}
#include <vector>
#include <list>
#include <map>
#include <set>
#include <deque>
#include <queue>
#include <stack>
#include <bitset>
#include <algorithm>
#include <functional>
#include <numeric>
#include <utility>
#include <sstream>
#include <iostream>
#include <iomanip>
#include <cstdio>
#include <cmath>
#include <cstdlib>
#include <cctype>
#include <string>
#include <cstring>
#include <ctime>
#include <random>
#include <chrono>
using namespace std;
#define _int64 long long
#define mo 998244353
char tar[110000];
int cnt;
int doask(vector<int>& a)
{
int i,ret;
/*
{
int v,tmp,j;
string ss;
cnt++;
ss="";
for (i=0;i<a.size();i++)
{
ss.push_back(tar[a[i]]);
}
tmp=0;
for (i=0;i<ss.length();i++)
{
v=0;
for (j=i;j<ss.length();j++)
{
if (ss[j]=='(') v++;
else v--;
if (v<0) break;
if (v==0) tmp++;
}
}
return tmp;
}
*/
printf("? %d",(int)a.size());
for (i=0;i<a.size();i++)
printf(" %d",a[i]+1);
printf("\n");
fflush(stdout);
scanf("%d",&ret);
return ret;
}
char ans[1100];
void doanswer()
{
printf("! %s\n",ans);
cerr<<"T "<<tar<<endl;
cerr<<"cnt:"<<cnt<<endl;
if (string(ans)==string(tar))
{
cerr<<"good"<<endl;
}
else cerr<<"bad"<<endl;
fflush(stdout);
}
int main()
{
int i,j,k,n,l,t,m,x,y,o,b1,indl,indr,ll,rr,mid,tmp,v,st,sum,len;
vector<int> a;
vector<int> p;
sum=0;len=0;
for (i=1;i<=130;i++)
{
tmp=i*(i+1)/2;
if (tmp>sum)
{
p.push_back(i);
//cerr<<"i:"<<i<<endl;
sum+=tmp;
len+=i+i;
}
}
//cerr<<"p.sz:"<<p.size()<<" "<<len<<endl;
scanf("%d",&t);
for (l=0;l<t;l++)
{
scanf("%d",&n);
for (i=0;i<n;i++)
{
x=rand()%2;
if (x==0) tar[i]='(';
else tar[i]=')';
}
tar[n]='\0';
cnt=0;
st=0;ans[n]='\0';
indl=-1;
indr=-1;
// while (st<n)
// {
ll=st;rr=n;
while (rr-ll>1)
{
mid=(ll+rr)/2;
a.clear();
for (i=st;i<=mid;i++)
a.push_back(i);
tmp=doask(a);
if (tmp==0) ll=mid;
else rr=mid;
}
if (rr==n)
{
indr=st;
/*
if ((indl==-1)&&(indr==-1))
{
indr=st;
}
ll=st-1;rr=n;
while (rr-ll>1)
{
mid=(ll+rr)/2;
a.clear();
a.push_back(mid);
a.push_back(indr);
tmp=doask(a);
if (tmp==0) ll=mid;
else rr=mid;
}
for (i=st;i<=ll;i++)
ans[i]=')';
for (i=rr;i<n;i++)
ans[i]='(';
st=n;
break;
*/
}
else
{
indr=rr;
/*
v=rr;
ans[v]=')';
ll=st-1;rr=v-1;
while (rr-ll>1)
{
mid=(ll+rr)/2;
a.clear();
a.push_back(mid);
a.push_back(v);
tmp=doask(a);
if (tmp==0) ll=mid;
else rr=mid;
}
for (i=st;i<=ll;i++)
ans[i]=')';
for (i=rr;i<v;i++)
ans[i]='(';
st=v+1;
indl=v-1;
indr=v;
*/
}
// }
//cerr<<"indr:"<<indr<<endl;
for (i=0;i<n;i+=p.size())
{
a.clear();
for (j=0;j<p.size();j++)
{
if (i+j>=n) break;
for (k=0;k<p[j];k++)
{
a.push_back(i+j);
a.push_back(indr);
}
a.push_back(indr);
}
tmp=doask(a);
for (j=p.size()-1;j>=0;j--)
{
if (i+j>=n) continue;
v=p[j]*(p[j]+1)/2;
if (tmp>=v)
{
tmp-=v;ans[i+j]='(';
}
else ans[i+j]=')';
}
}
doanswer();
}
}
D1D Permutation Blackhole
Consider enumerating $$$p_i = 1$$$. After that, will the cells to the left of $$$p_i$$$ affect the cells to the right of $$$p_i$$$?
Consider range DP.
Can you find the upper bound of $$$s_i$$$?
Consider range DP.
Denote $$$dp[l][r][x][y]$$$ as the number of schemes ensuring that $$$s_{l-1}$$$ increases by $$$x$$$ and $$$s_{r+1}$$$ increases by $$$y$$$.
Transition:
$$$dp[l][r][x][y] = \left\{ \begin{array}{l} \Sigma_{\text{i is nearer to l}} \binom{r-l}{i-l} \cdot dp[l][i-1][x-1][j] \cdot dp[i+1][r][k][y] \\ + \Sigma_{\text{i is nearer to r}} \binom{r-l}{i-l} \cdot dp[l][i-1][x][j] \cdot dp[i+1][r][k][y-1] \quad (s_i = -1) \\ \Sigma_{\text{i is nearer to l},\ j+k=s_i} \binom{r-l}{i-l}\cdot dp[l][i-1][x-1][j] \cdot dp[i+1][r][k][y] \\ + \Sigma_{\text{i is nearer to r},\ j+k=s_i} \binom{r-l}{i-l}\cdot dp[l][i-1][x][j] \cdot dp[i+1][r][k][y-1] \quad (s_i \neq -1) \end{array} \right.$$$
For simplicity, this formula omits some edge cases. Readers are encouraged to fill in the details themselves.
The complexity seems high, but the problem can be solved with one key observation: the upper bound of $$$s_i$$$ is $$$O( \log n)$$$.
Proof: fix a position $$$i$$$ and only consider the cells $$$c_1, c_2, \ldots, c_k$$$ $$$(c_1 \lt c_2 \lt \ldots \lt c_k)$$$ to the left of $$$i$$$ that cause $$$s_i$$$ to increase. We can see that $$$i - c_j \gt 2(i - c_{j+1})$$$; otherwise, $$$c_{j+1}$$$ would cause $$$s_{c_j}$$$ to increase by $$$1$$$ instead of $$$s_i$$$. Thus, we get that $$$k$$$ is $$$O(\log n)$$$.
Complexity: $$$O(n^3 \log^3n)$$$.
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
#include <unordered_map>
#define fastio ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=110;
const int LOGN=10;
const ll TMD=998244353;
const ll INF=2147483647;
int T,n;
ll s[N],fac[N],inv[N];
ll dp[N][N][LOGN][LOGN];
ll pw(ll a,ll b) //fast power, needed below for the modular inverse
{
ll ret=1;
a%=TMD;
for(;b;b>>=1)
{
if(b&1) ret=ret*a%TMD;
a=a*a%TMD;
}
return ret;
}
void init_C()
{
fac[0]=1;
for(int i=1;i<N;i++) fac[i]=(fac[i-1]*i)%TMD;
inv[N-1]=pw(fac[N-1],TMD-2);
for(int i=N-2;i>=0;i--) inv[i]=(inv[i+1]*(i+1))%TMD;
}
ll C(int n,int m)
{
if(n<m||n<0||m<0) return 0;
return fac[n]*inv[m]%TMD*inv[n-m]%TMD;
}
void init()
{
cin>>n;
for(int i=1;i<=n;i++) cin>>s[i];
for(int i=1;i<=n+4;i++)
for(int j=1;j<=n+4;j++)
for(int ii=0;ii<LOGN;ii++)
for(int jj=0;jj<LOGN;jj++)
dp[i][j][ii][jj]=0;
dp[1][1][0][1]=(s[1]<1);
for(int i=2;i<=n;i++)
dp[i][i][1][0]=(s[i]<1);
for(int i=1;i<=n;i++) //auxiliary value
dp[i][i-1][0][0]=dp[i+1][i][0][0]=1;
}
int ABS(int x)
{
return x<0?-x:x;
}
void solve()
{
for(int i=2;i<=n;i++)
{
for(int j=1;j+i-1<=n;j++)
{
for(int ii=0;ii<LOGN;ii++)
{
for(int jj=0;jj<LOGN;jj++)
{
int l=j,r=j+i-1;
for(int k=l;k<=r;k++)
{
int tagl=0,tagr=0;
if(l==1&&r==n) tagl=tagr=0;
else if(l==1) tagr=1;
else if(r==n) tagl=1;
else
{
if(ABS(l-1-k)<=ABS(k-(r+1))) tagl=1;
else tagr=1;
}
if(ii-tagl<0||jj-tagr<0) continue;
if(s[k]==-1)
{
ll suml=0,sumr=0;
for(int kk=0;kk<LOGN;kk++) suml=(suml+dp[l][k-1][ii-tagl][kk])%TMD;
for(int kk=0;kk<LOGN;kk++) sumr=(sumr+dp[k+1][r][kk][jj-tagr])%TMD;
dp[l][r][ii][jj]=(dp[l][r][ii][jj]+suml*sumr%TMD*C(r-l,k-l))%TMD;
}
else
{
for(int kk=0;kk<LOGN;kk++)
{
if(s[k]-kk<0||s[k]-kk>=LOGN) continue;
dp[l][r][ii][jj]=(dp[l][r][ii][jj]+dp[l][k-1][ii-tagl][kk]*dp[k+1][r][s[k]-kk][jj-tagr]%TMD*C(r-l,k-l))%TMD;
}
}
}
}
}
}
}
cout<<dp[1][n][0][0]<<'\n';
}
int main()
{
fastio;
init_C();
cin>>T;
while(T--)
{
init();
solve();
}
return 0;
}
D1E Induced Graph Queries
Come up with an $$$O(n \log^4 n)$$$ solution using a persistent multidimensional segment tree with a Trie that supports rectangle-update XOR.
Find the nature of the problem and recall the data structures and algorithms learned. What can we utilize?
Consider a standard Mo's algorithm. Why doesn't it work for this problem?
Recall the Handshaking Lemma: $$$\Sigma_{i=1}^n degree_i=2m$$$.
Consider a variation of Mo's algorithm.
How can we find the $$$k$$$-th smallest value without an $$$O(\log n)$$$ factor per operation?
We found that merging the $$$f$$$ values of two intervals is difficult. However, adding a single node to the graph is relatively easy: we can update the $$$f$$$ values by brute force. This inspires us to use Mo's algorithm.
If we use the standard Mo's algorithm, when expanding the interval from $$$[l, r]$$$ to $$$[l, r+1]$$$, we need to examine all edges connected to $$$r+1$$$ and update the node degrees accordingly. We define a cost array $$$cost_i=degree_i+1$$$; the cost of updating node $$$r+1$$$ is $$$cost_{r+1}$$$.
However, if a node has a high degree, repeatedly processing it in Mo's algorithm can blow up the time complexity.
We leverage the Handshaking Lemma and a modified Mo's algorithm to handle the queries efficiently.
Recall that Mo's algorithm divides the array indices relatively evenly into blocks according to the array length and assigns block numbers. Here, our dividing strategy is different: we "relatively evenly" divide the array into blocks based on the prefix sums of the $$$cost$$$ array. Specifically, we calculate the prefix-sum array $$$precost$$$ of the cost array and use $$$\lfloor \frac{precost_i}{B} \rfloor$$$ as the block number of node $$$i$$$, where $$$B$$$ is the block length parameter.
Assuming we have a data structure that maintains the $$$k$$$-th smallest value with $$$O(1)$$$ updates, the same analysis as for the standard Mo's algorithm gives a complexity of $$$O((n+m+q)\sqrt{n+m})$$$.
To find the $$$k$$$-th smallest value without an $$$O(\log n)$$$ factor, we only need to maintain two arrays $$$cnt$$$ and $$$sum$$$, where $$$cnt_i$$$ is the number of occurrences of value $$$i$$$ and $$$sum_i=\Sigma_{j=(i-1)B+1}^{iB}cnt_j$$$ ($$$B$$$ is the block length parameter). We can then perform a modification in $$$O(1)$$$ and a query in $$$O(\sqrt{n})$$$. The complexity is $$$O((n+q)\sqrt{n})$$$.
Regarding implementation, you need to pay attention to two common pitfalls:
If you use $$$cost_i=degree_i$$$ instead of $$$cost_i=degree_i+1$$$, the dividing algorithm breaks on graphs with many degree-$$$0$$$ nodes.
Similarly, you cannot set $$$B=\sqrt{m}$$$.
#include<bits/stdc++.h>
using namespace std;
#define all(a) a.begin(),a.end()
#define pb push_back
#define sz(a) ((int)a.size())
using ll=long long;
using u32=unsigned int;
using u64=unsigned long long;
using i128=__int128;
using u128=unsigned __int128;
using f128=__float128;
using pii=pair<int,int>;
using pll=pair<ll,ll>;
template<typename T> using vc=vector<T>;
template<typename T> using vvc=vc<vc<T>>;
template<typename T> using vvvc=vc<vvc<T>>;
using vi=vc<int>;
using vll=vc<ll>;
using vvi=vc<vi>;
using vvll=vc<vll>;
#define vv(type,name,n,...) \
vector<vector<type>> name(n,vector<type>(__VA_ARGS__))
#define vvv(type,name,n,m,...) \
vector<vector<vector<type>>> name(n,vector<vector<type>>(m,vector<type>(__VA_ARGS__)))
template<typename T> using min_heap=priority_queue<T,vector<T>,greater<T>>;
template<typename T> using max_heap=priority_queue<T>;
// https://trap.jp/post/1224/
#define rep1(n) for(ll i=0; i<(ll)(n); ++i)
#define rep2(i,n) for(ll i=0; i<(ll)(n); ++i)
#define rep3(i,a,b) for(ll i=(ll)(a); i<(ll)(b); ++i)
#define rep4(i,a,b,c) for(ll i=(ll)(a); i<(ll)(b); i+=(c))
#define cut4(a,b,c,d,e,...) e
#define rep(...) cut4(__VA_ARGS__,rep4,rep3,rep2,rep1)(__VA_ARGS__)
#define per1(n) for(ll i=((ll)n)-1; i>=0; --i)
#define per2(i,n) for(ll i=((ll)n)-1; i>=0; --i)
#define per3(i,a,b) for(ll i=((ll)a)-1; i>=(ll)(b); --i)
#define per4(i,a,b,c) for(ll i=((ll)a)-1; i>=(ll)(b); i-=(c))
#define per(...) cut4(__VA_ARGS__,per4,per3,per2,per1)(__VA_ARGS__)
#define rep_subset(i,s) for(ll i=(s); i>=0; i=(i==0?-1:(i-1)&(s)))
template<typename T, typename S> constexpr T ifloor(const T a, const S b){return a/b-(a%b&&(a^b)<0);}
template<typename T, typename S> constexpr T iceil(const T a, const S b){return ifloor(a+b-1,b);}
template<typename T>
void sort_unique(vector<T> &vec){
sort(vec.begin(),vec.end());
vec.resize(unique(vec.begin(),vec.end())-vec.begin());
}
template<typename T, typename S> constexpr bool chmin(T &a, const S b){if(a>b) return a=b,true; return false;}
template<typename T, typename S> constexpr bool chmax(T &a, const S b){if(a<b) return a=b,true; return false;}
template<typename T, typename S> istream& operator >> (istream& i, pair<T,S> &p){return i >> p.first >> p.second;}
template<typename T, typename S> ostream& operator << (ostream& o, const pair<T,S> &p){return o << p.first << ' ' << p.second;}
#ifdef i_am_noob
#define bug(...) cerr << "#" << __LINE__ << ' ' << #__VA_ARGS__ << "- ", _do(__VA_ARGS__)
template<typename T> void _do(vector<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(set<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(unordered_set<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(T && x) {cerr << x << endl;}
template<typename T, typename ...S> void _do(T && x, S&&...y) {cerr << x << ", "; _do(y...);}
#else
#define bug(...) 777771449
#endif
template<typename T> void print(vector<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(set<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(unordered_set<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(T && x) {cout << x << "\n";}
template<typename T, typename... S> void print(T && x, S&&... y) {cout << x << ' ';print(y...);}
template<typename T> istream& operator >> (istream& i, vector<T> &vec){for(auto &x: vec) i >> x; return i;}
vvi read_graph(int n, int m, int base=1){
vvi adj(n);
for(int i=0,u,v; i<m; ++i){
cin >> u >> v,u-=base,v-=base;
adj[u].pb(v),adj[v].pb(u);
}
return adj;
}
vvi read_tree(int n, int base=1){return read_graph(n,n-1,base);}
template<typename T, typename S> pair<T,S> operator + (const pair<T,S> &a, const pair<T,S> &b){return {a.first+b.first,a.second+b.second};}
template<typename T> constexpr T inf=0;
template<> constexpr int inf<int> = 0x3f3f3f3f;
template<> constexpr ll inf<ll> = 0x3f3f3f3f3f3f3f3f;
template<typename T> vector<T> operator += (vector<T> &a, int val){for(auto &i: a) i+=val; return a;}
template<typename T> T isqrt(const T &x){T y=sqrt(x+2); while(y*y>x) y--; return y;}
#define ykh mt19937 rng(chrono::steady_clock::now().time_since_epoch().count())
//#include<atcoder/all>
//using namespace atcoder;
//using mint=modint998244353;
//using mint=modint1000000007;
const int N=1<<18,K=1<<10;
struct DS{
int big[N/K],small[N];
void ins(int x){
small[x]++;
big[x>>10]++;
}
void del(int x){
small[x]--;
big[x>>10]--;
}
int kth(int k){
k--;
int cur=0;
while(big[cur]<=k) k-=big[cur++];
cur<<=10;
while(small[cur]<=k) k-=small[cur++];
return cur;
}
}ds;
void ahcorz(){
int n,m; cin >> n >> m;
vvi adj=read_graph(n,m);
vi block(n);
int cur=0;
rep(n){
if(cur>=n) break;
int tot=0;
while(cur<n&&tot<K){
tot+=sz(adj[cur])+1;
block[cur++]=i;
}
}
int q; cin >> q;
vc<array<int,4>> qq(q);
vi res(q);
rep(q){
int l,r,k; cin >> l >> r >> k; l--,r--;
qq[i]={l,r,k,i};
}
sort(all(qq),[&](array<int,4> arr1, array<int,4> arr2){
if(block[arr1[0]]!=block[arr2[0]]) return block[arr1[0]]<block[arr2[0]];
if(block[arr1[0]]&1) return arr1[1]>arr2[1];
else return arr1[1]<arr2[1];
});
vi val(n);
int l=0,r=0;
ds.ins(0);
auto ins=[&](int i){
for(auto j: adj[i]){
if(j>=l&&j<=r){
ds.del(val[j]);
val[j]^=i+1,val[i]^=j+1;
ds.ins(val[j]);
}
}
ds.ins(val[i]);
};
auto del=[&](int i){
ds.del(val[i]);
for(auto j: adj[i]){
if(j>=l&&j<=r){
ds.del(val[j]);
val[j]^=i+1,val[i]^=j+1;
ds.ins(val[j]);
}
}
};
auto gogo=[&](int l2, int r2){
while(l>l2){
ins(l-1);
l--;
}
while(r<r2){
ins(r+1);
r++;
}
while(l<l2){
l++;
del(l-1);
}
while(r>r2){
r--;
del(r+1);
}
};
for(auto [l2,r2,k,i]: qq){
gogo(l2,r2);
res[i]=ds.kth(k);
}
rep(i,l,r+1) ds.del(val[i]);
rep(q) print(res[i]);
}
signed main(){
ios_base::sync_with_stdio(0),cin.tie(0);
cout << fixed << setprecision(20);
int t=1;
cin >> t;
while(t--) ahcorz();
}
D1F1 Top-K Tracker (Easy Version) and D1F2 Top-K Tracker (Hard Version)
Did you notice the double meaning of the title?
Consider the easy version. Try to use only operations $$$1$$$ and $$$3$$$ (due to symmetry, using operations $$$2$$$ and $$$4$$$ is the same), with only $$$m=30$$$ in operation $$$1$$$ and $$$m=300$$$ in operation $$$3$$$.
Let $$$S_i$$$ represent the set of queries in which element $$$i$$$ is accessed. Make all $$$S_i$$$ different to distinguish all elements.
Greedily choose all $$$S_i$$$ such that $$$|S_i| \lt 3$$$. Afterwards, carefully construct a set of size $$$3$$$ to reach the upper bound.
Suppose there are $$$30$$$ numbers $$$1, 2, …, 30$$$, and each number can be used up to $$$30$$$ times. Can you construct $$$900$$$ distinct triplets?
Consider hard version. How to efficiently utilize the four operations, and the condition $$$m \gt k$$$?
Using operation $$$1$$$ and operation $$$4$$$ is enough. Due to symmetry, using operation $$$2$$$ and $$$3$$$ is the same.
Recall the bottleneck of the easy version. We used too many sets of size $$$3$$$. Let’s say it another way — too many elements were determined at the cost of $$$3$$$. How to optimize it?
Consider $$$S_i=S_j$$$. This actually provides an unordered pair. In other words, we know unordered pair $$$(p_i,p_j)=(x,y)$$$, but we can not determine whether $$$p_i=x$$$ or $$$p_j=x$$$. To determine it, we can add one of $$$x$$$ or $$$y$$$ to the query of type $$$4$$$.
Suppose there are $$$300$$$ unordered pairs $$$(x_1, y_1), (x_2, y_2), …, (x_{300}, y_{300})$$$. We can add one element of each $$$(x_i, y_i)$$$ into a query of type $$$4$$$, then query to determine all unordered pairs. However, the average cost is still close to $$$3$$$, which is no better than the easy version.
Try to determine $$$435$$$ unordered pairs using single query $$$4$$$.
Use only operation $$$1$$$ and $$$3$$$, with $$$m=30$$$ in operation $$$1$$$ and $$$m=300$$$ in operation $$$3$$$.
Let $$$S_i$$$ represent the set of queries in which element $$$i$$$ is accessed. To distinguish all elements, we need to ensure that all $$$S_i$$$ ($$$1 \le i \le n$$$) are distinct.
After selecting all $$$S_i$$$ such that $$$|S_i| \lt 3$$$, we must also construct a set of size $$$3$$$ to reach the upper bound.
Another way to phrase this: We have $$$30$$$ numbers $$$1, 2, \ldots, 30$$$, where $$$1$$$ through $$$29$$$ can each be used $$$30$$$ times, and $$$30$$$ can be used $$$270$$$ times. The goal is to construct as many distinct triplets as possible to maximize the count.
A natural approach is to prioritize consuming the usage count of $$$30$$$. Since $$$29$$$ is prime, the sequence $$$x = [1, k \mod 29 + 1, 2k \mod 29 + 1, \ldots, 28k \mod 29 + 1]$$$ ensures all elements are distinct. For $$$1 \le k \le 9$$$, we can construct triplets $$$(x_1, x_2, 30), (x_2, x_3, 30), \ldots, (x_{29}, x_1, 30)$$$.
After this, $$$1$$$ through $$$29$$$ will each have $$$12$$$ remaining uses, and $$$30$$$ will have $$$9$$$ remaining uses. This distribution is not ideal. Instead, we can use $$$(1, 2, 3)$$$ and $$$(4, 5, 6)$$$ in place of $$$(1, 2, 30), (3, 4, 30),$$$ and $$$(5, 6, 30)$$$. This adjustment ensures that $$$1$$$ through $$$30$$$ each have $$$12$$$ remaining uses.
Define a perfect set $$$S_x$$$ as a set of triplets where each number from $$$1$$$ to $$$x$$$ appears exactly once.
For numbers $$$1, 2, \ldots, 3l$$$, define:
$$$al = [1, 2, \ldots, l]$$$,
$$$am = [l+1, l+2, \ldots, 2l]$$$,
$$$ar = [2l+1, 2l+2, \ldots, 3l]$$$.
Let $$$\text{lshift}(a, i) = [a_{1+i}, a_{2+i}, \ldots, a_{(n+i-1) \mod n + 1}]$$$. Then we can generate $$$l^2$$$ distinct perfect sets of the form $$$S_l(al, \text{lshift}(am, i), \text{lshift}(ar, j))$$$ for $$$0 \le i, j \lt l$$$.
Now that all numbers have equal remaining usage counts, we can apply the above construction.
Using the above construction, there are many different approaches.
Overall, we can construct a total of $$$846$$$ distinct sets, which is sufficient to pass the easy version.
The hard version differs significantly from the easy version in its approach, particularly in utilizing the concept of distinguishing elements through $$$S_i$$$.
We need to efficiently utilize the four operations, and the condition $$$m \gt k$$$.
In this solution, use operation $$$2$$$ $$$29$$$ times and operation $$$3$$$ once.
Consider all sets $$$S_i$$$ with $$$|S_i| \lt 3$$$ and elements in $$$[1,29]$$$, assigning each to exactly two indices. This yields $$$2 \times (1+29+\frac{29 \times 28}{2})=872$$$ sets forming $$$436$$$ unordered pairs with $$$S_i=S_j$$$, for which we cannot initially determine whether $$$pos_i = x$$$ or $$$pos_j = x$$$.
Consider $$$S_i = S_j$$$. This actually provides an unordered pair $$$(pos_i, pos_j) = (x, y)$$$, where we know the pair of positions but cannot determine whether $$$pos_i = x$$$ and $$$pos_j = y$$$, or $$$pos_i = y$$$ and $$$pos_j = x$$$.
Let these $$$\frac{872}{2} = 436$$$ pairs be denoted as $$$(pos_1, pos_n), (pos_2, pos_{n-1}), \ldots, (pos_{436}, pos_{n-435})$$$ (for convenience, we assume $$$n = 890$$$). Then, randomly select exactly one $$$pos$$$ from each of these pairs, and query the maximum value (considering only one maximum for now) among $$$p[pos_1 \text{ or } pos_n], p[pos_2 \text{ or } pos_{n-1}], \ldots$$$.
Let us analyze what will happen:
| Return Value | Pairs Determined | Probability |
|---|---|---|
| $$$n$$$ | $$$1$$$ | $$$\frac{1}{2}$$$ |
| $$$n-1$$$ | $$$2$$$ | $$$\frac{1}{4}$$$ |
| $$$n-2$$$ | $$$3$$$ | $$$\frac{1}{8}$$$ |
| ... | ... | ... |
We can see the expected number of resolved pairs per query is $$$\sum_{i \ge 1} i \cdot 2^{-i} = 2$$$.
With an expectation value of $$$2$$$, we can estimate that performing $$$\text{max}$$$ query $$$300$$$ times yields a high probability $$$P \approx 1$$$ of determining all $$$436$$$ unordered pairs $$$(pos_i, pos_j)$$$. This implies that a single application of operation $$$3$$$ suffices to resolve all such pairs.
Through this methodology, we successfully determine $$$872$$$ elements of the permutation $$$p$$$. At this stage, $$$29$$$ query operations remain available (with exactly $$$2$$$ uses permitted), and the determination of all $$$n=890$$$ elements becomes computationally tractable.
Mathematically, this follows a geometric distribution with success probability $$$p = \frac{1}{2}$$$ where $$$\mathbb{P}(X = i) = 2^{-i}$$$, and the sum of such geometric random variables yields a negative binomial distribution $$$\text{NB}(r,p)$$$. For large $$$n$$$, this can be approximated by a normal distribution $$$\mathcal{N}(\mu, \sigma^2)$$$ via the Central Limit Theorem, where $$$\mu = \frac{1}{p} = 2$$$ and $$$\sigma^2 = \frac{1-p}{p^2} = 2$$$.
Probability that $$$\sum_{i=1}^{300} X_i \le 435$$$
Given: $$$X$$$ is discrete with $$$\mathbb{P}(X=k)=2^{-k},\; k=1,2,\dots$$$. Thus $$$X\sim\mathrm{Geom}(p=\tfrac12)$$$ on $$$\{1,2,\dots\}$$$ with mean $$$1/p=2$$$.
1) Sum–negative-binomial connection
Let $$$S=\sum_{i=1}^{300} X_i$$$. The sum of i.i.d. $$$\mathrm{Geom}(p)$$$ variables is negative binomial: $$$S$$$ is the number of Bernoulli($$$\tfrac12$$$) trials needed to obtain $$$300$$$ successes.
2) Convert to a binomial tail
$$$S \le 435$$$ holds iff the first $$$435$$$ trials already contain at least $$$300$$$ successes, so $$$\mathbb{P}(S \le 435) = \mathbb{P}\left(\mathrm{Bin}(435,\tfrac12) \ge 300\right)$$$.
3) Exact expression
By symmetry of the binomial with $$$p=\tfrac12$$$, $$$\mathbb{P}\left(\mathrm{Bin}(435,\tfrac12) \ge 300\right) = \mathbb{P}\left(\mathrm{Bin}(435,\tfrac12) \le 135\right) = 2^{-435}\sum_{k=0}^{135}\binom{435}{k}$$$.
4) Numerical value
The sum evaluates to a failure probability on the order of $$$10^{-15}$$$, i.e. negligible.
5) Sanity check (normal approximation to the binomial)
With $$$\mu = np = 217.5$$$, $$$\sigma=\sqrt{np(1-p)}=\sqrt{108.75}\approx 10.428$$$, and continuity correction, $$$\mathbb{P}\left(\mathrm{Bin}(435,\tfrac12) \le 135\right) \approx \Phi\!\left(\frac{135.5-217.5}{10.428}\right) = \Phi(-7.86) \approx 2\times 10^{-15}$$$,
which is of the same (tiny) order as the exact value above.
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
#include <unordered_map>
#define fastio ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1010;
int T,n;
vector<int> Sv[N],Sp[N];
void init()
{
cin>>n;
for(int i=1;i<N;i++)
Sv[i].clear(),Sp[i].clear();
}
vector<int> ask(int ty,vector<int> &v)
{
cout<<ty<<" "<<v.size();
for(int i=0;i<v.size();i++)
cout<<" "<<v[i];
cout<<"\n"<<flush;
int k;
if(ty<3) k=min(60,(int)v.size());
else k=min(300,(int)v.size());
vector<int> res;
for(int i=1;i<=k;i++)
{
int t;
cin>>t;
res.push_back(t);
}
return res;
}
void solve()
{
int cnt=0;
vector<int> p(n+4);
for(int i=1;i<=30;i++)
Sv[++cnt].push_back(i);
for(int i=1;i<=30;i++)
{
for(int j=i+1;j<=30;j++)
{
cnt++;
Sv[cnt].push_back(i),Sv[cnt].push_back(j);
}
}
for(int i=1;i<=9;i++)
{
int cur=1;
while(1)
{
int nxt=cur+i;
if(nxt>29) nxt-=29;
cnt++;
Sv[cnt].push_back(30);
Sv[cnt].push_back(cur);
Sv[cnt].push_back(nxt);
cur=nxt;
if(nxt==1) break;
}
}
vector<int> vl(11),vm(11),vr(11);
for(int i=1;i<=10;i++)
vl[i]=i,vm[i]=10+i,vr[i]=20+i;
for(int i=1;i<=9;i++)
{
for(int j=1;j<=10;j++)
{
cnt++;
Sv[cnt].push_back(vl[j]);
Sv[cnt].push_back(vm[j]);
Sv[cnt].push_back(vr[j]);
}
for(int j=0;j<10;j++)
vr[j]=vr[j+1];
vr[10]=vr[0];
}
for(int i=1,cur=1;i<=29;i++)
{
int x=cur,y=cur+1,z=cur+2;
while(x>29) x-=29;
while(y>29) y-=29;
while(z>29) z-=29;
cnt++;
Sv[cnt].push_back(x);
Sv[cnt].push_back(y);
Sv[cnt].push_back(z);
cur+=3;
}
for(int i=1;i<=30;i++)
{
vector<int> v,t;
for(int j=1;j<=n;j++)
if(find(Sv[j].begin(),Sv[j].end(),i)!=Sv[j].end())
v.push_back(j);
if(v.empty()) continue;
if(i<30) t=ask(2,v);
else t=ask(4,v);
for(int j=0;j<t.size();j++)
Sp[t[j]].push_back(i);
}
for(int i=1;i<=n;i++)
sort(Sp[i].begin(),Sp[i].end()), sort(Sv[i].begin(),Sv[i].end());
for(int i=1;i<=n;i++)
for(int j=1;j<=n;j++)
if((!Sp[i].empty())&&Sp[i]==Sv[j]) p[i]=j;
cout<<"! ";
for(int i=1;i<=n;i++)
cout<<p[i]<<(i==n?'\n':' ');
cout<<flush;
}
int main()
{
fastio;
cin>>T;
while(T--)
{
init();
solve();
}
return 0;
}
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
#include <unordered_map>
#define fastio ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1010;
int T,n;
vector<int> Sv[N],Sp[N],prv[N],prp[N];
void init()
{
srand(time(0));
cin>>n;
for(int i=1;i<N;i++)
Sv[i].clear(),Sp[i].clear(),prv[i].clear(),prp[i].clear();
}
vector<int> ask(int ty,vector<int> &v)
{
cout<<ty<<" "<<v.size();
for(int i=0;i<v.size();i++)
cout<<" "<<v[i];
cout<<"\n"<<flush;
int k;
if(ty<3) k=min(60,(int)v.size());
else k=min(300,(int)v.size());
vector<int> res;
for(int i=1;i<=k;i++)
{
int t;
cin>>t;
res.push_back(t);
}
return res;
}
void solve()
{
int cnt=0,cntr=871,cntpr=30;
vector<int> p(n+4);
for(int i=1;i<=29;i++)
{
cnt++;
Sv[cnt].push_back(i);
cntr--;
Sv[cntr].push_back(i);
}
for(int i=1;i<=29;i++)
{
for(int j=i+1;j<=29;j++)
{
cnt++;
Sv[cnt].push_back(i),Sv[cnt].push_back(j);
cntr--;
Sv[cntr].push_back(i),Sv[cntr].push_back(j);
}
}
for(int i=871,hd=1;i<890;i++,hd+=3)
{
int x=hd,y=hd+1,z=hd+2;
if(x>29) x-=29;
if(y>29) y-=29;
if(z>29) z-=29;
Sv[i].push_back(x),Sv[i].push_back(y),Sv[i].push_back(z);
}
for(int i=1;i<=29;i++)
{
vector<int> v,t;
for(int j=1;j<=n;j++)
if(find(Sv[j].begin(),Sv[j].end(),i)!=Sv[j].end())
v.push_back(j);
if(v.empty()) continue;
t=ask(2,v);
for(int j=0;j<t.size();j++)
Sp[t[j]].push_back(i);
}
for(int i=1;i<=n;i++)
sort(Sp[i].begin(),Sp[i].end()), sort(Sv[i].begin(),Sv[i].end());
for(int i=1;i<=n;i++)
{
for(int j=871;j<=n;j++)
if(Sp[i]==Sv[j]) p[i]=j;
}
for(int i=1;i<=n;i++)
{
if(Sp[i].size()==1) prp[Sp[i][0]].push_back(i);
else if(Sp[i].size()==2) prp[30*Sp[i][0]+Sp[i][1]].push_back(i); //hash the set
//|S|>2 and empty set are ignored
}
for(int i=1;i<=n;i++)
{
if(Sv[i].size()==1) prv[Sv[i][0]].push_back(i);
else if(Sv[i].size()==2) prv[30*Sv[i][0]+Sv[i][1]].push_back(i); //hash the set
}
//necessary shuffle
for(int i=1;i<N;i++)
{
if(prp[i].size()<2) continue;
if(rand()&1) swap(prp[i][0],prp[i][1]);
}
for(int i=1;i<=1;i++)
{
vector<int> v,t;
for(int j=1;j<N;j++)
{
if(prp[j].empty()) continue;
else if(prp[j].size()==1)
{
p[prp[j][0]]=prv[j][0];
prp[j].clear();
prv[j].clear();
continue;
}
v.push_back(prp[j][0]);
}
if(v.empty()) continue;
t=ask(3,v);
for(int j=1;j<N;j++)
{
if(prp[j].empty()) continue;
if(t.empty()) break;
if(prv[j][1]==t.back())
{
t.pop_back();
p[prp[j][0]]=prv[j][1];
p[prp[j][1]]=prv[j][0];
}
else
{
p[prp[j][0]]=prv[j][0];
p[prp[j][1]]=prv[j][1];
}
prp[j].clear();
prv[j].clear();
}
}
cout<<"! ";
for(int i=1;i<=n;i++)
cout<<p[i]<<(i==n?'\n':' ');
cout<<flush;
}
int main()
{
fastio;
cin>>T;
while(T--)
{
init();
solve();
}
return 0;
}
fast editorial!!
why the most favourite + most hated problem share the same vote
congrats me for becoming newb again
it's quite interesting that the constraints are just enough for all inequalities for the solution to work,
i.e. $$$30+435\times 2 + 380\times 3 = 60\times 29+300$$$,
and the $$$435-30=405$$$ subsets of size $$$2$$$ of $$$\{1,\dots,29\}$$$ we need is only $$$1$$$ less than $$$\binom{29}{2}$$$
nice design
Glad to see you enjoying the problem
People downvoting me for no reason , just for congratulating someone , seriously , you guyz are just enigmatic !
orz
For Div2C, can you not just order all segments, and for each iteration take, among all the segments that start before the end of the latest segment taken, the one that ends the latest possible? If there are no possible segments, just take the segment with the closest start after the end, and among all those that start at this closest start, take the one that ends the latest possible. There will never be a cycle that way, and by design you're maximising the length of the union of all segments. And it's nlogn (just a sorting has to be done). Or maybe it's possible to hack me? https://mirror.codeforces.com/contest/2130/submission/331785344
Alternatively, you sort by $$$(a_i, -b_i)$$$ (increasingly), then deduplicate segments with the same $$$a_i$$$. submission
A simple Python solution can be
yes i did the same. linear time solution
How is linear if you do sorting?
right, my bad
the most favourite + most hated problem share the same vote
I’m pretty sure the complexity for div1D can be made n^3, if you keep your dp states from 0 to s[i] if s[i] exists and otherwise keep only one general state not depending on the value contributed to this side, so the complexity will be (sum(s[i]) + n)^3, but sum(s[i]) < n for the answer not to be 0.
thank you for superfast editorial.
can someone explain the editorial solution for 2D/1B?
You can just check greedily whether it is better to convert this number into $$$2n - p_i$$$ or keep it as it is. You want to minimise the score, so just take the lower one. You go from left to right, and for every number you count the numbers bigger than it on the left and on the right side. If you keep the number as it is, then the numbers that are greater on the left side will give inversions; if you choose to increase the number to $$$2n - p_i$$$, then the numbers on the right will become smaller than it and will give inversions. Even if in the future you decide to increase those numbers on the right side, they will still be smaller than this number, so they will give inversions.
I need the proof that they are independent
I don't know how to prove but here is the simple method I used.
The numbers on left have already calculated their share in the answer. So if the number on the left is smaller than their is no effect whether this number is increased or not, but if it is bigger than that will give inversion so if we increase this number then inversions on left will be solved. And for the number smaller on right side, they will get calculated on their turn.
For the the numbers on right side, suppose 1 < 5 but 2*n-1 = 11 if n = 6 , so 11 > 5 , even if we increase 5 then also 11 > 7 so we can calculate inversions by this number on right side.
I don't know if you understand what I am saying, if anyone can drop the proper proof here, will be helpful.
I don't know if this counts as a proof, but here it is: in any case of a < b or a > b, changing the greater value won't affect the relation; the relation only changes when the smaller value is changed to 2 * n - val. So if you change some value p_i to 2 * n - p_i, it affects its relation with the values greater than it to its left and right, so you can just take the better choice.
bro this hint was on the next level solved D in once after getting this hint
It's very simple. Please dry run on paper. U will observe if we process from element 1-N in P, Any element greater than 1 we will change in future that won't be change inversions for 1. Same for 2, 3 and so on.
I had a doubt: how do we know that an n^2 solution would get accepted?
I was trying this question and could come up with an n^2 solution, or one using an ordered set; I didn't have practice with ordered sets, so I wasted a lot of time thinking of an alternative.
Thanks in advance!
the problems were really nice! very impressive. thank you!
The votes in the "which problem did you like" poll are broken. You can't have different votes for the two questions.
Ruler Of The Zoo II after 4 years?!
Let me guess, half of those hints can be literally replaced by "this is a hint which is hinting you stuff" or "you can do it, don't give up"?
Anyone else mess up on C by proving that if there's a cycle, there wouldn't be any addition to the union? (instead of proving the reverse, that if there's no addition, there won't be a cycle)
Hi, according to the editorial's code for D2B, the input 1 4 8 1 1 2 2 does not have a solution, but doesn't 1 2 1 2 work?
There must be at least one $$$0$$$ in the input
Thank you you're right
Using this string, you can get $$$\frac{n}{14} + O(\log)$$$ queries in div1C, better than the official solution's $$$\frac{n}{13} + O(\log)$$$.
After some optimization, it can get within the 80-query limit.
May I know how you approached to this solution?
Srinivasa Jinglong told me this string in dream, after i wake up i checked it with C++
A very simple $$$O(n)$$$ solution for C is to remove all edges u -> v except for the maximum edge. It's easy to see that this doesn't affect $$$F(s)$$$, and it also ensures $$$G(s)$$$ is zero, since if there is a cycle of length $$$\ge 3$$$, then there must exist a node with more than one edge.
I did the same; actually quite surprised the editorial did not mention this.
The bracket trick for E1 is very clever, I failed to differentiate '((' and '))' during the contest.
((+")()" = 3 ))+")()" = 1
I think you failed because you chose a symmetrical pattern such as "?)(?", so both "))" and "((" produce the same result. To fix this, simply add extra brackets on one side to break the symmetry: for example, use "?)(?()" instead.
Good problems. I enjoy them very much.
But I did very badly. Too slow and too many wa atts. Bye GM.
For the 3rd version, $$$\left\lceil\frac n{11}\right\rceil+\left\lceil\log n\right\rceil=101$$$. I generated it through a silly idea. But it's soooooo close to the constraint. Is it intended to stop it from being accepted?
As a Professional Newbie™️ I have to say B felt REALLY hard to solve and took me about 2 hours. Awesome problem though and probably a skill issue on my end but did anyone else find it hard?
Glad to meet you, señor. (Fellow professional newbie)
yah, discovering that d = s - base_sum being different from 1 means Alice always wins is the key for this problem; that idea was quite random to me
For Div2E1, my pattern is `s1[]s2]]`, which has a value of 1, 2, 3 or 4 depending on s1 and s2.
In the solution of D1C3, there is a function $$$g(i)$$$ mentioned, which should have the property that $$$g(i) \gt g(1) + g(2) + ... + g(i-1)$$$. How is that different from the medium version, where $$$g(i) = 2^i$$$?
Nvm, I figured it out. The condition $$$g(i) \gt g(1) + g(2) + ... + g(i-1)$$$ is wrong, it should rather be $$$f(g(i)) \gt f(g(1)) + f(g(2)) + ... + f(g(i-1))$$$ where $$$f(n) = \frac{n(n+1)}{2}$$$ (in English: the number of regular substrings that we can build using $$$g(i)$$$ pairs has to be greater than the number of regular substrings using $$$g(1)$$$ pairs plus the number of substrings using $$$g(2)$$$ pairs and so on).
Hi, can you explain why does it have to be f(g(i)) > f(g(1)) + ... + f(g(2)) ? The medium version was quite easy for me to understand but the hard one isn't. Thank you
The brackets can either be opening brackets or closing brackets. If you decide to include some bracket $$$n$$$ times, there are two cases: Either, the construction from the editorial contributes $$$\frac{n(n+1)}{2}$$$ to the total number of regular substrings (which means we have an opening bracket), or 0 (meaning we have a closing bracket). Since we want to be able to determine the bracket configuration using just the total number of regular bracket substrings, we need the condition $$$f(g(i)) \gt f(g(1)) + ... + f(g(i-1))$$$.
that condition is to make sure f(g(1)) + ... + f(g(i — 1)) doesn't affect f(g(i)) ? Like in binary whether the i-th bit is on or off it doesn't change (i + 1)-th bit (sorry for my bad English)
Exactly
wuhudsm Proof for D1B is incomplete. You proved it for '1', and then said, we have solved the problem..!!! how ?
You should have proved it for 'x', and then build transition to x+1 and then announce it to be proven.
Here is my doubt ( which I couldn't prove during the contest... )...
"replacement while moving from left to right doesn't affect the answer" hasn't been proven. as part of editorial.
Difficult to prove why this solution works. Doubts you may face while proving:
1) While traversing from left to right, if you switched some values at previous indices, will they affect the current 'i'? 2) How are we certain whether any index 'j' > 'i' will flip or not, which can change your number of greater values on the left or right?
The above approach fails; try to figure out why yourself if you can. Otherwise, continue with the next part.
Hint : (For some index 'i', What if some value on left doesn't become lesser than 2*n-p[i], is it possible ? try to think such case ).
Understand the simulation.
[5 8 6 3 4 7 9 10 1 2 ]...
-> The first three elements (5, 8, 6) won't do any reverse; they don't satisfy the condition (lcount > rcount).
-> The fourth element does satisfy the condition, so we flip it. The new array will be [5 8 6 17 4 7 9 10 1 2].
-> Now, while processing this new array at index 5 (value 4), we are introducing a BUG: if we flip 4 to 2 * 10 - 4 = 16, we are still not able to reduce the number of inversions on the left. One inversion from the left is still pending (the one caused by the value 17).
If you understand the above part, you just need to write one more if condition and your issue is fixed. Follow the code below.
If we start processing this array from large elements to small elements (n, n-1, n-2, ..., 2, 1) and repeat the process (comparing left-side and right-side values larger than the current value), it works.
Why?
Because any flip done earlier will not affect future indices. The new values are bound to be greater than previously flipped values. (If x = 5, y = 3, n = 10: whether the value of x is flipped (20 - 5 = 15) or not (5), it will still be smaller than the flipped value of y (20 - 3 = 17).) This way we are guaranteed to decrease inversions.
Found the approach from this solution : https://mirror.codeforces.com/contest/2130/submission/331838905
I think the editorial proof works. Can you elaborate on what part you think remains to be proven?
can you explain it to me
I also didn't fully understand it. The proof was just about pi = 1? I think for every number besides pi = 1, it matters whether the other numbers are mirrored or not.
The main idea is that changing any inequality '<' -> '>' or vice versa depends only on the smaller element. Mirroring element Ai makes it larger than all elements bigger than it: a < b and a < 2n - b, whereas 2n - a > b and 2n - a > 2n - b. So changing the bigger element doesn't change the inequality at all. To find the answer, fix the smaller element and take the decision greedily; it will still hold for all the other inequalities. The rest of the solution is left for the reader.
got it thanks
In this problem’s code, one greedy strategy is being applied for choosing whether to take $$$a_i$$$ as low or high, and a different greedy strategy is being applied for assigning values inside $$$a_i$$$, which is not valid.
But for E3, if g(i)> sum(g(1),g(2),... ,g(i-1)) that means g(i+1) > g(i) + sum(g(1),g(2),... ,g(i-1)) > 2*sum(g(1),g(2),... ,g(i-1)) So wouldn't g have to increase based on powers of 2, not letting us have 12 different values in one query?
For div2 B is the sequence 0 0 0 .. 2 1 2 1 .. not possible ?
I wrote this sequence but it got me incorrect answer
0000...2222...1111 is a valid solution, I think, but 0000...2 1 2 1 2 1 would go into subcases,
because it depends on whether there are enough 2s to match the 1s, i.e. number of 2s = number of 1s, number of 2s < number of 1s, etc.
this 0 0 0 .. 2 1 2 1 .. sequence is correct.
In div1D, am I missing something, or is the actual complexity $$$\mathcal{O}(n^3\log^4n)$$$? To recalculate `dp[l][r][x][y]` we need to iterate over `i, j, k`, and I spent some time on constant optimizations, though in the end it worked in about a second, so some of them were probably unnecessary.
why $$$\log^4$$$? you have $$$x$$$, $$$y$$$, and the middle thing $$$z$$$. the same way as with $$$n^3$$$.
in my and the editorial's solution, there are a total of 7 variables, i.e. 7 for loops. Then I optimized with the following:
When a[i] = -1, we can just add up all ways
otherwise, there is a dependency between 2 variables, so looping over 6 is enough.
oh. i see. i did exactly the same thing, yes.
Has anyone solved Div1B using dynamic programming? I was trying to solve, but was getting Memory Limit Exceeded. 331823637 How to optimize the memory here?
I think you are copying the vector x for each index. That could be why.
funny white text on 1B :)
If you forgot to deal with the nodes of degree $$$0$$$ in D1E,
you are not alone. $$$\Large\unicode{x1F62D}$$$
Also, my screencast (in Chinese)
Given an array A of length n, define a transformation on each element as B[i] = 2n - A[i]. Then construct a new array by applying the following rules:
- Initialization: set A[0] = min(A[0], B[0]).
- For each index j from 1 to n - 1, compare the previous value A[j - 1] with both A[j] and B[j]:
  - Case 1: if A[j - 1] ≤ A[j] and A[j - 1] ≤ B[j], choose min(A[j], B[j]) as the next value.
  - Case 2: if A[j - 1] > A[j] and A[j - 1] > B[j], again choose min(A[j], B[j]).
  - Case 3: if A[j - 1] > A[j] and A[j - 1] ≤ B[j], choose B[j].
  - Case 4: if A[j - 1] ≤ A[j] and A[j - 1] > B[j], choose A[j].

will it work...??
For problem Div2E1, the editorial says
For example, if $$$f_0−f_1 = (10000101)_2$$$, we can get $$$s_1 = s_3 = s_8 = '('$$$.
shouldn't it be $$$s_1 = s_3 = s_8 = ')'$$$? Or am I understanding it wrong?
can anyone provide me questions similar to "D2A Submission is All You Need"
Good
Is it just me, or is the constraint for C confusing af?..
It's basically sort all segments increasing L and decreasing R for same Ls, then
Works NlogN.
same, inspired by job scheduling
Could anybody explain a little bit clearer on how does DP in D1D work? Can't get it from editorial :/
Div1C can be done in 100 or less queries by observing we can ask for the answer of a string consisting of "$$$s_i)$$$" concatenated $$$x$$$ times and get $$$\frac{x(x+1)}{2}$$$ as the answer if $$$s_i$$$ is '$$$($$$' and $$$0$$$ otherwise.
With that in mind we can try to construct every power of $$$2$$$ up to $$$2^x$$$ as a sum of triangular numbers and query for it after. Turns out $$$x$$$ can go up to $$$13$$$ if our max for characters in a query is $$$1000$$$, so we can solve even the hard version this way.
Nice solution! May I know the proof that each power of 2 can be represented as the sum of triangular numbers? Thanks
$$$1$$$ is a triangular number, so actually any positive value can be represented by triangular numbers. You can construct a number $$$x$$$ greedily by always picking the greatest triangular number less than or equal to $$$x$$$ and subtracting it from $$$x$$$. I don't know if that's the best way, but it's good enough for this problem.
Ok, I see. It turns out that Gauss's Eureka Theorem states that all positive integers can be represented as the sum of $$$3$$$ triangular numbers. The optimal configuration also cannot be found greedily. However, finding this configuration is not needed for this problem, as seen in 332146124.
My code utilizes the fact that a similar system to replace binary encoding can be created by just using triangular numbers. I brute force each triangular number in a separate piece of code to find the indexes of a series of triangular numbers where $$$a_{i_j} > \sum_{k=1}^{j-1} a_{i_k}$$$ to preserve the uniqueness of each possible combination. The maximum number of locations per query using this solution is 13. If anyone finds a better solution in terms of locations per query, please let me know how you did it!
in D2E1 & E2 & E3, i can only find a '(' to solve all the problems : (x ((yy (x ((yy ((((zzzz ((((((((wwwwwwww ...... (x ( (y(y ( (z(z(z .......
the first is (x((yy
the second is (x((yy((((zzzz((((((((wwwwwwww....
the third is (x ( (y(y ( (z(z(z like the second solution of E3
C3 too hard for me :(
Is there any known technique for finding ways to construct the string in E2 / C2 and E3 / C3?
For C1, I just worked it out in my notebook. But for C2, I had to run backtracking, and I couldn't :(
The overall solution involves using $$$2^{0}, 2^{1}, ..., 2^{k - 1}$$$ occurrences to detect $$$k$$$ unknown characters in the string, and after that, it's just maximizing $$$k$$$.
But how do you know that? Or just using backtracking to find out ^^
Sorry because I'm just caring about "how to think", not the editorials :D :D :D
It was an accident I guess haha. I was doing E1 where I tried to know 2 unknown characters at a time and my brain just thought of making one appearing 2 times while the other exists once.
Yeah it took me about 40 minutes to write on notebook to find out the way to construct on E1 / C1.
I want to find s[i] and s[i + 1], so I listed all the cases
But on E2 / C2, I think I have to find about 6 contiguous characters, so can not write on notebook, I used backtracking but can't :( :( :(
If only you noticed the binary coding fr.
I made a brute force to find pattern in C2/E2
Same as me, but I used backtracking; my backtracking program ran for about 30 minutes and it still did not find the ways :D
Alternate solution for 1C2:
331884800
My $$$O(n^3\log^3n)$$$ solution to div1D got TLE until I optimized it to $$$O(n^3\log^2n)$$$. Maybe the constraints are too narrow.
I passed with $$$O(n^3\log^4 n)$$$ lol
Actually, Problem C/E can determine 13 locations in one query. It needs about 87 queries.
The main idea is using the $$$\underbrace{()()()()...()}_{c\ times}$$$ structure, where $$$f(s)=\binom{c}{2}+c$$$. Just use a knapsack and you can see that $$$2^0, 2^1,\dots, 2^{12}$$$ only need about $$$700$$$.
Code: 331838756
Sorry for my poor English.
Ok, so something strange that I noticed is that I had a couple of asserts placed throughout the code. Then, when I turned in C2, I got WA. I figured it was a flaw with my approach and spent a decent amount of time trying to figure out what was wrong. Then, through a moment of pure divine stupidity, I decided to turn in my code under the assumption that the WA was actually an RTE... and it worked.
After the contest, I checked the submissions and indeed, my RTE was turned into a WA. So, are interactive problems managed this way or is this something new that I should take into account? Also, does anybody know if the same behavior is present in ICPC tournaments (domJudge, etc)?
really good round
For div2B, if anyone is struggling to understand/prove "Why every number except 1 can be obtained with just 2 & 3":
The arrangement below is optimal for Bob, as it has no adjacent pair of 0 and 1.
0 0 0 2 2 2 1 1 1
The possible pair sums we are left with are 2 and 3.
Every integer x of the form x = 2*a + 3*b (with a, b >= 0) can be obtained.
In the context of this problem, x = s - sum_of_array.
Chicken McNugget theorem
Consider coprime integers m, n and representations a*m + b*n = X with a, b >= 0.
Then the greatest number X that cannot be obtained by any combination of (m, n) is m*n - (m + n).
This is also provable without the Chicken McNugget Theorem: all even numbers can be represented as 2x, where x is an arbitrary positive integer, and all odd numbers can be represented as an even number + 1, and 2x + 1 = 2(x - 1) + 3; since we can't have x <= 0 (as then you would be subtracting 2), you can get all numbers greater than 1.
Thanks for smart and concise proof!!
But I am so happy that I found this theorem, because I have encountered similar problems like this but had hard time proving it.
Is it possible to solve div2D/div1B using merge sort? If yes, can you please give some ideas on how to implement it? I tried a lot, and it always failed on pretest 2.
Attention is all you need , Famous paper on Transformer model.
my solution of C in O(nlogn) time complexity
Can you explain in detail, please?
I looked at your code; I think it won't work in the case where an edge is fully covered by another.
It will work. All points are distinct, and I just need to go from the minimum value to the maximum value without creating any cycle.
Can you give a test case for the situation you described?
Sorry, I misunderstood. Your code's output does achieve the maximum value, but it selects unnecessary edges.
example:
1
5
1 3
3 5
4 6
5 6
1 4
your output:
4
1 5 2 3
good idea
Can someone explain the solution to div2D? I can't understand the editorial.
I was confused by this problem too, but this is how I figured it out:
The actual solution involves going through the elements from the smallest value to the largest. Start with 1 (no matter what index it is at). You can either keep it as 1 or flip it to the highest possible number (2N-1), resulting in an inversion count of either the number of items before that index (if you keep it as 1, since all the numbers before it will be higher) or the number of items after it (if you flip it to the highest possible number). Then you REMOVE that index from the permutation p, go to the next value, 2, and repeat. Notice that whether you flipped 1 or not, it WILL NOT affect any items after it! This is the part that I missed! Think about it:
Because of this logic, all you need to do is iterate through the numbers from the lowest -> highest original value! If you don't want the hassle of removing the numbers, you could do what the editorial did and just pre-calculate all the numbers to the left/right that are greater for each index considering that all the lower numbers would've already been processed and removed by the time you got to the new number. I hope that makes sense!
You can see my solution here: 331985291
"Notice that whether you flipped 1 or not, it WILL NOT affect any items after it!" — I can see why this is true: because it's 1, and it's always going to be either the smallest or the largest (if you invert it). Why is that true for other numbers?
Because you're processing the numbers in increasing order, it's guaranteed each of the flipped numbers will be lower than previously flipped ones. (Ex. If you flipped 3 first, then flipping 5 later will result in a value less than the result of flipping 3. You account for all the inversions when you first flip 3, including the one involving 5 later, so whether or not you end up flipping 5, its relation with 3 is irrelevant as it was already accounted for.)
So if i is processed after j, then because p_j < p_i, no matter how you choose a_j, either a_j is bigger than both p_i and 2n - p_i, or it's smaller than both. I get it now, thank you!
Thank you this was an excellent explanation!!
of course! i’m happy it helped :)
D1C3
Here's a different approach: Start by querying $$$s_1s_2s_1$$$. If the answer is $$$1$$$, then $$$s_1 \neq s_2$$$; otherwise, $$$s_1 = s_2$$$. Based on this, we can optimize the subsequent string construction: build a pattern like $$$s_1s_ks_1\dots s_ks_1$$$, where $$$s_k$$$ is repeated $$$x$$$ times. If $$$s_1 \neq s_k$$$, the answer will be $$$\frac{x \cdot (x + 1)}{2}$$$, and the query length will be $$$2x + 1$$$.
Next, we need to construct a set $$$A$$$ that is as large as possible, satisfying:
The subset sums of $$$\{\frac{x \cdot (x + 1)}{2} \mid x \in A \}$$$ are all distinct;
The total length constraint $$$\sum_{x \in A} (2x + 1) \le 1000$$$.
Each query can then determine $$$|A|$$$ positions' relation to $$$s_1$$$. Finally, by finding any position $$$s_k$$$ where $$$s_k \neq s_1$$$ and querying $$$s_1s_k$$$, we can deduce $$$s_1$$$ and thus the entire string. The total number of queries is at most $$$\lceil \frac{n - 1}{|A|} \rceil + 1$$$.
Constructing Set $$$A$$$: Start with $$$1$$$ and iteratively add numbers, ensuring each new element is larger than the sum of all previous elements. This gives $$$|A| = 13$$$, with a worst-case query count of just $$$78$$$.
Div.1-E, I noticed that it can be solved in O((N+M)*sqrt(Q) + Q*sqrt(N)) without thinking about degrees.
First, sort the queries by L, after that, sort them by R for each block of [0,D), [D,2D),...,[kD,Q) with a block size D ( = O(sqrt(Q)) ). Then, in that order, just expand and contract L and R like in Mo's algorithm.
During the expansion/contraction, the update at each R happens at most once per block. The update for each L happens in at most 2 blocks: if the same L spanned 3 or more blocks, then in all of those blocks except the first and the last, L would stay constant and need no updates.
I think this technique is useful, and with this algorithm, I got the current fastest: https://mirror.codeforces.com/contest/2129/submission/331921365
Proof for the solution of D?
In div2C/div1A, why does the simple dfs search tree not work?
That's what I did, and it passed.
sorry man I made a mistake while coding it and figured out but then forgot to delete the comment :(
I think the tutorial for div2D is not well written; it would be easier to understand if it said that, for a relation a < b, the relation can only change when you convert a to 2n - a, not when you convert b to 2n - b.
Is there any intuition for E3? I can understand how I can find 12 locations with one query and why it works, but getting that idea for optimizing the number of queries seems to be on a different level.
The key intuition here for me was try to separate brackets in query. Here I tried describe my whole thoughts during contest:
I thought to guess $$$12$$$ locations in one query. In my query $$$f_{1} s_{1} f_{2} s_{2} \ldots f_{12} s_{12} f_{13}$$$, I must divide the values $$$s_i$$$ with separators $$$f$$$ so that $$$s_1, s_2, \ldots, s_{12}$$$ do not intersect inside any RBS, and then the query gives me the value $$$v_1 + v_2 + \ldots + v_{12}$$$. Now the task is: given the sum, recover the numbers that formed it. This looks like a greedy, for example like reading a number from its binary notation. We can't quite make powers of two, but we can get the values $$$1, 3, 6, 10, 15$$$, etc.; I checked that if we choose numbers so that each is at least twice the previous one, we get exactly $$$12$$$ numbers.
Wow, very clever. Thanks for explaining.
For Div2B, if n = 3, s = 10 and a = {0,1,2}, why is the answer -1? Why not 0 2 1?
$$$[0, 2, 0, 2, 1, 2, 0, 2 , 1]$$$ gives $$$10$$$
I got it. Thanks a lot!
A simpler and more detailed explanation of Div.1B (Div.2D) if anyone needs it
We want to build an array A, each element of which is either p[i] or 2*n - p[i]. For convenience, we will call p[i] the "smaller" value, and 2*n - p[i] the "larger" value.
So, we want to put either the larger or the smaller element in each position of the array so that the final number of inversions is minimal. The idea is this: for each position, we consider both options, counting how many inversions are GUARANTEED to be added after placing the smaller element and how many after placing the larger one. We choose whichever gives fewer inversions at the current position. For position i we want to choose which element to put:
1) If we put the smaller element, we count how many inversions are GUARANTEED to be added. A guaranteed inversion is added for every position before the current one (j = 0, 1, ..., i - 1) that holds a number greater than our current "smaller" value. The number of such elements before the current index is exactly the number of inversions added.
Inversions could also be added if AFTER the current index there is an element LESS than the current one. But we said we only count inversions that are added GUARANTEED, IN ANY CASE, regardless of how we arrange the following elements, and nothing on the right is guaranteed: if there is some element smaller than the current one, we could always make that element 2*n - p[j] at some future step, and it would then be greater than the current one. Result: we do not count smaller elements on the right, because they may not remain smaller at all; they guarantee nothing.
2) If we put the larger element, then, firstly, we can say for sure that an inversion is added for every element to the RIGHT of the current one that is greater than it. Why? The largest element possible in the array is 2*n - 1 (since the minimum element in p is 1), and the smaller the number, the larger its "mirrored" version: we subtract the element from 2*n, and the larger the subtrahend, the smaller the difference. So if we inserted the current element as 2*n - p[i] and then find some p[j] on the right larger than the current p[i], the difference 2*n - p[j] will be smaller, and of course p[j] itself is also smaller than 2*n - p[i] (if even the enlarged version of the element is smaller, the regular version is smaller still). Result_1: if an element larger than the current one exists on the right, it NECESSARILY adds an inversion. We count the number of larger elements on the right.
Now the important part of this case. We've counted on the right, good. If there is an element on the left that is larger than the current one, do we have to add another inversion, since it is also GUARANTEED to be in the array? The answer is no. We don't need to count larger elements on the left, because the minimum possible value of an "enlarged version" is 2*n - n = n; so if we find an element on the left larger than the current one, it is necessarily the enlarged version of some previous number. And we just established that when we insert the larger version of a number, we count the inversions it creates with the elements to its right, and our current number is exactly to its right. So if we also counted the larger elements on the left, we would count the same inversions twice. For example: in step #2 we added the inversion (2; 7), and then in step #7 we would count the inversion (7; 2), which is the same inversion. It was already accounted for earlier, so there is no need to count it again. Conclusion_2: we do not count larger elements on the left, because their inversions with the current number were already counted in a previous step.
So, we figure out which version of the element, smaller or larger, gives fewer inversions. Let the inversion counter for the smaller version of the element be count1, and for the larger one count2. There are several ways to finish. The simplest, where we only need the number of inversions: immediately add the minimum, min(count1, count2), to the result.
The second way takes a few extra actions but may be clearer for a beginner. Earlier we said we would choose the version with fewer inversions, so let's actually choose it: if count1 < count2, set result[i] = p[i]; otherwise (count2 <= count1), set result[i] = 2*n - p[i]. Once the entire array is restored, simply count its inversions with a double loop in O(N^2) and output the answer.
I think there is a mistake in the explanation of problem C2. The last line should be s1=s3=s8=′)′ Am I right?
Was looking for this comment, and yes it should be ')' instead of '('.
I really don't know why, in D1D/D2F, calculating dp[l][r][x][y] should multiply by the binomial coefficient $$$\binom{r-l}{i-l}$$$. Sorry for my poor English; I didn't know how to typeset the binomial.
It accounts for the order of placement between the two different intervals: the binomial counts the ways to interleave them.
So, what is the relationship between Problem C and the Disjoint Set Union?
In div2 C, is it a typo: "we can always find $$$x_{i-1} \lt x_i \gt x_{i+1}$$$ , where either $$$[x_{i-1},x_i] \subseteq [x_i,x_{i+1}]$$$ or $$$[x_i,x_{i+1}] \subseteq [x_{i-1},x_i]$$$" and should be "$$$[x_{i-1},x_i] \subseteq [x_{i+1},x_i]$$$ or $$$[x_{i+1},x_i] \subseteq [x_{i-1},x_i]$$$" instead ($$$x_{i+1}$$$ and $$$x_i$$$ swapped)?
Why can't the solution to Div1.A even solve the example? The solution's output is 1 1 2 3 4, but the correct output is 1 1 3 1 2 4.
I really liked how d1A makes an n^2 solution look like the intended one when the actual solution is O(n).
Can someone explain why d2b works
D2E3 / D1E3, and the problem as a whole, is one of my favourite problems I have encountered and solved on CF. For a long time I was stuck on trying to query
( i₁ i₂ ), but this doesn't work because you have no way to distinguish whether i₁ is ')' or i₂ is '(' if the query returns 1. After a while, a hint led me to the realization that we'll have to query something like
i₁ ) i₁ ) ) i₂ )
to distinguish them. This is sufficient for E1.
Key observation: a longer RBS always has more valid substrings. Define f(x)=x(x+1)/2 as the total substrings formed by x consecutive pairs. We choose k₁,k₂,… such that f(kᵢ)>∑_{j< i}f(kⱼ) so each block’s contribution is larger than all previous combined.
Powers of 2 satisfy this, but only up to 8 of them fit under 1000 characters. Instead, brute-force up to 1000 to find all kᵢ, yielding (in descending order): {123, 87, 61, 43, 30, 21, 15, 10, 7, 5, 3, 2, 1}.
We build the query as [ i₁ ) ]×123 + ) + [ i₂ ) ]×87 + ) + … + [ i₁₃ ) ]×1 + )
Total length of query = 829 characters.
Since each f(kᵢ) exceeds the sum of all earlier f's, we can decode greedily from the largest block: the remaining total is at least f(kₓ) iff iₓ is '(' , else it is ')' . This recovers 13 positions in 1 query (vs the editorial's 12), enabling a more lenient binary search.
Total queries: ⌈1000/13⌉ + 2·log₂(1000) + 2 ≈ 99
Amazing problem, thank you.
Tutorial of D1F2.
$$$2\times(1+29+\frac{29\times 30}{2})=872$$$
should be changed to
$$$2\times(1+29+\frac{29\times 28}{2})=872$$$
Div2D can be solved in $$$O(n \log n)$$$ by greedily choosing the best possible element from the back.. https://mirror.codeforces.com/contest/2130/submission/332358826
For questions like Div1C1, do you think for a while to find a bracket pattern with 2 missing positions that gives a different number of valid bracket substrings for each of the 4 possibilities, or do you write a script to brute-force every bracket string of length x?
I think you can tighten the complexity analysis of D to around $$$O(n^3)$$$. Here's how:
Note the sum of the positive elements of $$$s$$$ is bounded by $$$n - 1$$$ (or else the answer is always $$$0$$$). Concretely, $$$\sum_{i = 1}^{i = n} \text{max}(s[i], 0) \leq n - 1$$$ going forward.
We take the same approach as the tutorial, except in our $$$dp[l][r][x][y]$$$ we only need to maintain $$$x \leq s[l - 1]$$$ and $$$y \leq s[r + 1]$$$. If either of these are $$$-1$$$ then we'll just code them specially (so shift everything up by $$$1$$$, per se).
Then, for some fixed $$$l, r$$$, we have the following scenarios:
$$$s[l - 1] \geq 0, s[r + 1] \geq 0 \Rightarrow (s[l - 1] + 1) \cdot (s[r + 1] + 1)$$$ states.
$$$s[l - 1] = -1, s[r + 1] \geq 0 \Rightarrow (s[r + 1] + 1)$$$ states.
$$$s[l - 1] \geq 0, s[r + 1] = -1 \Rightarrow (s[l - 1] + 1)$$$ states.
$$$s[l - 1] = -1 = s[r + 1] \Rightarrow 1$$$ state.
Hand-wavily we can say this is roughly bounded by $$$(s[l - 1] + 1) \cdot (s[r + 1] + 1)$$$ (if we pretend to "clip" $$$s$$$ to $$$0$$$).
The cost of a transition is to iterate over the entire range and iterate to fix some $$$r$$$ to the left and some $$$s[k] - r$$$ on the right (similar to the tutorial). So the total cost to compute a state is approx. $$$s[l - 1] \cdot s[r + 1] \cdot ((s[l] + 1) + \ldots + (s[r] + 1))$$$. The latter term is bounded by $$$n - 1 + n = 2n$$$. So the total cost of everything is bounded by $$$\sum_{l = 1}^{l = n} \sum_{r = l}^{r = n} (s[l] + 1) \cdot (s[r] + 1) \cdot 2n \leq ((s[1] + 1) + \ldots + (s[n] + 1))^2 \cdot 2n \leq 8n^3 = O(n^3)$$$.
Implementation is more of a pain with this for sure, because you need to make sure you're not iterating $$$x$$$ and $$$y$$$ too much, and handle edge cases when $$$s[l - 1] = -1$$$ or $$$s[r + 1] = -1$$$ or $$$s[k] = -1$$$ when iterating between $$$l$$$ and $$$r$$$.
For the problem "Stay or Mirror", why is the number of inversions involving $$$i$$$ related only to the choice of $$$a_i$$$?
When we say that the number of inversions involving $$$i$$$ depends only on $$$a_i$$$, we mean that it doesn’t matter what decisions we make for the other elements: if an element is greater than $$$p_i$$$, it will remain greater regardless of whether we keep it as $$$p_j$$$ or change it to $$$2n - p_j$$$; and if it is smaller, it will remain smaller in both cases.
The key is to count the inversions involving $$$i$$$ without double-counting cases.
If we set $$$a_i = p_i$$$ (small value), then all elements greater than $$$p_i$$$ that are to the left will form inversions with $$$i$$$. This remains true even if those elements are changed to $$$2n - p_j$$$, because they will still be greater than $$$p_i$$$.
If we set $$$a_i = 2n - p_i$$$ (large value), then all elements greater than $$$p_i$$$ that are to the right will form inversions with $$$i$$$. This also remains true even if they are changed to their large form, because they will still be smaller than $$$a_i$$$.
Thus, for each $$$i$$$, there are only two possible inversion counts involving it: the number of greater elements to its left (if we keep $$$p_i$$$) or the number of greater elements to its right (if we mirror it).
We choose the smaller of the two, then remove $$$i$$$ from the array. After this, the remaining problem is of exactly the same type, so the process repeats recursively until all positions are processed.
Loved the solution for D1B — Stay or Mirror!!
Click here
For Div1A, I ended up doing an $$$O(n)$$$ greedy. I made the observation that, for a range $$$[s_k,e_k]$$$, a cycle through $$$s_k$$$ and $$$e_k$$$ would require two paths from $$$s_k$$$ to $$$e_k$$$ (one of which is redundant for the score $$$f(S)$$$). Accordingly, if we treat $$$e_k$$$ as a unique representative and take the smallest corresponding $$$s_{k'}$$$, this eliminates nested subintervals while keeping the answer optimal. (Note that, while I mention a representative, this solution does not use a DSU; a DSU is a touch overkill.)
Can anyone please help me find the mistakes in this solution for problem D1A, Double Perspective?
If you are stuck on understanding what the editorial for Div1D is trying to do, like I was: the DP state is dp[l][r][x][y] = the number of ways to fill in the subarray from l to r, assuming l-1 and r+1 have already been chosen and that the counts of l-1 and r+1 must increase by x and y respectively when filling in this subarray. Then we calculate by considering which of the elements from l to r is chosen first.