Thank you for participating!
Special thanks to hxu10 for giving me the chance to write the editorial, and to maomao90 for enhancing it!
The editorials for problems E and F were written by hxu10, and the rest were written by me. I hope they're insightful and concise. Please let me know your opinion about them, or any feedback in general 💙.
2156A - Pizza Time
Since $$$m_1 \le m_2$$$, the total number of slices Alice ate is always at least the total number of slices Bob ate. What's the optimal construction?
We want to try to minimize the difference between the total number of slices Alice ate and the total number of slices Bob ate.
The optimal construction is to let both $$$m_1$$$ and $$$m_2$$$ equal one. The sequence of operations, denoted by $$$(m_1, m_2, m_3)$$$, is as follows:
The general form of the sequence
Note that we're considering the operations when $$$m_1 \le m_2 \le m_3$$$.
Consider the case where $$$n$$$ is even:
The number of operations is the length of the sequence $$$\{2,4,6,\ldots, 2(\frac{n}{2}-1)\}$$$, which is $$$\frac{n}{2}-1$$$.
Consider the case where $$$n$$$ is odd:
The number of operations is the length of the sequence $$$\{1,3,5,7,\ldots, 2(\lfloor \frac{n}{2} \rfloor-1)+1\}$$$.
By the representation of odd numbers this is $$$\{2(0)+1,2(1)+1,\ldots, 2(\lfloor \frac{n}{2} \rfloor-1)+1\}$$$, which has the same length as $$$\{0,1,2,\ldots, \frac{n-1}{2}-1\}$$$, namely $$$\frac{n-1}{2}$$$.
Both cases are equivalent to the closed form $$$\left\lfloor \frac{n-1}{2} \right\rfloor$$$.
Complexity $$$\mathcal{O}(1)$$$.
def solve():
    n = int(input())
    print((n - 1) // 2)

for _ in range(int(input())):
    solve()
It's also possible to use a fair split: split into $$$\lfloor \frac{n}{3} \rfloor , \lfloor \frac{n}{3} \rfloor , \lfloor \frac{n}{3} \rfloor + n \bmod 3$$$ and keep simulating with $$$n=m_3$$$ until $$$m_2 \le 2$$$.
Complexity $$$\mathcal{O}(\log_3(n))$$$.
def solve():
    n = int(input())
    ans = 0
    while n > 2:
        ans += n // 3          # operations performed this round
        n = n // 3 + n % 3     # continue with the last pile
    print(ans)

for _ in range(int(input())):
    solve()
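The two solutions agree; a quick cross-check of the closed form against the fair-split simulation:

```python
def simulate(n):
    # fair-split simulation from above: each round performs n // 3
    # operations and continues with n // 3 + n % 3 slices
    ops = 0
    while n > 2:
        ops += n // 3
        n = n // 3 + n % 3
    return ops

for n in range(1, 5000):
    assert simulate(n) == (n - 1) // 2
```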
2156B - Strange Machine
What is the expensive part of processing the operation string $$$s$$$?
If the string $$$s$$$ contains at least one $$$\mathtt{B}$$$ operation, what is the maximum number of iterations we need before $$$a=0$$$?
If the string $$$s$$$ contains $$$x$$$ copies of $$$\mathtt{B}$$$, then $$$a$$$ is guaranteed to reach $$$0$$$ within $$$\mathcal{O}(\log_{2^x} A)$$$ passes over $$$s$$$, where $$$A=\max(a_1,a_2,\ldots ,a_n)$$$; that is, at most $$$\mathcal{O}(n \log_2 A) \sim \mathcal{O}(30n)$$$ operations are required.
How do we handle too many appearances of $$$\mathtt{A}$$$ efficiently?
It's enough to handle the case $$$\underbrace{\color{red}{\mathtt{AAAAA\ldots AAA}}}_{}$$$ separately: the answer for query $$$a$$$ is $$$a$$$ itself, because we only decrement by $$$1$$$ at each iteration. Otherwise, the complexity would become $$$\mathcal{O}(10^9q)$$$ for such cases, which is too slow. The remaining cases can be handled in $$$\mathcal{O}(nq \log_2(10^9))$$$, which is safe according to the constraints.
If there is at least one $$$\color{blue}{\mathtt{B}}$$$ in the string $$$s$$$, then each query takes $$$\mathcal{O}(n\log_2(A))$$$ operations, which is $$$\sim 20 \cdot 30 = 600$$$ for $$$n \le 20$$$; with $$$\sum q \le 10^4$$$, the worst case is roughly $$$6 \cdot 10^6$$$ operations, which is safe.
If we have a string $$$s$$$ that is all $$$\color{red}{\mathtt{A}}$$$, i.e. :
then the answer for each $$$a_i$$$ is $$$a_i$$$ itself, because we cannot do anything except subtract $$$1$$$ many times; otherwise the complexity becomes $$$\mathcal{O}(10^9q)$$$ for such cases, which is too slow.
Complexity $$$\mathcal{O}(nq \log_2(10^9))$$$.
Compress the string $$$s$$$ into a sequence of blocks $$$(\mathrm{type},\mathrm{count})$$$; for $$$s=\mathtt{AABBBABAAA}$$$ the sequence of blocks is $$$(\mathtt{A},2),(\mathtt{B},3),(\mathtt{A},1),(\mathtt{B},1),(\mathtt{A},3)$$$.
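A compact way to build these blocks in Python (the model solution below does the same with an explicit loop):

```python
from itertools import groupby

s = "AABBBABAAA"
# groupby yields maximal runs of equal characters in order
blocks = [(ch, len(list(run))) for ch, run in groupby(s)]
print(blocks)  # -> [('A', 2), ('B', 3), ('A', 1), ('B', 1), ('A', 3)]
```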
Be careful in simulation
Observation: the expensive part is processing a long segment $$$\underbrace{\color{red}{\mathtt{AAAAA\ldots AAA}}}_{}$$$ operation by operation. Consider, for example, the sequence $$$\underbrace{\color{red}{\mathtt{AAAAA\ldots AAA}} \color{blue}{\mathtt{B}}}_{}$$$.
Processing it one operation at a time takes $$$\mathcal{O}(nq \log_2(10^9))$$$ in the worst case, which is inefficient.
A good way to optimize is to compress maximal blocks of equal operations into pairs $$$(\mathrm{type},\mathrm{count})$$$; that way every long block of $$$\color{red}{\mathtt{A}}$$$ is applied in a single step. Blocks of $$$\color{blue}{\mathtt{B}}$$$ are already fast to process in order, since halving reduces $$$a$$$ to $$$0$$$ in at most $$$\mathcal{O}(\log_2(10^9)) \sim 30$$$ steps; still, compressing them as well keeps the processing uniform.
As a result, we process at most $$$\sim 60$$$ blocks per query: the worst case is an alternating pattern $$$\color{red}{\mathtt{A}}\color{blue}{\mathtt{B}}\color{red}{\mathtt{A}}\color{blue}{\mathtt{B}}\color{red}{\mathtt{A}}\color{blue}{\mathtt{B}}\ldots$$$ with $$$a=10^9$$$.
Now after composing the blocks as $$$(\mathrm{type},\mathrm{count})$$$ , you should iterate through them :
- If $$$s_i=\color{red}{\mathtt{A}}$$$ then we perform $$$a_i=a_i-\min(\mathrm{count},a_i)$$$
- If $$$s_i=\color{blue}{\mathtt{B}}$$$ then we perform $$$\displaystyle a_i=\left \lfloor \frac{a_i}{\displaystyle 2^{\min(\mathrm{count},\,b_i)}} \right \rfloor$$$, where $$$b_i$$$ is the number of halvings needed to bring $$$a_i$$$ down to $$$0$$$ (the bit length of $$$a_i$$$).
Make sure to update the number of operations accordingly.
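The number of floor-halvings needed to send a positive integer $$$a$$$ to $$$0$$$ equals its bit length; a quick check of the cap used in the $$$\mathtt{B}$$$ case:

```python
def halvings_to_zero(a):
    # count floor-halvings until a becomes 0
    steps = 0
    while a > 0:
        a //= 2
        steps += 1
    return steps

for a in range(1, 4096):
    assert halvings_to_zero(a) == a.bit_length()
```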
Complexity $$$\sim \mathcal{O}(n+q \log_2^2(10^9))$$$.
def solve():
    n, q = map(int, input().split())
    s = input().strip()
    a = list(map(int, input().split()))
    A, B = s.count('A'), s.count('B')
    for i in range(q):
        if B == 0:
            print(a[i])
        else:
            ans = 0
            while a[i]:
                for j in s:
                    if a[i] == 0:
                        break
                    ans += 1
                    if j == 'A':
                        a[i] -= 1
                    else:
                        a[i] //= 2
            print(ans)

for _ in range(int(input())):
    solve()
def solve():
    n, q = map(int, input().split())
    s = input()
    a = list(map(int, input().split()))
    B = s.count('B')
    # compress s into blocks (type, count)
    last = s[0]
    seq = []
    cur = 1
    for i in range(1, n):
        if s[i] == last:
            cur += 1
        else:
            seq.append([last, cur])
            cur = 1
            last = s[i]
    seq.append([last, cur])
    if B == 0:
        for i in a:
            print(i)
    else:
        for i in range(q):
            ans = 0
            while a[i] > 0:
                for j in seq:
                    if a[i] == 0:
                        break
                    if j[0] == 'A':
                        mn = min(j[1], a[i])
                        ans += mn
                        a[i] -= mn
                    else:
                        # halvings needed to bring a[i] down to 0
                        x = a[i]
                        p = 0
                        while x:
                            x //= 2
                            p += 1
                        mn = min(p, j[1])
                        ans += mn
                        a[i] //= 2 ** mn
            print(ans)

for _ in range(int(input())):
    solve()
2156C - Maximum GCD on Whiteboard
You can do all the Erase operations before the Split operations.
Suppose there exists an optimal solution where you do an Erase operation on an element that was produced from a Split operation. You can instead do an Erase operation on the original element that it was being Split from, which will result in the whiteboard containing a subset of the elements in the optimal solution. Hence, it will have a beauty of at least as much as the optimal solution.
Fix a target gcd $$$g$$$. Which numbers can be made divisible by $$$g$$$ using the Split operation?
If $$$x$$$ is a multiple of $$$g$$$, we do not need any Split operation. If $$$x\ge 4g$$$, one split can result in two multiples of $$$g$$$ (discard the middle).
Split $$$x$$$ into $$$(g, g + x\bmod g, x - 2g - x\bmod g)$$$. Both $$$g$$$ and $$$x - 2g - x\bmod g$$$ are divisible by $$$g$$$.
- If $$$x \lt 5g$$$, $$$x - 2g - x\bmod g = 2g$$$ and $$$g + x\bmod g \le 2g = x - 2g - x\bmod g$$$, hence $$$x_1 \le x_2 \le x_3$$$ is satisfied.
- Otherwise, $$$x \ge 5g$$$, so $$$x - 2g - x\bmod g \ge 5g - 2g - g = 2g \ge g + x\bmod g$$$, hence $$$x_1 \le x_2 \le x_3$$$ is satisfied.
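The split above can be verified mechanically for every $$$x \ge 4g$$$ (a brute-force sanity check, not needed for the solution):

```python
def check_split(g, x):
    # split x (with x >= 4g) into (g, g + x mod g, x - 2g - x mod g)
    r = x % g
    parts = (g, g + r, x - 2 * g - r)
    assert sum(parts) == x
    assert parts[0] <= parts[1] <= parts[2]          # valid split order
    assert parts[0] % g == 0 and parts[2] % g == 0   # two multiples of g
    assert all(v >= 1 for v in parts)
    return parts

for g in range(1, 30):
    for x in range(4 * g, 400):
        check_split(g, x)
```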
Conversely, if $$$x\lt 4g$$$ and $$$x$$$ is not divisible by $$$g$$$, you can’t make it divisible by $$$g$$$ using splits, so it must be erased.
We will prove this by strong induction.
Let $$$P_i$$$ be the statement: $$$i$$$ cannot be made divisible by $$$g$$$ using any number of splits if $$$i \lt 4g$$$ and $$$i$$$ is not divisible by $$$g$$$.
Base case: $$$P_1$$$ is true as $$$1$$$ cannot be split.
Suppose $$$P_i$$$ is true for all $$$i \lt K$$$ for some $$$K \ge 2$$$. We will prove that $$$P_K$$$ is also true.
If $$$K \ge 4g$$$ or $$$K$$$ is divisible by $$$g$$$, $$$P_K$$$ is vacuously true. From here on, we will assume that $$$K \lt 4g$$$ and $$$K$$$ is not divisible by $$$g$$$.
Suppose we split $$$K$$$ into $$$x_1\le x_2\le x_3 \lt K$$$. If either $$$x_1$$$ or $$$x_3$$$ is not divisible by $$$g$$$, $$$P_{x_1}$$$ and $$$P_{x_3}$$$ implies that they can never be made divisible by $$$g$$$ using any number of splits. Hence, if we want to make $$$K$$$ divisible by $$$g$$$, $$$x_1$$$ and $$$x_3$$$ have to both be divisible by $$$g$$$.
If $$$x_1 \ge 2g$$$, $$$x_3 \ge x_2 \ge x_1 \ge 2g$$$, which means that $$$K = x_1 + x_2 + x_3 \ge 2g + 2g + 2g = 6g$$$, which contradicts $$$K \lt 4g$$$. Hence, $$$x_1 = g$$$ is the only possibility.
- If $$$x_3 = g$$$, $$$x_1 \le x_2 \le x_3 \rightarrow g \le x_2 \le g$$$. Hence, $$$x_2 = g$$$ as well, and $$$K = x_1 + x_2 + x_3 = g + g + g = 3g$$$, contradicting that $$$K$$$ is not divisible by $$$g$$$.
- If $$$x_3 \ge 2g$$$, $$$K = x_1 + x_2 + x_3 \ge g + g + 2g = 4g$$$, contradicting $$$K \lt 4g$$$.
Hence, it is not possible to make $$$K$$$ divisible by $$$g$$$ using any number of splits.
So $$$g$$$ is good if the number of elements that must be erased (those $$$x \lt 4g$$$ that are not multiples of $$$g$$$) is at most $$$k$$$.
Count them with a frequency array / map and a prefix sum for each $$$g=1,2,\ldots ,n$$$.
We want the maximum gcd $$$g$$$ achievable after $$$\le k$$$ erases and any splits.
For a fixed $$$g$$$, the elements that won’t consume an erase are: - all multiples of $$$g$$$ (already good), and - any $$$x\ge 4g$$$ (can be made good via one split using the construction in hint 3).
Thus, the only "bad" numbers (that would have to be erased) are those with $$$x\lt 4g$$$ that are not multiples of $$$g$$$. Therefore $$$g$$$ is good if the number of bad elements is at most $$$k$$$.
Since $$$1\le a_i\le n$$$, precompute a frequency array $$$f$$$ and prefix sums $$$p[v]=\sum_{u\le v}f[u]$$$. Then the number of bad elements for a fixed $$$g$$$ is $$$p[\min(n, 4g-1)] - f[g] - f[2g] - f[3g]$$$, where out-of-range terms are treated as $$$0$$$.
Iterate through $$$g=1,2,3,\ldots ,n$$$ and keep the largest good $$$g$$$.
Complexity : $$$\mathcal{O}(n)$$$.
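A small cross-check of this counting against the direct definition (a sketch; `bad_count` and the example array are illustrative, not part of the model solution):

```python
def bad_count(a, g):
    # count values x < 4g that are not multiples of g,
    # using a frequency array and prefix sums
    n = max(a)  # in the problem, values satisfy 1 <= a_i <= n
    f = [0] * (n + 1)
    for v in a:
        f[v] += 1
    p = [0] * (n + 1)
    for v in range(1, n + 1):
        p[v] = p[v - 1] + f[v]
    cnt = p[min(n, 4 * g - 1)]       # values strictly below 4g
    for m in (g, 2 * g, 3 * g):      # remove the multiples of g among them
        if m <= n:
            cnt -= f[m]
    return cnt

a = [1, 2, 2, 3, 6, 8]
for g in range(1, 9):
    direct = sum(1 for v in a if v < 4 * g and v % g != 0)
    assert bad_count(a, g) == direct
```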
#include <bits/stdc++.h>
using namespace std;
#define int long long

void solve() {
    int n, k;
    cin >> n >> k;
    vector<int> a(n), mp(n + 1), p(n + 1);
    for (auto &i : a) {
        cin >> i;
        mp[i]++;
    }
    for (int i = 1; i <= n; i++) {
        p[i] = p[i - 1] + mp[i];
    }
    int ans = 1;
    for (int g = 1; g <= n; g++) {
        int t = min(n, 4 * g - 1);
        int good = n - p[t];  // every value >= 4g is good
        if (g <= n) good += mp[g];
        if (2 * g <= n) good += mp[2 * g];
        if (3 * g <= n) good += mp[3 * g];
        if (good >= n - k) ans = g;
    }
    cout << ans << "\n";
}

signed main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) solve();
}
2156D - Find the Last Number
We determine the $$$k$$$-th bit of $$$p_n$$$ for $$$k=0,1,\ldots,\lfloor\log_2 n\rfloor$$$.
Let $$$\text{bit}_k(i)$$$ denote whether the $$$k$$$-th bit of $$$i$$$ is zero or one. How can we determine the $$$k$$$-th bit of $$$p_n$$$, i.e. $$$\text{bit}_k(p_n)$$$, from $$$\text{bit}_k(p_1) + \text{bit}_k(p_2) + \ldots + \text{bit}_k(p_{n - 1})$$$?
How can we achieve this halving?
Recall from Hint 2 that $$$\text{bit}_k(i)$$$ is defined as the $$$k$$$-th bit of the integer $$$i$$$. A naive solution using $$$n\log_2 n$$$ queries is as follows:
Compute $$$\text{bit}_k(p_1) + \text{bit}_k(p_2) + \ldots + \text{bit}_k(p_{n - 1})$$$ for all $$$k=0,1,\ldots,\lfloor\log_2 n\rfloor$$$.
$$$\text{bit}_k(p_i)$$$ can be computed with a single query $$$p_i \ \mathtt{AND} \ 2^k$$$ ($$$1\le i\le n - 1$$$).
If $$$\text{bit}_k(1) + \text{bit}_k(2) + \ldots + \text{bit}_k(n) = \text{bit}_k(p_1) + \text{bit}_k(p_2) + \ldots + \text{bit}_k(p_{n - 1})$$$, it means that the $$$k$$$-th bit of $$$p_n$$$ is zero. Otherwise, it is one.
This works due to the fact that $$$\text{bit}_k(p_1) + \text{bit}_k(p_2) + \ldots + \text{bit}_k(p_{n - 1}) = \text{bit}_k(1) + \text{bit}_k(2) + \ldots + \text{bit}_k(n) - \text{bit}_k(p_n)$$$.
To optimise this, we will halve the search space after each step. Starting from $$$k = 0$$$, we still use $$$n - 1$$$ queries to calculate $$$\text{bit}_k(p_1) + \text{bit}_k(p_2) + \ldots + \text{bit}_k(p_{n - 1})$$$. After using this to determine the value of the $$$0$$$-th bit of $$$p_n$$$, we can discard all $$$p_i$$$ that have a different $$$0$$$-th bit from $$$p_n$$$.
More formally, let $$$S_0$$$ denote the set of elements $$$i$$$ such that $$$i$$$ has the same $$$0$$$-th bit as $$$p_n$$$, and $$$P_0$$$ denote the set of elements $$$i$$$ ($$$1\le i\le n - 1$$$) such that $$$p_i$$$ has the same $$$0$$$-th bit as $$$p_n$$$. $$$\sum_{i\in P_0} \text{bit}_1(p_i) = \left(\sum_{i\in S_0} \text{bit}_1(i)\right) - \text{bit}_1(p_n)$$$, so the $$$1$$$-st bit of $$$p_n$$$ is zero if $$$\sum_{i\in P_0} \text{bit}_1(p_i) = \sum_{i\in S_0} \text{bit}_1(i)$$$.
In general, we can let $$$S_k$$$ denote the set of elements $$$i$$$ such that $$$i$$$ has the same $$$0$$$-th, $$$1$$$-st, $$$\ldots$$$ , $$$k$$$-th bit as $$$p_n$$$, and $$$P_k$$$ denote the set of elements $$$i$$$ ($$$1\le i\le n - 1$$$) such that $$$p_i$$$ has the same $$$0$$$-th, $$$1$$$-st, $$$\ldots$$$ , $$$k$$$-th bit as $$$p_n$$$. Then, the $$$k$$$-th bit of $$$p_n$$$ is zero if $$$\sum_{i\in P_{k-1}} \text{bit}_k(p_i) = \sum_{i\in S_{k-1}} \text{bit}_k(i)$$$.
Observe that $$$|P_k| \le \left\lceil \frac{|P_{k - 1}|}{2} \right\rceil$$$ for all $$$k\ge 1$$$. Hence, the total number of queries used is at most $$$(n-1) + \left\lceil \frac{n-1}{2} \right\rceil + \left\lceil \frac{n-1}{4} \right\rceil + \ldots \le 2(n-1) + \mathcal{O}(\log_2 n)$$$.
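A quick sanity check of this bound (a sketch: the candidate-set sizes below mirror $$$|P_k|$$$, assuming the worst-case halving $$$|P_k| = \lceil |P_{k-1}|/2 \rceil$$$):

```python
def total_queries(n):
    # one pass per bit of n; each pass queries every surviving candidate,
    # and the candidate set at worst halves (rounded up) afterwards
    total, rest = 0, n - 1
    for _ in range(n.bit_length()):
        total += rest
        rest = (rest + 1) // 2
    return total

for n in range(2, 5000):
    assert total_queries(n) <= 2 * (n - 1) + n.bit_length()
```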
def query(index, num):
    # interactive query: flush so the judge sees it immediately
    print("?", index, num, flush=True)
    return int(input())

def main():
    n = int(input())
    rest = set()     # candidate positions 1..n-1
    likely = set()   # candidate values for p_n
    for i in range(1, n):
        rest.add(i)
    for i in range(1, n + 1):
        likely.add(i)
    D = len(bin(n)[2:])
    ans = 0
    for d in range(D):
        index_0 = set()
        index_1 = set()
        likely_0 = set()
        likely_1 = set()
        for index in rest:
            res = query(index, 1 << d)
            if res == 0:
                index_0.add(index)
            else:
                index_1.add(index)
        zeros = 0
        ones = 0
        for num in likely:
            if num & (1 << d):
                ones += 1
                likely_1.add(num)
            else:
                zeros += 1
                likely_0.add(num)
        if zeros == len(index_0):
            # p_n has bit d set; keep only candidates sharing that bit
            ans += 1 << d
            rest = index_1
            likely = likely_1
        elif ones == len(index_1):
            rest = index_0
            likely = likely_0
        else:
            print("Error")
            return
    print("!", ans, flush=True)

T = int(input())
t = 1
while t <= T:
    main()
    t += 1
2156E - Best Time to Buy and Sell Stock
Consider the simplified version of the game where Alex moves first.
Suppose Alex wants to ensure a beauty of at least $$$g$$$ (the goal), while Hao tries to keep it below $$$g$$$. Which indices should Alex lock to achieve this goal?
We can binary search on the goal $$$g$$$.
For a given $$$g$$$ and an index $$$i$$$, we define $$$f_g(a, i)$$$ as the number of indices $$$j\neq i$$$ satisfying $$$a_{\max(i, j)} - a_{\min(i, j)} \ge g$$$.
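For concreteness, $$$f_g$$$ can be computed directly from the definition (the array below is an arbitrary example):

```python
def f(a, g, i):
    # number of j != i with a[max(i, j)] - a[min(i, j)] >= g
    return sum(
        1
        for j in range(len(a))
        if j != i and a[max(i, j)] - a[min(i, j)] >= g
    )

a = [5, 1, 4, 2]
print([f(a, 2, i) for i in range(len(a))])  # -> [0, 1, 1, 0]
```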
If there exists an index $$$i$$$ such that $$$f_g(a, i) \ge 2$$$, then Alex can lock index $$$i$$$ on the first move. No matter which index Hao removes next, there will still exist at least one $$$j$$$ such that $$$|a_i - a_j| \ge g$$$. Thus, Alex can secure that pair and guarantee a beauty of at least $$$g$$$.
If $$$f_g(a, i) \le 1$$$ for all indices $$$i$$$, Alex cannot achieve his goal. Whichever index $$$i$$$ Alex locks first, Hao can remove the single $$$j$$$ such that $$$|a_i - a_j| \ge g$$$. Hence, Alex will never be able to lock two indices with a difference $$$\ge g$$$, and the resulting beauty will be strictly less than $$$g$$$.
Now return to the original problem, where Hao moves first.
To keep the final beauty below $$$g$$$, Hao must remove an index such that after the removal, $$$f_g(a, i) \lt 2$$$ for all $$$i$$$. Otherwise, Alex will have a winning move as shown in Hint 1.3.
For each $$$i$$$, we only need to track whether $$$f_g(a, i)$$$ is $$$0$$$, $$$1$$$, $$$2$$$, or greater than $$$2$$$. This is because if there is any index $$$i$$$ with $$$f_g(a, i) \gt 2$$$, Hao has to remove index $$$i$$$ on his first move. Otherwise, we will have $$$f_g(a, i) \ge 2$$$ after Hao's first move, allowing Alex to secure a beauty of at least $$$g$$$ as mentioned in Hint 2.
This observation allows us to compute $$$f_g(a, i)$$$ efficiently.
Read the above hints before continuing.
As noted in Hint 1.2, we can binary search on the target beauty $$$g$$$.
For each index $$$1 \le i \le n$$$, recall that $$$f_g(a, i)$$$ is the number of indices $$$j\neq i$$$ satisfying $$$a_{\max(i, j)} - a_{\min(i, j)} \ge g$$$.
As mentioned in Hint 3, we only care whether $$$f_g(a, i)$$$ is $$$0$$$, $$$1$$$, $$$2$$$, or greater than $$$2$$$. We iterate through $$$i = 1, \ldots, n$$$, and maintain the three smallest previous values $$$a_j$$$ (for $$$j \lt i$$$). For each of these, we check whether the condition holds and increment both $$$f_g(a, i)$$$ and $$$f_g(a, j)$$$ if it does. This leads to an $$$\mathcal{O}(3n) = \mathcal{O}(n)$$$ computation per $$$g$$$.
We simulate each possible first move $$$p$$$ by Hao. Let array $$$b$$$ be array $$$a$$$ after removing index $$$p$$$. When Hao removes index $$$p$$$, only the $$$f_g(a, p)$$$ indices $$$q$$$ satisfying $$$|a_p - a_q| \ge g$$$ will have their $$$f_g(b, q) = f_g(a, q) - 1$$$, while all others remain unchanged.
Hao succeeds in keeping the beauty below $$$g$$$ if and only if removing index $$$p$$$ results in all $$$f_g(b, i) \le 1$$$. Equivalently, every index $$$i \neq p$$$ that initially had $$$f_g(a, i) = 2$$$ must be among the indices $$$q$$$ paired with $$$p$$$.
If no such removal $$$p$$$ works, Alex can achieve his goal of a beauty of at least $$$g$$$.
Checking each first move $$$p$$$ requires $$$\mathcal{O}(f_g(a, p))$$$ time, and $$$\sum_{1\le p\le n} f_g(a, p) \le 2(3n) = \mathcal{O}(n)$$$, since we only maintain at most three previous values of $$$a_j$$$ (for $$$j \lt i$$$) during the calculation of $$$f_g(a, i)$$$.
The total time complexity is $$$\mathcal{O}(n \log_2 10^9)$$$ since we perform a binary search over $$$g$$$ and compute $$$f_g(a, i)$$$ in linear time per iteration.
def main():
    n = int(input())
    arr = list(map(int, input().split()))

    # for each i, count the j != i with arr[max(i,j)] - arr[min(i,j)] >= target;
    # only the three smallest earlier values and three largest later values are
    # considered, which is enough to distinguish counts 0, 1, 2, >2
    def measurecount(arr, target):
        m = len(arr)
        count = [0] * m
        minimum = []
        for i in range(m):
            for j in range(len(minimum)):
                if minimum[j] + target <= arr[i]:
                    count[i] += 1
            minimum.append(arr[i])
            minimum.sort()
            if len(minimum) > 3:
                minimum.pop()
        maximum = []
        for i in range(m - 1, -1, -1):
            for j in range(len(maximum)):
                if maximum[j] - target >= arr[i]:
                    count[i] += 1
            maximum.append(arr[i])
            maximum.sort(reverse=True)
            if len(maximum) > 3:
                maximum.pop()
        return count

    # can Alex force a beauty of at least target?
    def judge(target):
        count = measurecount(arr, target)
        if max(count) <= 1:
            return False
        # candidate first moves for Hao
        pick = []
        if max(count) > 2:
            for i in range(n):
                if count[i] > 2:
                    pick = [i]
                    break
        else:
            for i in range(n):
                if count[i] != 2:
                    continue
                for j in range(n):
                    if j == i:
                        pick.append(j)
                    elif j < i and arr[j] + target <= arr[i]:
                        pick.append(j)
                    elif j > i and arr[i] + target <= arr[j]:
                        pick.append(j)
                break
        for p in pick:
            count2 = measurecount(arr[:p] + arr[p + 1:], target)
            if max(count2) <= 1:
                return False
        return True

    # binary search for the largest achievable beauty
    front = -10**9 - 10
    rear = 10**9 + 10
    while front < rear:
        mid = front + (rear - front) // 2
        if judge(mid):
            front = mid + 1
        else:
            rear = mid
    print(front - 1)

T = int(input())
t = 1
while t <= T:
    main()
    t += 1
#include "bits/stdc++.h"
using namespace std;

const int MAXN = 200005;
const int INF = 1000000005;

inline int fdiv(int num, int denom) {
    return num / denom - ((num ^ denom) < 0 && num % denom);
}
inline int cdiv(int num, int denom) { return fdiv(num + denom - 1, denom); }

int t;
int n;
int a[MAXN];
vector<int> adj[MAXN];

void add_edge(int u, int v) {
    adj[u].push_back(v);
    adj[v].push_back(u);
}

bool check(int x) {
    for (int i = 1; i <= n; i++) {
        adj[i].clear();
    }
    set<pair<int, int>> st;  // three smallest previous values
    for (int i = 1; i <= n; i++) {
        for (auto [prv, prv_id] : st) {
            if (a[i] - prv >= x) {
                add_edge(prv_id, i);
            }
        }
        st.insert({a[i], i});
        if ((int) st.size() > 3) {
            st.erase(prev(st.end()));
        }
    }
    int bad_cnt = 0;
    for (int i = 1; i <= n; i++) {
        if ((int) adj[i].size() >= 2) {
            bad_cnt++;
        }
    }
    for (int i = 1; i <= n; i++) {
        // delta = number of bad indices fixed by removing i
        int delta = (int) adj[i].size() >= 2;
        for (int v : adj[i]) {
            if (adj[v].size() == 2) {
                delta++;
            }
        }
        if (delta == bad_cnt) {
            return false;
        }
    }
    return true;
}

int main() {
    cin >> t;
    while (t--) {
        cin >> n;
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
        }
        int lo = -INF, hi = INF;
        while (lo < hi) {
            int mid = cdiv(lo + hi, 2);
            if (check(mid)) {
                lo = mid;
            } else {
                hi = mid - 1;
            }
        }
        cout << lo << '\n';
    }
    return 0;
}
2156F1 - Strange Operation (Easy Version)
What will the first element of the final permutation be?
If the first element is odd, we can always reduce it to $$$1$$$; otherwise, we can always reduce it to $$$2$$$.
If the first element is even, before reducing it to $$$2$$$ we must first reduce some other element to $$$1$$$. Which one should we choose?
You should choose the leftmost odd element such that all elements to its left are greater than it.
Treat the remaining elements as a new permutation.
Since we need to find the lexicographically smallest permutation, our goal is to make the numbers on the left as small as possible.
Suppose we have a permutation $$$[p_1, p_2, \ldots , p_n]$$$.
The first question we would like to answer is: where will the number $$$1$$$ appear in the final lexicographically smallest permutation?
Case 1: $$$p_1$$$ is odd.
In this case, the answer is straightforward — $$$p_1$$$ can always be decreased to $$$1$$$. Hence, $$$1$$$ will appear at the first position.
Case 2: $$$p_1$$$ is even.
This case is more complicated. For a number $$$p_i$$$ to be reduced to $$$1$$$, two conditions must hold:
- $$$p_i$$$ is odd.
- All numbers $$$1, 2, \ldots, p_i - 1$$$ appear to the right of index $$$i$$$.
In other words, for every $$$j \lt i$$$, we must have $$$p_j \gt p_i$$$.
We find the minimum possible index $$$i$$$ that satisfies both of the above conditions, and reduce $$$p_i$$$ to $$$1$$$.
After that, since $$$p_1$$$ is even, we reduce $$$p_1$$$ to $$$2$$$.
It can be proven that choosing the smallest such index $$$i$$$ and performing these two reductions — first decreasing $$$p_i$$$ to $$$1$$$, then decreasing $$$p_1$$$ to $$$2$$$ — will always yield the lexicographically smallest permutation. A formal proof will be given later.
Recursive construction:
If $$$p_1$$$ is odd, reducing $$$p_1$$$ to $$$1$$$ transforms the remaining multiset $$$[p_2, p_3, \ldots , p_n]$$$ into $$$[2, 3, \ldots, n]$$$. This can be treated as a new permutation problem of size $$$n - 1$$$.
If $$$p_1$$$ is even, after reducing $$$p_i$$$ to $$$1$$$ and $$$p_1$$$ to $$$2$$$,
the remaining multiset $$$[p_2, p_3, \ldots, p_{i-1}, p_{i+1}, \ldots, p_n]$$$ becomes $$$[3, 4, \ldots, n]$$$, which again forms a smaller permutation problem of size $$$n - 2$$$.
Thus, the process can be applied recursively.
Complexity Analysis:
The overall time complexity is $$$\mathcal{O}(n^2)$$$, which is efficient enough to pass the first subtask (F1).
Now let's prove that this solution always leads to the lexicographically smallest permutation.
Suppose we have permutation $$$[p_1, p_2, \ldots, p_n]$$$
When $$$p_1$$$ is odd, the conclusion is straightforward. We can always decrease $$$p_1$$$ to $$$1$$$ and treat the remaining elements as a new permutation.
When $$$p_1$$$ is even, let $$$i$$$ be the minimum possible index such that $$$p_i$$$ can be decreased to $$$1$$$. In other words, $$$i$$$ is the smallest index such that $$$\min(p_1, p_2, \ldots, p_{i - 1}) \gt p_i$$$ and $$$p_i$$$ is odd. We need to prove that decreasing $$$p_i$$$ ($$$1 \lt i \le n$$$) to $$$1$$$ does not affect the lexicographically smallest possible prefix $$$[p_1, p_2, \ldots, p_{i-1}]$$$ that can be formed.
Transforming $$$[p_1, p_2, \ldots, p_{i-1}]$$$ into its lexicographically smallest possible form involves a sequence of operations. Suppose an operation chooses indices $$$x \lt y \lt z$$$ and performs the following steps: 1) subtract $$$2$$$ from $$$p_x$$$, 2) add $$$1$$$ to $$$p_y$$$, and 3) add $$$1$$$ to $$$p_z$$$.
We analyze each possible case relative to $$$i$$$:
Case 1: $$$x \lt y \lt z \lt i$$$ This case is trivial. Changing $$$p_i$$$ to $$$1$$$ only affects indices greater than $$$i$$$, so these operations remain completely valid.
Case 2: $$$x \lt y \lt i \lt z$$$ Suppose $$$p_z$$$ is within the subarray $$$[p_i, \ldots, p_n]$$$. After changing $$$p_i$$$ to $$$1$$$, the element $$$p_z$$$ (or its value) still exists, perhaps at a different position, but the multiset of values in $$$[p_i, \ldots, p_n]$$$ remains unchanged. Therefore, the operation can still be performed.
Case 3: $$$x \lt i \lt y \lt z$$$ Since the operation does not depend on the relative order or values of $$$p_y$$$ and $$$p_z$$$, and the multiset $$$[p_i, \ldots, p_n]$$$ is preserved, we can still find suitable $$$p_y$$$ and $$$p_z$$$ to perform the operation. Hence, this case also remains valid.
Case 4: $$$y = i$$$ or $$$z = i$$$ If $$$p_i \gt 1$$$, then after decreasing $$$p_i$$$ to $$$1$$$, there must exist some index $$$j \gt i$$$ such that $$$p_j$$$ equals the original value of $$$p_i$$$. Thus, the operation can still be performed using $$$p_j$$$. If $$$p_i = 1$$$, then it is impossible for any $$$j \lt i$$$ to have $$$p_j = 1$$$, as that would contradict the minimality of $$$i$$$. This is precisely why we must choose the smallest possible index $$$i$$$ for which $$$p_i$$$ can be reduced to $$$1$$$; otherwise, the prefix could incorrectly contain $$$1$$$.
Therefore, if there exists a sequence of operations that transforms $$$[p_1, p_2, \ldots, p_{i-1}]$$$ into the lexicographically smallest possible prefix, none of these operations will be affected by decreasing $$$p_i$$$ to $$$1$$$ beforehand.
Hence, we can always decrease $$$p_i$$$ to $$$1$$$ without affecting the lexicographically smallest prefix.
def main():
    n = int(input())
    p = list(map(int, input().split()))
    for i in range(n):
        p[i] -= 1  # switch to 0-indexed values
    # indexes[v] = rank of value v among the not-yet-fixed values (-1 if fixed)
    indexes = [i for i in range(n)]

    def update(indexes):
        count = 0
        for i in range(n):
            if indexes[i] < 0:
                continue
            indexes[i] = count
            count += 1

    curr = 1
    ans = [-1] * n
    for i in range(n):
        if ans[i] > 0:
            continue
        if indexes[p[i]] % 2 == 0:
            # rank parity matches: this value can be reduced to curr directly
            ans[i] = curr
            curr += 1
            indexes[p[i]] = -1
            update(indexes)
            continue
        # otherwise: find the leftmost later record minimum whose rank parity
        # allows it to be reduced to curr
        prefix = p[i]
        pick = -1
        for j in range(i + 1, n):
            if ans[j] > 0:
                continue
            if p[j] > prefix or indexes[p[j]] % 2 > 0:
                prefix = min(prefix, p[j])
                continue
            pick = j
            break
        ans[pick] = curr
        curr += 1
        ans[i] = curr
        curr += 1
        indexes[p[pick]] = -1
        indexes[p[i]] = -1
        update(indexes)
    print(*ans)

T = int(input())
for _ in range(T):
    main()
Each move only shuffles a triple of consecutive values $$$\{a,a+1,a+2\}$$$ and places the smallest among them at the leftmost of the three chosen positions.
Think in terms of a value’s rank among the not-yet-fixed values: one move on a value is like moving it two places to the left in that order. A value at rank $$$r$$$ can move at most $$$\lfloor r/2\rfloor$$$ steps.
To minimize lexicographically, process positions from left to right and push the current choice as far left as allowed. Parity matters: when the current rank is odd, you must “pair” with a smaller value to make the plan realizable.
We want the lexicographically smallest permutation reachable by the triple move. Focus on the first position :
- If the first value is odd, we can keep applying moves that reduce it by $$$2$$$ until it becomes $$$1$$$.
- If it is even, we can keep reducing by $$$2$$$ until it becomes $$$2$$$.
Why? Each move that subtracts $$$2$$$ from some position $$$i$$$ uses two positions $$$j,k$$$ to its right that currently hold the two consecutive values just below it. In a permutation these exist until you reach $$$1$$$ (odd start) or $$$2$$$ (even start). Using them only increases those right-side values by $$$1$$$, so they never block making the first value as small as parity allows.
Now suppose the first value is even and we set it to $$$2$$$. The next question is: where should $$$1$$$ go? The correct choice is the leftmost value that can be reduced to $$$1$$$.
Two simple facts characterize when a value $$$v$$$ can be reduced to a target $$$cnt$$$:
- Parity: moves change a value by $$$\pm 2$$$ overall, so $$$v \equiv cnt \pmod 2$$$ is necessary.
- No smaller blockers to the left: to reduce $$$v$$$ down step by step, for each intermediate step you must pick the pair $$$\{x,x+1\}$$$ with $$$x \in \{cnt,cnt+1,\dots,v-1\}$$$ to the right of $$$v$$$. So $$$v$$$ must lie to the left of all values in $$$[cnt, v-1]$$$.
This gives a clean greedy: at step $$$cnt=1,2,\dots,n$$$, among values $$$i\in[cnt\ldots n]$$$, scan by increasing $$$i$$$ and keep the running minimum position seen so far. Every time that minimum improves (a “record minimum”), if $$$i \equiv cnt \pmod 2$$$ then value $$$i$$$ is feasible to reduce to $$$cnt$$$; among all feasible ones, the final record minimum is leftmost and is the one we must take.
Why lexicographically optimal? The array first differs where we place $$$cnt$$$. Picking the leftmost feasible position ensures the smallest possible number at that index. Any alternative that puts $$$cnt$$$ strictly to the right makes the prefix strictly larger and cannot be beaten later.
After choosing the index $$$cur$$$ where we install $$$cnt$$$, we conceptually “pull down” the current value $$$val=p[cur]$$$ to $$$cnt$$$ by a chain of moves. Each such move also raises exactly one of the values in $$$\{cnt,\ldots,val-1\}$$$ by $$$1$$$. Over the whole chain, every number in $$$\{cnt,\dots,val-1\}$$$ gets raised exactly once. So, in one shot, we can set $$$p[cur]=cnt$$$ and increase all values in $$$[cnt \ldots val-1]$$$ by $$$1$$$. The remaining numbers still form a permutation of $$$\{cnt+1,\dots,n\}$$$, and we repeat with $$$cnt+1$$$.
Because each step is just a linear scan over values and a few local updates, this runs in $$$\mathcal{O}(n^2)$$$ per test.
#include <iostream>
using namespace std;

int p[100005];
int pos[100005];

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        for (int i = 0; i < n; ++i) {
            cin >> p[i];
        }
        for (int i = 0; i < n; ++i) {
            pos[p[i]] = i;
        }
        for (int cnt = 1; cnt <= n; ++cnt) {
            // leftmost parity-matching record minimum among values cnt..n
            int cur = n;
            for (int i = cnt, mn = n; i <= n; ++i) {
                if (pos[i] < mn) {
                    mn = pos[i];
                    if ((i & 1) == (cnt & 1)) {
                        cur = pos[i];
                    }
                }
            }
            int val = p[cur];
            p[cur] = cnt;
            // every value in [cnt, val-1] is raised by one
            for (int i = cnt; i < val; ++i) {
                p[pos[i]]++;
            }
            for (int i = val; i > cnt; --i) {
                pos[i] = pos[i - 1];
            }
            pos[cnt] = cur;
        }
        for (int i = 0; i < n; ++i) {
            cout << p[i] << ' ';
        }
        cout << '\n';
    }
    return 0;
}
A move is only possible on three consecutive values: there must exist an $$$m$$$ such that the values involved are exactly $$$\{m,m+1,m+2\}$$$ at indices $$$i\lt j\lt k$$$.
"$$$p_i=\max(p_j,p_k)+1$$$ and $$$p_i=\min(p_j,p_k)+2$$$"
Those equations force the three values to be consecutive and the largest to sit at the leftmost index, which is exactly $$$c\lt \min(a,b)$$$ with values $$$\{m,m+1,m+2\}$$$.
Each move strictly decreases lexicographic order: the first changed index is $$$i=c$$$, whose value drops from $$$m+2$$$ to $$$m$$$; earlier indices remain unchanged.
Each move removes at least two inversions among the triple (and never adds inversions before $$$i$$$), so the process is acyclic and must terminate.
The number of operations is $$$\mathcal{O}(n^2)$$$, since each removes $$$\ge 2$$$ inversions. The pointer-walk advances by $$$+1$$$ when no operation applies, and jumps back by at most $$$2$$$ per operation, so the total number of steps is $$$\mathcal{O}(n+\text{ops})=\mathcal{O}(n^2)$$$ overall.
Please read the hints first.
Maintain $$$pos[v]$$$. Iterate $$$m$$$ from $$$0$$$ to $$$n-3$$$:
- Let $$$a=pos[m]$$$, $$$b=pos[m+1]$$$, $$$c=pos[m+2]$$$.
- If $$$c\lt \min(a,b)$$$, apply the operation on indices $$${a,b,c}$$$:
- Set $$$i=c$$$, $$$j=\min(a,b)$$$, $$$k=\max(a,b)$$$.
- If $$$a\lt b$$$, write $$$(m,m+1,m+2)$$$ to $$$(i,j,k)$$$ and set $$$(pos[m],pos[m+1],pos[m+2])=(i,j,k)$$$.
- Else write $$$(m,m+2,m+1)$$$ to $$$(i,j,k)$$$ and set $$$(pos[m],pos[m+2],pos[m+1])=(i,j,k)$$$.
- Then set $$$m\leftarrow\max(m-2,0)$$$.
- Otherwise, set $$$m\leftarrow m+1$$$.
This process terminates when $$$m=n-2$$$, at which point every triple satisfies $$$pos[m+2]\ge\min(pos[m],pos[m+1])$$$, so no legal move exists anywhere. Because each step is lexicographically decreasing (when an operation is performed) and the space of permutations is finite and acyclic under these moves, the result is the lexicographically smallest reachable permutation. The pointer-walk guarantees total time $$$\mathcal{O}(n^2)$$$.
Challenge: could you prove that the solution is correct?
for _ in range(int(input())):
    n = int(input())
    p = list(map(int, input().split()))
    pos = [0] * n
    for i in range(n):
        pos[p[i] - 1] = i
    ch = 1
    while ch:  # repeat passes until no move applies anywhere
        ch = 0
        for m in range(n - 2):
            a, b, c = pos[m], pos[m + 1], pos[m + 2]
            if c < min(a, b):  # largest of the three consecutive values sits leftmost
                i = c
                if a < b:
                    j, k = a, b
                    p[i], p[j], p[k] = m + 1, m + 2, m + 3
                    pos[m], pos[m + 1], pos[m + 2] = i, j, k
                else:
                    j, k = b, a
                    p[i], p[j], p[k] = m + 1, m + 3, m + 2
                    pos[m], pos[m + 2], pos[m + 1] = i, j, k
                ch = 1
    print(*p)
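As a sanity check of the lexicographic-minimality claim, one can compare this greedy fixpoint against a brute-force search over all permutations reachable by legal moves for small $$$n$$$. This is an illustrative sketch (the helpers `moves`, `reachable_min`, and `greedy` are hypothetical names; `greedy` mirrors the solution above):

```python
# Brute-force cross-check for small n: enumerate every permutation reachable
# by legal moves and compare the lexicographic minimum with the greedy result.
from itertools import permutations

def moves(p):
    # all states reachable by one legal move: values {m,m+1,m+2} with the
    # largest at the leftmost of the three indices
    n = len(p)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if p[i] == max(p[j], p[k]) + 1 and p[i] == min(p[j], p[k]) + 2:
                    q = list(p)
                    q[i] -= 2
                    q[j] += 1
                    q[k] += 1
                    out.append(tuple(q))
    return out

def reachable_min(p):
    seen = {tuple(p)}
    stack = [tuple(p)]
    while stack:
        cur = stack.pop()
        for nxt in moves(cur):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return min(seen)

def greedy(p):
    # same fixpoint greedy as the solution above, returning instead of printing
    p = list(p)
    n = len(p)
    pos = [0] * n
    for i in range(n):
        pos[p[i] - 1] = i
    ch = 1
    while ch:
        ch = 0
        for m in range(n - 2):
            a, b, c = pos[m], pos[m + 1], pos[m + 2]
            if c < min(a, b):
                i = c
                if a < b:
                    j, k = a, b
                    p[i], p[j], p[k] = m + 1, m + 2, m + 3
                    pos[m], pos[m + 1], pos[m + 2] = i, j, k
                else:
                    j, k = b, a
                    p[i], p[j], p[k] = m + 1, m + 3, m + 2
                    pos[m], pos[m + 2], pos[m + 1] = i, j, k
                ch = 1
    return tuple(p)

for p in permutations(range(1, 5)):
    assert reachable_min(p) == greedy(p)
```

For example, from $$$p=(4,3,1,2)$$$ both checks return $$$(2,1,3,4)$$$: applying the move on values $$$\{2,3,4\}$$$ first leads to the terminal state $$$(2,4,1,3)$$$, while handling values $$$\{1,2,3\}$$$ first reaches the smaller terminal state $$$(2,1,3,4)$$$.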
2156F2 - Strange Operation (Hard Version)
Suppose the operation changes indices $$$i, j, k$$$; apart from $$$i$$$, the relative positions of all other elements do not change.
Use a segment tree and a Fenwick tree.
Iterate $$$t$$$ from $$$1$$$ to $$$n$$$; during the $$$t$$$-th iteration, find the leftmost element that can be decreased to $$$t$$$.
Accelerate the procedure by finding the next smaller element.
You should read the editorial of F1 first.
Let the initial permutation be $$$p=(p_1,p_2,\dots,p_n)$$$, and let the final array be $$$q=(q_1,q_2,\dots,q_n)$$$. Initialize $$$q_i=0$$$ for all $$$i$$$, meaning every position in the output is still undetermined. During the $$$t$$$-th iteration, find the smallest index $$$i$$$ whose value can be decreased to $$$t$$$. Repeatedly choose this $$$i$$$, locate indices $$$j,k$$$ such that $$$p_i=\max(p_j,p_k)+1$$$ and $$$p_i=\min(p_j,p_k)+2$$$, and perform the updates $$$p_i\leftarrow p_i-2$$$, $$$p_j\leftarrow p_j+1$$$, $$$p_k\leftarrow p_k+1$$$. This is equivalent to removing $$$p_i$$$ and setting $$$q_i\gets t$$$. After removing $$$p_i$$$, the remaining elements $$$p_1,\dots,p_{i-1},p_{i+1},\dots,p_n$$$ close the gap (preserving relative order), forming a permutation of length $$$n-1$$$.
This remove-and-assign simulation can be supported by a Fenwick tree or a segment tree so that each update and query runs in $$$\mathcal{O}(\log n)$$$ time; however, naively finding the correct index $$$i$$$ could still cost $$$\mathcal{O}(n)$$$ per step. Define the rank of $$$p_i$$$ as the number of alive (not yet removed) elements that are not greater than $$$p_i$$$, including $$$p_i$$$ itself.
To speed up the process, keep a pointer at the smallest alive index $$$i$$$ and check its rank. If the rank is odd, remove this element immediately. If the rank is even, move the pointer to the next smaller element to its right, that is, to the smallest $$$j \gt i$$$ with $$$p_j \lt p_i$$$. Continue until you reach an element with odd rank.
Although a single "move to the next smaller on the right" step may look linear, the total number of such moves over the whole algorithm is at most $$$2n$$$. If an element is first visited when its rank is even, after some element to its right is removed, its rank becomes odd the next time it is visited; if an element’s rank is already odd, it is removed immediately upon visit. Therefore each index is visited at most twice.
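Before introducing any data structures, the pointer walk can be written naively; this $$$\mathcal{O}(n^2)$$$ sketch (the helper `pointer_walk_order` is an illustrative name, not the reference solution) computes ranks and next-smaller elements by plain linear scans:

```python
# Naive version of the pointer walk described above, for illustration only;
# the Fenwick tree and segment tree below replace the two linear scans.

def pointer_walk_order(p):
    """Return, for each position, the step at which that element is removed."""
    n = len(p)
    alive = [True] * n
    ans = [0] * n
    t = 1
    i = 0  # pointer at the smallest alive index
    while t <= n:
        while not alive[i]:
            i += 1
        j = i
        while True:
            # rank = number of alive values not greater than p[j]
            rank = sum(1 for x in range(n) if alive[x] and p[x] <= p[j])
            if rank % 2 == 1:
                break
            # even rank: move to the next smaller alive element to the right
            j = min(x for x in range(j + 1, n) if alive[x] and p[x] < p[j])
        alive[j] = False
        ans[j] = t
        t += 1
    return ans

print(pointer_walk_order([2, 1]))        # -> [2, 1]
print(pointer_walk_order([4, 3, 1, 2]))  # -> [2, 1, 3, 4]
```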
To implement this, we maintain two data structures:
A Fenwick tree over values, where a $$$1$$$ at position $$$p_i$$$ indicates that the value $$$p_i$$$ is still alive (and $$$0$$$ otherwise). This lets you query the current rank of $$$p_i$$$ in $$$\mathcal{O}(\log n)$$$ time.
Let $$$ip$$$ denote the array that is the inverse of $$$p$$$, i.e., $$$(ip)_{p_i} = i$$$. Store a range-min segment tree over the array $$$(ip)$$$. When the value $$$p_i$$$ is removed, set $$$(ip)_{p_i}=\infty$$$. If the currently visited index is $$$i$$$, then a range-min query on $$$[1,p_i-1]$$$ returns the minimum position $$$j$$$ such that $$$j \gt i$$$ and $$$p_j \lt p_i$$$ (the next smaller element on the right by value). Each query and update costs $$$\mathcal{O}(\log n)$$$, and each index is examined at most twice, so the overall complexity is $$$\mathcal{O}(n\log n)$$$, which is sufficient for the F2 constraints.
inf = 10**9

class fenwick(object):
    # 0-indexed Fenwick tree over values; a 1 at value v means v is still alive
    def __init__(self, n):
        self.n = n
        self.cul = [0] * n
        self.maxd = len(bin(n)) - 3
    def update(self, index, diff):
        i = index
        while i < self.n:
            self.cul[i] += diff
            i += (i + 1) & (-i - 1)
    def getaccu(self, index):
        output = 0
        i = index
        while i >= 0:
            output += self.cul[i]
            i -= (i + 1) & (-i - 1)
        return output

class segment_tree(object):
    # iterative range-min segment tree over positions, indexed by value
    def merge(self, num, accu):
        return min(accu, num)
    def __init__(self, n, initial):
        self.n = n
        self.inf = inf
        self.arr = [self.inf] * (2 * n)
        for i in range(2 * n - 1, 0, -1):
            if i >= n:
                self.arr[i] = initial[i - n]
            else:
                self.arr[i] = self.merge(self.arr[2 * i], self.arr[2 * i + 1])
    def update(self, index, target):
        index += self.n
        self.arr[index] = target
        while index > 0:
            if index & 1:
                nexttarget = self.merge(self.arr[index], self.arr[index - 1])
            else:
                nexttarget = self.merge(self.arr[index], self.arr[index + 1])
            self.arr[index >> 1] = nexttarget
            index = index >> 1
    def addnum(self, index, diff):
        self.update(index, self.arr[index + self.n] + diff)
    def query(self, left, right):
        i, j = self.n + left, self.n + right + 1
        output = self.inf  # initial output should be changed if you change the merge function
        while i < j:
            if i & 1:
                output = self.merge(self.arr[i], output)
                i += 1
            if j & 1:
                j -= 1
                output = self.merge(self.arr[j], output)
            i = i >> 1
            j = j >> 1
        return output

def main(t):
    n = int(input())
    p = list(map(int, input().split()))
    location = [0] * n
    for i in range(n):
        p[i] -= 1                    # switch to 0-indexed values
        location[p[i]] = i
    seg = segment_tree(n, location)  # range-min over positions, indexed by value
    fen = fenwick(n)                 # alive indicator per value, for rank queries
    for i in range(n):
        fen.update(i, 1)
    ans = [-1] * n
    curr = 1
    for i in range(n):
        if ans[i] >= 0:
            continue
        index = fen.getaccu(p[i]) - 1
        if index % 2 == 0:           # rank of p[i] is odd: remove it immediately
            fen.update(p[i], -1)
            seg.update(p[i], inf)
            ans[i] = curr
            curr += 1
            continue
        # rank is even: walk to the next smaller alive element on the right
        # until an element with odd rank is found
        j = i
        while True:
            j = seg.query(0, p[j] - 1)
            nextindex = fen.getaccu(p[j]) - 1
            if nextindex % 2 > 0:
                continue
            break
        ans[j] = curr                # removing p[j] flips the rank parity of p[i]
        ans[i] = curr + 1
        curr += 2
        seg.update(p[i], inf)
        seg.update(p[j], inf)
        fen.update(p[i], -1)
        fen.update(p[j], -1)
    print(*ans)

T = int(input())
for t in range(T):
    main(t)




