Author: NemanjaSo2005
We need to solve the same problem $$$t$$$ times, so we wrap the solution in a loop. From now on we will focus on solving the problem for a single test case, but remember that everything below runs inside that loop.
So the array $$$x$$$ is beautiful if there exists an array of distinct elements $$$y$$$ such that $$$x_i \cdot y_i$$$ is the same for all $$$1 \le i \le n$$$. We will try to make the condition into something more understandable. Suppose that the common product is some constant $$$C$$$. How can you express each $$$y_i$$$ in terms of $$$C$$$ and $$$x_i$$$?
Each one can be expressed as $$$y_i = \frac{C}{x_i}$$$. If all values $$$y_i = \frac{C}{x_i}$$$ must be distinct, what does that force upon the array $$$x$$$ for it to be beautiful?
Let $$$i$$$ and $$$j$$$ be indices of two arbitrary elements of $$$x$$$. Then for array $$$x$$$ to be beautiful the following needs to hold:
$$$y_i \neq y_j$$$
$$$\frac{C}{x_i} \neq \frac{C}{x_j}$$$, divide both sides by $$$C$$$
$$$\frac{1}{x_i} \neq \frac{1}{x_j}$$$, as neither $$$x_i$$$ nor $$$x_j$$$ are $$$0$$$, multiply by $$$x_i \cdot x_j$$$
$$$x_j \neq x_i$$$
Therefore, all values of $$$x_i$$$ must be distinct. But is it enough just that all values of $$$x$$$ are distinct?
It is, and here is how to easily construct the integer array $$$y$$$ that satisfies the constraints. Let $$$P$$$ be the product of all elements of $$$x$$$. Then make $$$y_i = \frac{P}{x_i}$$$ for all $$$1 \le i \le n$$$. As all values of $$$x$$$ are distinct and nonzero, it is guaranteed that all values of $$$y$$$ will also be distinct.
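For intuition, this argument can be checked with a few lines of Python (a sketch, not the reference solution; it assumes positive elements so that integer division is exact):

```python
from math import prod

def is_beautiful(x):
    # By the derivation above: x is beautiful iff all elements are distinct.
    return len(set(x)) == len(x)

def construct_y(x):
    # P = product of all elements; y_i = P / x_i is an integer (x_i divides P)
    # and all y_i are distinct whenever all x_i are.
    P = prod(x)
    return [P // xi for xi in x]

x = [2, 3, 6]
assert is_beautiful(x)
y = construct_y(x)                               # [18, 12, 6]
assert len(set(y)) == len(y)                     # all distinct
assert len({a * b for a, b in zip(x, y)}) == 1   # common product (36)
```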
The solution is the number of distinct values in $$$a$$$. How can we count that?
We count only the first appearance of each value. So the problem reduces to checking whether the occurrence of a value at some position $$$i$$$ is indeed the first appearance of $$$a_i$$$.
Read the hints.
The answer to the problem is the number of distinct elements in the array. There are many ways to find it; we will present one of the easiest.
We will count only the first appearances of every value.
That means that if the array is, for example, $$$[1, 3, 1, 5, 3, 1]$$$, we will only count the $$$1$$$ at position $$$1$$$, the $$$3$$$ at position $$$2$$$ and the $$$5$$$ at position $$$4$$$. We will not count the values $$$1$$$ at positions $$$3$$$ and $$$6$$$, nor the value $$$3$$$ at position $$$5$$$, as they are not the first appearances of those values.
We will iterate through every element in $$$x$$$ and for an element $$$x_i$$$, check if there exists some $$$x_j$$$ $$$(j \lt i)$$$ such that $$$x_i = x_j$$$. If we can't find such $$$j$$$, then that means it's the first appearance of $$$x_i$$$.
We can do it by having one loop iterate over $$$i$$$, with a variable that says whether this is the first appearance, initially set to true. Then, in another loop, we iterate over all indices $$$1 \le j \lt i$$$, and if at any point $$$a_i = a_j$$$, we set the variable to false. Finally, we increase the count of distinct values if the variable is still true.
Memory complexity is $$$O(n)$$$. Time complexity is $$$O(n^2)$$$ per testcase, giving us a total time complexity of $$$O(tn^2)$$$.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=105;
int n,arr[maxn];
int main(){
    int T;
    cin>>T;
    while(T--){
        cin>>n;
        for(int i=1;i<=n;i++)
            cin>>arr[i];
        int ans=0;
        for(int i=1;i<=n;i++){
            int add=1;
            for(int j=1;j<i;j++)
                if(arr[j]==arr[i])
                    add=0;
            ans+=add;
        }
        cout<<ans<<"\n";
    }
    return 0;
}
t = int(input())
for _ in range(t):
    n = int(input())
    a = input().split()
    count = 0
    for i in range(n):
        add = 1
        for j in range(i):
            if a[i] == a[j]:
                add = 0
        count += add
    print(count)
Come up with more ways to count the number of different elements in the array.
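One possible answer to the bonus (a sketch): a hash set gives the count directly, in expected $$$O(n)$$$ per testcase.

```python
def count_distinct(a):
    # A hash set stores every value once; its size is the answer.
    return len(set(a))

assert count_distinct([1, 3, 1, 5, 3, 1]) == 3
```

Sorting the array and counting positions where $$$a_i \neq a_{i-1}$$$ is another option, in $$$O(n \log n)$$$.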
Author: Blagoj
Given the string $$$s$$$, how do we calculate the answer?
We know that we need to type a character (apply operation $$$1$$$ exactly $$$n$$$ times). How many switches do we do?
If the string $$$s$$$ starts with a 1, we need to do a switch immediately. From that point on, we only switch when the current digit in the string changes. We can assume that we start with a 0, as the other case can be handled by prepending a 0 to the string and subtracting $$$1$$$ from the final answer.
We will assume that we only want to make the answer smaller, since to keep the answer the same we can pick any interval of size $$$1$$$. How does reversing some interval $$$[l, r]$$$ change the answer?
Notice that all switches before $$$l-1$$$, after $$$r+1$$$ and between $$$l$$$ and $$$r$$$ will stay. So we only care about the values of $$$s_{l-1}$$$, $$$s_l$$$, $$$s_r$$$ and $$$s_{r+1}$$$.
Notice that if $$$s_l = s_r$$$, then the swap does not change the number of changes. So we can assume that $$$s_l \neq s_r$$$ in the optimal swap. Now, to evaluate how good the swap is, we can observe the following:
- if $$$s_{l-1} = s_l$$$, the number of changes increases by $$$1$$$, otherwise it decreases by $$$1$$$.
- if $$$s_{r} = s_{r+1}$$$, the number of changes increases by $$$1$$$, otherwise it decreases by $$$1$$$.
Those 2 changes stack.
From the previous hint, we can conclude that we can decrease the number of changes by at most $$$2$$$. To decrease the number of changes by $$$2$$$, it must hold that $$$s_{l-1} \neq s_l$$$ and $$$s_r \neq s_{r+1}$$$.
If we want to decrease the number of changes by $$$1$$$, it is enough to make $$$s_{l-1} \neq s_l$$$ and to make $$$r=n$$$.
If we cannot do either of the previous two, the number of changes cannot be decreased.
Read the hints.
As said in the hints, we will assume that string $$$s$$$ starts with a 0. Now we need to check if it is possible to pick indices $$$l$$$ and $$$r$$$ such that the number of changes after the swap decreases by $$$2$$$. With some casework, we can see that it is always possible if the original number of changes is at least $$$3$$$. If the original number of changes is $$$2$$$, we cannot decrease it by $$$2$$$, but we can decrease it by $$$1$$$. If the original number of changes is $$$0$$$ or $$$1$$$, we cannot decrease it.
Counting the number of changes can be done in a single pass of a loop. The time and memory complexities are $$$O(n)$$$ per testcase.
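As a sanity check, the casework above can be verified against brute force over every reversal on all short strings. A Python sketch (the `formula` function mirrors the casework, counting changes in the string with a 0 prepended):

```python
from itertools import product

def brute(s):
    # Try every reversal [l, r]: 1 operation per typed character plus
    # 1 operation per layout switch, starting from layout 0.
    n = len(s)
    best = None
    for l in range(n):
        for r in range(l, n):
            t = "0" + s[:l] + s[l:r + 1][::-1] + s[r + 1:]
            ops = n + sum(t[i] != t[i - 1] for i in range(1, n + 1))
            best = ops if best is None else min(best, ops)
    return best

def formula(s):
    # c = number of changes in "0" + s; apply the casework from the editorial.
    n = len(s)
    c = sum(a != b for a, b in zip("0" + s, s))
    if c >= 3:
        return n + c - 2
    if c == 2:
        return n + c - 1
    return n + c

# exhaustive check on all binary strings of length up to 6
for m in range(1, 7):
    for bits in product("01", repeat=m):
        s = "".join(bits)
        assert brute(s) == formula(s), s
```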
#include<bits/stdc++.h>
using namespace std;
void testCase() {
    string s;
    int n;
    cin >> n;
    cin >> s;
    s = "0" + s;
    int ans = 0, cur = s[0];
    for (int i = 1; i <= n; i++) {
        int dig = s[i];
        if (cur != dig)
            ans++;
        cur = dig;
    }
    if (ans >= 3)
        cout << ans - 2 + n << "\n";
    else if (ans == 2)
        cout << ans - 1 + n << "\n";
    else
        cout << ans + n << "\n";
}
int main(){
    int t;
    cin >> t;
    while (t--)
        testCase();
    return 0;
}
for _ in range(int(input())):
    n = int(input())
    a = input().strip()
    cnt = 0
    for i in range(n - 1):
        if a[i] != a[i + 1]:
            cnt += 1
    if cnt == 0:
        print(n + int(a[0] == "1"))
    elif cnt == 1:
        print(n + 1)
    else:
        print(n + cnt - 1 - int(a[0] == "0" and cnt > 2))
Solve the problem if you were given $$$n$$$ strings. You may reorder them however you want and then concatenate them all into one string $$$s$$$. What is the minimal number of operations needed to type the $$$s$$$ obtained this way?
Author: NemanjaSo2005
We split our array into $$$3$$$ subarrays, take their medians, and the final median needs to be $$$\le k$$$. That means that the median of at least $$$2$$$ of those subarrays needs to be $$$\le k$$$.
For a given array $$$x$$$ of length $$$m$$$, what is the condition that $$$\operatorname{med}(x_1, x_2, \ldots, x_m) \le k$$$?
There need to be at least as many elements in $$$x$$$ that are $$$\le k$$$ as those that are $$$ \gt k$$$. How can we simplify this?
We can replace all elements that are $$$\le k$$$ with $$$1$$$ and all elements that are $$$\gt k$$$ with $$$-1$$$. Now the median of $$$x$$$ is $$$\le k$$$ if and only if the sum of elements in $$$x$$$ is non-negative.
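A quick randomized check of this equivalence (a sketch; it assumes the lower median, i.e. the element at position $$$\lceil m/2 \rceil$$$ after sorting):

```python
import random

def med_le_k(x, k):
    # Lower median: the element at 1-based position ceil(m/2) after sorting.
    # Adjust the index if the problem statement defines the median differently.
    m = len(x)
    return sorted(x)[(m + 1) // 2 - 1] <= k

def sign_sum(x, k):
    # Replace elements <= k by +1 and elements > k by -1, then sum.
    return sum(1 if v <= k else -1 for v in x)

random.seed(1)
for _ in range(1000):
    x = [random.randint(1, 9) for _ in range(random.randint(1, 7))]
    k = random.randint(1, 9)
    assert med_le_k(x, k) == (sign_sum(x, k) >= 0)
```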
Now that we have replaced the elements of $$$a$$$, the question is whether it is possible to split it into $$$3$$$ subarrays of which at least $$$2$$$ have a non-negative sum. There are $$$3$$$ cases:
- The first and second subarrays have non-negative sums.
- The second and third subarrays have non-negative sums.
- The first and third subarrays have non-negative sums.
Use prefix/suffix sums and maximums/minimums to help you compute the answer in linear time.
Read the hints.
Cases $$$1$$$ and $$$2$$$ from hint 5 are symmetric, so we will only explain how to do case $$$1$$$. Compute the prefix sums of the array; let them be the array $$$p$$$. The subarray $$$[l, r]$$$ has a non-negative sum if $$$p_{l-1} \le p_{r}$$$. For each index $$$2 \le i \lt n$$$, compute the value $$$msp_i = \max(p_i, p_{i+1}, \ldots, p_{n-1})$$$ (the suffix maximum of the prefix sums). Then iterate over the index $$$1 \le i \le n-2$$$ that will be the end of the first subarray. If $$$p_i \ge 0$$$ and $$$msp_{i+1} \ge p_i$$$, then it is possible to split the array such that the medians of the first and second subarrays are $$$\le k$$$.
Case $$$3$$$ is even easier. Find positions $$$x$$$ and $$$y$$$ such that $$$[1, x]$$$ is the shortest prefix with median $$$\le k$$$ and $$$[y, n]$$$ is the shortest suffix with median $$$\le k$$$. Then a split into three subarrays such that the first and third have median $$$\le k$$$ is possible if and only if $$$[x+1, y-1]$$$ is a valid (nonempty) subarray, in other words, if $$$x + 2 \le y$$$.
As both cases can be checked with a few array passes, the time and memory complexities are $$$O(n)$$$ per testcase.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=2e5+5;
int n,k,arr[maxn],suf[maxn],minsuf[maxn];
bool check_prefix_and_middle(){
    suf[n]=minsuf[n]=arr[n];
    for(int i=n-1;i>=1;i--){
        suf[i]=suf[i+1]+arr[i];
        minsuf[i]=min(minsuf[i+1],suf[i]);
    }
    int s=0;
    for(int i=1;i+2<=n;i++){
        s+=arr[i];
        if(s<0)
            continue;
        if(suf[i+1]>=minsuf[i+2])
            return true;
    }
    return false;
}
void solve(){
    cin>>n>>k;
    for(int i=1;i<=n;i++)
        cin>>arr[i];
    for(int i=1;i<=n;i++)
        if(arr[i]<=k)
            arr[i]=1;
        else
            arr[i]=-1;
    int a=n+1,b=-1;
    int s=0;
    for(int i=1;i<=n;i++){
        s+=arr[i];
        if(s>=0){
            a=i;
            break;
        }
    }
    s=0;
    for(int i=n;i>=1;i--){
        s+=arr[i];
        if(s>=0){
            b=i;
            break;
        }
    }
    if(a+1<b){
        cout<<"YES\n";
        return;
    }
    if(check_prefix_and_middle()){
        cout<<"YES\n";
        return;
    }
    for(int i=1;i<n-i+1;i++)
        swap(arr[i],arr[n-i+1]);
    if(check_prefix_and_middle()){
        cout<<"YES\n";
        return;
    }
    cout<<"NO\n";
    return;
}
int main(){
    int t;
    cin>>t;
    while(t--)
        solve();
    return 0;
}
def check_prefix_and_middle(n, arr):
    suf = [0] * (n + 1)
    minsuf = [0] * (n + 1)
    suf[n] = minsuf[n] = arr[n - 1]
    for i in range(n - 2, -1, -1):
        suf[i + 1] = suf[i + 2] + arr[i]
        minsuf[i + 1] = min(minsuf[i + 2], suf[i + 1])
    s = 0
    for i in range(n - 2):
        s += arr[i]
        if s < 0:
            continue
        if suf[i + 2] >= minsuf[i + 3]:
            return True
    return False

def solve():
    n, k = map(int, input().split())
    arr = list(map(int, input().split()))
    # Transform the array based on the value of k
    for i in range(n):
        arr[i] = 1 if arr[i] <= k else -1
    a, b = n + 1, -1
    s = 0
    for i in range(n):
        s += arr[i]
        if s >= 0:
            a = i + 1
            break
    s = 0
    for i in range(n - 1, -1, -1):
        s += arr[i]
        if s >= 0:
            b = i + 1
            break
    if a + 1 < b:
        print("YES")
        return
    if check_prefix_and_middle(n, arr):
        print("YES")
        return
    arr.reverse()
    if check_prefix_and_middle(n, arr):
        print("YES")
        return
    print("NO")

t = int(input())
for _ in range(t):
    solve()
Solve the problem if you need to find the smallest $$$k$$$ for which the answer is YES.
Author: NemanjaSo2005
Try to prove the claim that $$$a_i \le \lceil \log_2 n\rceil$$$. It will also help you understand when the construction is possible, which is often a good way to approach constructive problems.
We can notice that of any two consecutive elements $$$p_i$$$ and $$$p_{i+1}$$$, one of them has to be deleted. That means that after each operation, at least half of the elements (rounded down) are deleted, giving us the bound $$$a_i \le \lceil \log_2 n\rceil$$$. It also means that for the array $$$a$$$ to describe a possible permutation, no layer may contain two consecutive elements that both survive to the next iteration.
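The halving argument can be illustrated by simulating the process (a sketch; it always keeps local maxima, which is enough for the bound):

```python
import random

def simulate(p):
    # Repeatedly keep only the local maxima (strictly greater than both
    # existing neighbours); return the number of steps until one element
    # remains. Whether maxima or minima are kept does not affect the bound.
    steps = 0
    while len(p) > 1:
        steps += 1
        p = [v for i, v in enumerate(p)
             if (i == 0 or p[i - 1] < v) and (i == len(p) - 1 or p[i + 1] < v)]
    return steps

random.seed(0)
for _ in range(200):
    n = random.randint(2, 64)
    p = random.sample(range(1, n + 1), n)
    # No two adjacent elements both survive a step, so each step keeps at
    # most ceil(len/2) elements: at most ceil(log2 n) steps in total.
    assert simulate(p) <= (n - 1).bit_length()
```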
The transition from one layer to the next is defined recursively. Also, WLOG we can assume that local maximums stay, as the case of local minimums is symmetric. It makes sense to think about how to arrange the elements on the current layer and select those that do and those that do not get deleted. Then we apply the recursive step on the next layer to arrange the survivors.
But how can we guarantee that the elements at certain positions will not get deleted?
If there are $$$k$$$ positions at which elements that should not be deleted go, then we can just select the $$$k$$$ biggest elements to go there and fill the rest of the positions with smaller ones. But how do we guarantee that none of the smaller elements becomes a local maximum?
We can sort the smaller elements in increasing or decreasing order, so that none of them is a local maximum. However, there is an edge case when the smallest elements form a prefix or a suffix. The smaller elements that form a prefix must be sorted in increasing order, and those that form a suffix must be sorted in decreasing order. Otherwise the element at the first or last position might become a local maximum when we do not want it to be.
Read the hints.
We process each layer separately and WLOG assume that we delete the elements that are not local maximums. Some elements go on to the next layer; we will get their order when we process that layer, for now we only know their positions. Say exactly $$$k$$$ elements go to the next layer; then we can select them to be the largest $$$k$$$ elements, which guarantees they are local maximums.
Now the question is how to place the smaller elements. We can imagine that the bigger elements divide the layer into subarrays of indices that will be deleted. It does not matter how we fill those subarrays with the smaller elements, as long as each subarray is in sorted order. If the subarray is a prefix of the layer, it must be sorted increasingly, and if it is a suffix of the layer, it must be sorted decreasingly.
Depending on the implementation, the time complexity is either $$$O(n)$$$ or $$$O(n \log n)$$$, both of which are fast enough to pass. Memory complexity is $$$O(n)$$$.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=2e5+5;
int n,a[maxn],p[maxn];
void construct(vector<int> pos,vector<int> nums,int layer){
    sort(nums.begin(),nums.end());
    if(pos.size()==1){
        p[pos[0]]=nums[0];
        return;
    }
    vector<int> keep,rem;
    vector<bool> spec;
    for(int x:pos)
        if(a[x]>layer){
            keep.push_back(x);
        }
    if(layer%2==1)
        reverse(nums.begin(),nums.end());
    vector<int> newnums;
    for(int it=1;it<=keep.size();it++){
        newnums.push_back(nums.back());
        nums.pop_back();
    }
    construct(keep,newnums,layer+1);
    reverse(nums.begin(),nums.end());
    int last;
    for(int x:pos){
        last=x;
        if(a[x]>layer)
            break;
        p[x]=nums.back();
        nums.pop_back();
    }
    reverse(nums.begin(),nums.end());
    for(int x:pos){
        if(x<last)
            continue;
        if(a[x]>layer)
            continue;
        p[x]=nums.back();
        nums.pop_back();
    }
}
void solve(){
    cin>>n;
    for(int i=1;i<=n;i++)
        cin>>a[i];
    for(int i=1;i<=n;i++)
        if(a[i]<=0)
            a[i]=1e9;
    vector<int> A;
    for(int i=1;i<=n;i++)
        A.push_back(i);
    construct(A,A,1);
    for(int i=1;i<=n;i++)
        cout<<p[i]<<" \n"[i==n];
}
int main(){
    int T;
    cin>>T;
    while(T--)
        solve();
}
#include <algorithm>
#include <bitset>
#include <cassert>
#include <chrono>
#include <cmath>
#include <deque>
#include <iomanip>
#include <iostream>
#include <map>
#include <queue>
#include <random>
#include <set>
#include <string>
#include <vector>
typedef long long ll;
typedef long double ld;
using namespace std;
int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--)
    {
        int n;
        cin >> n;
        vector<int> a(n), p(n);
        for (int i = 0; i < n; i++) cin >> a[i];
        int mx = max(0, *max_element(a.begin(), a.end())) + 1;
        for (int i = 0; i < n; i++) if (a[i] == -1) a[i] = mx;
        int l = 1, r = n;
        for (int k = 1; k <= mx; k++)
        {
            int mr = n - 1;
            while (mr >= 0 && a[mr] <= k) mr--;
            for (int i = 0; i < mr; i++)
            {
                if (a[i] == k)
                {
                    p[i] = ((k & 1) ? r-- : l++);
                }
            }
            for (int i = n - 1; i > mr; i--)
            {
                if (a[i] == k)
                {
                    p[i] = ((k & 1) ? r-- : l++);
                }
            }
        }
        for (int i = 0; i < n; i++) cout << p[i] << " \n"[i == n - 1];
    }
    return 0;
}
I came up with this problem while writing the generator for 1900F - Local Deletions.
Author: NemanjaSo2005
When is the solution trivially -1?
When it is impossible to perform any operation and the array is not sorted to begin with. Is that enough, though?
It is enough. If we have even a single pair of values that sums to $$$k$$$, we can make the array non-decreasing. From now on, assume there is only one such pair, as we can ignore the others.
The constraint of $$$3n$$$ operations hints that the solution is linear. Maybe we are actually supposed to sort the array? Let the indices of the two values that sum to $$$k$$$ be $$$a$$$ and $$$b$$$. How do we swap the values at two other indices $$$c$$$ and $$$d$$$?
We can do the following sequence of operations. It may change the values of the elements at positions $$$a$$$ and $$$b$$$, but their sum will remain $$$k$$$, and the values at indices $$$c$$$ and $$$d$$$ will be swapped.
- We do operation on indices $$$a$$$ and $$$b$$$ and we make it so that $$$value[a] := k - value[c]$$$ and $$$value[b] := value[c]$$$.
- We do operation on indices $$$a$$$ and $$$c$$$ and we make it so that $$$value[a] := k - value[d]$$$ and $$$value[c] := value[d]$$$.
- We do operation on indices $$$a$$$ and $$$d$$$ and we make it so that $$$value[a] := k - value[b]$$$ and $$$value[d] := value[b]$$$.
Notice that after this sequence of operations we swapped values at positions $$$c$$$ and $$$d$$$ and $$$value[a] + value[b]$$$ is still $$$k$$$. We can now sort the rest of the array (without $$$a$$$ and $$$b$$$) and then adjust values of $$$a$$$ and $$$b$$$ at the end. However, there is still a problem.
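The three operations can be simulated to confirm that they swap the values at $$$c$$$ and $$$d$$$ while the sum at $$$a$$$ and $$$b$$$ stays $$$k$$$. A sketch; the operation model is an assumption mirroring `dooperation` in the reference code below (pick two indices whose values sum to $$$k$$$ and move any amount between them):

```python
def apply_op(arr, i, j, x, k):
    # Allowed only when arr[i] + arr[j] == k; move x from position i to j.
    assert arr[i] + arr[j] == k
    arr[i] -= x
    arr[j] += x

def swap_via_pair(arr, a, b, c, d, k):
    # The three operations described above: swap arr[c] and arr[d] while
    # arr[a] + arr[b] stays equal to k throughout.
    vc, vd = arr[c], arr[d]
    apply_op(arr, a, b, arr[a] - (k - vc), k)  # arr[a] = k - vc, arr[b] = vc
    apply_op(arr, a, c, vd - vc, k)            # arr[a] = k - vd, arr[c] = vd
    apply_op(arr, a, d, vc - vd, k)            # arr[a] = k - vc, arr[d] = vc

k = 10
arr = [3, 7, 5, 9]                 # arr[0] + arr[1] == k
swap_via_pair(arr, 0, 1, 2, 3, k)
assert arr == [5, 5, 9, 5]         # positions 2 and 3 swapped, pair still sums to k
```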
It might be impossible to adjust the values of $$$a$$$ and $$$b$$$. Take $$$k=5$$$ and the array $$$[1, 3, 2, 3, 5]$$$ for example, with $$$a = 3$$$ and $$$b = 4$$$. How do we avoid such problems?
If $$$a = 1$$$ and $$$b = n$$$, we can just make $$$value[a]=0$$$ and $$$value[b] = k$$$, and as all other elements are between $$$0$$$ and $$$k$$$, the array is guaranteed to be sorted. How do we achieve that?
We will assume that both $$$a$$$ and $$$b$$$ are neither $$$1$$$ nor $$$n$$$. If they are, then we will skip some steps. The following sequence of operations makes $$$a=1$$$ and $$$b=n$$$ a valid choice. Note that $$$a$$$ and $$$b$$$ are referring to their original indices.
- We do operation on indices $$$a$$$ and $$$b$$$ and we make it so that $$$value[a] := k - value[1]$$$ and $$$value[b] := value[1]$$$.
- We do operation on indices $$$1$$$ and $$$a$$$ and we make it so that $$$value[1] := k - value[n]$$$ and $$$value[a] := value[n]$$$.
After those two operations we can sort the rest of the array with the sequence of $$$3$$$ operations described previously. Then, at the end, we do one operation to make $$$value[1] = 0$$$ and $$$value[n] = k$$$. Due to $$$n \gt 4$$$, in total we use slightly fewer than $$$3n$$$ operations in the worst case.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=2e5+5;
int n,k,arr[maxn],A,B,p[maxn];
map<int,int> pos;
vector<tuple<int,int,int>> V; /// All operations
void dooperation(int a,int b,int x){
    arr[a]-=x;
    arr[b]+=x;
    V.push_back(make_tuple(a,b,x));
}
void setvalue(int pos,int b,int target){ /// Does operation on (pos, b) such that value of pos becomes target
    if(arr[pos]==target)
        return;
    int dif=target-arr[pos];
    dooperation(b,pos,dif);
}
void performswap(int x,int y){
    int vx=arr[x],vy=arr[y];
    setvalue(A,B,k-vx);
    setvalue(x,A,vy);
    setvalue(y,A,vx);
}
bool cmp(int a,int b){
    return arr[a]<arr[b];
}
void solve(){
    pos.clear();
    V.clear();
    A=B=-1;
    cin>>n>>k;
    for(int i=1;i<=n;i++)
        cin>>arr[i];
    bool issorted=true;
    for(int i=2;i<=n;i++)
        if(arr[i]<arr[i-1])
            issorted=false;
    if(issorted){
        cout<<"0\n";
        return;
    }
    for(int i=1;i<=n;i++){
        if(pos.find(k-arr[i])!=pos.end()){
            B=i;
            A=pos[k-arr[i]];
            break;
        }
        pos[arr[i]]=i;
    }
    if(A==-1){
        cout<<"-1\n";
        return;
    }
    if(A!=1){
        setvalue(A,B,k-arr[1]);
        setvalue(1,A,k-arr[B]);
        A=1;
    }
    if(B!=n){
        setvalue(B,A,k-arr[n]);
        setvalue(n,B,k-arr[A]);
        B=n;
    }
    vector<int> invp;
    for(int i=2;i<n;i++)
        invp.push_back(i);
    sort(invp.begin(),invp.end(),cmp);
    for(int i=0;i<invp.size();i++)
        p[invp[i]]=i+2;
    for(int i=2;i<n;i++){
        if(p[i]==i)
            continue;
        performswap(p[i],i);
        swap(p[p[i]],p[i]);
        i--;
    }
    setvalue(B,A,k);
    cout<<V.size()<<"\n";
    for(auto x:V)
        cout<<(get<0>(x))<<" "<<(get<1>(x))<<" "<<(get<2>(x))<<"\n";
    return;
}
int main(){
    int T;
    cin>>T;
    while(T--)
        solve();
    return 0;
}
Solve the problem in $$$\lceil \frac{3n}{2} \rceil$$$ queries.
Author: NemanjaSo2005
If the problem had queries of the form "What is the nor of some interval $$$[l, r]$$$?", how would you answer them?
You can look at all the bits separately and answer for each.
For each bit, we only care about the position of the last $$$1$$$ in it at or before $$$r$$$. Let that position be $$$x$$$. If $$$x \gt l$$$, then if $$$x$$$ and $$$r$$$ have the same parity, the bit will be $$$0$$$; otherwise, it will be $$$1$$$.
If $$$x=l$$$, the bit will be $$$1$$$ if $$$l$$$ and $$$r$$$ have the same parity, and $$$0$$$ otherwise.
If $$$x \lt l$$$, the bit will be $$$1$$$ if $$$l$$$ and $$$r$$$ have different parities, and $$$0$$$ otherwise.
From the previous hint we conclude that for a fixed $$$r$$$, the nor values of intervals $$$[l, r]$$$ repeat when you increase $$$l$$$ by $$$2$$$, except at positions near the last $$$1$$$ of some bit.
That means there will be only $$$O(k)$$$ different values of nor of an interval ending at some $$$r$$$.
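The per-bit rule from these hints can be verified against a direct left-to-right fold. A 0-indexed Python sketch (only index differences matter for parity, so the indexing convention does not affect the rule):

```python
import random

def interval_nor(arr, l, r, M):
    # Left fold of K-bit nor over arr[l..r] (0-indexed, inclusive).
    v = arr[l]
    for i in range(l + 1, r + 1):
        v = M ^ (v | arr[i])
    return v

def bit_by_rule(arr, l, r, j):
    # The rule from the hints: x is the last position <= r where bit j is set
    # (or -1 if there is none).
    x = max((i for i in range(r + 1) if arr[i] >> j & 1), default=-1)
    if x < l:
        return (r - l) % 2      # bit absent in [l, r]
    if x == l:
        return 1 - (r - l) % 2  # the last 1 sits exactly at l
    return (r - x) % 2          # the last 1 sits strictly inside

random.seed(7)
K = 3
M = (1 << K) - 1
for _ in range(100):
    arr = [random.randrange(1 << K) for _ in range(random.randint(1, 10))]
    n = len(arr)
    for l in range(n):
        for r in range(l, n):
            got = sum(bit_by_rule(arr, l, r, j) << j for j in range(K))
            assert got == interval_nor(arr, l, r, M)
```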
Read the hints.
We can go from the beginning to the end of the array and keep all the possible nor values of intervals ending at $$$r$$$ in a vector. For each value we also remember the highest position $$$l$$$ for which it can be obtained. Once we have all those intervals for some $$$r$$$, we use a segment tree to set the answer of every position in the interval to the maximum of its current answer and the nor of our interval. Time complexity is $$$O(nk^2 + nk \log n)$$$ and memory complexity is $$$O(n)$$$.
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=2e5+5;
int N,M,arr[maxn],pref[maxn][25],logg,st[4*maxn],stk;
int nor(int a,int b){
    return M^(a|b);
}
int getlog(int x){
    int ans=-1;
    while(x){
        ans+=1;
        x/=2;
    }
    return ans;
}
int internor(int l,int r){
    if(l<=0 or l>r)
        return 0;
    if(l==r)
        return arr[r];
    int ans=0;
    for(int j=0;j<=logg;j++){
        int bit=(1<<j);
        if(pref[r][j]<l)
            if((r-l+1)%2==1)
                bit=0;
        if(pref[r][j]==l)
            if((r-l+1)%2==0)
                bit=0;
        if(pref[r][j]>l)
            if((r-pref[r][j]+1)%2==1)
                bit=0;
        ans+=bit;
    }
    return ans;
}
void update(int lb,int rb,int x,int pos=1,int l=1,int r=stk){
    lb=max(1,lb);
    rb=min(N,rb);
    if(rb<lb)
        return;
    if(l==lb and r==rb){
        st[pos]=max(st[pos],x);
        return;
    }
    int mid=(l+r)/2;
    if(rb<=mid)
        return update(lb,rb,x,pos*2,l,mid);
    if(lb>mid)
        return update(lb,rb,x,pos*2+1,mid+1,r);
    update(lb,mid,x,pos*2,l,mid);
    update(mid+1,rb,x,pos*2+1,mid+1,r);
    return;
}
int get(int pos){
    pos+=stk-1;
    int v=0;
    while(pos){
        v=max(v,st[pos]);
        pos/=2;
    }
    return v;
}
void solve(){
    int K;
    cin>>N>>K;
    stk=1;
    while(stk<N)
        stk<<=1;
    M=(1<<K)-1;
    for(int i=1;i<=N;i++)
        cin>>arr[i];
    logg=getlog(M+1)-1;
    for(int i=1;i<=N;i++){
        for(int j=0;j<=logg;j++)
            if(arr[i]&(1<<j))
                pref[i][j]=i;
            else
                pref[i][j]=pref[i-1][j];
    }
    for(int i=1;i<=N;i++){
        for(int j=0;j<=logg;j++){
            int p=pref[i][j];
            for(int add=-2;add<=2;add++)
                update(p+add,i,internor(p+add,i));
        }
        update(1,i,internor(1,i));
    }
    for(int i=1;i<=N;i++)
        cout<<get(i)<<" \n"[i==N];
}
void reset(){
    for(int i=0;i<=N+2;i++){
        arr[i]=0;
        for(int j=0;j<=logg+2;j++)
            pref[i][j]=0;
    }
    logg=0;
    for(int i=0;i<=stk+stk;i++)
        st[i]=0;
}
int main(){
    int T=1;
    cin>>T;
    while(T--){
        solve();
        reset();
    }
    return 0;
}
Nor and nand are well known for being able to mimic any bitwise operation if applied enough times. Build a circuit that adds three $$$1$$$-bit numbers (zeros and ones) using $$$9$$$ NAND gates.
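One standard construction for this bonus (a sketch, exhaustively verified below): two 4-gate XORs that share a gate, plus one extra gate for the carry.

```python
def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin):
    # 9 NAND gates: n1..n4 form a XOR b, n5..n7 plus the final gate form
    # (a XOR b) XOR cin, and gate n5 is shared with the carry.
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    n4 = nand(n2, n3)        # a XOR b
    n5 = nand(n4, cin)
    n6 = nand(n4, n5)
    n7 = nand(cin, n5)
    s = nand(n6, n7)         # sum bit: (a XOR b) XOR cin
    carry = nand(n1, n5)     # carry: (a AND b) OR ((a XOR b) AND cin)
    return s, carry

# exhaustive check over all eight inputs
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, carry = full_adder(a, b, cin)
            assert 2 * carry + s == a + b + cin
```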
oh wow... quick editorial before system test finish... thanks
The Bonus thing here is Cool :)
It turned out to be a good competition. Task E is interesting
You literally cheated. https://mirror.codeforces.com/contest/2103/submission/316575935 and https://mirror.codeforces.com/contest/2103/submission/316574644 have too large of an edit distance for a three-minute gap. Also, the macros at the top completely change.
Yeah, it looks kind of suspicious. Also, if you take a look at his recent competitions, there is literally one competition where all of his submissions got skipped.
Yeah, it's this one
WTF is that username?! Is that even legal :/
Why do people keep asking about my username? Are y'all racist or something
If someone is racist, it's the person putting the N-word in their username
Maybe he's just a gangsta-rap enjoyer?
It's an accident
could you kindly explain why you are using a different template in every submission?
I don't have a common template. Each type of task has its own templates and macros. The macros in each template are sometimes different. If it's interesting, I can send you all my templates that I use
I'm not buying it. We are talking about two submissions for the same problem. Why would you switch to a completely different template for the same problem?
And again, the submission times are only 3 minutes and 4 seconds apart, which means practically you had less than 3 minutes to fix your code. In that small amount of time, not only did you manage to completely rework your solution, but you made a ton of totally unnecessary changes like:
- the range() macro to rng() (why?)
- tc to T (why?????)
- stage to a (why???)
- rounds to W (why?)

None of these changes had anything to do with fixing your solution. The differences might make sense if you started from scratch with a different template, but in that case it would have taken significantly longer than 3 minutes.
And that's just one submission. The previous 316563964 has a bunch of problems too: apparently this is some topological-sort-like approach. But why do you define a struct FastIn that you don't even use? You didn't use this in any of your other solutions. And there's a biggerIter lambda that unnecessarily maps INFVAL to INT_MAX. Why? None of this looks like a human wrote it.
Hey! I get where you're coming from — the differences in style and structure between the two submissions are noticeable. But there's nothing shady going on here
The first submission was just a quick draft. I wanted to test the idea fast, so I threw together a minimal version. When it failed on a pretest, it immediately pointed me to the issue — the grouping wasn't handled properly at certain stage boundaries.
I have a pre-written template that includes a lot of useful utilities: clog2, doubly linked list handling with L/R, level-based grouping, etc. Once I saw what needed to be fixed, it took me less than 3 minutes to open my template, drop the core idea into place, tweak a few conditions, and submit. Totally doable when you already know what the bug is and have the infrastructure ready.
As for the renaming — things like rng, T, a, W, ans, and mark are just part of my regular template. I stick with the naming for consistency and readability. No particular reason to rename them back to what I used in the draft.
And again, this wasn't a rewrite from scratch. It was more like "okay, I know the issue, let's rebuild it properly using my usual tools."
Regarding unused things like FastIn or biggerIter in other submissions — same story. My template includes them by default, and I don't always clean up unused parts when I'm in a rush to submit. Doesn't mean the code wasn't written manually.
So yeah — it's all legit and hand-written. Just a quick fix, plugged into a familiar and efficient structure.
dude...
Omg, I use translators with AI assistants to express my thoughts normally for people like you. We can communicate in my native language
I accidentally filled in the wrong solution, for which I received a WA. I had a solution at that time
You have very suspicious and AI-like variable names
But how would you explain the difference in your code style between the problems you have solved? It's very unusual to have such differences in code style if you have been programming for a while. Also, some of your submissions have the verdict "Wrong answer on pretest 1". Who wouldn't want to test their solution before submitting it? It's kind of strange not to test your solutions before submitting, and as the other guy has mentioned above, the edit distance between your two submissions is huge. How do you even type that fast? It's impossible.
I checked the first code with AI code checker and it said the code was 85-90% written by AI
The second code was 90-95% written by AI
If you want to try it for yourself
This is really shit. It says that all my code is generated by AI ($$$\ge 85\%$$$).
UPD: It says the code of the writer of problem E is $$$70\% \sim 80\%$$$ likely AI-generated
Well, true because it's just an AI judging code
I provided a solution of my own to check, and it said, "This code is very likely generated by an AI model, possibly ChatGPT".
So these AI detectors are not reliable at all.
I faced something like this before, though not related to coding. I wrote a paragraph on a topic (an assignment) and checked it for AI detection; it said that it was AI-generated too. Then I provided the paragraph to GPT and asked it to rewrite it so the detector wouldn't mark it as AI-generated. And guess what: this time it got around a 30% AI-generation score, whilst the first one (written by me) got over 80%. So they are not reliable at all.
thank you sir I had a great laugh
Bro...to be honest, I laughed even harder when I saw this (in the picture) and your comment. Good job
Can somebody explain what this loser is trying to elaborate on?
You having wrong answer on pretests for 9 attempts straight, I guess
I think getting roasted constantly for blatantly cheating is funnier
Problem F is really terrible.
Can you explain why you do not like it? If there's something I can change in the future I will make sure to do so.
I believe most people passing it are not even proving the result, and AI can simply guess without giving it any second thoughts.
The majority of non-AI solvers solved it in a much harder way, because it is not easy to guess the solution. Also, please don't blame the authors for AI solving the problems.
I want to be clear that I don’t blame the author at all—in fact, I’m truly grateful for all the creativity and hard work that went into these problems. The F question was certainly a tough nut to crack, but wrestling with it taught me a lot. Thank you again to everyone who contributed!
Thank you for the feedback! Yes, I do agree that such stuff is not fun, but we couldn't find a way to stop it.
I really appreciate you actually providing a reason, as for a lot of people the only reason for disliking a round is that they did badly in it.
This is indeed not the fault of the author. I would suggest that perhaps the authors could consider providing the problems to AIs, such as O3 and so on, during the testing phase and observe their feedback — maybe this could be effective?
Non AI-proof is never a reason that a problem is bad/shouldn't appear in a contest.
Maybe he wants to say that there are so many people who solved it with AI, I guess
I think my solution for E works in $$$\frac{3n}{2}$$$ operations lol
Edit: Didn't see the bonus section. Cool!
Great learning experience. will upsolve C, i was trying out a completely different approach :)
PS: will the answer to C bonus be a binary search approach?
Yes.
yes I also think binary search should work.
I solved D by simulating the process and constructing a graph on the elements (there is a directed edge u -> v when u < v), and then the order in a topological sort is the answer. Overkill, I guess.
That is, in fact, a valid solution, and there was a harder version that needed it. But we did not use it as it was not interesting enough.
I tried this during the contest but there were some inconsistencies which I couldn't clear up. For example:
let the adjacent indices during some iteration be 0, 1, 2, and index 1 is being removed (let it be an odd iteration). Then we get a[1] > a[0], or a[1] > a[2], or both. How do we verify which edge to add to our graph? Should we add both? I couldn't come up with a solid way to resolve this.
You need both edges, otherwise 2 will also be removed during that iteration. Assume you have elements from 0 to 6 and only 2 and 5 remain after that odd iteration, the condition will be like this:
$$$0 \gt 1 \gt [2] \lt 3 \lt 4 \gt [5] \lt 6$$$
Note that you are free to change it to $$$3 \gt 4$$$, as long as the pattern between two remaining elements stays $$$ \lt \lt \lt ... \gt \gt \gt $$$.
Can you explain your solution? I did not understand it. Edit : I understood it from another comment. Thanks for the new idea.
Missing corner cases / not doing casework properly cost me a lot of time and WAs... (problems B and C)
Anyway, still a great problemset. A good cautionary tale for people like me who submit without carefully checking through their code and cases.
In the hurry of submitting fast, I submitted B's solution to C lol. Luckily, CF doesn't penalise a WA on the first test case :)
I, too, was guilty of submitting the wrong files to the wrong problems.
I changed my habit to copying the code into the submit page and manually selecting the problem. It no longer happens.
Another way you could think about D is that we're trying to construct an acyclic graph where we have an edge from $$$a$$$ to $$$b$$$ if $$$p_a \lt p_b$$$. We process each layer individually. Assume that we are removing all non-local-maximums. If an element $$$i$$$ is not removed in this layer, then we know that $$$p_i$$$ is a local maximum, so $$$p_{i+1} \lt p_i$$$ and $$$p_{i-1} \lt p_i$$$. We also have to insert edges for elements that are going to be removed; that is done the same way the editorial describes it. Then we compute a topological sort. The complexity is $$$O(n \log n)$$$. 316594585
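The final step of this approach (after the edges are built) can be sketched as plain Kahn's algorithm; this is just an illustrative snippet, not the linked submission, and the edge list is assumed to be built as described above:

```python
from collections import deque

def topo_order(n, edges):
    # Kahn's algorithm: repeatedly output vertices of in-degree zero.
    # edges contains pairs (a, b) meaning p_a < p_b.
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for a, b in edges:
        adj[a].append(b)
        indeg[b] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == n else None  # None: the graph had a cycle
```

Assigning values $$$1, 2, \ldots, n$$$ to the indices in this order then gives a permutation consistent with all the edges.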
That is, in fact, a valid solution, and there was a harder version that needed it. But we did not use it as it was not interesting enough.
What would the harder version look like?
Find permutation satisfying the array with smallest possible inverse (it also might be smallest such permutation, but I couldn't prove that)
Can anyone hack my solution for C, since I believe it has worst-case complexity of $$$O(n^2)$$$: 316589874
Can you please tell me what you did?
If for cases $$$1$$$ and $$$2$$$ you iterated over start of the first one and then when you get good prefix you iterate at the end of the second one, it actually ends up being $$$O(n)$$$, because if you start the second loop more than 2 times, you would have already gotten yes in one of the previous runs.
I didn't quite understand the explanation. In each of the last two nested loops, for each good prefix, it runs from $$$l+2$$$ to $$$n$$$, and in the worst case the number of good prefixes can be close to $$$n$$$ with none of the pairs $$$l, r$$$ satisfying the conditions, so shouldn't it be $$$O(n^2)$$$? What guarantees $$$O(n)$$$?
Let $$$a$$$ and $$$b$$$ be the ends of the first two good prefixes. Then the split $$$[1, a]$$$, $$$[a+1, b]$$$, $$$[b+1, n]$$$ works. There might be some edge cases, but they do not exist once there are 5+ good prefixes. So the complexity is $$$O(n)$$$ if you actually break/return after finding the first good split.
Oh right, got it. Thanks!
cool contest
F hint 2:
Why is that? Let's assume that for the highest bit I found a segment $$$[l, r]$$$ covering $$$i$$$. For the next bit there may exist a segment $$$[l',r']$$$ covering $$$i$$$ that is not "compatible" with $$$[l, r]$$$. This can (?) happen if there is a 1 between $$$i$$$ and both $$$r$$$'s and the number of zeroes in the suffix has different parity.
I am referring to finding nor of a single interval.
For problem F, I think the complexity of official solution should be $$$O(nk^2 + nk \log n)$$$ since calculating range nor takes $$$O(k)$$$ time.
By the way, using push-down on a sparse table instead of a segtree can reduce the time complexity of processing the range max updates from $$$O(nk\log n)$$$ to $$$O(nk + n\log n)$$$.
That is correct. Will fix it. Also, weird that I never saw that use of sparse tables but it obviously makes perfect sense.
can anyone explain why it says -2
but it was actually 
You submitted twice before AC.
With a tougher implementation it is possible to solve E in $$$\frac{3n}{2}$$$ moves instead of $$$3n$$$ moves (the bonus version of the problem).
Idea is somewhat like:
So basically, we can use P moves to put (P-1) elements in their right place. Since the smallest P is 3, in the worst case, it'll take 3 operations to put 2 elements in their correct position. Notice that we only need to put (n-2) elements in their place.
This would mean that the element was already in the correct position before we did our operation, so there was no need to do this operation.
That's the solution!
As a complete newbie, I also confess that the contest was so good! Will upsolve B; it was so damn interesting (though I couldn't solve)!
C was too implementation heavy. Took 1 hour for 1 question. Damn!
Can somebody please help me find an edge case for this solution 316608879? I have been thinking for 2 hours but I couldn't find any edge cases. I would really be thankful. Thank you in advance.
You are not doing a complete search.
There can be some segments $$$(l^\prime, r^\prime)$$$ where $$$l \le l^\prime$$$ or $$$r^\prime \le r$$$ or both.
You should include Editorial link into "Contest Materials"
Thank you, fixing it now
In problem D , for the test case:
1 6 2 1 1 -1 2 1
The author's constructed permutation is:
1 4 5 3 2 6
Why is this a valid permutation? Will 3 not get deleted in the first iteration itself, since it is not a local minimum?
The array you provided does not satisfy the constraint of there existing such a permutation.
For anyone wondering about the solution of the bonus problem for task C: we can simply binary search on $$$k$$$ and use the current solution as the check function. It will be an $$$O(n \log n)$$$ solution!
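The binary-search shell for this idea looks like the usual "binary search on the answer" pattern; here `check` is a stand-in for the round's C solution (which I haven't reproduced), and the only assumption is that it is monotone in $$$k$$$, i.e. once it returns true it stays true for all larger $$$k$$$:

```python
def smallest_good_k(lo, hi, check):
    # Find the smallest k in [lo, hi] with check(k) == True,
    # assuming check is monotone (False...False True...True).
    while lo < hi:
        mid = (lo + hi) // 2
        if check(mid):
            hi = mid      # mid works, try smaller
        else:
            lo = mid + 1  # mid fails, answer is larger
    return lo
```

Each call runs the $$$O(n)$$$ check $$$O(\log(\text{hi} - \text{lo}))$$$ times, giving the $$$O(n \log n)$$$ total.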
F is really nice!
Hello, my fellow competitive programmers. This morning I was looking for some YouTube videos to understand and upsolve the problems I could not solve in the 1019 Div 2 contest, when I suddenly saw this YouTube channel. This man has uploaded solutions to all problems up to F, excluding E, solved all of them, and I am quite sure he has cheated. He also runs a Telegram channel, mentioned on his YouTube channel, where he uploads the source code of all the problems, and he registered only some hours ago. He is — 777dimasik777. Even if he has not cheated, what is the need to upload solutions on YouTube and Telegram during the contest? Such people ruin our community and this platform. Please, someone take strict action against such cheaters, so this platform is free of them.
Solution: https://www.electronicshub.org/wp-content/smush-webp/2014/08/Full-Adder-using-NAND-Gates.jpg.webp
Yes. It is a well known problem. But I still think it's fun if you didn't see it before. (And I couldn't think of CP problem as a bonus)
nice editorial
My methods to do A:
O(n log n): Sort the array; the result starts at 1 and increases by 1 whenever a[i] != a[i+1]
O(n log n): Insert all elements into a set and then print out the size of the set
O(n): Make an array to store the number of appearances of a certain element in an array. Then count the number of i such that count[i] > 0
They all just count the number of distinct elements in the array.
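The three methods above can be sketched as follows; this is an illustrative sketch (function names are mine), not anyone's actual submission:

```python
def distinct_sorted(a):
    # O(n log n): sort, then count positions whose value differs from the predecessor
    a = sorted(a)
    return sum(1 for i in range(len(a)) if i == 0 or a[i] != a[i - 1])

def distinct_set(a):
    # O(n) expected: insert everything into a set and take its size
    return len(set(a))

def distinct_counting(a, max_value):
    # O(n + max_value): bucket-count appearances, then count nonzero buckets
    # (assumes all elements are integers in [0, max_value])
    count = [0] * (max_value + 1)
    for v in a:
        count[v] += 1
    return sum(1 for c in count if c > 0)
```

All three return the same answer; the counting version only works when the value range is small enough to allocate a bucket array.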
Can anyone send me an answer for Median Splits using dp?
https://mirror.codeforces.com/contest/2103/submission/316558597
Solving the questions without participating in the contest, it was a bit shocking to see how pretty much no one solved E, given the solves for F. For reference, E took me about 15 minutes to solve on paper, while F took a couple of hours or so.
I also think that E is easier than F for humans. Sadly, that's not the case for AI, which led to more ACs on F and therefore also more legit solves on F.
But when it comes to a person who's extremely bad at greedy/constructives(like me) its exactly the opposite case xD
NemanjaSo2005
Check this TripleMO + Khalid_Kamal_
One of them abuses the OCaml language to avoid plagiarism, while the other manages to think, solve, and code a Div2 F problem within 20 minutes.
Impressive ngl, could you check them? NemanjaSo2005
For problem C, can anyone tell me why my code is giving WA? I am basically trying to partition from the left side first, then the right side, counting the partitions on the left and right and making sure they do not overlap. Also, if the length of a partition is odd, I can add the next element if it is not <= k, because it won't change my median.
For D you can just sort indices based on p[i] and break ties based on distance to the last element (with value $$$-1$$$). Then, iterate the sorted array and assign the highest number to odd values and the lowest to even values.
https://mirror.codeforces.com/contest/2103/submission/317067496 , like this right?
No, I clarified the comment
In problem D's $$$O(n)$$$ implementation, we are sorting the array; wouldn't that make the implementation $$$O(n \log n)$$$?
Yeah, I forgot that part. But from reading the code, you can see that it is easy to handle that, as the array will always be sorted in increasing or decreasing order. The main point is that it does not iterate over deleted elements, doing at most $$$\frac{n}{1} + \frac{n}{2} + \frac{n}{4} + \ldots = 2 \cdot n$$$ operations.
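The geometric bound above is easy to sanity-check numerically; this throwaway snippet (not part of any solution) just sums the per-layer work, assuming each layer at most halves the number of live elements:

```python
def layer_work(n):
    # Total work when each layer processes the survivors of the previous one:
    # n + n//2 + n//4 + ... until nothing remains, which is at most 2n.
    total = 0
    while n > 0:
        total += n
        n //= 2
    return total
```

For example, `layer_work(1024)` is 1024 + 512 + ... + 1 = 2047, safely under 2 * 1024.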
So my submission is $$$O(n)$$$, right? I thought it would be $$$O(n \log n)$$$, but after your explanation it seems to be $$$O(n)$$$.
Yeah. Doing each layer in linear time (regarding how many are on the layer) is $$$O(n)$$$.
Thanks :)
Great editorial... those hints really help.
I can solve the third bonus. I was happy to enter this Div. 2 round.
010 <-- i can't focus on problem B cause i keep noticing this little face
Can someone provide me an easier implementation of problem C from this contest?