Thank you for participating in our round. We apologise for C2 being too hard compared to C1.
UPD: Added an alternative solution to B, which restates the original solution with a diagram for better understanding.
Idea: ritam1234, Preparation: ritam1234
We observe that
Now observe that the sequence $$$n, n-1, \dots, 1$$$ satisfies the problem's condition (since $$$1 \ge 1 \ge \dots \ge 0$$$).
Hence, the sequence $$$n, n-1, \dots, 1$$$ works.
2210B - Simply Sitting on Chairs
Idea: ritam1234, Preparation: ritam1234, Argentum47
Suppose the game ends at chair $$$k$$$ (or chair $$$n+1$$$ if the game ends after successfully visiting the $$$n$$$-th chair), and try to find the maximum number of chairs you can visit as a function of $$$k$$$.
We hope you have read the hint because the solution continues from that.
The maximum number of chairs you can visit if your game ends at chair $$$k$$$ is given by $$$\left| \{\, i : 1 \le i \lt k,\ p_i \le i \text{ or } p_i \ge k \,\} \right|$$$. We can obviously visit all chairs such that $$$p_i\le i$$$, and we can also visit all chairs with $$$p_i\ge k$$$, because since our game ends at chair $$$k$$$, marking any chair from $$$k,k+1,k+2,\ldots,n$$$ does not matter.
Let's call this function $$$f(k)$$$. So our answer is given by
$$$\max\limits_{1 \le k \le n+1} f(k)$$$.
For now consider $$$k \lt n+1$$$.
Let $$$ct$$$ denote the count of chairs $$$i \lt k$$$ with $$$p_i\ge k$$$.
Now, by an exchange argument, there must be at least $$$ct$$$ chairs to the right of $$$k$$$ such that $$$p_i\le i$$$, i.e., $$$\left|\{\, i : p_i\le i,\ k\le i\le n \,\}\right|\ge ct$$$.
Therefore it is always optimal to choose $$$k=n+1$$$ for sitting on the maximum number of chairs possible.
Thus the answer is count of indices such that $$$p_i\le i$$$.
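The final count can be computed in a single pass; here is a minimal sketch (the function name `maxChairs` is ours, not from the reference solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count indices i (1-based) with p[i] <= i: by the argument above,
// ending the game at chair n+1 lets us sit on exactly these chairs.
long long maxChairs(const vector<long long>& p) {
    long long cnt = 0;
    for (size_t i = 0; i < p.size(); i++)
        if (p[i] <= (long long)i + 1) cnt++;  // p is 0-indexed here
    return cnt;
}
```

For example, for the permutation $$$[3, 1, 2]$$$ only indices $$$2$$$ and $$$3$$$ satisfy $$$p_i \le i$$$, so the answer is $$$2$$$.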
You can refer to the following diagram for a better idea of the exchange argument (credits: dipamsen).

2210C1 - A Simple GCD Problem (Easy Version)
Idea: simplelife, Preparation: Argentum47
Do we really need to consider all subarrays?
We only need to consider adjacent elements of the array.
Let's say we have $$$\gcd(a_{i-1}, a_{i})$$$ and $$$\gcd(a_{i}, a_{i+1})$$$. What is the minimum possible value of $$$a_i$$$ satisfying both?
Since $$$b_i \le a_i$$$, the problem essentially asks for the number of indices $$$i$$$ where $$$a_i$$$ can be strictly decreased ($$$b_i \lt a_i$$$) without changing the GCD of any subarray.
Claim: To preserve the GCD of all subarrays, it is sufficient to preserve the GCD of all adjacent elements (subarrays of length 2).
Proof: Assume we modify the array such that the GCD of adjacent elements remains unchanged for all $$$i$$$. We can show this preserves the GCD of any subarray of length $$$k \ge 3$$$. For a subarray of length 3, using the properties of the GCD operator, we know that,
Since the GCD of adjacent elements is preserved by our assumption, the GCD of the triplet is also preserved. By induction, we can show that this holds for all subarrays of length $$$\ge 2$$$.
Now for each element $$$a_i$$$ ($$$1 \lt i \lt n$$$) let,
For $$$a_i$$$ to maintain these GCDs, it must be a multiple of both $$$A$$$ and $$$B$$$. The smallest such value is $$$\text{lcm}(A, B)$$$, so $$$a_i$$$ can be reduced iff $$$\text{lcm}(A, B) \lt a_i$$$. For the endpoints, we need to consider only one adjacent pair: $$$a_1$$$ can be reduced iff $$$\gcd(a_1, a_2) \lt a_1$$$, and $$$a_n$$$ can be reduced iff $$$\gcd(a_{n-1}, a_n) \lt a_n$$$.
The answer is just the number of indices that can be reduced. The time complexity is $$$O(n \log(\max a_i))$$$.
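The whole C1 solution fits in a few lines; a sketch for $$$n \ge 2$$$ (the function name `countReducible` is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Count indices that can be strictly decreased while preserving all
// adjacent (and hence all subarray) GCDs. Needs n >= 2.
int countReducible(const vector<ll>& a) {
    int n = a.size(), ans = 0;
    for (int i = 0; i < n; i++) {
        ll need;  // smallest value a[i] could take
        if (i == 0) need = gcd(a[0], a[1]);
        else if (i == n - 1) need = gcd(a[n - 2], a[n - 1]);
        else need = lcm(gcd(a[i - 1], a[i]), gcd(a[i], a[i + 1]));
        if (need < a[i]) ans++;  // a[i] can be lowered to `need`
    }
    return ans;
}
```

For example, on $$$[4, 6, 9]$$$ the middle element must stay a multiple of $$$\text{lcm}(2, 3) = 6$$$, so only the two endpoints can be reduced.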
2210C2 - A Simple GCD Problem (Hard Version)
Idea: simplelife, Preparation: Argentum47
Thanks to Intellegent for misreading $$$C1$$$ and Argentum47 for fully solving it.
If you are able to set the value at a particular index to something less than $$$a_i$$$, do it.
What can you say about remaining indices whose value is unchanged?
We can only increase the values of the indices that remain unchanged, and while increasing, we can only change them to their multiples.
Do we need to consider all multiples? No!
We only need to consider prime multiples. Do we need to consider all prime multiples? No!
We only need to consider the first few prime multiples; $$$20$$$ should suffice, but why?
Note that as discussed in previous part, we only need to consider adjacent elements. How can we use this to implement our logic?
Use $$$dp$$$! Let $$$dp[i][j]$$$ denote the maximum number of changes up to index $$$i$$$, if we multiply the value at index $$$i$$$ by the $$$j$$$-th prime.
Refer to the editorial of C1 first. We consider only indices $$$i$$$ ($$$1 \lt i \lt n$$$). For the endpoints, we can do a similar analysis as well. Let $$$A = \gcd(a_i, a_{i-1})$$$ and $$$B = \gcd(a_i, a_{i+1})$$$.
First of all, it is easy to observe that if $$$b_i \lt \text{lcm}(A, B)$$$ or $$$a_i = \text{lcm}(A,B)$$$, it is impossible to reduce the value of $$$a_i$$$; we can only increase it. Furthermore, if we can reduce the value of $$$a_i$$$, it is always optimal to reduce it to the minimum possible value.
Now lets consider the values unchanged so far. They fall into one of $$$2$$$ categories:
- $$$\text{lcm}(A,B) \gt b_i$$$
- $$$\text{lcm}(A,B) = a_i$$$
If an index falls in category $$$1$$$, it is easy to see that it is impossible to change it without altering some subarray GCD. However, if the index falls in category $$$2$$$, then we can multiply it by some number $$$x$$$ such that $$$a_i \cdot x \le b_i$$$ and the GCDs are preserved.
It is easy to see that it is optimal to choose a prime number as $$$x$$$, since any composite number can be written as a product of primes, and choosing any one of those primes results in the same subarray GCDs.
Now notice that we can only choose prime numbers which are NOT factors of $$$a_{i-1}/ \gcd(a_i, a_{i-1})$$$ or $$$a_{i+1}/\gcd(a_i, a_{i+1})$$$, since if the prime were present in those values, some subarray GCD would increase.
Also notice that we don't have to consider all the primes; just the first $$$20$$$ prime numbers will do, since the product of the first $$$20$$$ primes exceeds $$$10^{18}$$$. This is important because the product of adjacent elements can be at most $$$10^{18}$$$, and hence cannot cover all these primes.
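The primorial bound can be checked directly; in fact the product of just the first $$$16$$$ primes already exceeds $$$10^{18}$$$. A quick sketch (using `__int128` to avoid 64-bit overflow; the helper `primorial` is ours):

```cpp
#include <bits/stdc++.h>

// Product of the first k primes (k <= 20), in __int128 so that
// intermediate products do not overflow 64 bits.
__int128 primorial(int k) {
    int primes[20] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
                      31, 37, 41, 43, 47, 53, 59, 61, 67, 71};
    __int128 prod = 1;
    for (int i = 0; i < k; i++) prod *= primes[i];
    return prod;
}
```

The product of the first $$$15$$$ primes is $$$614889782588491410 \lt 10^{18}$$$, and multiplying by the $$$16$$$-th prime $$$53$$$ already pushes it past $$$10^{18}$$$, so $$$20$$$ primes are comfortably enough.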
Now we can use dp to solve from here, where $$$dp[i][j]$$$ = maximum number of changes in array from index $$$1$$$ to $$$i$$$, if we multiply $$$a_i$$$ with $$$j$$$. Also for $$$j = 0$$$, define $$$dp[i][0]$$$ = maximum number of changes in array from index $$$1$$$ to $$$i$$$, if we keep a[i] unchanged. Initially assign all values negative infinity.
Create a new array $$$c$$$, with $$$c_i = \text{lcm}(A,B)$$$ for all $$$i$$$. Furthermore, if $$$c_i \gt b_i$$$, set $$$c_i = a_i$$$ instead. The reason is that if $$$a_i$$$ can be successfully reduced, we use that reduced value in our analysis; otherwise we keep $$$a_i$$$.
Now lets focus on the base case. For $$$i = 0$$$, if $$$c_0 \neq a_0$$$, then $$$dp[0][0] = 1$$$, else $$$dp[0][0] = 0$$$. Now for $$$j = 1$$$ to $$$j = 20$$$, let $$$x = c_0 \cdot p_j$$$. If $$$x \leq b_0$$$, $$$\gcd(x, c_1) = \gcd(a_0, a_1)$$$, and $$$x \neq a_0$$$, then multiplying is a valid option, and hence we set $$$dp[0][j] = 1$$$; else it is invalid and stays negative infinity. We take $$$\gcd(x, c_1)$$$ here and in subsequent transitions as well, since we want to consider the least possible value at the next index, to minimise prime collisions.
Now for remaining indices, let's say we multiplied previous element by $$$k^{th}$$$ prime, and are trying to multiply the current element with $$$j^{th}$$$ prime, then the following conditions must hold true (let $$$x = c_i \cdot p_j$$$ and $$$y = c_{i-1} \cdot p_k$$$)
- $$$x \le b_i$$$
- gcd($$$x, c_{i+1}$$$) = gcd($$$a_i, a_{i+1}$$$)
- gcd($$$x, y$$$) = gcd($$$a_i, a_{i-1}$$$)
- $$$x \neq a_i$$$
If these $$$4$$$ conditions hold, then multiplying by $$$j$$$ is a valid combination, considering we multiplied previous number by $$$k$$$. In this case the answer will be $$$dp[i][j]$$$ = max($$$dp[i][j], dp[i-1][k] + 1$$$).
Note that we will handle the $$$j = 0$$$ case explicitly, by setting $$$dp[i][0] = \max_k(dp[i-1][k]) + 1$$$ if $$$c_i \lt a_i$$$, and $$$dp[i][0] = \max_k(dp[i-1][k])$$$ otherwise. This ensures we always have a valid transition (the number is kept unchanged) in case of impossibility (the case where $$$a_i$$$ can never be changed), so that the $$$dp[i]$$$ values are not all negative infinity. (Basically, even if we can't change the value, we have to keep the number as $$$a_i$$$.)
The overall time complexity of this logic is $$$O(n \cdot L^{2} \cdot \log(\max a_i))$$$, where $$$L$$$ is the number of primes used. This comfortably passes within the time constraints in both Python and C++. For larger $$$L$$$, like $$$25$$$-$$$30$$$, some basic optimizations, like exiting the loop if $$$c_i \cdot p_j \gt b_i$$$, might be required depending on the implementation.
What we described is the simplest solution to this problem that passes comfortably within the time constraints. Can you find a more efficient algorithm which, say, passes comfortably for $$$n = 2 \cdot 10^{5}$$$ or even $$$n = 10^{6}$$$?
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
void solve()
{
ll n;
cin >> n;
vector<ll> a(n), b(n), c(n);
for(ll i = 0; i < n; i++) cin >> a[i];
for(ll i = 0; i < n; i++) cin >> b[i];
vector<ll> primes = {1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73};
for(ll i = 0; i < n; i++)
{
if (i == 0) c[i] = gcd(a[i], a[i+1]);
else if (i == n-1) c[i] = gcd(a[i-1], a[i]);
else c[i] = lcm(gcd(a[i], a[i-1]), gcd(a[i], a[i+1]));
if (c[i] > b[i]) c[i] = a[i];
}
ll L = primes.size();
vector<vector<ll>> dp(n, vector<ll>(L, -1e18));
//set up the base cases
for(ll i = 0; i < L; i++)
{
if (i == 0)
{
if (c[0] == a[0]) dp[0][0] = 0;
else dp[0][0] = 1;
continue;
}
ll val = c[0] * primes[i];
if (val <= b[0] and gcd(val, c[1]) == gcd(a[0], a[1]) and val != a[0]) dp[0][i] = 1;
}
//make the transitions
for(ll i = 1; i < n; i++)
{
for(ll j = 0; j < L; j++)
{
for(ll k = 0; k < L; k++)
{
//j = 0, special case, we are keeping value unchanged, and always want something to be assigned here
if (j == 0)
{
if (c[i] == a[i]) dp[i][j] = max(dp[i][j], dp[i-1][k]);
else dp[i][j] = max(dp[i][j], dp[i-1][k]+1);
continue;
}
//we check for all conditions, that multiplying this by prime j and prev by prime k keeps everything in check
ll val1 = c[i] * primes[j];
ll val2 = c[i-1] * primes[k];
if (val1 <= b[i] and gcd(val1, val2) == gcd(a[i], a[i-1]) and val1 != a[i])
{
if (i < n-1)
{
if (gcd(val1, c[i+1]) == gcd(a[i], a[i+1])) dp[i][j] = max(dp[i][j], dp[i-1][k] + 1);
}
else dp[n-1][j] = max(dp[n-1][j], dp[n-2][k] + 1);
}
}
}
}
cout << max(0LL, *max_element(dp[n-1].begin(), dp[n-1].end())) << '\n';
}
int main()
{
ll n = 1;
cin >> n;
while(n--) solve();
return 0;
}
Idea: simplelife, Preparation: simplelife
Thanks to awesomeguy856 for helping me in writing the editorial for this problem.
Think of some invariants.
Notice that the operation performed is reversible, so if you can convert $$$s$$$ to $$$t$$$ , you can also convert $$$t$$$ to $$$s$$$.
First, consider the tree representations of $$$"("+s+")"$$$ and $$$"("+t+")"$$$. The tree representation of an $$$RBS$$$ can be found with a stack, where each bracket pair is a node, and its parent is the bracket pair at the top of the stack when it is initially added.
The swapping operation becomes the following on a tree: Either
Take two nodes $$$u$$$ and $$$v$$$ and swap the subtrees of a contiguous range of children of $$$u$$$ with the disjoint subtrees of a contiguous range of children of $$$v$$$.
Swap the subtrees of two disjoint contiguous ranges of children of $$$u$$$ for any $$$u$$$.
We observe two invariants in this process:
the number of leaves (since entire subtrees are swapped, no new leaves are created and no leaves are destroyed)
the depth of the highest node $$$u$$$ with multiple children (or the lack thereof). This is because for any node $$$v$$$ in the fixed chain above $$$u$$$, no swaps can be made, as every non-fixed node is in the subtree of the only child of $$$v$$$. Furthermore, $$$u$$$ will always have multiple children: if at any point $$$u$$$ had only $$$1$$$ child, then reversing the operations would turn a fixed node into a non-fixed node, contradicting the previous sentence.
If these two invariants differ between $$$s$$$ and $$$t$$$, we immediately know the answer is NO, and these conditions can be checked easily on a tree.
We claim that if $$$s$$$ and $$$t$$$ have the same values for these invariants, then the answer is YES.
Proof:
Let $$$u$$$ be the highest node with multiple children (i.e. the lowest fixed node). Consider the process of making the number of children of $$$u$$$ equal to the number of leaves. If the number of children of $$$u$$$ is less than the number of leaves, then there must exist a lower node $$$v$$$ that has multiple children. Swap the subtrees of the children of $$$v$$$ with the subtree of any child of $$$u$$$ that is not an ancestor of $$$v$$$. Then the number of children of $$$u$$$ has increased. Continuing this process eventually leads to a tree where $$$u$$$ is the only node with multiple children (a bunch of chains attached to $$$u$$$). Then we make it so that only the leftmost child $$$c$$$ of $$$u$$$ is allowed to have a child. If any other child $$$v$$$ of $$$u$$$ has a child, swap the subtree of $$$v$$$ with the lowest node in the chain attached to $$$u$$$. Then $$$v$$$ is replaced with a leaf. Continuing this process turns every child of $$$u$$$ besides $$$c$$$ into a leaf, as desired.
This process ends with a unique tree based on the two invariants we found, so both $$$s$$$ and $$$t$$$ can reach this tree through some swaps. Since operations are reversible, $$$s$$$ can be transformed into this tree, then be transformed into $$$t$$$. So the answer is YES.
#include<bits/stdc++.h>
using namespace std;
void solve(){
int n;
cin>>n;
string s,t;
cin>>s>>t;
auto count_adjacent_pairs = [](string &s) -> int {
int m=s.size();
int ct=0;
for(int i=0;i<m-1;i++){
if(s[i]=='(' and s[i+1]==')')ct++;
}
return ct;
};
auto count_outer_brackets = [](string &s) -> int {
int m=s.size();
vector<int> match(m,-1);
stack<int> indices;
for(int i=0;i<m;i++){
if(s[i]==')'){
match[indices.top()]=i;
indices.pop();
}
else{
indices.push(i);
}
}
int ct=0;
int l=0,r=m-1;
while(match[l]==r){
l++;
r--;
ct++;
}
return ct;
};
if(count_adjacent_pairs(s)==count_adjacent_pairs(t) and count_outer_brackets(s)==count_outer_brackets(t)){
cout<<"YES\n";
return;
}
cout<<"NO\n";
}
int main(){
int t;
cin>>t;
while(t--){
solve();
}
return 0;
}
2210E - Binary Strings are Simple?
Idea: Argentum47, simplelife, Preparation: ritam1234, __baozii__, Argentum47
Thanks to wuhudsm for buffing this problem.
Consider a binary string of length $$$n$$$. Cyclically rotating the string to the left removes the character from the front and places it at the back. Analyse the change in inversions modulo $$$n$$$ if the front character is '$$$0$$$', and do the same for '$$$1$$$'.
Let $$$c_1 =$$$ number of $$$1$$$'s in the binary string and $$$c_0 =$$$ count of $$$0$$$'s in the binary string.
Lets consider the inversions due to each individual $$$0$$$ and $$$1$$$ in the string. Lets say at index $$$i$$$, we have a $$$1$$$. Then the number of inversions caused by it equals the number of $$$0$$$s in the binary string from index $$$i+1$$$ to $$$n$$$. If instead we have a $$$0$$$ at that index, it contributes $$$0$$$ inversions.
What happens if we cyclically rotate the string? If the first character was a $$$1$$$, it contributed $$$c_0$$$ inversions, which will no longer be there, so the number of inversions decreases by $$$c_0$$$. If the first character was a $$$0$$$, then what we are essentially doing is placing a $$$0$$$ at the back, increasing inversions by $$$c_1$$$ (one inversion is added for each index having a $$$1$$$).
Now lets say the initial number of inversions is $$$x$$$. After $$$1$$$ cyclic rotation, it becomes either $$$x + c_1$$$ or $$$x - c_0$$$, and we add this value to the set. Now observe that $$$x - c_0 \equiv x + n - c_0 \equiv x + c_0 + c_1 - c_0 \equiv x + c_1 \pmod n$$$.
It essentially means that no matter what we do, the number of inversions increases by $$$c_1$$$ modulo $$$n$$$. Therefore the set contains $$$x,\ x + c_1,\ x + 2 \cdot c_1,\ \ldots,\ x + (n-1) \cdot c_1 \pmod n$$$. It can be shown that this sequence contains exactly $$$n/ \gcd (n, c_1)$$$ distinct elements. Hence $$$f(l, r) = n/ \gcd (n, c_1)$$$.
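The formula can be sanity-checked by brute force on small strings, counting distinct inversion values modulo $$$n$$$ over all rotations (this checker is ours, not part of the reference solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of distinct values of (inversion count mod n) over all n cyclic
// left rotations of s; the argument above says this equals n / gcd(n, c1),
// where c1 is the number of '1's in s.
int distinctInversions(string s) {
    int n = s.size();
    set<int> seen;
    for (int rot = 0; rot < n; rot++) {
        int inv = 0;  // count pairs i < j with s[i] = '1', s[j] = '0'
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (s[i] == '1' && s[j] == '0') inv++;
        seen.insert(inv % n);
        rotate(s.begin(), s.begin() + 1, s.end());  // one left rotation
    }
    return seen.size();
}
```

For example, "1100" has $$$n = 4$$$ and $$$c_1 = 2$$$, and indeed its rotations give only $$$4/\gcd(4,2) = 2$$$ distinct values, while "1000" gives $$$4/\gcd(4,1) = 4$$$.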
It isn't really useful to work with $$$\operatorname{gcd}$$$ directly. Try to find some useful properties of $$$\operatorname{gcd}$$$ and work with them.
Let's say you know the $$$i$$$-th character of the string. How do you determine the $$$i+1$$$-th character while minimising the cost of the query?
First, it's obvious to see that a binary string and its complement return the same answer for all queries, so we need at least two guesses. Hence we will first assume the first character of the string, build the string from this first character, and then, if this string is not correct, its complement is definitely correct.
Then we can see that if we query only ranges of even length, we can determine the parity of the number of '$$$1$$$'s in the range (given $$$m$$$ is even, $$$\operatorname{gcd}(k,m)$$$ is odd if $$$k$$$ is odd and even if $$$k$$$ is even).
Thus we can first find the string using the following method ---
- We can start determining all characters from left to right.
- At a position $$$i$$$, we can query the longest prefix which includes the $$$i$$$-th position (basically, if $$$i$$$ is odd, we query $$$(2,i)$$$; else $$$(1,i)$$$).
- Since we already know the characters up to the $$$i-1$$$-th position, we can decide whether the $$$i$$$-th character is '$$$1$$$' or '$$$0$$$' based on the parity of the query result.
However, the cost of the queries is $$$\sum_{i=1}^{n} \frac{n}{i} \approx n\ln(n)$$$, which is greater than $$$3\cdot n$$$ for sufficiently large $$$n$$$.
To reduce the cost, we will have to query from both ends.
To determine the $$$i+1$$$-th character assuming we know the $$$i$$$-th character, we can treat $$$(i,i+1)$$$ as a pair.
For this pair, we can choose the longest prefix or the longest suffix (similar to what we did earlier) and make the two queries which minimise the cost. Since we know the parities of both queries and the $$$i$$$-th character, we can determine the $$$i+1$$$-th character.
We can see the cost of the total queries turn out to be approximately
The bound was set to $$$3\cdot n$$$ to allow some other weaker solutions to pass using the same idea.
There also exists a solution using trees, which uses the same query pattern as mentioned above (we determine the longest even prefix or longest even suffix for each position and take the longer of the two queries).
We make a tree out of all these queries, assigning edge weights as the parities of the queries. Then, assuming the $$$1$$$-st character, we find the rest of the characters using a $$$DFS/BFS$$$.
#include <bits/stdc++.h>
using namespace std;
map<pair<int,int>,int> quers;
int ask(int l,int r){
if(l>r)return 0;
if(quers.count({l,r})){
return quers[{l,r}];
}
cout<<"? "<<l<<" "<<r<<endl;
int x;
cin>>x;
return quers[{l,r}]=((r-l+1)/x)%2;
}
void solve() {
int n;
cin>>n;
string s;
for(int i=0;i<n;i++)s+='0';
quers.clear();
if(n<=6){
int cu;
for(int i=2;i<=n;i++){
cu=ask(i-1,i);
s[i-1]=((s[i-2]-'0')^cu)+'0';
}
cout<<"! "<<s<<endl;
int x;
cin>>x;
if(x==1)return;
for(auto &x:s){
if(x=='0')x='1';
else x='0';
}
cout<<"! "<<s<<endl;
cin>>x;
return;
}
for(int i=2;i<n;i++){
// to guess i
// 1 + (i)%2 i
// 1 + (i)%2 i-2
// i+1 n-(n-i)%2
// i-1 n-(n-i)%2
int quer1=i-(i)%2;
int quer2=n-(n-i)%2-i;
if(quer1>quer2){
int type1=ask(1+(i)%2,i);
int type2=ask(1+(i)%2,i-2);
s[i-1]=((s[i-2]-'0')^type1^type2)+'0';
}
else{
int type1=ask(i+1,n-(n-i)%2);
int type2=ask(i-1,n-(n-i)%2);
s[i-1]=((s[i-2]-'0')^type1^type2)+'0';
}
}
int la=ask(1+(n&1),n);
int ct=0;
for(int i=(n&1);i<n;i++){
ct+=(s[i]=='1');
}
ct&=1;
s[n-1]=(la^ct)+'0';
cout<<"! "<<s<<endl;
int x;
cin>>x;
if(x==1)return;
for(auto &x:s){
if(x=='0')x='1';
else x='0';
}
cout<<"! "<<s<<endl;
cin>>x;
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t; cin >> t;
while (t--)
solve();
return 0;
}
#include <bits/stdc++.h>
using namespace std;
void dfs(int u, int p, vector<vector<pair<int,int>>> &g, vector<int> &pref)
{
for(auto [v,parity]:g[u])
{
if(v==p)continue;
pref[v]=pref[u]^parity;
dfs(v,u,g,pref);
}
}
void solve() {
int n;
cin>>n;
vector<vector<pair<int,int>>> g(n+1);
vector<int> p(n+1);
string s;
for(int i=2;i<=n;i++)
{
int f=0;
if(i%2)f=1;
int l=n;
if((i%2)!=(l%2))l--;
if(i-f>l-i)
{
cout<<"? "<<f+1<<" "<<i<<"\n";
cout.flush();
int x;
cin>>x;
int par=((i-f)/x)%2;
g[i].push_back({f,par});
g[f].push_back({i,par});
}
else
{
cout<<"? "<<i+1<<" "<<l<<"\n";
cout.flush();
int x;
cin>>x;
int par=((l-i)/x)%2;
g[i].push_back({l,par});
g[l].push_back({i,par});
}
}
p[1]=0;
dfs(0,-1,g,p);
dfs(1,-1,g,p);
for(int i=1;i<=n;i++)
{
int r=p[i]^p[i-1];
if(r==0)s+='0';
else s+='1';
}
cout<<"! "<<s<<"\n";
cout.flush();
int w;
cin>>w;
if(w)return;
for(int i=0;i<n;i++)
{
if(s[i]=='0')s[i]='1';
else s[i]='0';
}
cout<<"! "<<s<<"\n";
cout.flush();
cin>>w;
}
int main() {
int t=1; cin >> t;
while(t--)
solve();
return 0;
}
Idea: simplelife, Preparation: __baozii__
Thanks to __baozii__ for solving this problem.
For the simplicity of the editorial, denote $$$\max(b_1,b_2,\ldots,b_i)$$$ as prefix-max($$$i$$$) and $$$\min(b_1,b_2,\ldots,b_i)$$$ as prefix-min($$$i$$$).
You may think that first choosing prefix-max and then choosing prefix-min is always optimal, but is it always correct?
Suppose at an index $$$i$$$ you choose prefix-max($$$i$$$) or prefix-min($$$i$$$); how does it affect the inversions from the left side and from the right side? Analyse all the cases properly.
First we will solve this problem for a single query and then build up the optimal solution step by step.
Suppose we choose a prefix-max($$$i$$$) at index $$$i$$$.
Then,
- It will never create an inversion with the left side of the array ($$$j \lt i$$$).
- It will always create an inversion with any prefix-min on the right side of the array and never creates an inversion with a prefix-max ($$$j \gt i$$$).
Now suppose we choose a prefix-min($$$i$$$) at index $$$i$$$.
Then,
- It will always create an inversion with the left side of the array, except with a prefix-min whose value equals this prefix-min ($$$j \lt i$$$).
- It will always create an inversion with any prefix-min on the right side of the array that is smaller than this prefix-min and never creates an inversion with a prefix-max ($$$j \gt i$$$).
Thus we can see that our choice at an index $$$i$$$ (whether we take prefix-max($$$i$$$) or prefix-min($$$i$$$)) is independent of any index $$$j$$$ where prefix-min($$$j$$$) $$$\neq$$$ prefix-min($$$i$$$).
Thus we can divide the array into blocks having equal prefix-min.
Since the choices in each block are independent of other blocks, we can maximise the number of inversions in each block independently and add the contributions of all blocks to the final answer.
Now for each block, let $$$s$$$ be the length of the array that comes before this block and $$$t$$$ be the length of the current block.
We can easily prove that in this block, first choosing prefix-max and then prefix-min is the optimal choice to maximise the number of inversions.
Let $$$x$$$ ($$$x\leq t$$$) be the number of elements we choose to be prefix-max in this block. Then the number of inversions in this block is given by $$$(s+x)\cdot(t-x)$$$.
This is a downward-opening quadratic, maximised at $$$x=\frac{t-s}{2}$$$. For this block, we can just choose the closest non-negative integer value of $$$x$$$ as the optimal $$$x$$$.
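The per-block optimisation is just a clamped vertex of the quadratic; a minimal sketch (the function name `blockInversions` is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Best number of prefix-max picks x in a block: maximize (s+x)*(t-x)
// over integers 0 <= x <= t. The real maximizer is (t-s)/2, and since
// the quadratic is symmetric around its vertex, clamping the truncated
// vertex into [0, t] is enough.
ll blockInversions(ll s, ll t) {
    ll x = clamp((t - s) / 2, 0LL, t);
    return (s + x) * (t - x);
}
```

For instance, with $$$s=2$$$, $$$t=5$$$ the vertex is at $$$1.5$$$ and both $$$x=1$$$ and $$$x=2$$$ give $$$12$$$ inversions; with $$$t \le s$$$ the clamp yields $$$x=0$$$, matching Case $$$2$$$ below.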
For the single-query version, we can just divide the array into blocks, choose the optimal $$$x$$$ for each block independently, and sum the contributions over all blocks.
Now, to optimise the solution,
We will divide the blocks into two cases.
- Case 1: $$$t-s \gt 0$$$
- Case 2: $$$t-s\leq 0$$$
For blocks in Case $$$2$$$, we can just choose $$$x=0$$$, i.e., all elements in the block are prefix-min.
Now notice that for Case $$$1$$$, $$$t$$$ must be greater than $$$s$$$, i.e., the current block must be larger than the entire array that comes before it. This means that whenever we encounter a block of type $$$1$$$, the array length more than doubles, so Case $$$1$$$ happens at most $$$\log_2(n)$$$ times, where $$$n$$$ is the length of the total array.
Now for implementation details,
We can handle all cases of type $$$1$$$ in $$$O(1)$$$ time.
Now we need an efficient method to jump over blocks of type $$$2$$$.
An efficient way to do this is by using binary lifting.
Consider the following subproblem ---
The current array length is $$$s$$$ and you have to jump over $$$k$$$ blocks of lengths $$$b_1,b_2,\ldots,b_k$$$, all of type $$$2$$$.
Let $$$tot=\sum_{i=1}^{k} b_i$$$.
The number of inversions due to these $$$k$$$ blocks will be ---
We can precompute the square sum.
We also need to store the maximum block length we can jump over in case of type $$$2$$$. This can also be done using binary lifting.
The intended time complexity of the solution is $$$O((n+q)\cdot \log(n))$$$, but a reasonably implemented $$$O(n\cdot \log(n) + q\cdot \log(n)^2)$$$ also passes.
The above can also be implemented using a data structure like a segment tree or Fenwick tree; refer to Dominater069's code.
#include <bits/stdc++.h>
using namespace std;
using i64 = long long;
constexpr int B = 21;
void solve() {
int n, q;
cin >> n >> q;
vector<int> a(n);
for (int i = 0; i < n; i++) cin >> a[i];
vector<i64> next(n, n), st;
for (int i = 0; i < n; i++) {
while (!st.empty() && a[i] < a[st.back()]) {
next[st.back()] = i;
st.pop_back();
}
st.push_back(i);
}
vector<vector<int>> g(n + 1);
for (int i = 0; i < n; i++) g[next[i]].push_back(i);
vector<array<int, B>> up(n + 1), pa(n + 1);
vector<i64> sum1(n + 1), sum2(n + 1);
for (int i = 0; i <= n; i++) up[i].fill(n), pa[i].fill(n);
auto dfs = [&](auto &&self, int u) -> void {
for (auto v : g[u]) {
int d = u - v;
for (int i = 0; i < B; i++) {
up[v][i] = (d >= 1 << i) ? v : up[u][i];
if (i == 0) pa[v][i] = u;
else pa[v][i] = pa[pa[v][i - 1]][i - 1];
}
sum1[v] = sum1[u] + d;
sum2[v] = sum2[u] + 1LL * d * d;
self(self, v);
}
};
dfs(dfs, n);
while (q--) {
i64 l, r;
cin >> l >> r;
l--;
i64 u = min(r, next[l]);
i64 x = (u - l - 1) / 2;
i64 ans = x * (u - l - 1 - x);
i64 t = u - l;
while (u < r) {
int i = (t <= 1) ? 0 : (31 - __builtin_clz(t));
assert(i < B && i >= 0);
i64 v = up[u][i];
if (v > r) {
v = u;
for (int j = B - 1; j >= 0; j--) if (pa[v][j] <= r) v = pa[v][j];
}
i64 d1 = sum1[u] - sum1[v], d2 = sum2[u] - sum2[v];
ans += t * d1 + (d1 * d1 - d2) / 2;
t += d1;
u = v;
if (u < r) {
i64 s = min(r, next[u]) - u;
i64 x = clamp((s - t) / 2, 0LL, s);
ans += -x * x + (s - t) * x + s * t;
t += s;
u = min(r, next[u]);
}
}
cout << ans << "\n";
}
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);
int t;
cin >> t;
while (t--) solve();
}
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
struct FenwickTree{
int n;
vector <int> f;
vector <int> b;
inline void add(int i, int x){
b[i] += x;
for (int j = i; j <= n; j += j & (-j)){
f[j] += x;
}
}
inline void modify(int i, int x){
add(i, x - b[i]);
}
inline void init(int nn, vector <int> a){
n = nn;
if (a.size() == n){
vector <int> a2;
a2.push_back(0);
for (auto x : a) a2.push_back(x);
a = a2;
}
f.resize(n + 1);
b.resize(n + 1);
for (int i = 0; i <= n; i++) f[i] = 0, b[i] = 0;
for (int i = 1; i <= n; i++){
modify(i, a[i]);
}
}
inline int query(int x){
int ans = 0;
for (int i = x; i; i -= i & (-i)){
ans += f[i];
}
return ans;
}
inline int query(int l, int r){
return query(r) - query(l - 1);
}
};
void Solve()
{
// make edges l -> nx[l]
// binary lifting
// cost(i) needs to be computed
// sequence length > prefix => balances at (seq + prefix) / 2
// otherwise balances at seq length * prefix
// the first can happen only log times
// for all active i, need to add (i - l) * val[i]
// fix the sum for i = l and i such that sequence length > prefix
// find first val[i] > current, is it guaranteed to be active?
int n, q; cin >> n >> q;
vector <int> p(n + 1);
for (int i = 1; i <= n; i++){
cin >> p[i];
}
vector<vector<pair<int, int>>> adj(n + 1);
for (int i = 1; i <= q; i++){
int l, r; cin >> l >> r;
adj[l].push_back({r, i});
}
vector <int> nxs(n + 1, n + 1);
{
stack <pair<int, int>> st;
for (int i = 1; i <= n; i++){
while (!st.empty() && st.top().first > p[i]){
nxs[st.top().second] = i;
st.pop();
}
st.push({p[i], i});
}
}
vector<vector<int>> jump(21, vector<int>(n + 2));
for (int i = 1; i <= n + 1; i++){
jump[0][i] = (i > n) ? i : nxs[i];
}
for (int i = 1; i <= 20; i++){
for (int j = 1; j <= n + 1; j++){
jump[i][j] = jump[i - 1][jump[i - 1][j]];
}
}
vector <int> val(n + 1);
for (int i = 1; i <= n; i++){
val[i] = nxs[i] - i;
}
// fenwick for i * val[i] and val[i]
stack <pair<int, int>> st; // stack for active values
FenwickTree f1, f2;
vector <int> vv(n);
f1.init(n, vv); f2.init(n, vv);
for (int i = 1; i <= n; i++){
f1.modify(i, val[i]);
f2.modify(i, val[i] * i);
}
set <int> ss;
vector<vector<int>> at(n + 1);
for (int i = 1; i <= n; i++){
int r = i - val[i];
if (r >= 1){
at[r].push_back(i);
}
}
auto deac = [&](int x){
ss.erase(x);
f1.modify(x, 0);
f2.modify(x, 0);
};
vector <int> ans(q + 1);
for (int l = n; l >= 1; l--){
while (!st.empty() && st.top().first > p[l]){
deac(st.top().second);
st.pop();
}
st.push({p[l], l});
ss.insert(l);
for (auto x : at[l]){
ss.erase(x);
}
for (auto [r, i] : adj[l]){
ans[i] += f2.query(l, r) - l * f1.query(l, r);
auto calc = [&](int x){
// cout << "UPD " << x << "\n";
int vv = min(val[x], r - x + 1);
if (x == l) vv--;
if (vv > x - l){
vv += (x - l);
ans[i] += (vv / 2) * ((vv + 1) / 2);
} else {
ans[i] += vv * (x - l);
}
};
int pos = l;
// while (pos <= r){
// if (nxs[pos] > r && val[pos] <= (pos - l)){
// ans[i] -= val[pos] * (pos - l);
// calc(pos);
// }
// pos = nxs[pos];
// }
for (int j = 20; j >= 0; j--){
if (jump[j][pos] <= r){
pos = jump[j][pos];
}
}
assert(nxs[pos] > r);
if (!ss.count(pos)){
ans[i] -= val[pos] * (pos - l);
calc(pos);
}
for (auto x : ss){
if (x <= r){
ans[i] -= val[x] * (x - l);
calc(x);
} else {
break;
}
}
}
}
for (int i = 1; i <= q; i++){
cout << ans[i] << "\n";
}
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}









Quick Editorial, Thanks!
For F. A Simple Problem, the editorial says
I'd like to provide a counter example.
arr = $$$[5, 4, 1, 2, 3, 6]$$$ max = $$$[5, 5, 5, 5, 5, 6]$$$ min = $$$[5, 4, 1, 1, 1, 1]$$$ opt = $$$[5, 4, 5, 1, 1, 1]$$$
The optimal choice leads to 10 inversions, while all the combinations of prefix maxima followed by prefix minima yield only up to 9 inversions.
I made a video editorial for the key idea behind problem F. A Simple Problem (minus the optimization part).
The editorial already has a good number of hints. But if you're interested, here's an alternate set of hints for each problem on CF Step
C2 has too big of a jump in difficulty
ok wow, thanks for superfast editorial
I had hard time in this contest because of C2 and D ...
for the whole 1hr i thought of a dp solution of the problem B, i even got the recurrence relation for that but that would have taken O(N ^ 2), later I realized it was just greedy after 2hrs :(
btw C1 was easier than B, at least for my dumb-ass
In B, you can actually calculate the points "if the game ends at chair $$$k$$$" for every $$$k$$$ (and you don't need to prove the greedy).
The main difficulty is calculating the count of $$$p_i \gt k$$$ where $$$i \lt k$$$. It is possible with an idea like a difference array. (submission: 368708366)
You can also use an ordered/indexed set
In B I have a very simple solution: for every i from 1 to n, once we arrive at a chair, we immediately sit on it, mark chair a[i], and do cnt += 1. We can use an array like
int mark[n+1]; to mark it. Then, if chair i was already marked before, we regret what we did earlier: we assume we didn't sit on the chair that marked chair i. We don't need to know which chair that was; we just do cnt -= 1 and go to i+1. So the code will look like this:
You can notice that cnt never decreases over the process (each iteration changes it by 0 or +1), so the correctness is trivial.
The gap between C1 and C2 ↓
not that small
fr
True
Good contest
wtf!! I submitted C2 13 sec before the end of the contest and after 13 sec it shows you can not submit now..
1 is not a prime number
It is just there in the prime array to handle the special case where the value is unchanged, basically for ease of implementation
Argentum47 Why does not using primes also get me accepted? Submission: 368781155
Check the part where I am making the poss (possible) choices by just brute-force checking some numbers from p = 1 to b[i].
I think what you are doing is kind of collecting a small "valid" set of numbers and running DP on them. I think it works since the valid set of numbers will itself contain a lot of prime numbers and their multiples, which cover all the cases.
There might be a deeper reason behind why it works, but it kind of involves the solution/observation for additional challenge part, which I kind of want others to figure out on their own.
Hydrogen Bomb vs Dying Ant ahhh c1-c2. Regardless, very cool problems.
Congrats to the authors for a flawless contest o/ I believe C2 being a bit harder than expected is not that much of an issue ? Like, there was still C1 before. Also, D is a very cute problem.
C2 is a L problem btw
I think the tests for C2 are weak. My submission got AC but it's wrong: https://mirror.codeforces.com/contest/2210/submission/368766437
For this test case:
1
4
1 12 2 2
1 1 6 1
My code outputs 1 which is wrong (correct answer is 0).
Ste Yeah, maybe this is the reason why not using primes also passes.
Using primes : 368758454 ... This one gives 0 for your input though !!
Not using primes : 368781155 ... This one also gives 0 for your input !!
2nd one shows Accepted don't know why!
Very Good Contest by the authors. C2 and D could have been swapped in order because of the difficulty of C2.
How do I check whether the contest has a hacking phase or not?
C2 was great!
name checks out
For the additional challenge on C2, my code should run in $$$O(nL\log A)$$$ (and I think it's trivially optimizable to $$$O(n(L + \log A))$$$ if needed) which suffices to pass with $$$n = 10^6$$$; it passes the current tests in 93ms.
368744412
Your solution is the original intended solution for this problem :)
C2's editorial solution is TLE
For C2 my solution should work in O(n * L * log(max(a))), which is sufficient to pass the additional challenge. I considered L = 26, but that can be reduced.
Link :- https://mirror.codeforces.com/contest/2210/submission/368751072
There is no button to rate the problem but B is a really good one! I think this idea could have been a foundation of a much more difficult problem, if the setting was a bit different. Nevertheless, I've learned some new idea (exchange principle to prove the function is optimal at the border point), so I'm happy!
Exchange argument, are you saying?
Or something new? Would you mind sharing?
The smart solution says that if the game ends at index i, then in the prefix 1 to i-1 you can only take chairs with p[j] <= j || p[j] >= i, for 1 <= j < i (notice this solves arbitrary arrays p, not only permutations).
But we can actually show that just counting how many j there are in the whole array with p[j] <= j is enough for a permutation.
Why? For any j with p[j] >= i: because p is a permutation, there exists k_1 with p[k_1] = j. If k_1 >= i we stop; otherwise we continue from k_1 to get k_2, and so on. Eventually we get k_{t-1} < i and k_t >= i (notice k_0 = j).
Therefore we can match j with this k_t, because p[k_t] = k_{t-1} < k_t.
Notice that because p is a permutation, the functional graph it describes consists of paths and cycles, so this gives a one-to-one matching.
Yeah the same thing i did
B editorial:
The maximum number of chairs which you can visit if your game ends at chair x is given by |i:1≤i<k,pi≤i or pi≥k|. I believe it should be "chair k" instead of "chair x"
Thanks for pointing that out. We have corrected it :)
in c2, how does one get the intuition for considering only 20 primes? Is this an advanced technique?
Not an advanced thing. The maximum number of distinct primes that a number less than or equal to 10^9 can have is 9; you can test it by multiplying 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 = 223092870, which is less than 10^9, while including the next prime, 29, exceeds it.
So for k1 = a[i-1] / x we have at most 9,
and for k2 = a[i+1] / y we have at most 9.
So 18 distinct primes in total. Now suppose all 18 are unusable, meaning each divides k1 or k2; then you need 2 more distinct primes, so 20 is the number you arrive at. Although I tested some codes, and for passing the problem you don't need 20; even 16 works, possibly because you never reach the theoretical worst case, as it depends on the gcd and its neighbours. Also the test cases are weak in this problem, as pointed out above by a user, so that can also be a reason. This is more of an observation than an advanced technique or proof.
From C1 you get that there are these annoying points where you need to multiply but cannot ruin the gcd, so of course you choose a prime. Then you start to think about when multiplying by p is bad, and the answer is: when the previous or the next number has p as a divisor more times than you do. But a number can only have a small number of prime divisors, and that's the intuition.
Yeah I understood that logic(thanks for your comment, though).
However, the realization that you only need to consider 20 primes for the DP is what I was struggling with. The commenter above has given good reasoning, but it seems to me that even thinking of doing such calculations (that the product of the first ten primes exceeds 10^9) requires some experience.
a moment of silence for anyone with gcdmode in their C2 solution...
I had a feeling C2 had something to do with primes, but it made too little sense, as if I was "skipping" over a lot of possible solutions. The first three were good, but the gap between C1 and C2 and then the later ones was too much in my opinion; maybe a harder C1 could've made it more even? In any case, good job to the problemsetters and contest organizers for another good contest and fast editorial.
Let me rephrase the solution of B. Hopefully this is easier to understand.
It's clear that you can always sit on chair $$$i$$$ if $$$i \ge p_i$$$. In other words, sitting on chair $$$i$$$ makes some other chair unavailable only if $$$i \lt p_i$$$.
Let's think of a directed graph $$$G = (V, E)$$$, where $$$V = \{ 1, 2, \ldots, n \}$$$ and $$$E = \{ (i, p_i) \; | \; i \in V, i \lt p_i \}$$$. Since $$$p$$$ is a permutation, every vertex of $$$G$$$ has at most one indegree and at most one outdegree. Thus, the graph $$$G$$$ is split to several components, where each component has the following form:
where $$$c$$$ is the size of the component $$$(c \ge 1)$$$.
Out of the chairs in one component, you can sit on only one chair (let's call it $$$i_k$$$), because sitting on $$$i_k$$$ marks the next chair $$$i_{k+1}$$$ in the chain, which makes the rest of the chain unavailable.
For each component, it's always advantageous to pick the last chair in the sequence $$$i_c$$$, because that's the only chair that won't make other chairs unavailable.
Therefore, the answer to the problem is the number of components, which is equal to the number of chairs satisfying $$$i \ge p_i$$$.
nice explanation !
An interesting solution for C2 using a cap of 2 primes: I observed that if an element has 3 different usable primes then it can always be handled, so I did a DP over the indexes that had fewer than 3. 368804443
One can also approach B by considering the visiting order from right to left, and seeing whether it makes sense to sit on this chair or not.
It always makes sense to sit on the last chair. Now, sitting on some ith chair would disqualify you from sitting on every chair at and after p[i], including the last one. So the chair you gain by sitting on the ith chair is offset by the last chair that you lose. Hence it only makes sense to sit on the ith chair when p[i] <= i.
okay maybe this wasn't my first pupil contest but hey im a pupil now :D
Why did I get TLE when following the solution approach for Problem C2???
I think the bottleneck was your custom gcd implementation, I removed the gcd implementation in your code and used inbuilt std::gcd and now it passes easily. I think it is because the inbuilt implementation is very optimized, while your implementation took a lot more time. https://mirror.codeforces.com/contest/2210/submission/36882423 (I am not sure if you can access it, but just submit your implementation after removing the gcd function)
I couldn't open your submission, but I followed your method and changed my own gcd function to the inbuilt std::gcd. However, it still TLEs just like before. Why is that?
Try using std::gcd instead of __gcd, that should pass. I still don't understand why is there a crazy difference between both implementations (I thought __gcd is faster but here normal std::gcd is working a lot better)
What's the proof of C1? Say we can change a(i-1) and a(i+1), and we change ai to LCM(g1, g2). Why is the gcd of the new a(i-1) and the LCM still g1, and the gcd of the new a(i+1) and the LCM still g2?
try prime factorizing and see
Given 4 numbers $$$a_1, a_2, a_3, a_4$$$.
Let $$$g_1 = \gcd(a_1, a_2)$$$, $$$g_2 = \gcd(a_2, a_3)$$$, and $$$g_3 = \gcd(a_3, a_4)$$$.
If we set $$$a_2^\prime = \text{lcm}(g_1, g_2)$$$ and $$$a_3^\prime = \text{lcm}(g_2, g_3)$$$, we need to prove that $$$\gcd(a_2^\prime, a_3^\prime) = g_2$$$.
Hence, $$$\gcd(a_2^\prime, a_3^\prime) = g_2$$$, as required.
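One way to justify this step is via $$$p$$$-adic valuations (my own reconstruction, possibly not the original argument):

```latex
\text{For each prime } p, \text{ let } x_j = v_p(g_j). \text{ Then} \\
v_p(a_2') = \max(x_1, x_2), \qquad v_p(a_3') = \max(x_2, x_3), \\
\Rightarrow \; v_p\bigl(\gcd(a_2', a_3')\bigr) = \min\bigl(\max(x_1, x_2), \max(x_2, x_3)\bigr).
```

This minimum can differ from $$$x_2$$$ only if both $$$x_1 \gt x_2$$$ and $$$x_3 \gt x_2$$$. But $$$x_2 = \min(v_p(a_2), v_p(a_3))$$$, and $$$g_1 \mid a_2$$$ gives $$$x_1 \le v_p(a_2)$$$; so $$$x_1 \gt x_2$$$ forces $$$x_2 = v_p(a_3)$$$, and then $$$g_3 \mid a_3$$$ gives $$$x_3 \le v_p(a_3) = x_2$$$, a contradiction. Hence $$$v_p(\gcd(a_2^\prime, a_3^\prime)) = x_2$$$ for every prime $$$p$$$, i.e. $$$\gcd(a_2^\prime, a_3^\prime) = g_2$$$.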
Thanks for the perfect proof, but I really doubt whether most people actually proved this, as it was not very easy to prove.
I really like the tree-invariance solution for D, great problem!
We can use DP in problem B: dp[i][0] = max answer if we don't pick i, and dp[i][1] = max answer if we pick i.
A faster (time-complexity-wise) and simpler (at least in my opinion) solution for C2 is to precompute which primes you can multiply A[i] by before doing the DP. This precomputation is only O(N*P), where P is the number of primes; in my case, even 17 primes were enough to get AC. Then the DP becomes much easier: dp[i][j] is the maximum number of A[x]s (0 <= x <= i) you can multiply by a prime number without collisions, given that you multiply A[i] by the jth prime (you also have to include the case where you don't multiply A[i] by anything). This is at most O(N * P^2). The answer is then the maximum of dp[N-1][j] over all j, plus the number of A[i]s that you could originally change without multiplying by a prime. If N <= 10^6, this would still run under the 6s time limit (P <= 17, although I believe it can be lower).
C2 can actually be solved with only 10 primes.
Let's solve C1 first, and denote indexes where the lcm equals a_i and 2 <= b_i/lcm < 31 as hard points (if b_i/lcm = 1 then there is nothing we can do; if it's bigger than 31 we can always solve it; notice 31 is the 11th prime).
Now let's look at contiguous blocks of these hard indexes. As the editorial says, we do dp[i][j]: solved for the ith prefix, where we put the jth prime at index i. We can only put at i a prime that i-1 didn't use, and then put another prime at i-1. This is all checked just like in the editorial, only without the i+1 parts, so I wrote it less formally.
So we can make the solution run in O(n*100*log(maxA)).
This is not true; having too few primes will fail test 7. In particular, if you have $$$a = [625045033, 1, 983752770]$$$ then you need to multiply the $$$1$$$ by a prime at least $$$53$$$.
You are more than welcome to look at my submission, IDK how to add a link and it passed.
Secondly, notice that at step i I only keep gcd(a_i, a_{i-1}); therefore only the first 10 primes are needed and not the first 20, because b_i is smaller than 1e9, so there is no point in checking more than that.
Because if we have 10 primes for each element, a neighbour can only possess 9 of them, therefore there must be one we can use.
In your example the array transforms to 1,1,1 so it's not really interesting.
Edit: my previous comment was wrong, here's a correction.
Consider the array $$$a = [625045033, 625045033, 1, 983752770, 983752770]$$$. Now you cannot reduce any of the elements. The third element requires a prime at least $$$53$$$.
I see what your code is doing now; you just got lucky with the weak tests (several other comments also noted the weak tests). Indeed, your code is hackable with the following case:
where your code outputs $$$1$$$ but the desired answer is $$$0$$$.
OH yeah I get it now, I was wrong in my analysis.
Well, good thing I got lucky :) but I think if I had gotten WA in contest I would have just raised the number of primes anyway; still, it's nice not to get WA.
But I think I do have a solution in n*log(max(A)); actually it's not_amir's, after a discord call.
We keep for each element all of the primes it is allowed to use (up to 20 this time), and we go from left to right over these blocks. If our set is not empty, we check the previous element: if its set is empty we can do anything, so we add 1; if its set has size bigger than 1 we can also do anything, because whatever we do, it can do the one we didn't; if its size is exactly 1 then of course we prefer to satisfy it first, so we remove its prime from our options (if we even had it).
I would like to get your comment on whether you think that works, simplelife, but I am pretty sure it's better.
Sorry , I am not able to really understand what you mean (perhaps you can clarify better), but it seems you are on the right track.
Ideally this solution shouldn't pass; the reason it passes is weak test cases :(. In test cases 7, 8, 9, meant for catching solutions that use fewer primes, b_i is set to 1e9, which indirectly allows this incorrect solution to pass. I didn't consider this specific case while making the test cases; it was an oversight on my part, sorry for that.
Consider the test case, as mentioned by reirugan, $$$a=[625045033,625045033,1,983752770,983752770]$$$ and $$$b = [1,1,32,1,1]$$$. Your solution should give WA here
Upd: I didn't notice reirugan had posted this already, sorry for the repost.
Here, in this code block for Problem C2 in the editorial,
why is it safe to skip
"else dp[i][j] = max(dp[i][j], dp[i-1][k])"
after
"if (gcd(val1, c[i+1]) == gcd(a[i], a[i+1])) dp[i][j] = max(dp[i][j], dp[i-1][k] + 1)"?
I thought of the following for an O(n(L + log(max(A)))) solution of C2...
So we have our C[i], where A[i] must be some x times C[i]; let that x be K[i], so A[i] = C[i]K[i]. Let the original K[i] be ak[i]. I then define D[i] = B[i]/C[i], which is the upper bound of K[i]. Now, just like in the editorial, if D[i] != 0, or equivalently B[i] >= C[i], then immediately set K[i] = 1, intuitively to let each value be as unrestrictive as possible to the adjacent elements affected by it.
We now just need to change the indices where ak[i] = 1 and D[i] != 0. For this, we have the following constraints:
Where those A[i-1] and A[i+1] are the possibly-modified A[i]s based on K[i]s. As in if we set K[i] = 1, then that's what we use in those constraints. We let that value on the last constraint be F[i], thus the last constraint becomes gcd(K[i], F[i]) = 1.
Wait, I genuinely forgot why C[i-1] and C[i+1] are enough there, rather than C[x]K[x]: what if C[x]K[x] for x = i-1 or x = i+1 isn't modified to become C[x]? Then its contribution must be higher than C[x]. Maybe weak TCs, or I'm wrong, but just change it to C[x]K[x].
We consider these constraints for all indices at once without yet modifying anything. By doing this, we can consider the set choice[i] with nchoice[i] elements to be all prime numbers <= D[i] satisfying the constraints above, which are all of the possible choices of K[i] with the absolute least possible constraints it could ever have which is one of the key things in here. We only need to consider three of such values, because the only thing constraining K[i] are two values, K[i-1] and K[i+1], and since we can simply choose 1 prime, the constraint is simply a not-equal-to constraint. So three values is enough and is used when the left and right values are within i's set.
Now, consider a subarray where we are able to change everything on that subarray, in other words that subarray satisfies D[i] != 0, and ak[i] = 1. Then, if nchoice[i] is at least two for all elements of that subarray, there definitely exists a way to set every K[i] in there where adjacent elements are distinct. I'm just trying to say that in that case it wouldn't even be three anymore.
So the idea here is we simply set all indices where nchoice[i] is precisely 1. Of course, we absolutely must set that value to choice[i][1], so we do that, but then we propagate left and right to remove choice[i][1] from the left and the right, and if their nchoice[i] becomes 1, do the same there and continue propagating. Now we're left with nchoice[i] >= 2 for all indices that we can and need to change, or also of course nchoice[i] = 0 but that is within the set of indices that we can't change there. Anyways for the remaining indices i where nchoice[i] >= 2 it's already guaranteed that we can set all of those a[i] to the value we want so we let them be what they are.
Used the following way:
368881815
I think this will pass the additional challenge, no? I mean, the submission time is 62ms with n = 5e4, and 1e6/5e4 * 62ms should be well under 6s.
Your logic and solution look correct to me, although DFS seems a bit of an overkill; I think you can implement this in a single linear pass. But yeah, great job :)
In problem C1, if I get an array:
a = [1, 2, 1, 2, 1], b = [3, 3, 3, 3, 3], I can only reduce i=1 and i=3.
But can't I also change the array completely to
a' = [2, 1, 2, 1, 2]? The GCD of every subarray (l < r) would stay the same.
My bad, I misread the question. I didn't read the bold part, where bi = ai. I was actually solving for C2 :(
I didn't get something in C2. In the first hint you say that if you can make the value less than a[i] then do it, but what if b[i] is less than the value you are trying to set?
B. Simply Sitting on Chairs
A better solution for problem B is:
#include <bits/stdc++.h>
using namespace std;
The logic is that we only need to sit on the good chairs (where pi is less than or equal to the actual position of the chair).
There is no need for DP in C2. The solution could be much simpler.
We know that it is always better to decrease $$$a_i$$$ whenever possible. Notice how decreasing any $$$a_i$$$ depends only on $$$\gcd(a_i, a_{i \pm 1})$$$ which should stay constant. Thus, we can start by applying all possible decreases and then moving on to possible increases, this is also important because increasing any $$$a_i$$$ doesn't only depend on the $$$\gcd$$$ but on adjacent values too, i.e., it is optimal when $$$a_{i \pm 1}$$$ is minimal, so decreasing first is crucial.
When increasing, we are now sure that $$$a_i = LCM(A, B)$$$. My claim is that if there are at least $$$2$$$ primes we can multiply $$$a_i$$$ by, then it is always possible to increase $$$a_i$$$ independent of other elements in the array. It is not possible when we have two adjacent elements that can only be multiplied by the same prime (let's call such an element stinky), i.e., multiplying one of them by that prime disallows us from multiplying the other by that same prime because they are adjacent.
Basically, we can iterate through the indices from left to right and do $$$a_i := a_i \cdot onlyPrime$$$ whenever we have a stinky element, so that this prime is no longer a choice in the next iteration. Since we only need to check the first $$$2$$$ primes that work, we can just use the $$$O(\sqrt n)$$$ version of the $$$\text{check_prime}$$$ algorithm.
Code: 370020187
Could someone please help me with C1. For [a, b, c], my reasoning is: since gcd(a, b) and gcd(b, c) must each be <= b, if neither gcd(a, b) nor gcd(b, c) equals b then I can decrease b. To check this I simply need to check whether b divides a or c; if it divides neither, then I can surely decrease b. Handling the boundaries, I should be able to check every i from 1 to n-2 to see whether it can be decreased.
Stupid solution for E that has a total query cost of about $$$2.38 \cdot n$$$ (for non-trivially large $$$n$$$): 371644325
Define $$$g(l, r)$$$ to be the parity of $$$((r - l + 1)/f(l, r))$$$. One can notice that for even $$$(r - l + 1)$$$, $$$g(l, r)$$$ is equal to the parity of the number of ones in $$$s[l, r]$$$.
Since the query order here depends solely on $$$n$$$ (and not $$$s$$$), one can easily check the query cost on all values of $$$n$$$ offline.
As it turns out, the step with $$$m$$$ is actually unnecessary. You can just run the low-IQ procedure and then unite $$$(1, 2)$$$ to get a worst case query cost of $$$2.6 \cdot n$$$: 371648816.
This used to be the intended solution, before we came up with the current solution given in the editorial.
We didn't reduce the cost limit to kill this solution because we thought the limit would be too tight, and it was not worth doing.