A — Indirect Sort
Authors: mejiamejia, Ecrade_
We claim that we can sort the array if and only if $$$a_1 = 1$$$.
Necessity
We can notice that index $$$1$$$ cannot be affected by any swap operation.
Now let's see what happens to the value $$$1$$$. According to the definition of the operation, it can either be increased or be swapped. In order to be increased, there must exist some $$$k$$$ such that $$$1 > a_k$$$, but since $$$1$$$ is the minimum possible value, this can never hold, as the other values in $$$a$$$ can only increase as well. So if $$$a_1 \neq 1$$$, then $$$a_1 > 1$$$ stays true forever: index $$$1$$$ cannot be affected by a swap operation and the value $$$1$$$ can never reach it, hence the answer is No.
Sufficiency
Let's focus on the second operation. Since we have $$$a_1 = 1$$$, we can always choose $$$i=1$$$ and the operation then turns into picking some pair $$$2 \le j < k \le n$$$ and swapping $$$a_j$$$ with $$$a_k$$$. It's trivial to see we can always sort with such an operation.
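A minimal sketch of the resulting check, assuming the task only asks whether the array can be sorted:

```cpp
// Hedged sketch: per the argument above, the array is sortable iff a_1 == 1.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n);
        for (auto &x : a) cin >> x;
        cout << (a[0] == 1 ? "YES" : "NO") << "\n";
    }
}
```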
B — Maximum Substring
Author: tibinyte
To maximize $$$x \cdot y$$$, it is best to take the whole string, since it contains the maximum possible number of $$$0$$$s and $$$1$$$s.
For the $$$x \cdot x$$$ and $$$y \cdot y$$$ cases, we need the maximum number of consecutive equal characters ($$$0$$$s or $$$1$$$s); we compare its square with the first candidate and take the larger of the two as the answer.
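A short sketch of this, assuming the value of a substring is $$$x \cdot y$$$ when it contains both characters and the square of its length when all of its characters are equal:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        string s;
        cin >> n >> s;
        long long ones = count(s.begin(), s.end(), '1');
        long long zeros = n - ones;
        // Longest block of equal characters, for the x*x / y*y candidates.
        long long run = 1, best_run = 1;
        for (int i = 1; i < n; i++) {
            run = (s[i] == s[i - 1]) ? run + 1 : 1;
            best_run = max(best_run, run);
        }
        cout << max(zeros * ones, best_run * best_run) << "\n";
    }
}
```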
C — Complementary XOR
Author: tibinyte
For each operation, the set of positions changed in sequence $$$a$$$ and the set changed in sequence $$$b$$$ are complementary, so every operation flips the indicator $$$[a_i=b_i]$$$ at every position simultaneously. Hence all indicators $$$[a_i=b_i]$$$ must be equal at the beginning; if they are not, there is no solution.
Now, if $$$a = \neg b$$$, one operation on the whole range $$$(1,n)$$$ flips all of $$$a$$$ and leaves $$$b$$$ unchanged, giving $$$a=b$$$. Now suppose $$$a_i=b_i=1$$$ for some $$$i$$$ and try to make $$$a_i=b_i=0$$$ without changing anything else. If $$$i>1$$$, this is very simple: we just do an operation on $$$(1,i)$$$ and an operation on $$$(1,i-1)$$$. If $$$i=1$$$ we can use $$$(1,n)$$$ and $$$(2,n)$$$. Since $$$n>1$$$, this can always be done, thus we found a solution using $$$2 \cdot n + O(1)$$$ operations. To optimize it to only $$$n + O(1)$$$ operations, note that we only care about the parity of the number of operations performed at each index.
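As a sketch of one possible construction within the $$$n+5$$$ operation limit (not the authors' parity-compressed one, but the variant discussed in the comments below: clear $$$a$$$ with single-position operations, then flip all of $$$b$$$ with three extra operations if needed):

```cpp
// Hedged sketch. Operation (l, r): flip a[l..r] and flip b outside [l, r].
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        string a, b;
        cin >> n >> a >> b;
        // All indicators [a_i == b_i] must agree, otherwise it is impossible.
        bool ok = true;
        for (int i = 1; i < n; i++)
            if ((a[i] == b[i]) != (a[0] == b[0])) ok = false;
        if (!ok) { cout << "NO\n"; continue; }
        // One single-position operation per '1' in a clears a completely.
        vector<pair<int, int>> ops;
        int cnt = 0;
        for (int i = 0; i < n; i++)
            if (a[i] == '1') { ops.push_back({i + 1, i + 1}); cnt++; }
        // Each op (j, j) flips b everywhere except position j. By the invariant
        // b is now uniform, so checking one position suffices.
        int b1 = (b[0] - '0') ^ ((cnt - (a[0] == '1')) & 1);
        if (b1 == 1) {                 // flip all of b, keeping a at zero (n >= 2)
            ops.push_back({1, 1});
            ops.push_back({2, 2});
            ops.push_back({1, 2});
        }
        cout << "YES\n" << ops.size() << "\n";
        for (auto [l, r] : ops) cout << l << ' ' << r << "\n";
    }
}
```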
D — Count GCD
Author: tibinyte
We can notice that if for some $$$2 \le i \le n$$$, $$$a_{i-1}$$$ is not divisible by $$$a_{i}$$$, then the answer is $$$0$$$. Otherwise, every $$$a_i$$$ (and hence every quotient $$$\frac{a_{i-1}}{a_i}$$$) divides $$$a_1$$$, so all of their prime factors are among the prime factors of $$$a_1$$$. Thus after factorizing $$$a_1$$$ we can quickly factorize every other value.
Now let's find the number of ways we can select $$$b_i$$$ for each $$$i$$$. The answer is the product of these counts, since the positions are independent. There is only one way to select $$$b_1$$$, namely $$$a_1$$$. For $$$i > 1$$$ we need the number of values $$$x$$$ such that $$$\gcd(a_{i-1},x)=a_i$$$. Since $$$a_i$$$ must divide $$$x$$$, write $$$x=a_i \cdot k$$$; the condition becomes $$$\gcd( \frac{a_{i-1}}{a_i} \cdot a_i, a_i \cdot k) = a_i$$$, which is equivalent to $$$\gcd( \frac{a_{i-1}}{a_i}, k) = 1$$$. We have $$$k \le \frac{m}{a_i}$$$, so the task reduces to a simple inclusion-exclusion count.
Time complexity: $$$O(n + 2^9 \cdot 9 \cdot \log m + \sqrt{m})$$$ per test case, since the quotient $$$\frac{a_{i-1}}{a_i}$$$ is greater than $$$1$$$ for at most $$$\log_2 m$$$ indices.
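A rough sketch of this counting (assuming the usual reading of the problem: count arrays $$$b$$$ with $$$1 \le b_i \le m$$$ whose prefix gcds are exactly $$$a$$$, modulo $$$998244353$$$); the primes of $$$a_1$$$ are computed once and reused for every quotient:

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 998244353;

int main() {
    int t;
    cin >> t;
    while (t--) {
        long long n, m;
        cin >> n >> m;
        vector<long long> a(n);
        for (auto &x : a) cin >> x;
        // Primes of a_1: every quotient a_{i-1}/a_i divides a_1.
        vector<long long> primes;
        long long v = a[0];
        for (long long p = 2; p * p <= v; p++)
            if (v % p == 0) { primes.push_back(p); while (v % p == 0) v /= p; }
        if (v > 1) primes.push_back(v);

        long long ans = 1;                       // b_1 must equal a_1 (assumed <= m)
        for (int i = 1; i < n && ans; i++) {
            if (a[i - 1] % a[i] != 0) { ans = 0; break; }
            long long q = a[i - 1] / a[i], lim = m / a[i];
            // Distinct primes of the quotient q.
            vector<long long> ps;
            for (long long p : primes)
                if (q % p == 0) { ps.push_back(p); while (q % p == 0) q /= p; }
            // Inclusion-exclusion: how many k in [1, lim] are coprime to q.
            long long cnt = 0;
            for (int mask = 0; mask < (1 << ps.size()); mask++) {
                long long prod = 1;
                for (int j = 0; j < (int)ps.size(); j++)
                    if (mask >> j & 1) prod *= ps[j];
                cnt += (__builtin_popcount(mask) % 2 ? -1 : 1) * (lim / prod);
            }
            ans = ans * (cnt % MOD) % MOD;
        }
        cout << ans << "\n";
    }
}
```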
E — Bracket Cost
Author: tibinyte
Let $$$a_i=1$$$ if $$$s_i=($$$, $$$a_i=-1$$$ if $$$s_i=)$$$, and let $$$b$$$ be the prefix sum array of $$$a$$$ (with $$$b_0=0$$$).
Theorem: the cost of $$$s[l+1,r]$$$ is $$$\max(b_l,b_r)-\min(b_l,b_{l+1},\ldots,b_r)$$$.
Necessity: after one operation, $$$\max(b_l,b_r)-\min(b_l,b_{l+1},\ldots,b_r)$$$ decreases by at most one.
Sufficiency: if $$$b_l<b_r$$$, we can do operation 2 and add a right bracket at the end of the string.
If $$$b_l>b_r$$$, we can do operation 2 and add a left bracket at the beginning of the string.
If $$$b_l=b_r$$$ (and the cost is not already $$$0$$$), let $$$x$$$ be the largest index such that $$$b_x=\min(b_l,b_{l+1},\ldots,b_r)$$$; then $$$s_{x+1}=($$$, so we can do operation 1 and cyclically shift $$$s[l+1,x+1]$$$ to the right.
In each case, $$$\max(b_l,b_r)-\min(b_l,b_{l+1},\ldots,b_r)$$$ decreases by exactly one after the operation.
We can use a binary indexed tree to compute the sums of $$$\max(b_l,b_r)$$$ over all pairs and of $$$\min(b_l,b_{l+1},\ldots,b_r)$$$ over all ranges.
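As a hedged illustration of this last step, the two sums can also be computed without a BIT: sorting the prefix balances handles the $$$\max$$$ part, and the standard monotonic-stack contribution technique handles the $$$\min$$$ part (the same decomposition is described in a comment below). A sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        string s;
        cin >> n >> s;
        // Prefix balances b_0..b_n.
        vector<long long> b(n + 1, 0);
        for (int i = 0; i < n; i++) b[i + 1] = b[i] + (s[i] == '(' ? 1 : -1);

        // Sum of max(b_l, b_r) over all pairs l < r: after sorting, the i-th
        // smallest value is the maximum of exactly i pairs.
        vector<long long> c = b;
        sort(c.begin(), c.end());
        long long sumMax = 0;
        for (int i = 0; i <= n; i++) sumMax += c[i] * i;

        // Sum of min(b_l..b_r) over all ranges (including one-element ranges),
        // by contribution: b[i] is the minimum of (i - lft[i]) * (rgt[i] - i)
        // ranges, where lft[i] is the last index to the left with a strictly
        // smaller value and rgt[i] the first index to the right with a value
        // <= b[i] (the strict/non-strict mix avoids double counting ties).
        vector<int> lft(n + 1), rgt(n + 1), stk;
        for (int i = 0; i <= n; i++) {
            while (!stk.empty() && b[stk.back()] >= b[i]) stk.pop_back();
            lft[i] = stk.empty() ? -1 : stk.back();
            stk.push_back(i);
        }
        stk.clear();
        for (int i = n; i >= 0; i--) {
            while (!stk.empty() && b[stk.back()] > b[i]) stk.pop_back();
            rgt[i] = stk.empty() ? n + 1 : stk.back();
            stk.push_back(i);
        }
        long long sumMin = 0, singles = 0;
        for (int i = 0; i <= n; i++) {
            sumMin += b[i] * (i - lft[i]) * (rgt[i] - i);
            singles += b[i];
        }
        // One-element ranges appear in sumMin but not in the answer's min part.
        cout << sumMax - (sumMin - singles) << "\n";
    }
}
```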
F — Majority
Author: tibinyte
First off, let's try to reduce the given operation to a simpler form. We claim that if it is possible to make the string $$$111...111$$$ using the specified operation, then we can also make it $$$111...111$$$ using the following new operation: select two indices $$$(i,j)$$$ such that the substring $$$s_{i...j}$$$ looks like $$$111..100...001...111$$$ and make it all ones. The proof amounts to showing that if we can perform an operation on $$$(i,j)$$$, then there must exist a substring of $$$s_{i...j}$$$ satisfying the property required by the new operation.
Let $$$dp_{i,j}$$$ be the number of binary strings of length $$$i$$$ whose final form (after no more operations can be made) begins with a prefix of ones of length exactly $$$j$$$. For the transitions, we iterate over the length of the next block of $$$0$$$s and over the length of the fixable prefix right after it (such that we cannot perform an operation to obtain a bigger prefix, because that would break the definition). There is one problem -- we cannot compute $$$dp_{i,i}$$$ using any recurrence relation. Fortunately, we can compute it by subtracting the ways in which we get roadblocks, because by definition a string counted in $$$dp_{i,i}$$$ can be turned into $$$111...111$$$, which has no roadblocks at all. This is $$$O(n^4)$$$ if implemented naively, but one can optimize it to $$$O(n^2)$$$ by keeping prefix sums over prefix sums.
Challenge: Solve the problem in $$$O(n \log^2 n)$$$, or in $$$O(n \log n)$$$ when the modulus is prime.
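Since the counting above is easy to get wrong, here is a small, hedged brute force for validating a dp on small $$$n$$$. It assumes the reduced operation on a block $$$1^a 0^b 1^c$$$ is applicable exactly when $$$b \le a + c$$$ (i.e. the substring has at least as many ones as zeros), and uses the fact that applying applicable operations greedily never hurts, since operations only turn zeros into ones:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true if the length-n string encoded by mask can be turned into all ones
// using the reduced operation described above (applied greedily).
bool reducible(int n, int mask) {
    vector<int> s(n);
    for (int i = 0; i < n; i++) s[i] = mask >> i & 1;
    bool changed = true;
    while (changed) {
        changed = false;
        for (int l = 0; l < n && !changed; l++) {
            if (s[l] != 0) continue;
            int r = l;                                        // maximal zero block [l, r]
            while (r + 1 < n && s[r + 1] == 0) r++;
            if (l == 0 || r == n - 1) { l = r; continue; }    // border zeros can never be fixed
            int a = 0, c = 0;                                 // adjacent maximal one blocks
            for (int i = l - 1; i >= 0 && s[i] == 1; i--) a++;
            for (int i = r + 1; i < n && s[i] == 1; i++) c++;
            if (r - l + 1 <= a + c) {                         // operation applicable: fill with ones
                for (int i = l; i <= r; i++) s[i] = 1;
                changed = true;
            }
            l = r;
        }
    }
    return count(s.begin(), s.end(), 1) == n;
}

int main() {
    // Count, for each small n, the binary strings that can be made all ones;
    // compare these values against the dp described in the editorial.
    for (int n = 1; n <= 12; n++) {
        int cnt = 0;
        for (int mask = 0; mask < (1 << n); mask++) cnt += reducible(n, mask);
        cout << n << ": " << cnt << "\n";
    }
}
```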
G — Doping
Author: tibinyte
Consider a permutation $$$p$$$ that is lexicographically smaller than the given permutation $$$a$$$, and let $$$k$$$ be the first position where they differ. If we fix $$$p_k$$$, the remaining numbers (those among $$$a_k,a_{k+1},\ldots,a_n$$$ other than $$$p_k$$$) can be arranged in any order; denote this set by $$$S$$$. Let $$$m$$$ be the number of consecutive pairs, $$$m=|\{x \mid x \in S, (x-1) \in S \cup \{p_k\} \}|$$$; $$$p_k$$$ is included because it is possible that $$$p_{k+1}=p_k+1$$$. Fix the number of positions with $$$p_i=p_{i-1}+1$$$: this consumes some consecutive pairs, and the remaining consecutive pairs must not increase the number of such positions. Call a sequence $$$p$$$ good if and only if it has no index $$$i$$$ with $$$p_i=p_{i-1}+1$$$.
Consider $$$dp[i][j]$$$ = the number of good arrangements of $$$i$$$ values with $$$j$$$ consecutive pairs. Consider one consecutive pair: if it is the only wrong pair, the number of values decreases by one and this situation has to be excluded; otherwise this pair can be ignored. So $$$dp[i][j]=dp[i][j-1]-dp[i-1][j-1]$$$, with boundary $$$dp[i][0]=i!$$$. After enumerating $$$k$$$, let $$$S'=\{a_k,a_{k+1},\ldots,a_n\}$$$; $$$p_k$$$ is chosen from $$$S'$$$ with $$$p_k<a_k$$$. Suppose $$$p_k=x$$$: if $$$x-1 \in S'$$$, the number of consecutive pairs decreases by one, otherwise it stays the same. Additionally, if $$$x \ne p_{k-1}+1$$$, the number of subarrays increases by one. For every situation, count the number of possible $$$p_k$$$ and then calculate the answer.
Time complexity: $$$O(n^2)$$$.
Since we have to deal with the lexicographically-smaller condition, it's natural to fix the length $$$i$$$ of the common prefix. Then we mismatch $$$p_{i+1}$$$ with one of the remaining values, and after that we are left with permuting the other elements.
Let's say we are trying to find the number of ways to permute the remaining values {$$$a_1,a_2,a_3,...,a_k$$$} such that $$$a_i \neq a_{i+1}-1$$$ for $$$1 \le i < k$$$.
For each $$$i$$$, we define $$$b_i$$$ to be a boolean value denoting the existence of $$$a_i - 1$$$ in set $$$a$$$.
We can formulate the following 2 claims:
- The number of ways to permute array $$$a$$$ only depends on $$$k$$$ and sum of $$$b$$$
- For some pair $$$(i,j)$$$, the number of ways to permute $$$a$$$ starting with $$$a_i$$$ is equal to the number of ways to permute $$$a$$$ starting with $$$a_j$$$ if and only if $$$b_i = b_j$$$
With these observations, we can formulate a $$$dp$$$ solution for counting permutations. Let $$$dp_{i,j,flag}$$$ be the number of ways to permute if $$$k=i$$$, $$$j$$$ is the sum of $$$b$$$, and we start with a value whose $$$b$$$ value equals $$$flag$$$. The transitions are:
- $$$dp_{i,1,true} = dp_{i-1,1,false}$$$
- $$$dp_{i,1,false} = (dp_{i-1,1,true} + dp_{i-1,1,false}) \cdot (i-1)$$$ $$$-$$$ By claim 2, we can think of ending in the minimum value, and we are left with another permutation to which we can apply transition 2.
- $$$dp_{i,j>1,true} = (dp_{i-1,j-1,true}+dp_{i-1,j-1,false}) \cdot j$$$ $$$-$$$ By claim 1, we can think of an equivalent array $$$a$$$ in which all values with $$$b_i = 1$$$ are left alone. Combining with claim 2, all distributions are equal, and selecting a single value makes the permutation have $$$j-1$$$ values with $$$b_i=1$$$.
- $$$dp_{i,j>1,false} = (dp_{i-1,j,false} + dp_{i-1,j,true}) \cdot (i-j)$$$ $$$-$$$ By claim 2, we can select any value with $$$b_i=0$$$ and combining with claim $$$2$$$, we can select the first value in the last chain.
Let $$$sum$$$ be the sum of $$$b$$$. The number of ways to permute $$$a$$$ such that it respects the given demands is $$$dp_{k,sum,true} + dp_{k,sum,false}$$$. Now the number of ways to permute $$$a$$$ with $$$t$$$ positions such that $$$a_i \neq a_{i+1}-1$$$ is $$$\binom{k-sum}{t-sum} \cdot (dp_{k,t,true} + dp_{k,t,false})$$$. This is because we can choose the values that contribute: $$$sum$$$ of them always contribute, so we are left with choosing $$$t-sum$$$ values out of $$$k-sum$$$ possibilities.
Now, thinking about the values we mismatch $$$p_{i+1}$$$ with: looking at claim $$$2$$$, a lot of values are equivalent. We can count each value type separately, since there are $$$O(1)$$$ types, and update the results accordingly in $$$O(n)$$$. Also be careful with the case where a past range of consecutive values is continued after the mismatch. This is easy to handle because $$$p'_{i+1}$$$ is fixed and we can count it independently in $$$O(n)$$$.
Thus we have achieved $$$O(n^2)$$$ total time complexity.
H — BinaryStringForces
Author: tibinyte
We call a maximal sequence of $$$0$$$s a $$$0$$$ block and a maximal sequence of $$$1$$$s a $$$1$$$ block.
For each $$$i$$$, store some disjoint intervals with the following property:
For each $$$j$$$ with $$$i \le j$$$ and $$$s[j]=1$$$, $$$j$$$ is in one of the intervals if and only if $$$(i,j)$$$ is a good substring.
We can prove the number of intervals for each $$$i$$$ is $$$O(\log n)$$$.
Let's go from $$$(i,i)$$$ to $$$(i,n)$$$ and consider when a good substring becomes bad and when a bad substring becomes good.
Assume we are at $$$(i,j)$$$.
If it is a good substring ending with $$$1$$$, it becomes bad when it meets a longer $$$0$$$ block; then we record one interval and move to a bad substring ending with $$$0$$$.
If it is a bad substring ending with $$$0$$$, suppose the next $$$1$$$ is at position $$$k$$$. Then the next good interval starts at the smallest $$$j'$$$ with three properties: (1) $$$(k,j')$$$ is good;
(2) $$$s[j']=1$$$;
(3) $$$(k,j')$$$ is not shorter than the last $$$0$$$ block.
Proof: Property 2 is obvious because we only care about substrings ending with 1.
If $$$(k,j')$$$ is shorter than the last $$$0$$$ block, it is impossible to change this $$$0$$$ block, so $$$(i,j')$$$ is bad.
If $$$(k,j')$$$ is bad: if after some operations it starts with a $$$1$$$ block $$$(k,j'')$$$ not shorter than the last $$$0$$$ block, then $$$j''$$$ is smaller than $$$j'$$$ and has those three properties; otherwise it is impossible to change the last $$$0$$$ block.
If it has all three properties, then inside the substring $$$(i,j')$$$ the part $$$(k,j')$$$ is good and can change the last $$$0$$$ block to $$$1$$$, and then change all of $$$(i,j')$$$ to $$$1$$$, so we move to a good substring ending with $$$1$$$.
Each time a good substring becomes bad, its length at least doubles, so the number of intervals for each $$$i$$$ is $$$O(\log n)$$$.
To know when a good substring becomes bad: for every $$$0$$$ block $$$(j,k)$$$, if $$$(j,k)$$$ is longer than $$$(i,j-1)$$$, then $$$(i,k)$$$ is bad; we can preprocess those $$$i$$$ in $$$O(n)$$$.
To know when a bad substring becomes good, we go with $$$i$$$ from $$$n$$$ down to $$$1$$$, and for each $$$i$$$ we search in the $$$O(\log n)$$$ intervals $$$O(\log n)$$$ times.
So we obtain those intervals in $$$O(n\log^2 n)$$$.
Now we can check whether each substring ending with $$$1$$$ is good. Running the same algorithm on the reversed string lets us check each substring starting with $$$1$$$. But we still cannot check substrings that start and end with $$$0$$$.
Suppose $$$(i',j')$$$ is the longest $$$0$$$ block in $$$(i,j)$$$. Then $$$(i,j)$$$ is good if and only if at least one of $$$(i,i'-1)$$$ and $$$(j'+1,j)$$$ is good and not shorter than $$$(i',j')$$$.
Proof: After changing the longest $$$0$$$ block, we can change all other $$$0$$$ blocks. So for any substring, it is good if and only if we can change its longest $$$0$$$ block.
If $$$(i,i'-1)$$$ is good and not shorter, it can change $$$(i',j')$$$, so $$$(i,j)$$$ is good.
Conversely, if $$$(i,j)$$$ is good, assume the longest $$$0$$$ block is changed from the left. Then after some operations inside $$$(i,i'-1)$$$, some substring $$$(k,i'-1)$$$ becomes all $$$1$$$ and is not shorter than $$$(i',j')$$$. This $$$(k,i'-1)$$$ can make $$$(i,i'-1)$$$ good, since $$$(i,i'-1)$$$ contains no longer $$$0$$$ block.
Similarly, we can prove for $$$(j'+1,j)$$$.
So for each substring, we consider its longest $$$0$$$ block (substrings with no $$$0$$$ are always good). For convenience, if there are several of maximum length, we take the rightmost one.
The longest $$$0$$$ block of a substring is either a whole $$$0$$$ block of the original string or a part of one. We can ignore substrings lying inside a single $$$0$$$ block, so a partial longest $$$0$$$ block can only be a prefix or a suffix of an original block; hence the number of possible longest $$$0$$$ blocks is $$$O(n)$$$.
For each possible $$$0$$$ block $$$(i,j)$$$, let $$$(i',j')$$$ be the longest substring in which $$$(i,j)$$$ is the longest $$$0$$$ block. We need to count the answer for substrings $$$(l,r)$$$ with $$$i' \le l \le i$$$ and $$$j \le r \le j'$$$; this is equivalent to counting the number of $$$l$$$ with $$$i'\le l \le i-(j-i+1)$$$ such that $$$(l,i-1)$$$ is good, and the number of $$$r$$$ with $$$j+(j-i+1) \le r \le j'$$$ such that $$$(j+1,r)$$$ is good.
Notice that $$$s[i-1]=s[j+1]=1$$$, so we can compute these counts using the intervals above.
We can use a persistent segment tree directly, or an ordinary segment tree or binary indexed tree after sorting the queries. We need $$$O(n\log n)$$$ modifications and $$$O(n)$$$ queries. Thus we have solved the problem in $$$O(n\log^2 n)$$$.
#LeafTON
The Solution code is not revealed
tibinyte orz
omg green editorial
Omg green editorial
omg green editorial
omg green editorial
break the chain
omg green editorial
Omg green editorial
Great contest! Seems like the editorial was prepared before the contest :D
tibinyte orz
orz
I use divide and conquer to solve E. Assume that ( is $$$+1$$$ and ) is $$$-1$$$. Let $$$t$$$ be the total balance of the string and let $$$m \leq 0$$$ be the minimum prefix balance of the string. Then there are two main observations; the upshot is that the cost of a single string is $$$|m| + \max(0, t)$$$. Using this, we may just compute the sums of $$$|m|$$$ and of $$$\max(0, t)$$$ over every substring separately, by splitting the string into two equal halves, computing the answers on them recursively, and then computing the answers for strings passing through the middle point by sorting the prefixes of the right half by $$$t_2$$$ and by $$$m_2$$$, and using some prefix sums on them.
Can you explain in more detail how you get the answer for the merged string str1+str2 from the answers of str1 and str2 and from $$$t_1, t_2, m_1, m_2$$$? OK, the answer for this string is $$$|m| + \max(0,t)$$$, but how do you get the answers of all substrings that pass through the concatenation point of str1 and str2?
Balance of the concatenation is just the sum of their balances. And minimum prefix balance is either in the left half, in which case it's $$$m_1$$$ or in the right half, in which case it's $$$t_1 + m_2$$$. You just pick the minimum.
To compute $$$\max(0, t)$$$ contribution, note $$$t = t_1 + t_2$$$ and:
To compute $$$|m|$$$ contribution, note $$$m = \min(m_1, t_1 + m_2)$$$ and:
Actual contributions then may be computed with prefix sums. My sol is 179624832. Now that I think about it, my complexity is actually $$$O(n \log^2 n)$$$ because of sorting... Though one could've used counting sort instead.
got you ORZzz!!!
I don't understand the part $$$max(0, t)$$$ clearly.
I mean, to destroy the $$$t \geq 0$$$ part, we just put ')' at the end $$$t$$$ times. To destroy the $$$|m|$$$ part, we do cyclic shifts.
But I don't know what will happen when $$$t < 0$$$.
When $$$t < 0$$$, we put $$$|t|$$$ of ( at the beginning. It will reduce $$$m$$$ by $$$t$$$, and we will do $$$|m| - |t|$$$ cyclic shifts to get rid of the remaining imbalance, making it a total of $$$|m|$$$ operations.
Thank you so much!
Can anyone explain, in problem D, how to use the inclusion-exclusion principle to find the count of numbers in the range [1, m / a[i]] that are coprime to a[i - 1] / a[i]?
That's essentially what we are trying to do for each i, right? To find out the count of numbers that are co-prime to a given number x in a given range starting from 1.
Let the prime factors of a[i - 1] / a[i] be p1, p2, ..., pk. The goal is to compute the number of numbers in the range [1, m / a[i]] that are not divisible by any of the pi. To do this, let A_i be the set of numbers in the range [1, m / a[i]] that are divisible by pi. We can apply PIE on all the A_i to find the total number of in-range numbers that are divisible by some pi; m / a[i] minus this number is the desired result.
Thanks! I missed the point that the total number of possible pi is small.
.
Video Solution for Problem D
You only need to compute the coprime count in the range [1, m/a[i]] around ~30 times, since for most indices i in a we have a[i] = a[i-1].
How to apply inclusion-exclusion?
Say f = a[i-1]/a[i]. Then f will only have ~10 distinct prime factors. You can easily generate a subset mask over all possible combinations of prime factors of f and compute the count of their multiples in [1, m/a[i]], with the appropriate sign depending on the number of factors.
Thanks. Very helpful. Can you suggest some good resources for inclusion-exclusion and combinatorics? And how does one master number theory problems?
First BinaryStringForces round on codeforces.com
I really couldn't make all the tasks about binary strings, but that was the intention...
I was inspired by antontrygubO_o's xor contest idea on codechef.
Problem statements were short and quite simple to understand. No unnecessary stories. Thank you.
The unnecessary stories were the best part :,(
Quickest Tutorial Ever, WOW!
Good Job boys ..
My solution for D is similar but instead of factorizing a1, I use the fact that a[i]/a[i+1] is amortized:
We notice that b[0] = a[0]; let's look at the transition from b[i] to b[i+1]:
There exists a number b[i+1] such that the i -> i+1 transition is possible <=> gcd(a[i], a[i+1]) = a[i+1].
b[i+1] must fulfill gcd(a[i], b[i+1]) = a[i+1] <=> b[i+1] ∈ {x s.t. gcd(x/a[i+1], a[i]/a[i+1]) = 1 and a[i+1] | x and x <= m}.
Those are the numbers coprime to a[i]/a[i+1] in the range [1, m/a[i+1]].
To count the numbers coprime to a[i]/a[i+1] in the range [1, m/a[i+1]], we can go through the factorization of a[i]/a[i+1], keep the unique primes in a vector (call it "decomposition"), and then go through all 2^(|decomposition|) subsets with inclusion-exclusion to get the number of non-coprime numbers: for every subset of the decomposition we count its multiples as (m/a[i+1]) / LCM(subset), and add or subtract it depending on the parity of the cardinality of the subset (this comes from inclusion-exclusion).
Considering 2 sqrt(m)/log m to be the number of primes in [2, sqrt(m)], we get O(n*(2 sqrt(m)/log m) + sqrt(m) log(m)); but it turns out that a[i]/a[i+1] is amortized, so the 2 sqrt(m)/log m factor effectively becomes a constant and we get O(n + sqrt(m) log m).
Sorry for not using LaTeX, I should definitely learn LaTeX.
"add it or subtract depending on the parity of the cardinal of the subset (this comes from inclusion-exclusion))." Can u please elaborate more on this part?
In problem C if all elements of both arrays are equal to 1, the editorial solution will do 2*n operations , right?
Wasn't it supposed to make at most n+5?
or am I mistaking something?
nvm I'm dumb
Could you please explain C's logic? I'm finding it hard to understand.
Notice that every operation can be inverted, so you just need to move to a case where it is trivial to check if it is possible
Video Solution for Problem C.
Both strings will either have all bits equal or all bits unequal, otherwise the answer is impossible. Example: 11 and 10. One bit is equal, the other is not, so the answer is NO.
Now try to split strings into two types, one which has all bits equal and the other which has all bits unequal, and see how the operations affect these.
I want to plug my solution for Problem E here which does not use a BIT, just prefix sums, a map (which could've been a vector, but I was too lazy for negative indices...) and two for-loops.
Solution: 179633595
Explanation: Link to comment in Announcement.
Can someone explain C's logic?
Observe that the answer is only possible if both strings are the same or if we can get b after inverting each character of a. After each operation, the strings are either equal or inverted versions of each other.
Make every character of a equal to 1. Then b is either of the form 00...0 or 11...1. Now simply perform the operation (1, n) in the former case and (1, 1) + (2, n) in the latter.
Why are you doing l++ when flag1 is true?
For problem D, I am generating the prime factors of all the numbers but it is not giving me a TLE.
Example:
$$$n = 2 \times 10^{5}$$$
$$$arr = [10^9, 1, 10^9, 1, 10^9,....]$$$
Then isn't the time complexity $$$O(10^5 \times \sqrt{10^9})$$$ and won't this give TLE?
179686581
Your testcase is wrong, because you check in the very beginning, that $$$a_i$$$ divides $$$a_{i-1}$$$
And since every next number divides the previous one, you'll need to factor a number other than 1 at most $$$\log(10^9)$$$ times. So we can limit the complexity to $$$O(n) + O(\log(10^9)\sqrt{10^9})$$$. An even better bound comes from the fact that the product of all the numbers we decompose is $$$a_1$$$, and the fact that $$$\sqrt{a\cdot b\cdot c ...} \geq \sqrt{a} + \sqrt{b} + ...$$$ gives us the total complexity of $$$O(n)+O(\sqrt{10^9})$$$
Thanks for the clarification.
Why $$$\sqrt{a \cdot b \cdot c \cdot ...} \geq \sqrt{a} + \sqrt{b} + ...$$$?
This is not true, for example, for $$$a = b = 2$$$ : $$$\sqrt{4} < 2 \sqrt{2}$$$
Well, you got me there
It works only for numbers starting from 4
I guess you wouldn't have a problem factoring numbers 1, 2 and 3 in $$$O(1)$$$
Can you please tell why this solution 249691884 does not give TLE, even though the time complexity is
Well, why wouldn't it? $$$n \cdot 2^9 \cdot 9 \approx 4E8$$$. Given that C++ speed is around $$$1E9$$$ simple operations per second, 400ms sounds just about right.
Isn't $$$10^8$$$ operations feasible in one second in C++? Also, since the sum of n over all test cases is bounded by $$$2\cdot10^5$$$, $$$n\cdot2^9\cdot9$$$ should be $$$\approx 9E8$$$.
The condition A[i-1] % A[i] != 0 handles that
I generated primes for the first element and, going forward from a[i - 1] to a[i], deleted the primes that are not present in a[i]. But it was not needed.
Okay I'm a bit annoyed (and it certainly is my fault) that I FST'd Problem D and once again (yes, previous times also I was reaching Master and then FST'd) I have to wait to touch that yellow colour again.
Compare these two codes to find out:
WA on test 25 code: https://mirror.codeforces.com/contest/1750/submission/179611048
AC code: https://mirror.codeforces.com/contest/1750/submission/179704899
To make things simpler, I shouldn't have modded the value of cc, since it's required in later stages as the original value.
A funnier fact is that using MOD = 1e9 + 7 instead of 998244353 would've actually worked because cc has an upper bound of m = 1e9.
At this point, it seems like my fate doesn't want me to reach master again :(
Anyways, it was a great contest! A big thank you to everyone involved in the problem setting team. (Although I believe C was a bit harder than usual :P)
No one cares.
blue_princess
I care!
i care too
I also care!
Is the runtime complexity of problem D wrong? Not sure what the log is in 2^9 * 9 * log + sqrt(M). Shouldn't it be 2^9 * 9 * N + sqrt(M) per test case, where N is the length of the input array A per test case?
There are at most $$$\log N$$$ factors of a number, so $$$a[i]/a[i+1]$$$ can yield a value not equal to $$$1$$$ only $$$\log N$$$ times. The correct time complexity is $$$O(N + 2^9 \cdot 9 \cdot \log N + M)$$$.
Why this solution of the first problem didn't get accepted?? Solution: 179614121
Because you have out-of-bounds in line
int pos[n]; for(int i=0;i<n;i++) pos[a[i]] = i;
That leads to undefined behaviour, so the program can output arbitrary data.
Ohh that's why after deleting this line my code got accepted. Thank you.
Myself, after seeing the solution code for C: The hell is the purpose for making a Problem C with a 67-line intended solution code?
UPD: Saw the code for H. 500 lines. I swear THERE IS NO DAMN WAY SOMEONE COULD TYPE THAT DURING THE CONTEST.
You can obviously tell my coding style is shit. Live with it.
Omg you added dollar sign to count strings from 1...
You should not be allowed to write editorials
(jk, great contest)
Problems were really nice :)
The links in your blogpost are a bit weird, since clicking on the caption "A -- Indirect Sort" sends you to Problem B :) But it is a great contest and editorial!
For problem D- Why does this submission https://mirror.codeforces.com/contest/1750/submission/179652391 give TLE?
When you find that the answer is zero (v[i-1]/v[i] is not an integer) you can break, but if you don't, the value of v[i-1]/v[i] keeps jumping, e.g. 1 1000000000 1 1000000000 1 1000000000 1 1000000000..., so your actual complexity is N*sqrt(m) instead of sqrt(m).
Damnn I often don't break out of laziness basically. Terrible mistake. Thanks for your time!
What was the point of setting non constant $$$mod$$$ in problem F? I had a very funny bug in my non-static $$$mod$$$ template. I forgot that the $$$mod$$$ isn't always prime, so for calculating $$$a$$$ to the power of $$$b$$$ I took $$$b$$$ modulo $$$mod - 1$$$. And due to the small constraints on $$$mod$$$ I didn't find $$$2^n$$$ correctly and didn't get accepted ;)
Anyway the problem was good in my opinion, and would be better with the constant $$$mod$$$.
Otherwise you'd be able to precalculate the answer and submit a solution with an array of size 5000.
You can precalculate answer with some slow solution with constant $$$mod$$$
There's a new trend to set problems with variable mod without an explicit reason just because "why not, it might help with something that we don't know".
Or so I've been noticing. Not the case in this problem, but I've seen it happening at least twice.
How to make observations such as "the cost of s[l + 1, r] is max(b_l, b_r) - min(b_l, ..., b_r)"?
Really bad at binary strings or balanced sequence stuff.. Can someone please share what is the intuition/strategy for those problems?
I can share what I did, it may be a bit complicated.
So, let's divide all of our sequences into two groups: positive and negative (positive have balance > 0, negative have balance <= 0). For a bracket sequence b, let's call P the absolute value of the minimum prefix balance achieved in it (e.g., for )(()) it is 1). I claim that the cost of making a sequence good is P for negative sequences and P + balance for positive ones.
How to prove it? First, for a bracket sequence with balance = 0 the cost is P: take the first P opening brackets and rotate them to the start, and that is the construction. Given that one operation increases a given prefix value by no more than 1, fewer than P operations is impossible. For positive sequences you always need at least BAL closing brackets, which you simply append (the optimality of this is trivial), and then you need P rotations. For negative ones, we simply assign the required opening brackets to the start and repeat the proof.
Now, you can either simply code this approach (two segment trees + a deque, my submission = https://mirror.codeforces.com/contest/1750/submission/179686679), or you could reorder the formula some more and get the author's one.
A tough contest that ended my positive delta streak for 10 rounds, however it was well prepared and I learnt a lot, Thanks. Orz tibinyte
In editorial of C, "For each operation, the interval changed by a sequence and b sequence is complementary, so you must judge whether all [ai=bi] are the same at the beginning."
Can someone explain in more detail?
Maybe this will be easier:
You can see that this
$$$a_i {\oplus} a_j = b_i {\oplus} b_j$$$
will be true after every operation, so if we make every $$$a_i = 0$$$, then $$$b_i = b_j$$$ will hold for every i and j
bashkort Can you explain how to further approach the solution after this observation?
If initially there are i, j such as $$$a_i \oplus a_j \neq b_i \oplus b_j$$$, then answer is clearly NO, because if we could make a's and b's equal to zero, then there were no i, j such as $$$a_i \oplus a_j \neq b_i \oplus b_j$$$.
After we made every $$$a_i = 0$$$, then every $$$b_i = 0$$$ or every $$$b_i = 1$$$, so if b's are equal to zero, then we solved the problem, or if every b is equal to 1, then we do 3 operations:
(1, 1), (2, 2), (1, 2)
Can someone explain the solution to the problem F properly?
I updated some stuff in the editorial. Does it make more sense now ?
Let $$$dp_{ij}$$$ be the number of sequences of size $$$i$$$ whose maximum electrifiable prefix is of size $$$j$$$. For example, $$$dp_{i0} = 2^{i-1}$$$, corresponding to any sequence starting with $$$0$$$, and $$$dp_{nn}$$$ is the solution to our problem. Also note that $$$dp_{ii} = 2^i - \sum_{j \in [0, i)} dp_{ij}$$$.
To calculate $$$dp_{ij}$$$ for $$$0 < j < i$$$ we use a recursive formula. Such a sequence is made of an electrifiable sequence of length $$$j$$$ followed by either all zeros or enough zeros then a one. In the second case, there are at least $$$j + 1 + b$$$ zeros before the one, where $$$b$$$ is the maximum electrifiable prefix of the subsegment starting at the one. Thus, if $$$a$$$ is the length of that subsegment, we have $$$j + j + 1 + s + b + a = i$$$, where $$$s \ge 0$$$ is any amount of extra zeros in that segment. For each $$$a,b,s$$$ satisfying $$$a + b + s = i - 2j - 1$$$ we have a set of sequences of size $$$dp_{jj} \cdot dp_{ab}$$$. Since we can determine $$$s$$$ in terms of $$$a$$$ and $$$b$$$ (having fixed $$$i$$$ and $$$j$$$), the sets given by every pair $$$(a, b)$$$ are disjoint because they either have a different number of zeros before the one or a second fully electrifiable subsegment of different size.
The condition $$$a + b + s = i - 2j - 1$$$ is equivalent to $$$a + b \le i - 2j - 1$$$. Thus, the formula is
To compute the sum we can do something similar to 2D prefix sums, but for a triangle instead of a square.
Code: Submission
I really like the name "electrifiable" :))
Help Jolteon determine how many electrifying subsequences the string he received has.
If somebody still doesn't understand the solution for F, I hope this will be helpful.
Let's fix a binary string (assuming that the first and the last elements are ones), and let's imagine that while we can make the following operation, we make it; of course it's always good for us. Then let's look at the final string:
It looks like $$$\underbrace{1\ldots 1}_{a_0} \underbrace{0\ldots 0}_{b_0} \ldots \underbrace{1\ldots1}_{a_k}$$$. When could this string actually be final (i.e. we can't make any operation)? It's not hard to see that $$$b_i > a_i + a_{i + 1}$$$ must hold, and conversely, if it holds, then it's not possible to make an operation.
Let $$$dp_n$$$ be the answer for length $$$n$$$. Now, if we know the $$$a_i$$$ and $$$b_i$$$, for how many strings will this be the final string? Answer: $$$\prod\limits_{i = 1}^{k}{dp_{a_i}}$$$. Let's call it the contribution of the final string.
Let $$$bad_n$$$ be the sum of contributions over all final strings of length $$$n$$$ such that $$$k > 1$$$ (there are zeroes in the final string). Then $$$dp_n = 2^{n - 2} - bad_n$$$ (because, as we said, we only consider strings that start and end with one).
To calculate $$$bad_n$$$ we can write the following dp: let $$$q_{n,\text{ }balance}$$$ be the sum of contributions over all final strings of length $$$n$$$ that end with zero and to whose end we can append at most $$$balance$$$ ones so that it would still be final. $$$balance$$$ can be negative, but $$$balance \in [-n, n]$$$.
Note that in this dp we only consider strings that ends with zero and starts with one.
To calculate $$$q$$$ there are the following transitions:
$$$q_{n,\text{ }balance}$$$ += $$$q_{n - 1,\text{ }balance - 1}$$$. Here we insert one zero to the end.
For all $$$m \ge 1$$$ such that $$$balance \ge m$$$: $$$q_{n,\text{ }-m}$$$ += $$$q_{n - m - 1,\text{ }balance} \cdot dp_m$$$. Here we just insert $$$\underbrace{1\ldots1}_{m}0$$$ to the end.
If we knew $$$dp_n$$$ we could calculate this in $$$O(n^2)$$$, because there are $$$O(n^2)$$$ transitions of the first type, and for the second type we can fix $$$m$$$ and use suffix sums to make them fast.
Final step: to calculate $$$q_{n,\text{ }balance}$$$ we need to know only $$$dp_i$$$ for $$$i < n$$$. And if for all $$$balance$$$ we've already calculated $$$q_{n,\text{ }balance}$$$ then we can calculate $$$bad_n$$$. To do this we need to iterate over the number of ones in the end of the final string and using the same suffix sums update $$$bad_n$$$. Knowing $$$bad_n$$$ we can calculate $$$dp_n$$$. So we can calculate $$$dp$$$ and $$$q$$$ at the same time.
Total time complexity: $$$O(n^2)$$$. Code
Can someone please explain why, in problem D, a[i-1] has to be divisible by a[i]?
Consider a[i-1] = 2*2*3 and the current b[i] = 2*3*7.
Then a[i] = gcd(a[i-1], b[i]) = 2*3, since 2*3 is their common part.
That means the prime factors of a[i] form a subset of the prime factors of a[i-1], and hence a[i-1] % a[i] == 0.
In problem D, I used the inclusion-exclusion principle, but why does my code give a wrong answer on test 3 of the sample cases?
In problem C, why does a[i] have to be different from b[i]? For the test case:
2
10
01
is the following not a valid solution?
1
1 1
The solution of problem B can be simplified by using count of 0 = n - count of 1.
Can you tell me why, when $$$b[l] > b[r]$$$, we can just use a right bracket at the end of the string?
In the case of $$$()(())))))($$$ (1-indexed), let $$$l=3, r=11$$$. It's easy to see that $$$b[l] = 1, b[r] = -2$$$. But we should do operation 1 one time and add two left brackets on the left. If we just work as in the editorial, we can make the last bracket be matched.
Sorry for my bad English. But can anyone teach me? Thank you very much.
For E, you don't need a BIT to calculate the answer; you can split it into two parts (I prepended a 0 to the prefix sums):
$$$\sum_{l=0}^{n} \sum_{r=l+1}^{n} max(b_l, b_r) - min(b_{l..r}) \;=\; \sum_{l=0}^{n} \sum_{r=l+1}^{n} max(b_l, b_r) \;-\; \sum_{l=0}^{n} \sum_{r=l+1}^{n} min(b_{l..r})$$$
This is just the sum of max over all pairs in the array (which you can calculate by sorting) minus the sum of minimums over all subarrays (which you can calculate using the contribution technique and stacks for the next and previous smaller elements).
Submission Link
Why did problem 2 give TLE for the same approach written in Java? I did both computations in one for loop, and it still gave me TLE.
Java scans and prints stuff very slowly, maybe scanner from here will be helpful for you
114016483
How to solve F in $$$\mathrm{O}(n \cdot \mathrm{polylog})$$$ time?
I think we can ask errorgorn to share the orz $$$O(n \log n)$$$, since I only know the $$$O(n \log^2 n)$$$ solution.
Let's say we know $$$ans[1,n]$$$. We will demonstrate how to find $$$ans[1,2n]$$$ in $$$O(n \log n)$$$.
Same as editorial we want to count number of bad configs of form $$$\texttt{1}^{a_1} \texttt{0}^{b_1} \texttt{1}^{a_2} \ldots \texttt{0}^{b_{k}} \texttt{1}^{a_{k+1}}$$$. Where $$$b_i > a_{i}+a_{i+1}$$$. Also, the contribution of term $$$\texttt{1}^{a_i}$$$ is $$$ans[a_i]$$$. For bad string of length $$$2n+3$$$, the maximal $$$a_i$$$ we need is $$$n$$$ for the string of form $$$\texttt{1}^{n} \texttt{0}^{n+2} \texttt{1}^{1}$$$. So knowing $$$ans[1,n]$$$ is sufficient to find $$$ans[1,2n]$$$.
We can use genfuncs to calculate this value. Imagine the string instead as $$$(\texttt{1}^{a_1}\texttt{0}^{a_1} \texttt{0}^{b_1}) (\texttt{0}^{a_2}\texttt{1}^{a_2}\texttt{0}^{a_2} \texttt{0}^{b_2}) \ldots (\texttt{0}^{a_k}\texttt{1}^{a_k}\texttt{0}^{a_k} \texttt{0}^{b_k}) (\texttt{0}^{a_{k+1}}\texttt{1}^{a_{k+1}})$$$. Where instead of $$$b_i > a_{i}+a_{i+1}$$$, the conditions are $$$b_i>0$$$ instead. It is easy to find OGF of first, middle and last terms which we denote $$$A(x),B(x),C(x)$$$. Then bad configs of length $$$n$$$ is $$$[x^n] A(x) (1+B(x)+B(x)^2+\ldots)C(x) = [x^n] \frac{A(x)C(x)}{1-B(x)}$$$. All these operations can be done in $$$O(n \log n)$$$.
Code: 180044094 (sorry for cringe FFT, it was hacked together from here and here).
Can anyone explain, for C:
"Observe that the answer is only possible if both strings are the same or if we can get b after inverting each character of a."
How is this statement true? I'm not able to understand.
Every operation flips the Boolean values of all a[i] == b[i] together, and the final goal requires all a[i] == b[i] to be true. So the initial values of a[i] == b[i] must be all true or all false.
This is the best explanation for problem C I found.
I can hardly understand the editorial for Problem G.
Can anybody explain it in a more detailed way?
Added a second solution. The model uses this idea.
Here is my implementation of E. It uses the same idea as the official solution, but only requires an std::set instead of a BIT.
By inserting the indices to a set in sorted order of prefix sum, we can find the possible starting and ending points of a segment whose minimum is located at each index. The other part of the summation is easier to deal with; simply count how many times each element is the maximum of a pair.
BIT seems very handy but I don't understand some parts of code in editorial. Can anyone explain how to use BIT to count Max and Min in Problem E?
Actually, we can solve E in O(n) instead of O(n log n). 179627446 this is my solution for the problem. Now we can see that we can replace maps with arrays, because all keys in first map are from -n to n, and from 0 to n in the second one.
Could anyone help with my code for C - Complementary XOR?
https://mirror.codeforces.com/contest/1750/submission/179955063
It gives me WA because for some line at test 3, l = 0 and it is out of bounds. I have checked and that is not true. Any clue?
Take a look at Ticket 16422 from CF Stress for a counter example.
AC, what a gigachad you are
Can anyone explain the complexity of solution D? For each adjacent pair we also have to generate the power set of the primes, so it'll be around 2^10; doing 2^10 for each adjacent pair would be a lot. I get that there won't be many 2^10 steps as the primes exhaust very fast, but can anyone prove it mathematically? For the factorisation part we can get an amortised analysis; how do we get an amortised analysis for the power-set generation as well?
It took me a while to understand the solution for C. The problem is interesting though.
Can anyone give an insight as to why the same code TLEs in C, yet passes comfortably in C++?
Assalamu alaikum. My solution for A and B was skipped, and I swear I didn't use ideone or another account or anything, it's my own solution! I swear to God I didn't cheat. Why did something like this happen to me? How can I avoid this? MikeMirzayanov tibinyte
For me, A was harder than B. I don't know why.