Thank you for participating in our contest! We hope you enjoyed it.
This is the first time some of us are problemsetting, and I apologise if some of you did not have a good experience with the contest. I realise B was probably too hard for its position; we had a 2-hour-15-minute contest, and I expected the extra time to compensate for that. I believe some people did not like B being easily guessable. We were actually in a position where we could punish guessers who submit without proof: a lot of testers' initial guess had been to check that the sums over the entire grids are equal modulo 3 (the samples were weak back then). I added a lot of samples to B (and, for that matter, to C too) to make the problem easier instead of going the other way and punishing guessers. I really do not know if going the other way would have been appreciated, either.
I really like observation-based problems, like the format we have on CF nowadays. I did not realise all these constructives might cause a bad experience. I've never prepared for math olympiads or related stuff, and it has been CF that introduced me to all these sorts of problems, so I have no idea what would count as math-heavy and what would not.
For D, I'm very surprised people managed to guess permutation parity without proving it. We wanted to introduce new and uncommon ideas, like the usage of permutation parity here would have been for a Div2 D. D and G are my personal favourites from this round. G, too, can be solved without any heavy data structures, just plain observations.
It had been a complaint in last year's ByteRace round that the problems in the Div1 range were not hard or interesting enough. This time, we did try to ensure that they were. I really like problem G, as said before. Actually, we were planning a 3-hour, 8-problem round, but we had to cut it down to this.
We ended up overlooking some mistakes in the statement of E. I'm sorry about that, too. Many of us preparing the contest have been busy interning, and sometimes it gets hard to manage time.
I believe the observations required for B and D make them really good and unique problems. We just did not put them in ways that would, like, prevent guessing. Are we supposed to? E was one math-heavy problem, is that a lot? I'm genuinely curious, can any experienced problemsetter (and not random greys echoing each other) explain what did not go well with this contest, if anything?
Idea: BlazingDragon
Preparation: Pew_Pew.
Editorial: Swap-nil
Think simple.
If $$$a$$$ and $$$b$$$ are both divisible by $$$c$$$, then $$$a + b$$$ is also divisible by $$$c$$$.
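For completeness, writing out the multiples makes the hint immediate: if $$$a = cx$$$ and $$$b = cy$$$, then $$$a + b = c(x + y)$$$, which is divisible by $$$c$$$.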
Idea: Swap-nil
Preparation: Swap-nil
Editorial: Swap-nil
$$$1 + 2 \equiv 0 \pmod{3}$$$.
Can you notice something that does not change, an invariant?
1983C - Have Your Cake and Eat It Too
Idea: DC17
Preparation: Swap-nil
Editorial: BlazingDragon
Idea: MrSavageVS
Preparation: MrSavageVS
Editorial: MrSavageVS
Can we convert this complicated operation to something simpler?
We can convert the operation to basically swapping adjacent elements, and it still works. Try to prove why.
Swapping adjacent elements in a distinct array is basically trying to equate two permutations using adjacent swaps. When is it possible?
Inversions in a permutation
Idea: DC17
Preparation: BlazingDragon
Editorial: BlazingDragon
Idea: TimeWarp101
Preparation: MrSavageVS
Editorial: MrSavageVS
Can we fix an upper bound and count the number of subarrays whose xor-pair value is less than it?
We can binary search to fix an upper bound and count all subarrays with xor-pair value less than that bound.
To count the number of such subarrays, we can use a popular data structure used for many xor operations.
We can binary search on the upper bound and use tries to count the number of subarrays with xor-pair value less than the given value. This way we can find the $$$k$$$-th smallest value.
Idea: AwakeAnay,TimeWarp101
Preparation: ShivanshJ, Swap-nil, Pew_Pew.
Editorial: Swap-nil
Try looking at the problem bit by bit.
What does the contribution of a particular bit over a path look like?
Can you think of some simpler queries you can easily solve the problem for?
Root the tree.
For a query from a node $$$u$$$ to its ancestor $$$v$$$, the contribution from the counting term $$$(0,1,2,3, \ldots)$$$ changes for the first time at an ancestor at a jump of a power of two.
Let $$$dp[u][k]$$$ be the contribution of bit $$$k$$$ to the query from $$$u$$$ to the root.
Keep track of a few additional nodes for each query. Read hint 5 again.
First. It does seem like B has a nice proof. I guessed it during the contest, so seeing the proof is very nice.
Yeah, the proof is nice.
Another proof: in a row, after making the first $$$n-1$$$ elements equal to the first $$$n-1$$$ elements of matrix B, the last element must already equal its respective element in B. If it does not, we can't fix it: to make it equal to its respective element in B, we have to select a previous element in that row (which already equals its respective element in B) for the rectangle selection, which changes that previous element so it no longer matches B. Fixing that one in turn breaks another already-matching element, and so on, so one element always remains unequal to B.
Similarly for columns.
Hence the last elements of the rows and columns must already equal their respective elements in B after making the $$$(n-1) \times (m-1)$$$ submatrix of A equal to that of B.
Oh nice. This is a more intuitive proof I think, thanks for sharing.
very good proof
me too
I also guessed during the contest. After proof-by-submission, I came up with the actual proof.
For B I just checked each row and column to see if their sums mod 3 were equal in the starting and ending arrays. If this was true for all rows and columns, the answer was Yes, otherwise it was No. Haven't proven it yet, but it passed.
(ANOTHER) SOLUTION FOR PROBLEM D
If array 'b' can be transformed into array 'a' with an even number of swaps (swapping two arbitrary elements in array 'b'), then, the answer for the problem is 'YES'. Else, the answer is 'NO'.
Example 1:
a = [1, 2, 3, 4];
b = [4, 3, 2, 1].
In this example, array 'b' can be transformed into array 'a' with 2 operations. First, swap the 1st and the 4th elements in 'b'. The array will be: b = [1, 3, 2, 4]. Then, swap the 2nd and the 3rd ones: b = [1, 2, 3, 4]. Here, 2 is an even number, so the answer is 'YES'.
Example 2:
a = [1, 2, 3, 4];
b = [1, 2, 4, 3]
In this example, however, 'b' can be transformed into 'a' with only 1 operation. 1 is odd, so 'NO' is the answer.
PROOF: I'll do that tomorrow.
Code: 269334590
I had the exact same approach as you :)
Code : 269276300
Had 6 WAs during the contest :/
Same approach as me, but a little bit different in the detail.
Nice approach. Basically, every swap can be represented using an odd number of adjacent swaps, so the parity of the total number of swaps and the parity of the total number of adjacent swaps are the same.
Can you please explain the logic behind why it works?
It goes like this; it might be slightly different from OP's approach. I like it better than the "permutation parity" solution in the editorial.
It should be sufficient to make both the $$$(l,r)$$$ and $$$(p,q)$$$ kinds of swaps on array $$$A$$$ alone. Why? Look at the very last $$$(p,q)$$$-swap that we would have done on array $$$B$$$, after which $$$A = B$$$ would be achieved. If we made the same swap on array $$$A$$$ instead, we would still achieve $$$A = B$$$. And so on: all $$$(p,q)$$$ operations could have been made directly on $$$A$$$ instead of $$$B$$$.
Now, we are making swap operations on $$$A$$$ only and the constraint is that every $$$k$$$-length swap has to be accompanied by another $$$k$$$-length swap as well. So, in general the count of swaps of any length $$$k$$$ should be even. Singly-used $$$k$$$-swaps are not allowed.
But a single $$$2k$$$-length swap could be broken down into two $$$k$$$-length swaps. Similarly, two different singly-used odd swaps could be combined into a bunch of even swaps plus two $$$1$$$-length swaps. So we see there are a lot of variations possible, and it's hard to choose the correct set of swaps.
Resolution: How about we decompose all required swaps into a common "unit-swap" and then see if we can satisfy the even-ness constraint ?
The "unit-swap" should be "adjacent-position" swaps of $$$[x, x+1]$$$ form.
Any swap $$$[l,r]$$$ can be decomposed into: $$$\{[l,l+1], [l+1,l+2], \ldots, [r-1,r]\} \cup \{[r-2,r-1], [r-3,r-2], \ldots, [l,l+1]\}$$$.
The number of these unit-swaps is $$$2(r-l) - 1$$$.
One can see that the final state of array $$$A$$$ after these swaps is exactly the same as if a single $$$[l,r]$$$ was done.
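A tiny self-contained check of this decomposition (a sketch with 0-based positions; the code and names are mine, not from the comment above):

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {1, 2, 3, 4, 5, 6};
    int l = 1, r = 4;                        // positions to swap

    vector<int> direct = a;
    swap(direct[l], direct[r]);              // a single [l, r] swap

    // the 2*(r-l)-1 unit-swaps described above
    vector<int> byUnits = a;
    for (int i = l; i < r; i++) swap(byUnits[i], byUnits[i + 1]);      // bubble a[l] up to r
    for (int i = r - 2; i >= l; i--) swap(byUnits[i], byUnits[i + 1]); // bubble a[r] down to l

    cout << (direct == byUnits ? "same" : "different") << "\n";       // prints "same"
}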
Thanks
Thanks for the explanation.
Here's a simpler implementation, thank you phatdapchai:
int n; cin >> n;
vector<int> a(n), b(n);
for (int i = 0; i < n; i++) cin >> a[i];
for (int i = 0; i < n; i++) cin >> b[i];
Code: 272511253. I think I use the same logic as you. I use DFS to find the groups whose member counts are even. So if the number of such even groups is even, I can find the answer; otherwise there is no answer. But this solution got TLE, can you tell me why? I think the time complexity is $$$O(N \log N)$$$; the $$$\log N$$$ is because of using a set.
I'm not sure that we have a similar idea, but I can tell you why it exceeds the time limit. Your declaration of vector<vector<int>> g(mxN+5); takes roughly 2e5 operations. Imagine a test with 1e5 testcases: your code would then have to do about 2e10 operations, which is effectively O(N^2).
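A minimal illustration of the fix (the loop structure and the name touched are hypothetical): construct the big vector once and clear only the entries a testcase actually used:

// before: O(mxN) construction per testcase, ~2e10 operations over 1e5 testcases
while (t--) { vector<vector<int>> g(mxN + 5); /* solve */ }

// after: construct once, clear only what was touched
vector<vector<int>> g(mxN + 5);
while (t--) { /* solve, remembering touched indices */ for (int v : touched) g[v].clear(); }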
I had the same idea, but I'm getting WA.
I want to ask why in problem A my output is different from the given output. Can anyone tell me?
There can be multiple beautiful arrays for a given n.
D, E, F are good problems in my opinion. But I think C is a little boring because it requires a lot of repetitive code.
You may want to look into std::next_permutation.
Even after looking at next_permutation, we still have to print the answer in the same order ($$$l_a$$$ to $$$r_a$$$, $$$l_b$$$ to $$$r_b$$$, and $$$l_c$$$ to $$$r_c$$$). That's where things got complicated.
If we had to simply print "YES" or "NO", or print any three partitions irrespective of the order (i.e., which person each partition belongs to), the next_permutation implementation would be quite straightforward.
Sounds like a skill issue to me. When looking at $$$p_i$$$, the $$$i$$$-th element of our permutation, use the $$$p_i$$$-th row of prefix sums to find the range and write it to the $$$p_i$$$-th index of the answer. Maybe looking at my code 269239985 will help? Unlike the editorial, I didn't hardcode the number of people, so I assume it's cleaner.
Honesty is often harsh. Either we can accept it and learn from it, or we can ignore it and blame it on others by saying they are rude.
It was a skill issue on my side; I couldn't think in that direction.
You can use a vector to put the answers into before printing them. Maybe take a look at my code: 269257208
My submission might help: Submission
Since there are only 3 people, a simple loop would also work.
UPD: Learned that I shouldn't have suggested a kinda rudimentary method. Viva la next_permutation!
It is worse because $$$(i, j, k)$$$ are not in a collection and cannot be processed uniformly with a for loop.
It's a good point. I agree it's best practice to use next_permutation when feasible, but anyway, I'm talking about a simple substitute for it when the processing is not very complex.
Actually, you can use an array to store the index of each person instead of copying the same code 6 times. Like me: I used next_permutation to finish this work. Here is my code. This is one way to do it:
...
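For readers following this subthread, here is a minimal sketch of the next_permutation approach being discussed (my own code, not the linked submission; it assumes each person's values sum to the same total, and prints 1-based $$$l_a\,r_a\,l_b\,r_b\,l_c\,r_c$$$ or -1):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n; cin >> n;
        vector<array<long long, 3>> v(n);
        for (int p = 0; p < 3; p++)
            for (int i = 0; i < n; i++) cin >> v[i][p];
        long long total = 0;
        for (int i = 0; i < n; i++) total += v[i][0];
        long long need = (total + 2) / 3;            // each person needs ceil(total/3)

        array<int, 3> perm = {0, 1, 2};              // order in which people take slices
        array<pair<int, int>, 3> ans{};
        bool found = false;
        do {
            int pos = 0; bool ok = true;
            array<pair<int, int>, 3> cur{};
            for (int who : perm) {                   // give `who` the shortest sufficient prefix
                long long sum = 0; int l = pos;
                while (pos < n && sum < need) sum += v[pos++][who];
                if (sum < need) { ok = false; break; }
                cur[who] = {l + 1, pos};             // store 1-based [l, r]
            }
            if (ok) { ans = cur; found = true; break; }
        } while (next_permutation(perm.begin(), perm.end()));

        if (!found) cout << -1 << "\n";
        else {
            for (auto [l, r] : ans) cout << l << " " << r << " ";
            cout << "\n";
        }
    }
}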
269300957 Pretty cute implementation for C, I'd say.
I am not able to figure out what I am missing. I have done the same thing as the editorial, just precalculated the values for all possible cases, so where am I wrong? It failed on test case 2.
Can anyone help? I can't even see which test case I am wrong on: 269290282
Take a look at Ticket 17456 from CF Stress for a counter example.
Oh, thank you! I didn't know we could generate test cases.
In E I had a different solution. First, an $$$O(nk)$$$ DP is easy to get: let $$$f_{n,k}$$$ denote the expected value of special balls Alice gets when there are $$$n$$$ balls in total and $$$k$$$ of them are special. When you brute-force it and print a chart, it has a clear pattern (you can try it yourself; remember to print fractions, not doubles). Simply implementing this passed the pretests. You can see the submission here, in which the calc function is the pattern.
Here's an alternative way to count subarrays with value $$$\le m$$$ in 1983F - array-value:
If $$$x < y < z$$$ then $$$\min(x \oplus y, y \oplus z) < x \oplus z$$$.
The minimum pair xor of a (sub)array occurs among pairs of consecutive (in sorted order) elements.
Use a sliding window to find the shortest good subarray (with value $$$\le m$$$) ending at each position $$$1 \le r \le n$$$.
Maintain a multiset of subarray elements to quickly find adjacent elements in sorted order, and maintain a multiset of adjacent pair xors to find the smallest pair xor quickly.
Code: 269281742.
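A sketch of that counting routine (my code, not the linked submission; it assumes a single-element subarray has no value, i.e. it counts only subarrays of length at least 2):

#include <bits/stdc++.h>
using namespace std;

// Count subarrays whose value (min pair-xor) is <= m, using the lemma that the
// minimum pair-xor is attained by two sorted-adjacent elements of the window.
long long countAtMost(const vector<long long>& a, long long m) {
    int n = a.size();
    multiset<long long> win;   // window elements, kept sorted
    multiset<long long> xors;  // xors of sorted-adjacent pairs in the window
    auto insertVal = [&](long long x) {
        auto it = win.insert(x);
        bool hasPrev = it != win.begin(), hasNext = next(it) != win.end();
        if (hasPrev && hasNext) xors.erase(xors.find(*prev(it) ^ *next(it)));
        if (hasPrev) xors.insert(*prev(it) ^ x);
        if (hasNext) xors.insert(x ^ *next(it));
    };
    auto eraseVal = [&](long long x) {
        auto it = win.find(x);
        bool hasPrev = it != win.begin(), hasNext = next(it) != win.end();
        if (hasPrev) xors.erase(xors.find(*prev(it) ^ x));
        if (hasNext) xors.erase(xors.find(x ^ *next(it)));
        if (hasPrev && hasNext) xors.insert(*prev(it) ^ *next(it));
        win.erase(it);
    };
    long long res = 0;
    for (int r = 0, l = 0; r < n; r++) {
        insertVal(a[r]);
        // shrink until no pair in the window has xor <= m; every removed start
        // index i < l gives a good subarray [i, r]
        while (!xors.empty() && *xors.begin() <= m) eraseVal(a[l++]);
        res += l;
    }
    return res;
}

Binary searching the answer with this predicate then gives the $$$k$$$-th smallest value, as in the comment above.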
In problem E, where did the formulae come from? I would assume that the people reading the editorial for problem E couldn't exactly come up with the formulae themselves, so could the editorial please give some more explanation about this?
Also, how is the expected number of normal balls that Alice takes not affected by the number of special balls there are? (The only thing that changes is $$$n \rightarrow n - k$$$, which is just the change in the number of normal balls.)
EDIT: I understand why the expected result did not change: the turn changes every time someone picks a normal ball, and as long as they keep picking special balls, their turn continues. Hence per turn each player picks exactly one normal ball (except perhaps the last turn), and hence the formula for the expected number of normal balls picked by Alice remains the same.
I also don't understand the formula
Suppose you have $$$n$$$ normal balls and $$$k$$$ special balls.
In every turn, exactly one normal ball is picked: a player keeps picking until they draw a normal ball, which ends the turn. Out of those $$$n$$$ turns of picking a normal ball and passing the turn, Alice picks balls $$$\left\lceil \frac{n}{2} \right\rceil$$$ times. Total score gained by Alice from normal balls = $$$\left\lceil \frac{n}{2} \right\rceil \times \text{expected_value_per_ball}$$$.
Now, for simplicity, consider there is only one special ball. Here the chance that it is picked up on the first turn is $$$\frac{1}{n+1}$$$ (this gets picked up before the other balls). Similarly, picking it in the second turn has probability $$$\left(\frac{n}{n+1}\right) \times \frac{1}{n}$$$ (not picked in the first, picked in the second). For the third turn: $$$\left(\frac{n}{n+1}\right) \times \left(\frac{n-1}{n}\right) \times \frac{1}{n-1}$$$. Now comes an observation that all these evaluate to $$$\frac{1}{n+1}$$$. There will be a total of $$$n+1$$$ chances of picking up a special ball ($$$n$$$ times before some normal ball and once when no balls are available) with equal probability each.
Total number of turns = $$$n+1$$$
Alice has a probability of $$$\frac{\left\lceil \frac{n+1}{2} \right\rceil}{n+1}$$$ to pick this special ball.
For the case with multiple special balls: Suppose we want to calculate the probability of the first special ball (or any particular special ball) being picked on a turn. Here it only matters if the first special ball was picked before any normal ball was picked, because picking another special ball still leaves the same situation and the first special ball can still be picked up in this turn.
That makes sense, thank you so much!
Can anyone explain why the tutorial just uses the first $$$k$$$ balls to calculate avg_special? Thanks.
Read the second-to-last line of the input format.
Thanks!
Yes, exactly. First, we can observe that the value of every normal ball can be replaced by the expected value of the normal balls (the average, basically), and the same for special balls. Now we need to compute the number of normal balls Alice gets and the expected number of special balls Alice gets.
Let's say we randomly shuffle the $$$N$$$ balls and arrange them in a line. Now Alice will keep picking up balls until she gets the 1st normal ball. Then Bob does the same until he gets the 2nd normal ball. We can see that Alice will get $$$\lceil X/2 \rceil$$$ normal balls (this is exact, not expected), where $$$X = n - k$$$ is the number of normal balls.
So it is like the normal balls create sections in between which there are special balls. If there are $$$X$$$ normal balls, then we get $$$X+1$$$ sections, and the expected number of special balls in each section is $$$K/(X+1)$$$, where $$$K$$$ is the number of special balls. Now Alice will pick up balls from $$$\lceil (X+1)/2 \rceil$$$ sections. So the expected number of special balls Alice gets is $$$\lceil (X+1)/2 \rceil \cdot K/(X+1)$$$.
Now that we know the expected number of special balls Alice gets, and the number of normal balls, we can easily calculate the answer. You can take a look at the implementation in the editorial, given the context I'm sure it would be easy to understand.
PS: This solution can be extended for any number of players as well.
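To condense the above into one formula (my notation): with $$$X = n - k$$$ normal balls and average values $$$\overline{v}_{norm}$$$ and $$$\overline{v}_{spec}$$$ of the normal and special balls, $$$E[\text{Alice}] = \left\lceil \tfrac{X}{2} \right\rceil \overline{v}_{norm} + \left\lceil \tfrac{X+1}{2} \right\rceil \cdot \frac{k}{X+1} \cdot \overline{v}_{spec}$$$, and Bob's expected score is the total minus Alice's.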
Ah true, thats quite a neat way of thinking about it, thanks!
Could you explain why can we replace the value of every ball for it's expected value?
I'm having trouble understanding why we don't need to compute something like this for every ball (using $$$p(X)$$$ as "probability of $$$X$$$"): EV of choosing ball $$$i$$$ on the $$$j$$$-th turn $$$=$$$ $$$p($$$ ball $$$i$$$ survives $$$j-1$$$ turns $$$) \times p($$$ choosing ball $$$i$$$ on the $$$j$$$-th turn $$$) \times v_i$$$.
You can think about all the permutations of the balls. If you think about how many times each ball appears at a certain place, it turns out that out of all $$$n!$$$ permutations in which they can draw the balls, $$$v_{1}$$$ is drawn as the 1st ball $$$(n-1)!$$$ times ($$$(n-1)!$$$ is the number of ways in which you can draw the rest of the balls), $$$v_{2}$$$ is drawn as the first ball also $$$(n-1)!$$$ times, and so on, which simplifies to $$$\frac{\sum_{i=1}^{n} v_{i} \cdot (n-1)!}{n!} = \frac{\sum_{i=1}^{n} v_{i}}{n}$$$, which is also the $$$EV$$$. You can think similarly for all the next draws as well as for the special draws.
Thank you, the solution is entirely clear for me now. We can represent the normal and special balls picked as two permutations during a game, so the expected value of the amount of special or normal balls picked by Alice/Bob is independent from the expected value of the ball's value at any turn, so we apply $$$E[XY] = E[X] \times E[Y]$$$
It depends on the implementation, it's not entirely necessary. The problem boils down to computing the expected number of balls taken out of every bundle (the special and the non-special one). Given the expected number of balls picked from each we can get the original answer, regardless of the distribution of values on the balls. Below is a quick justification.
Let $$$B$$$ be all balls of a bundle, $$$v(\cdot)$$$ their values, and $$$X$$$ be a random variable on the (equiprobable) selection of $$$B$$$. Then by symmetry $$$E\left[\sum_{i \in X} v(i)\right] = E[|X|] \cdot \frac{1}{|B|}\sum_{i \in B} v(i)$$$.
If the $$$(v_{i})_{i \in B}$$$ were themselves random variables, then since they would be independent of $$$X$$$, we would get the same result.
I think this is a wonderful contest, and I love B and D very much. Thank you!
No issue bhaiya, the problems were good, I enjoyed them (got negative delta). It was great learning from B too.
For me, B and D have the weirdest solutions ever. It took me a lot of time to solve both of them. The idea behind B and D was quite tricky, confusing, and vague. Honestly, I don't really know how I got Accepted.
Why was my submission 269283722 TLE? Can anyone explain, please.
Nice round. Actually, I got the idea for B in 5 minutes, but I couldn't solve D quickly; I didn't guess the solution mentioned in the tutorial and solved D in a different way. However, I solved E in 15 minutes because the idea was quite simple (I finished coding 1 minute after the contest was over, and it passed after the system tests). C is not fun enough; I didn't find that "fixing the order they choose" made any difference, it just made the implementation more difficult.
I feel you should have swapped the order of B and C, if B was to be left in the set at all.
For me it is very hard to submit without proving, and B wasn't provable all that easily (for me and other people, judging by comments) — generally I think it is better if problem solutions are reached in a way along the path of a proof, and not have to guess the solution from samples.
I had to guess B and the time it took just made me lose motivation to solve the rest of the problemset.
I liked problem E a lot, but next time please make sure to write essential information like the special balls being the first k ones on the actual problem statement, and not at the end of the input section.
The way the editorial of E is written is similar to the note section of the problem, LOL. Likewise, the problem statement confused people by putting a crucial detail in the input section.
Can anyone please explain why the sums of all the columns and rows in problem B have to be the same?
The sums don't have to be the same; the sums mod 3 have to be the same. This is because whenever we apply an operation, the sum of any row or column changes by a multiple of 3, so its value modulo 3 remains the same. Thus if the values modulo 3 differ, it's not possible to convert A to B. It's a necessary condition; I was not able to prove that it's sufficient.
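A minimal sketch of that necessary check (my code; the editorial shows the condition is also sufficient):

#include <bits/stdc++.h>
using namespace std;

// Every operation changes each affected row/column sum by a multiple of 3,
// so all row and column sums mod 3 must agree between A and B.
bool possiblyEqual(const vector<vector<int>>& A, const vector<vector<int>>& B) {
    int n = A.size(), m = A[0].size();
    for (int i = 0; i < n; i++) {
        long long ra = 0, rb = 0;
        for (int j = 0; j < m; j++) { ra += A[i][j]; rb += B[i][j]; }
        if (ra % 3 != rb % 3) return false;
    }
    for (int j = 0; j < m; j++) {
        long long ca = 0, cb = 0;
        for (int i = 0; i < n; i++) { ca += A[i][j]; cb += B[i][j]; }
        if (ca % 3 != cb % 3) return false;
    }
    return true;
}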
In problem C, has anyone solved it using the sliding window concept? My approach was finding all the windows of sum k for Alice, Bob, and Charlie, and after that, finding the non-overlapping ranges between them.
But I was not able to implement it fully. Kindly help me out with my query or tell me if I am thinking in the wrong direction.
Thanks.
My approach was very similar. I stored all possible index ranges for each of a, b, c such that the sum is greater than or equal to k, and then I found 3 non-overlapping pairs. But I got memory limit exceeded on test case 3. Can anyone help?
My Submission
I used a 6-pointer technique which I made up during the contest.
Using permutation parity isn't new for Div2D, I've seen it twice already: here and here.
Not only did I go blank on B; frankly speaking, I also just got lazy in C after writing the first 2 cases and realising how long it would go on.
Detailed video editorial for problem C: https://youtu.be/uRURZo52Iws
Can anyone look into my solution for C and tell me what I am missing, or on which case my code fails to pass: 269290282? Thank you for your time.
For problem D, I just checked that both arrays have the same inversion parity rather than the same inversion count, yet it still passes. Can anyone hack it: link?
Also, although I initially really disliked problem B, after reading your proof I think it's a pretty good problem, tbh.
Same inversion parity is enough.
Let's do operations with $$$r-l=q-p=1$$$; now the number of operations required to make an array sorted is equal to its number of inversions.
Let's say the number of inversions in A is 10, and in B it is 20. After the first 10 swaps in A, we can keep swapping $$$A_{n-1}$$$ and $$$A_{n}$$$.
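A short sketch of this check (my code; inversion parity via merge sort, plus the multiset check mentioned elsewhere in the thread):

#include <bits/stdc++.h>
using namespace std;

// Counts inversions of `a` in O(n log n) with merge sort.
long long inversions(vector<int> a) {
    long long inv = 0;
    vector<int> tmp(a.size());
    function<void(int, int)> rec = [&](int l, int r) {   // half-open [l, r)
        if (r - l <= 1) return;
        int m = (l + r) / 2;
        rec(l, m); rec(m, r);
        int i = l, j = m, k = l;
        while (i < m || j < r) {
            if (j >= r || (i < m && a[i] <= a[j])) tmp[k++] = a[i++];
            else { inv += m - i; tmp[k++] = a[j++]; }    // a[j] jumps over m-i left elems
        }
        copy(tmp.begin() + l, tmp.begin() + r, a.begin() + l);
    };
    rec(0, (int)a.size());
    return inv;
}

// Arrays are transformable iff they contain the same elements and their
// inversion counts have the same parity.
bool canEqualize(const vector<int>& a, const vector<int>& b) {
    vector<int> sa = a, sb = b;
    sort(sa.begin(), sa.end()); sort(sb.begin(), sb.end());
    if (sa != sb) return false;
    return (inversions(a) & 1) == (inversions(b) & 1);
}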
My post contest discussion stream
Problem A
Problem B
Problem C
Problem D
Problem E
Problem F
I was already confused by problem C, and then you guys answered my query incorrectly during the contest: Link
Can anyone tell me where I can practice problems like E? I have no idea about expected value.
This problem is good: 1925D - Good Trip
After reading the personal note, I want to leave my comments here.
I think problem B wasn't bad, although it might be a little hard to prove for a B. I personally liked this problem.
Same for problem D. I think it wasn't necessarily bad, although it involves a quite typical observation that appears in many similar problems. I want to ask, though, why it couldn't just be an actual permutation instead of arbitrary numbers $$$\le 2 \cdot 10^5$$$. These required tedious exception handling, such as checking that the sets of elements are the same, and some of the inversion-count methods prefer compressed numbers (just like how I did it with a Fenwick tree).
The one I disliked most was problem C, because this required almost no observation and made me choose between "copy-pasting 6 times carefully" and "running over a dizzy permutation".
I also think problems like A are not very suitable for their position, because the best way to first approach this problem is not by "observing mathematical facts", but by "guessing that the answer must be the simplest pattern in the world because it's an A problem". It's very easy to get stuck if you just forget that this is 2A, and to be honest if the position of this problem was hidden, this problem would be probably too hard for a 2A.
I don't think that the contest was too math-heavy though. I'm not sure if people are calling it math-heavy because of observations, but these kind of maths are fine for me. I prefer them to problems full of pure math equations.
Nice review, I completely agree with you.
Can someone explain why my submission (269286766) for problem C is wrong? I'm not able to find the issue.
Can anyone tell me what is wrong in my solution for C? It gives a wrong answer on test case 3. This is the submission link: Solution
nvm got it
Hey guys, I couldn't enter the contest yesterday. This screen appeared for over 15 minutes, after which I decided not to take the contest. Can anyone please help me avoid this?
It was saying there was no internet connection and to please connect to the internet. But I could use Google, and I also started other apps to check that my internet connection was working fine.
G is too painful to implement.
In B I got this part:
"We see that it is always possible to make all the elements in the first $$$n-1$$$ rows and $$$m-1$$$ columns equal using this method."
But what about the last row and last column?
My guess: if we were to fix them, we would have to change at most 2 of the already-fixed values. So after all the operations, the $$$n$$$-th row and $$$m$$$-th column need to already be the same as in grid B.
Can we solve E using DP?
Had a similar thought during the contest, let me know if you have any ideas
Really cool B. Although my solution was based on intuition, I really like the proof and how the problem setters came up with the problem.
And about the contest being not good, that is very much not true. Really good contest. Really good for developing observational skills.
Solution to D becomes trivial if you've solved this problem before. (Essentially the same problem.)
Can we use the Hungarian algorithm in C?
Now, we will prove that when both arrays have the same number of inversions, then it is definitely possible to make them equal
For D, I think it should be when both arrays have the same parity of inversions?
https://mirror.codeforces.com/contest/1983/submission/269401066 this one is giving TLE; https://mirror.codeforces.com/contest/1983/submission/269401494 this one is getting accepted.
Just a change in the template gives completely different results.
It ruined my contest yesterday.
Can anyone help?
Fast IO.
You missed a closing bracket while reading array B. Sadge :(
Can anyone suggest more problems that have proofs like B's?
Can anybody please tell me what's wrong with this solution?
Is there something wrong with finding the inversion count?
What I did for B was a little bit different. I just had a strong hunch that if you fix all of the rows naively up until the last row, the last row will either be fixed ('YES') or not ('NO'). I tried it out a lot, but since I was already really late, I didn't have time to prove it.
D was nice, but was surprisingly solved a lot.
I would've switched E and F tbh.
I am having some difficulty with problem F. I used a struct for defining the nodes of my trie. Initially I was not deleting the nodes used for the binary-searched value, so I got MLE; but after deleting all the nodes used for a particular searched value, I am getting TLE. Can someone please explain why and help me resolve the issue?
Link to MLE solution : 269444765
Link to TLE solution : 269456951
For E, can someone explain how exactly the INV function works?
I have another (probably different) solution for problem F. It only uses a trie which can add elements, delete elements, and find the minimal xor with a given value.
Do binary search on the answer. Then we want to find the maximal $$$l$$$ for each $$$r$$$ so that the min xor on the segment $$$[l, r]$$$ is not greater than $$$mid$$$ ($$$l = 0$$$ if the min xor on the prefix $$$r$$$ is greater than $$$mid$$$). Then the number of segments with min xor not greater than $$$mid$$$ is equal to the sum of all $$$l$$$. To find this $$$l$$$ for each right edge $$$r$$$, we use two pointers: when moving from $$$r$$$ to $$$r + 1$$$, we only need to move $$$l$$$ while there exists $$$a_i (l \le i \le r), a_i \oplus a_{r+1} \le mid$$$. The solution works in $$$O(n \log^2 A)$$$.
Why should we only check that there exists $$$a_i$$$ such that $$$a_i⊕a_{r+1}≤mid$$$? I mean, isn't it possible that this condition doesn't hold, but the minimum XOR of the pairs is less than $$$mid$$$?
I'm moving $$$l$$$ while there is a xor that is $$$\le mid$$$, so when I finish moving it for $$$r$$$, all xors are bigger than $$$mid$$$. Then, it might happen that $$$a_{r+1} \oplus $$$ some element on current segment is $$$\le mid$$$, so we need to further move $$$l$$$. All other pairs already give greater xor, so we don't care about them
Oh nice. I got it. Thanks!
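For reference, a minimal sketch of such a trie (my code; values assumed to fit in 30 bits, and minXor must only be called when the trie is non-empty):

#include <bits/stdc++.h>
using namespace std;

struct Trie {
    static const int B = 30;
    vector<array<int, 2>> ch;   // child links; node 0 is the root
    vector<int> cnt;            // cnt[u] = number of stored values in u's subtree
    Trie() { ch.push_back({0, 0}); cnt.push_back(0); }
    int newNode() { ch.push_back({0, 0}); cnt.push_back(0); return (int)ch.size() - 1; }
    void update(int x, int d) { // d = +1 to insert x, -1 to erase x
        int u = 0;
        for (int b = B - 1; b >= 0; b--) {
            int bit = x >> b & 1;
            if (!ch[u][bit]) ch[u][bit] = newNode();
            u = ch[u][bit];
            cnt[u] += d;
        }
    }
    int minXor(int x) {         // min of (x ^ v) over all stored v
        int u = 0, res = 0;
        for (int b = B - 1; b >= 0; b--) {
            int bit = x >> b & 1, v = ch[u][bit];
            if (v && cnt[v] > 0) u = v;                   // free: same bit
            else { u = ch[u][bit ^ 1]; res |= 1 << b; }   // forced: pay 2^b
        }
        return res;
    }
};

// Two pointers for one binary-search value mid (per the comment above):
// advance r, and while the window trie contains some v with (v ^ a[r]) <= mid,
// erase a[l] and increment l; summing l over all r counts the segments.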
My approach to problem B is completely different: 269255620
I was using the exact same logic, idk why I was not getting the correct answer on my IDE. Thanks this helped.
I found problem G too annoying to implement. Is there any simple implementation for it? (The editorial's code is also too long.)
"Keep using operations for array a of the form l=1 and r=2 while changing the operation p,q to different values to make the elements at indices 3,4,…n the same in array b as they are in array a . Now, since both the arrays should have the same parity of inversions at the end, a1=b1 and a2=b2 , and so, our arrays a and b will become equal"
If we won't ever use $$$a_2$$$ and $$$b_2$$$, how can we ensure they're the same, as $$$a_1$$$ and $$$b_1$$$ are?
Maybe it would have been better if B had been placed at the C or D position, asking to actually perform the operations.
https://mirror.codeforces.com/contest/1983/submission/269611660 This is my submission for problem C. Can someone point out what is wrong in my code??
Interesting. My $$$O(n \log^3 n)$$$ solution is passing for F even though it shouldn't.
I am using a bit-trie for solving F, but I am getting TLE on testcase 2.
Submission link: 269731104
Can someone please help me optimise this code?
Problem C's title is "Have Your Cake and Eat It Too".
Is that a reference to the sitcom "Friends"? XD
Maybe, but it might just refer to the famous idiom in English
I believe problem C can also be solved in $$$\mathcal O(NK + K 2^{K})$$$ time via bitwise DP, which might be useful if it were extended to $$$K \le 20$$$ people.
Might be kinda trivial, anyways here is an implementation: 270151191
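A sketch of that subset DP (my code, not the linked submission; single testcase, K = 3 for concreteness, assuming every person's values sum to the same total and each needs a slice worth at least ceil(total/K)):

#include <bits/stdc++.h>
using namespace std;

int main() {
    const int K = 3;
    int n; cin >> n;
    vector<vector<long long>> pre(K, vector<long long>(n + 1, 0));
    for (int p = 0; p < K; p++)
        for (int i = 0; i < n; i++) { long long x; cin >> x; pre[p][i + 1] = pre[p][i] + x; }
    long long need = (pre[0][n] + K - 1) / K;

    // nxt[p][i] = smallest r with pre[p][r] - pre[p][i] >= need, or n+1 if none
    vector<vector<int>> nxt(K, vector<int>(n + 1, n + 1));
    for (int p = 0; p < K; p++)
        for (int i = 0, r = 0; i <= n; i++) {
            if (r < i) r = i;
            while (r <= n && pre[p][r] - pre[p][i] < need) r++;
            nxt[p][i] = r;
        }

    // dp[mask] = minimal prefix length that can serve the people in mask
    int full = (1 << K) - 1;
    vector<int> dp(full + 1, n + 1), par(full + 1, -1), who(full + 1, -1);
    dp[0] = 0;
    for (int mask = 0; mask <= full; mask++) {
        if (dp[mask] > n) continue;
        for (int p = 0; p < K; p++)
            if (!(mask >> p & 1) && nxt[p][dp[mask]] < dp[mask | 1 << p]) {
                dp[mask | 1 << p] = nxt[p][dp[mask]];
                par[mask | 1 << p] = mask;
                who[mask | 1 << p] = p;
            }
    }
    if (dp[full] > n) { cout << -1 << "\n"; return 0; }

    vector<pair<int, int>> ans(K);        // 1-based [l, r] per person
    for (int mask = full; mask; mask = par[mask])
        ans[who[mask]] = {dp[par[mask]] + 1, dp[mask]};
    for (auto [l, r] : ans) cout << l << " " << r << " ";
    cout << "\n";
}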
Can someone please explain how problem E works here? I am supposed to find the expected value of the scores of both players, which is the weighted average of the scores with their probabilities. But since there exist $$$k \ge 1$$$ special balls, there are many different combinations of balls each player can end up with. So how are we supposed to find the probability here? Wouldn't the different combinations of balls lead to different scores and hence different expected values? Then how can we uniquely determine the expected score for each given set of balls?
I am just confused. Please help.
For problem F, I have a solution that can pass the tests, but I don't know how to prove that the time complexity of my code is small enough. https://mirror.codeforces.com/contest/1983/submission/270741083
Found a cool way to invalidate pointers without explicitly using any pointers in F:
(This exact code crashes in MSVC, might need a bigger example for GCC/clang.)
But the idea is that this line
heap[child_index].add(depth-1);
seems to work similarly to, e.g., something like Node& child = heap[child_index]; child.add(depth - 1);. And then, when push_back causes a vector resize, the child reference gets invalidated. So, on the surface it looks like no pointers or references were used, but they be sneaky like that.
Cool round
I followed the editorial's solution for problem F, implementing it with a struct, and got TLE. Can someone help me check where the problem is? Thanks!!! 273029376
Oh, I found it: the time I spent on deletion was too long.
Here is the Accepted solution: 273036038
Can someone please point out the mistake in my solution for problem D: 279908013? It gives WA on test case 14.
Your code also gives a wrong answer for:
1
4
37122 30915 5098 13035
30915 37122 13035 5098
Try debugging with this test case.
Thanks for taking the time to reply. I was inactive for the last two months hence the late reply.
Proof for D (mathematical approach):
Consider the first array A and the second one B. If their multisets are different, the answer is trivially no. The proof follows, assuming they have the same elements:
A <=> B iff there exists a permutation p of their elements such that A => p and B => p.
If A <=> B, then every permutation is achievable by both. Because if A ~= B (denoting there is a bijection between them), then I can convert both of them to some permutation p, on which I can perform the same operations for both and achieve any permutation conceivable.
Remember that $$$a \Rightarrow b$$$ implies $$$\neg b \Rightarrow \neg a$$$. Therefore, if I can't achieve some permutation with A and B, then it's impossible to form a bijection between them. As it turns out, this condition is not only necessary but also sufficient: if I can reach a common permutation with the two arrays, then they are equivalent, as aforementioned. It suffices, therefore, to find out whether I can achieve one specific permutation with both A and B. If I can, the answer is YES; if I can't, the answer is NO.
Let's choose a convenient permutation: say, the elements sorted in non-decreasing order (the sorted versions are equal since the multisets are the same, per our initial assumption). Initially, assume A is already sorted and B is not. I can perform bubble sort on B while swapping two adjacent elements of A for every swap in B. If the number of swaps in A turns out to be even after B is done, then A is exactly equal to what it was before, and also equal to B, as both are now sorted. If the number is odd, then I can never make A equal to B, as they're out of sync by one swap. As the editorial claims, if the initial parities of the inversion counts of A and B differ, they will never become equal.
Of course, this also applies when A is not initially sorted. It is enough to check whether the initial inversion counts of both are congruent modulo 2.