### ooaa's blog

By ooaa, history, 4 months ago, translation,

Hello, everyone!

We hope you like our problems!

Thank you for participation!

1918A — Brick Wall

Author: ooaa

Tutorial
Solution

1918B — Minimize Inversions

Author: ace5

Tutorial
Solution

1918C — XOR-distance

Author: ace5

Tutorial
Solution

1918D — Blocking Elements

Author: ace5

Tutorial
Solution

1918E — ace5 and Task Order

Author: ace5

Tutorial
Randomized solution
Non-randomized solution

1918F — Caterpillar on a Tree

Author: ooaa

Tutorial
Solution

1918G — Permutation of Given

Author: ace5

Tutorial
Solution by ooaa
Solution by green_gold_dog
• +196

 » 4 months ago, # |   +31 For problem B, I could not prove why sorting one array greedily would work, but still went ahead with the logic because it's B (not supposed to be hard). :p
•  » » 4 months ago, # ^ |   +36 I solved B using a different approach, so maybe that can help: I want to select indices $i$ and $j$ such that $a_i > a_j$ and $b_i > b_j$; combining these two inequalities I get $a_i+b_i > a_j+b_j$, so I basically sorted in increasing order of $a_i+b_i$, then printed $a$ and $b$. My submission: 244097120. I found this easier to understand and prove than the editorial solution!
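The sort-by-sum idea above can be sketched in a few lines of C++ (a minimal illustration; `solve_b` is my own helper name, not code from the linked submission):

```cpp
#include <algorithm>
#include <numeric>
#include <utility>
#include <vector>
using namespace std;

// Sort positions by a[i] + b[i] and apply the same permutation to both arrays.
// If a_i > a_j and b_i > b_j, then a_i + b_i > a_j + b_j, so after sorting by
// the sum no pair of indices is inverted in both arrays at once.
pair<vector<int>, vector<int>> solve_b(const vector<int>& a, const vector<int>& b) {
    int n = (int)a.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    sort(idx.begin(), idx.end(),
         [&](int i, int j) { return a[i] + b[i] < a[j] + b[j]; });
    vector<int> a2(n), b2(n);
    for (int k = 0; k < n; ++k) { a2[k] = a[idx[k]]; b2[k] = b[idx[k]]; }
    return {a2, b2};
}
```

On the sample pair a = [2, 3, 1], b = [3, 1, 2] this produces a total of two inversions, matching the editorial's minimum.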
•  » » » 4 months ago, # ^ |   +10 Hi, I coded a solution in Java based on your thought process, but I keep getting TLE. Here is my submission; help is greatly appreciated, thanks!
•  » » » » 4 months ago, # ^ | ← Rev. 2 →   +1 Hey, I am not very familiar with Java, but what I can see is that you are using a list of arrays, which might make it inefficient. Also, you should use BufferedReader and StringTokenizer, which speed up input reading. I have modified your code, which gives AC; have a look: 245026970 (I used a 2D array of size [n][2], which is much better than a list of arrays, plus the classes mentioned above for faster I/O). PS: I would recommend not using Java, because it is a very slow language; the same thing coded in C++ is way, way faster.
•  » » » » » 4 months ago, # ^ |   0 Thanks, it really helped me a lot; I was struggling with it for days! I was also getting TLE with the use of an array. After formatting the results with a StringBuilder and then printing them out, the TLE is eliminated; the performance is about 1000ms. Replacing the Scanner with BufferedReader and StringTokenizer reduces the time taken to 500ms. I am currently using Java because my upcoming uni course includes a lot of modules involving Java, so I want to familiarize myself with it. But I will definitely pick up C++ when I have more bandwidth. (I am indeed envious to see the editorial's C++ code that prints out elements one by one without an explicit buffer like StringBuilder in Java and completes in 100ms!)
•  » » » » » » 4 months ago, # ^ |   0 Nice to hear that it finally works :) Oh, fine, then you can continue in Java, but yeah, C++ is crazy fast.
•  » » 4 months ago, # ^ | ← Rev. 2 →   +22 If you take a look at each $(i, j)$ pair for both $A$ and $B$, you have two cases: 1) $A_i>A_j$ and $B_i<B_j$ (or $A_i<A_j$ and $B_i>B_j$): exactly one of the two arrays has this pair inverted, no matter how you permute. 2) $A_i>A_j$ and $B_i>B_j$ (or both $<$): the pair is inverted in both arrays or in neither. So when you sort one array, you remove all inversions from case #2 pairs and only keep the unavoidable inversions from case #1, which is optimal.
•  » » » 4 months ago, # ^ |   0 How do you prove that this is a global minimum and not just a local minimum?
•  » » » » 4 months ago, # ^ |   0 Case #1 inversions cannot be removed: each of them always adds 1 to the total number of inversions no matter how you permute A and B. Case #2 inversions, however, can be removed, and by sorting A or B you get rid of all of them. It is not merely a local minimum because case #1 inversions are unavoidable across all possible permutations.
•  » » » » » 5 weeks ago, # ^ |   0 Is it possible to get different answers depending on whether a or b is sorted? Let's say that if a is sorted we get a total of Na inversions, and if b is sorted we get Nb. Will Na = Nb always?
•  » » 4 months ago, # ^ |   0 I wasted a lot of time trying to prove it or find some other solution but couldn't do either, so in the end I just submitted with fingers crossed :p. I got lucky.
•  » » 4 months ago, # ^ |   0 Let the first array $a$ be sorted. Take a pair of adjacent indices $i$, $i+1$ with $a_i > a_{i+1}$. There are two cases: $b_i < b_{i+1}$ and $b_i > b_{i+1}$. In the first case, swapping the elements of $a$ keeps the total number of inversions the same, and in the second case it decreases by one (elements to the left and right of indices $i, i+1$ do not affect the two cases differently). So, if we follow a process where only adjacent elements are swapped until the first array $a$ is sorted, the number of inversions either decreases or stays the same. Such a transition is always possible because, for example, bubble sort works that way. Now, if there existed a third state in which both arrays are unsorted and which had an even lower number of inversions, we could always apply the above process to it and reach the sorted state with the same or a lower number of inversions, proving that the sorted state is among the states with the lowest number of inversions.
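The claim can also be checked mechanically on tiny inputs by brute-forcing all position permutations (an illustrative check of my own, not part of the editorial; names are hypothetical):

```cpp
#include <algorithm>
#include <climits>
#include <numeric>
#include <vector>
using namespace std;

// Count inversions in O(n^2) - fine for brute-force sizes.
int inversions(const vector<int>& v) {
    int c = 0;
    for (int i = 0; i < (int)v.size(); ++i)
        for (int j = i + 1; j < (int)v.size(); ++j)
            if (v[i] > v[j]) ++c;
    return c;
}

// Minimum total inversions over all ways to reorder the index pairs.
int best_total(const vector<int>& a, const vector<int>& b) {
    int n = (int)a.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    int best = INT_MAX;
    do {
        vector<int> a2, b2;
        for (int i : idx) { a2.push_back(a[i]); b2.push_back(b[i]); }
        best = min(best, inversions(a2) + inversions(b2));
    } while (next_permutation(idx.begin(), idx.end()));
    return best;
}

// Total inversions when the pairs are ordered so that a is sorted.
int sorted_a_total(const vector<int>& a, const vector<int>& b) {
    int n = (int)a.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    sort(idx.begin(), idx.end(), [&](int i, int j) { return a[i] < a[j]; });
    vector<int> a2, b2;
    for (int i : idx) { a2.push_back(a[i]); b2.push_back(b[i]); }
    return inversions(a2) + inversions(b2);
}
```

For every small permutation pair I tried, `sorted_a_total` matches `best_total`, which is consistent with the argument above.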
•  » » 3 months ago, # ^ |   0 Making a wrong submission is better than trying to prove it for an hour.
 » 4 months ago, # |   +28 Great Contest.
 » 4 months ago, # | ← Rev. 2 →   +110 There is also a different non-randomized solution to E: First, locate elements $1$ and $n$ with $3n$ queries each, similar to the editorial. After doing this, the value of $x$ will be $1$ (or $n$, depending on the order in which we locate them). Now, anytime we want to change the value of $x$, we can do so by querying the position of $1$ or $n$. We will now run a parallel binary search to find the values of all elements: for each element $a_i$, store a range $[l_i, r_i]$ in which this element must lie, and iterate over all ranges in increasing order of $\left\lfloor\frac{l_i+r_i}{2}\right\rfloor$. On one iteration we initialize $x = 1$ and increase its value until $x = n$, handling all binary searches by querying when $x = \left\lfloor\frac{l_i+r_i}{2}\right\rfloor$. We need $n$ queries to move $x$ from $1$ to $n$. We need one binary search query for every element of the array, which is another $n$ queries. Every binary search query can move $x$ in the incorrect direction by one step, which we also need to revert with another query, yet another $n$ queries. Thus, we need at most $3n$ queries for one iteration of the binary search. It might seem like we need another $n$ queries to reset $x$ from $n$ back to $1$, but here we can be clever: if $x$ ends at $n$ after one iteration of binary search, we can do the next one in decreasing order, moving $x$ from $n$ to $1$. Thus, the iterations of the binary search alternate between $x$ going from $1$ to $n$ and from $n$ to $1$. How many iterations of binary search do we need? Due to how we get information from the queries, an interval of length $w$ becomes an interval of length at most $\left\lceil\frac{w-1}{2}\right\rceil$. We can show that going from $w = 2000$ to $w = 1$ takes at most $10$ steps, so we need at most $10$ iterations of binary search. We use at most $2\cdot 3n + 10\cdot 3n = 36n \le 40n$ queries in total. Implementation: 244173268
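The final halving claim is easy to verify numerically (a throwaway check of my own, not part of the solution):

```cpp
// Number of times w -> ceil((w-1)/2) must be applied before w reaches 1.
// Note that ceil((w-1)/2) equals w/2 under C++ integer division:
// for even w, ceil((w-1)/2) = w/2; for odd w, (w-1)/2 = floor(w/2).
int halving_steps(int w) {
    int steps = 0;
    while (w > 1) {
        w /= 2;  // same as ceil((w - 1) / 2.0)
        ++steps;
    }
    return steps;
}
```

The chain for $w = 2000$ is 2000, 1000, 500, 250, 125, 62, 31, 15, 7, 3, 1, which is exactly 10 steps.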
 » 4 months ago, # |   -9 Damn, I was thinking binary search and dp. But couldn't figure out the dp part. Great contest though
 » 4 months ago, # |   +21 A small thing in the editorial of D: the time complexity should be $O(n \log n \log(n \cdot 10^9))$ instead of $O(n \log n \log 10^9)$.
•  » » 4 months ago, # ^ |   +24 tbh, same thing, considering $O(\log(nX)) = O(\log n + \log X) = O(\log X)$ when $X \gg n$.
•  » » » 4 months ago, # ^ |   0 so is this legal: $O(n \log n \log 10^9) = O(n \log n \log 10^2 + n \log n \log 10^7)$, and then ignore the $10^2$ term because $10^7 \gg 10^2$?
•  » » » » 4 months ago, # ^ | ← Rev. 2 →   0 Yes, and $O(n \log n) = O(\frac{n}{100000} \log n)$. When you calculate the complexity of a solution and write it in O-notation, you have to assume there is no big constant hidden in the real complexity. You can even say that $O(n \log n \log 10^9) = O(n \log n)$, because $10^9$ is a constant. Do not take the notation too seriously; its purpose is to approximate.
•  » » 4 days ago, # ^ |   0 In problem D, how do we know that the answer is binary searchable? Can anyone prove it? How is a monotonic search space generated?
 » 4 months ago, # |   0 I solved C in a similar way. Suppose $a > b$ and $d = a - b$. Find the positions where the $i$-th bit in $a$ is $1$ and $0$ in $b$; flipping such a position in both $a$ and $b$ reduces the difference $d$ by $2^{i+1}$ when $d \geq 2^{i+1}$. Store those reductions in an array $c$ in descending order and keep decreasing $d$ by $c_i$ as long as the chosen bit positions keep $x$ below $2^r$.
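A sketch of this greedy in C++ (my own reconstruction of the idea, with a hypothetical function name; here r is the exponent in the constraint $x < 2^r$):

```cpp
#include <algorithm>
using namespace std;

// Keep the highest bit where a and b differ, and for every lower position
// where a has a 1 and b has a 0, flip it via x if its index is below r.
// All chosen bits are distinct and below r, so their sum is < 2^r, i.e. a
// single valid x realizes all the flips at once.
long long min_xor_distance(long long a, long long b, long long r) {
    if (a < b) swap(a, b);
    long long d = a - b;
    bool seen_top = false;
    for (int i = 62; i >= 0; --i) {
        long long ai = (a >> i) & 1, bi = (b >> i) & 1;
        if (ai == bi) continue;
        if (!seen_top) { seen_top = true; continue; }  // keep the top differing bit
        if (ai == 1 && i < r) d -= 2LL << i;  // flipping bit i reduces d by 2^(i+1)
    }
    return d;
}
```

For example, with $a = 0$, $b = 3$, $r = 2$, choosing $x = 1$ gives $|1 - 2| = 1$, which the sketch reproduces.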
 » 4 months ago, # |   +8 I'm not really understanding how we can do E with the non-randomised approach in comfortably less than $40n$ queries. Here's my thought process: use $6n$ queries to find $n$ and $1$; then, to find $\frac{n}{2}$, we need to increment $x = 1 \to \frac{n}{2}$. When we query an element, it's either bigger, smaller, or equal to $\frac{n}{2}$. If it's bigger or smaller, then $x$ is incremented or decremented in the wrong direction, so we query either $1$ or $n$ to correct $x$. We then go to the next element and repeat. In the worst case, this takes $O(n)$ turns (suppose $\frac{n}{2}$ is the last element or something). However, since we need to query $1$ or $n$ each time to correct, it's actually of order $2n$. Then we need to query every other element to get its relative position, which takes $O(n)$ time. So in total, we have $\frac{n}{2} + 2n + n = \frac{7}{2}n$. This would be fine since $\frac{T(n)}{n} = \frac{\frac{7}{2} n + 2T(\tfrac{n}{2})}{n} \approx 39.2 \leqslant 40$ when $n = 2000$, but in the divide and conquer step, we need to handle the left side first and then the right. This would require decrementing $x$ (which is now $\frac{n}{2}$) to 1, and then bringing it back up to deal with the right side, so this is going to take $\frac{n}{2}$ probably, which leads to an overall complexity of $\frac{T(n)}{n} = \frac{4 \cdot n + 2T(\tfrac{n}{2})}{n} \geqslant 40$ when $n = 2000$. Also, the $6n$ queries used at the beginning can hurt too. What's wrong with my analysis?
•  » » 4 months ago, # ^ |   +17 I had a parallel binary search after finding 1 and n; my worst case is 39n. The limits definitely weren't lenient for my approach.
•  » » 4 months ago, # ^ |   0 I had a similar parallel binsearch solution. Keep all values other than 1 and n in an array, start by assigning 1 to all of them, and each time try to add $2^j$ (11 bits max). Each time, sort all values, minimise x (i.e. call $?\ pos(1)$ until you get an '=' response), then go through all values in increasing order. Increase x if necessary to get to $val + 2^j$, then check whether the actual value is lower. If not, add $2^j$. After that, return to the previous value by querying 1 or n and continue. Now, this didn't quite pass, because you get about 4N calls per cycle and we have 11 bits, but we can simply switch direction each time we finish a cycle (i.e. one pass in increasing order, then one in decreasing order, and so on) so that we don't waste N queries restarting our value each cycle. This solution uses at worst 3N (find 1) + 3N (find n) + 11·3N = 39N queries.
•  » » 4 months ago, # ^ |   +3 I think some of your steps aren't needed, which adds unnecessarily to the total number of operations. Initially, in $6n$ queries we find $1$ and $n$, and we use $\frac{n}{2}$ queries to set $x = \frac{n}{2}$ to start the 'quick sort' algorithm. We will add these separately to the final answer. From what I understand of your logic, you first use $2n$ operations to get the position of $\frac{n}{2}$, and then another $O(n)$ steps to find the relative positions of the other elements with respect to $\frac{n}{2}$. Instead of doing this in two steps, once you've got $x = \frac{n}{2}$, you can just check all the elements in a single run to see if they're bigger or smaller than $\frac{n}{2}$ in $2n$ steps ($2n$ because we need an extra query to bring $x$ back to $\frac{n}{2}$ after every query). In this process, we also find the location of $\frac{n}{2}$. Then, as in quick sort, you place all the smaller elements to the left of $\frac{n}{2}$ and the bigger elements to the right. After this, we need to set $x$ for the left and right recursive calls, to be precise, to $\frac{n}{4}$ and $\frac{3n}{4}$. This takes another $\frac{n}{2}$ steps. So in total you have $2n+\frac{n}{2} = \frac{5n}{2}$ steps. $\frac{T(n)}{n} = \frac{\frac{5n}{2} + 2T(\frac{n}{2})}{n}$, which gives $\frac{T(n)}{n} \approx 28$. In addition, we had $6n+\frac{n}{2}$ queries at the beginning. So overall, we can complete everything in fewer than $35n$ steps, which is well within the limit.
•  » » » 4 months ago, # ^ |   +3 Thanks, this made sense. I completely missed the fact that you could check relative order while on the route to finding $\frac{n}{2}$. Just one thing I'm a little confused about: when you say we need to set x for the left and right recursive calls respectively, we start at $x = \frac{n}{2}$. Then it needs to go down to $\frac{n}{4}$, and once that call is done, it could be at 1 (worst case, I'm assuming), so bringing it up to $\frac{3n}{4}$ could take $\frac{3n}{4}$ queries, and that 'bridging' step could take $\frac{n}{4} + \frac{3n}{4} = n$. Then $\frac{T(n)}{n} = \frac{3n + 2T(\tfrac{n}{2})}{n} \approx 33.9$, so overall, when factoring in the initial $6n$, it's just barely under 40, which I suppose does work (granted my assumption above holds).
•  » » » » 4 months ago, # ^ |   +3 Actually, I was also thinking about this exact same thing; good that you asked. Once we are done with the current step ($x = \frac{n}{2}$), we decrement $x$ from $\frac{n}{2}$ to $\frac{n}{4}$ for the left recursive call. Imagine a tree-like recursion structure: from the root, we go to the left call with $x = \frac{n}{4}$, process all of the recursion in its subtree, and once this is done, we go to the right child of the root (the one with $x = \frac{3n}{4}$). If you think about it, since we are doing recursion in root->left->right order (kind of like a preorder traversal), the last call processed in the left ($x = \frac{n}{4}$) subtree will be the node with $x = \frac{n}{2} - 1$ (the rightmost leaf of the left subtree). So before we start the right recursive call ($x = \frac{3n}{4}$), the last call before it will have already set $x$ to $\frac{n}{2} - 1$, and we only have to increment $x$ from $\frac{n}{2} - 1$ to $\frac{3n}{4}$ to go from the left recursive call to the right one. Overall, the 'bridging' step only involves decrementing $x$ from $\frac{n}{2}$ to $\frac{n}{4}$, and then incrementing $x$ from $\frac{n}{2} - 1$ to $\frac{3n}{4}$, which comes out to $\frac{n}{2} + 1$ operations. The rest of the steps, i.e. from $\frac{n}{4}$ to $\frac{n}{2} - 1$, are taken care of in the subsequent recursive calls inside the left subtree. Let me know if this makes sense; I'm also not completely sure it's correct, but it seems right to me.
•  » » » » » 4 months ago, # ^ |   +3 This makes perfect sense, thanks!
 » 4 months ago, # |   +29 D is solvable in $O(n \log(\max a))$ using a deque.
•  » » 4 months ago, # ^ |   +13 More specifically, a monotonic queue. It is a popular interview problem to maintain maximum/minimum in a sliding window in $O(n)$ time.
•  » » » 4 months ago, # ^ |   +1 Yes. It is interesting that a monotonic queue (deque) maintaining the minimum can be extended to two pointers, not just a fixed-width sliding window.
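For reference, the standard monotonic-deque sliding-window minimum mentioned above looks like this (an illustrative helper of my own, not code from either submission):

```cpp
#include <deque>
#include <vector>
using namespace std;

// Sliding-window minimum in O(n): the deque stores indices whose values are
// strictly increasing from front to back, so the front is always the minimum
// of the current window of width w.
vector<long long> window_min(const vector<long long>& a, int w) {
    deque<int> dq;
    vector<long long> res;
    for (int i = 0; i < (int)a.size(); ++i) {
        // Values >= a[i] can never be the minimum again once a[i] is present.
        while (!dq.empty() && a[dq.back()] >= a[i]) dq.pop_back();
        dq.push_back(i);
        if (dq.front() <= i - w) dq.pop_front();  // index left the window
        if (i >= w - 1) res.push_back(a[dq.front()]);
    }
    return res;
}
```

Each index is pushed and popped at most once, which is what makes the whole pass $O(n)$.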
 » 4 months ago, # | ← Rev. 3 →   +39 Problem B can be solved in $O(n)$. We know exactly where to place the elements of sorted $b$ knowing the order of elements in $a$:

vector<int> a(n);
for (int j = 0; j < n; ++j) cin >> a[j];
vector<int> b(n);
for (int j = 0; j < n; ++j) cin >> b[a[j]-1];
for (int j = 0; j < n; ++j) cout << j+1 << " ";
cout << endl;
for (int j = 0; j < n; ++j) cout << b[j] << " ";
 » 4 months ago, # |   +3 As I understand: during the contest someone hacked this 244128703 randomized E submission, which uses rand() without a seed. Then my randomized E submission 244133027, which happened to be identical to the mentioned one, failed on sys test 26, auto-generated after the hack. I am not sure what to think about it :) Always use something like srand(timestamp), I guess.
•  » » 4 months ago, # ^ |   -27 How about just using a better RNG?
•  » » » 4 months ago, # ^ |   +23 "Do better" is great advice, thanks. Is there any chance you can recommend something specific? mt19937 rnd(537) from the editorial is also determined by its seed, as far as I can tell.
•  » » » » 4 months ago, # ^ |   -21 do some work yourself: google "good random number generators c++ codeforces" => https://mirror.codeforces.com/blog/entry/61587 is the first result; read it.
•  » » » » » 4 months ago, # ^ | ← Rev. 2 →   0 So the problem with rand()'s small maximum does not apply to E because n <= 2000, and it has nothing to do with my "I got hacked LOL" problem. AND mt19937 shares the same vulnerability as rand() if you use a constant seed. You just decided to write "do better" because you read about "better"-than-rand() randomness.
•  » » » 4 months ago, # ^ |   +6 and... your E submission uses rng(timestamp), which is somehow better than srand(timestamp)?
 » 4 months ago, # | ← Rev. 2 →   0 Problem D has very weak test cases. My $O(n^2 \log n)$ solution passed in 93ms: https://mirror.codeforces.com/contest/1918/submission/244154127 But if we take n = 1e5 and a = [1, 2, 3, ..., 1e5 - 20] + [1e9] * 20, then it gives TLE: https://mirror.codeforces.com/contest/1918/submission/244190578 PS: I hope the contest gets rejudged, and I hope I can have a positive delta [currently -4 :(].
•  » » 5 weeks ago, # ^ |   0 Can we do the DP part of problem D using top-down DP? I can't really figure it out.
 » 4 months ago, # |   0 For problem B, my code passed the pretests during the contest, but on one of the main tests it says: wrong answer Integer parameter [name=b[i]] equals to 0, violates the range [1, 20] (test case 16788). I don't understand: since I am just taking the input and sorting it, there's no way 0 should appear in the output, right? https://mirror.codeforces.com/contest/1918/submission/244094541 What am I doing wrong here?
•  » » 4 months ago, # ^ |   0 Your comp function should use < operator instead of <= because Compare requires strict weak ordering. Check https://en.cppreference.com/w/cpp/named_req/Compare for more details.
•  » » » 4 months ago, # ^ | ← Rev. 2 →   0 Wow, I never knew that; thanks for letting me know. You are right, my submission was accepted after just changing the comp function. https://mirror.codeforces.com/contest/1918/submission/244208943
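To illustrate the rule from the thread above (names here are hypothetical, not the hacked submission's code):

```cpp
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

// A comparator for std::sort must be a strict weak ordering: cmp(x, x) must
// return false, so compare with '<'. A '<=' comparator is undefined behavior
// and can make the sort read outside the array, producing garbage like the
// 0 seen in the failed submission.
bool cmp(const pair<int,int>& x, const pair<int,int>& y) {
    return x.first < y.first;  // strict: equal keys compare as "not less"
}

vector<pair<int,int>> sort_pairs(vector<pair<int,int>> v) {
    sort(v.begin(), v.end(), cmp);
    return v;
}
```

With `<=` in place of `<`, `cmp(x, x)` would return true, violating irreflexivity; the crash or corruption then depends on the library implementation.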
 » 4 months ago, # |   +3 Amazing problemset. Overall a good contest! :)
 » 4 months ago, # |   0 Brother, how do I get good at implementation-based problems? Any advice, please.
 » 4 months ago, # |   +3 Did anyone find any similarities between today's problem C and a previous CodeChef round's problem Xorry 1? In Xorry 1, for a given x you had to find a and b such that their XOR is x and their difference is minimum. Since a^b=x implies a=b^x and b=a^x, that question also amounted to finding a and b such that |a^x - b^x| is minimum.
•  » » 4 months ago, # ^ |   0 Yes, they were almost the same questions.
 » 4 months ago, # |   +11 ooaa, will you add a feedback vote for this contest? Here's a recent example of how to use it to collect feedback about problems (with options like "Good problem", "Okay problem", "Bad problem", "Did not attempt").
 » 4 months ago, # | ← Rev. 2 →   +3 In B, sorting the arrays by $a_i+b_i$ also gets AC: 244212811. The logic is that placing the pair with the $i$-th smallest sum at index $i$ creates the fewest inversions.
•  » » 4 months ago, # ^ |   +13 Funnily enough, any sorting that forbids $a_i > a_j$ and $b_i > b_j$ for $i < j$ will work (you can prove that fact similarly to the editorial proof). So, if you want, you can sort by $a_i$, by $b_i$, or by $10a_i + 3b_i$.
•  » » » 4 months ago, # ^ |   0 Can you prove the latter case? Why does it work for $10a_i + 3b_i$?
•  » » » » 4 months ago, # ^ |   +1 As long as you are not creating pairs with $2$ inversions (in editorial terms), what you lose in $a$ you save in $b$. So $10a+3b$ behaves almost like $1a+0b$ (the editorial solution).
 » 4 months ago, # |   0 D problem (Blocking Elements) Video Solution: https://youtu.be/ZfqWLxBG-Ow?si=WxQOHwg_hdelqH4C
 » 4 months ago, # |   +3 About G: can anyone explain how the construction for n=7 came to mind? Or what was the general direction of the attempt? Thx :)
•  » » 4 months ago, # ^ |   +21 I found a smaller construction by implementing a brute-force in Python (func here is the neighbour-sum transform from the statement, i.e. $c_i = a_{i-1} + a_{i+1}$, with a single neighbour at the ends):

import collections
import itertools

def func(a):
    n = len(a)
    return [a[1]] + [a[i-1] + a[i+1] for i in range(1, n-1)] + [a[n-2]]

for n in range(2, 10):
    for arr in itertools.product([-3, -2, -1, 1, 2, 3], repeat=n):
        if collections.Counter(arr) == collections.Counter(func(arr)):
            print(f"{n = }, {arr}")
            break

Output:

n = 2, (-3, -3)
n = 4, (-3, -2, 2, -1)
n = 6, (-3, -2, 2, -1, 1, 3)
n = 7, (-3, -3, 2, 1, -1, 1, -2)
n = 8, (-3, -3, 2, 1, -1, -2, 3, -1)
n = 9, (-3, -3, 1, 2, 1, -1, -2, 2, -1)
 » 4 months ago, # |   0 C using Digit DP 244113917
•  » » 3 weeks ago, # ^ |   0 Can you explain the idea of using DP?
 » 4 months ago, # |   0 int f_ans=0; int sz=s.size(); for(int i=0;i
•  » » 4 months ago, # ^ |   0 If 2^x does not fit in integer type but fits in long long, then you need to use 1LL << x instead of 1 << x.
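A minimal illustration of the fix (the function name is mine, for demonstration only):

```cpp
// 1 << x shifts a 32-bit int, which is undefined behavior for x >= 31;
// 1LL << x shifts a 64-bit value and is fine for x up to 62.
long long pow2(int x) {
    return 1LL << x;
}
```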
 » 4 months ago, # | ← Rev. 3 →   +24 I had a different solution for G and I think it's much easier. We claim that if we have a good array of size n ending in x, y (with x not equal to y), we can obtain a good array of size n+2 like this: a1, a2, a3, ..., x, y, -y, x-y. If we denote the sum of a_i's neighbours by c_i, the c array compared to a looks like this: a is a1, a2, a3, ..., x, y, -y, x-y, while c is c1, c2, c3, ..., c_{n-1}, x-y, x, -y. We know that c1 to c_{n-1} is a permutation of a1 to a_{n-2} together with a_n (aka y), and we have -y, x-y, x in both arrays, so the new array of size n+2 is good. But we still have to be careful that |a_i| <= 10^9. If the last two numbers are 2, 1, the ending pairs cycle: (2, 1) -> (-1, 1) -> (-1, -2) -> (2, 1), so the pattern repeats itself and |a_i| won't exceed 10^9. For sizes 4 and 7 we can use these base cases: -1, -2, 2, 1 and -1, 4, -1, -3, -2, 2, 1. So that was my solution!
 » 4 months ago, # |   +6 very good contest
 » 4 months ago, # | ← Rev. 2 →   0 The tutorial of problem F states that it is possible to choose k leaves, excluding the last visited one, and teleport from them; but for the first example of the problem you cannot achieve cost 9 without teleporting from node 6, which would be the last traversed leaf if we sort the children of each node by depth. Additionally, in the code k is increased by 1 to indicate that you are allowed one extra jump to the root, since you do not return from the last leaf. But, even though it may be wrong, my intuition would be to consider just the gains between each consecutive pair of leaves (choosing k of them) and not take into account the distance from the last leaf to the root, as you won't take this jump. Can someone clarify, please?
•  » » 4 months ago, # ^ |   0 "for the first example of the problem you can not achieve the cost 9 without teleporting from the node 6" Wrong. You can reach cost $9$ while ending at $6$: $1 \rightarrow 3 \rightarrow 7 \rightarrow 3 \rightarrow 8 \overset{\text{teleport}}{\rightarrow} 1 \rightarrow 2 \rightarrow 5 \rightarrow 2 \rightarrow 4 \rightarrow 6$ "but even though it is wrong, my intuition would be to consider just the gains between each consecutive pair of leaves (choosing k of them) and not take into consideration the distance from the last leave to the root, as you won't take this jump." Am I misunderstanding, or are you saying that your intuition is wrong? Anyway, your intuition is correct, and so is the editorial. Considering or not considering the path ending in a leaf as an extra teleport doesn't make any difference, as that teleport would always be the most time-saving one, so you'd always choose it in both cases.
•  » » » 2 months ago, # ^ |   0 Regarding problem F, I was wondering how people came up with the idea during the round. Did you guys prove your approach or just try a few intuitive things? The idea doesn't seem very obvious. Just curious to know.
 » 4 months ago, # |   0 Problem B can be solved using a vector of pairs too. It's the same idea as the tutorial, just with a vector of pairs instead of an array of pairs. If anyone needs it, here is the Submission.
 » 4 months ago, # | ← Rev. 2 →   0 Deleted
 » 4 months ago, # |   0 "The second case (when we set 1 in x at the position of the first differing bit) is analyzed similarly, but in fact it is not needed because the answer will not be smaller, and x will become larger." Can someone explain and prove this?
 » 4 months ago, # |   0 Problem C was very good. I thought about it for one and a half hours.
•  » » 4 months ago, # ^ | ← Rev. 4 →   0 Why, in problem C, do I have to make a bit in a equal to 1 only if it's already 1 in b, and, if the position is a differing one, set a 1 in a⊕x at this position if possible? If a bit is 0 in both b and a, then making it 1 in both numbers should make no difference, but I tried that and it gives me wrong answer on test 2. So why would making a digit equal to 1 in both numbers make any difference? Edit: I just figured out why: doing this makes x larger with no benefit at all, and I want to make x large only when it is beneficial.
 » 4 months ago, # |   0 Can anyone help me understand what I am doing wrong in C? Here is my Solution. My approach: make a > b. As long as the i-th bits of a and b are the same, do nothing. The first time they differ, set the i-th bit of x such that the i-th bit of a becomes 1 and the i-th bit of b becomes 0; from then on, set the i-th bit of x so that the exact opposite happens. In the same iteration, check the i-th bit of r: if x and r agree at that bit, do nothing; the first time they disagree, if the i-th bit of r is not set and the i-th bit of x is set, set the i-th bit of x to 0. This ensures that x is definitely smaller than allowed; for the rest of the iterations, don't compare x against r. I want to first set the i-th bit of x according to a and b, then check later whether I can by comparing it to r. Is the approach itself wrong, or am I missing something in the implementation? I've broken my head over this for long enough; help a brother out, folks. Thanks in advance.
•  » » 4 months ago, # ^ |   0 Take a look at Ticket 17321 from CF Stress for a counter example.
•  » » » 4 months ago, # ^ |   0 I understand the test cases are failing. I just can't figure out why.
•  » » » » 4 months ago, # ^ |   0 Did you try to dry-run the test case that I shared? It's literally the smallest possible counterexample, with just 2 integers (1 and 2), and there are only 2 possible values of $x$ (0 and 1). Just read through your explanation and substitute $a$, $b$ and $x$; you'll see at what point an incorrect bit gets added to the answer.
•  » » » » » 4 months ago, # ^ |   0 I did, and I only figured out the reason last night; there was no way I could debug it on large TCs. Later I realized I was using one extra flag unnecessarily. Fixed it now. Thanks for your help.
 » 4 months ago, # |   0 I think there's a mistake in problem D's editorial. It states "less than l and not greater than r", but I think he meant "at least l and not greater than r".
 » 4 months ago, # |   0 I wonder: if in B the arrays A and B were not permutations but general integer arrays, with all elements distinct within each array, would sorting still be optimal? Can someone help?
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 As we never care about the true values of the integers (i.e. we never do arithmetic on them) and instead only care about their relative order (smaller vs larger), the exact values don't matter and can be anything, as long as they're distinct (because the original values were distinct). This is the same reason coordinate compression works, just in reverse.
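For reference, a standard coordinate-compression snippet built on that observation (an illustrative helper of my own):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Coordinate compression: replace each value by its 1-based rank among the
// distinct values, preserving relative order.
vector<int> compress(const vector<int>& a) {
    vector<int> s(a);
    sort(s.begin(), s.end());
    s.erase(unique(s.begin(), s.end()), s.end());
    vector<int> r;
    for (int x : a)
        r.push_back(int(lower_bound(s.begin(), s.end(), x) - s.begin()) + 1);
    return r;
}
```

Running the inversion-minimizing argument on the compressed arrays gives the same answer as on the originals, since only the order matters.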
 » 4 months ago, # |   0 I don't understand, why x += (1ll<
•  » » 4 months ago, # ^ |   0
•  » » 4 months ago, # ^ |   0 int is 32-bit and long long is 64-bit. If you left-shift an int 1 by 31 or more, you exceed the range of int (which is undefined behavior). That's why you use 1LL (long long) here.
 » 4 months ago, # | ← Rev. 2 →   0 B. Minimize Inversions: "The total number of inversions in a' and b' should be the minimum possible among all pairs of permutations that can be obtained using operations from the statement." Doesn't this mean we should reduce the inversions in a and b individually as much as possible? Doesn't it convey a different meaning from "the total number of inversions in a and b combined should be as small as possible"? Consider the input a=[2, 3, 1], b=[3, 1, 2]. The answer a' = [1, 2, 3], b' = [2, 3, 1] is considered a valid solution, but a's inversion count is 0 and b's is 2. Now consider the answer a = [1, 3, 2] and b = [2, 1, 3]: this is also a valid solution, and the inversion counts of a and b are (1, 1) as opposed to (0, 2) in the previous answer. Isn't this the correct answer, if we are supposed to keep the inversions in a and b individually to a minimum rather than minimize their combined total?
•  » » 4 months ago, # ^ |   0 No; the statement should be read in full: it is the total number of inversions in a' and b' that should be the minimum.
 » 4 months ago, # |   0 I see greedy in problem D's tags. Just wondering if anybody is aware of a greedy solution to this problem.
 » 4 months ago, # |   0 In problem C, if anyone has done it using DP, then please share your idea / code.
•  » » 4 months ago, # ^ |   0
 » 4 months ago, # |   +4 If you want to test your understanding of Problem D: Blocking Elements, you can try out ABC334F: Christmas Present 2. I also have a video editorial and practice contest for the latter.
•  » » 5 weeks ago, # ^ |   0 Can we do the DP part of problem D using top-down DP? I can't really figure it out. Or could you just make a video editorial for this problem D?
•  » » » 5 weeks ago, # ^ | ← Rev. 2 →   0 Yes, why not? You can close your eyes and convert your iterative DP to recursive if you wish. For example, here's my iterative DP, and here's the same code converted to recursive DP. If you're talking about the set optimization, then yes, you would need iterative DP, the same way you need iterative DP in prefix-sum / sliding-window optimizations.
•  » » » » 5 weeks ago, # ^ |   0 Yes, the optimization was actually my main issue. Thanks for the reply.
 » 4 months ago, # |   0 Did anyone solve C greedily starting from the least significant bit (right to left)?
 » 4 months ago, # |   0 About problem B: the tutorial proves that we cannot get a better solution by changing ONLY ONE pair (i, j). But what if we change MORE than one pair? I think it's not obvious, because the proof needs array a to be sorted. I use the same method (just proving the condition for moving one step) for greedy problems, but I keep doubting this. XD
 » 4 months ago, # |   0 Why is the order of visting the leaves in the editorial for F optimal?
 » 4 months ago, # |   0 In the tutorial for F,To do this, it is enough to move from this leaf to the root until the first node, for which the previous node is not the rightmost child.I couldn't understand this sentence. Can anyone explain this please?
 » 4 months ago, # | ← Rev. 3 →   0 Deleted
 » 4 months ago, # |   0 D can be solved in $O(n \log 10^9)$ instead of $O(n \log n \log 10^9)$. We can use a monotonic queue to get the minimum instead of a set. My submission: https://mirror.codeforces.com/contest/1918/submission/245363349
 » 4 months ago, # | ← Rev. 3 →   0 Good questions
 » 3 months ago, # |   0 Can someone please explain what I am doing wrong in C? Here is my submission: https://mirror.codeforces.com/contest/1918/submission/245687089
•  » » 3 months ago, # ^ |   0 Take a look at Ticket 17333 from CF Stress for a counter example.
•  » » » 3 months ago, # ^ |   0 Thanks Ser!
 » 3 months ago, # | ← Rev. 2 →   0 Alternative solution to F using Aliens trick and CHT:

Let $b_1, \dots, b_m$ be the leaves of the tree in DFS order. We want to select a subsequence of size at most $k + 1$ to minimize $2(n - 1) - \sum\limits_{i=1}^{k + 1} d(a_i) + 2\sum\limits_{i=1}^{k} d(\mathrm{lca}(a_i, a_{i + 1}))$, where $a_1, \dots, a_{k + 1}$ is the chosen subsequence. Why?

First of all, let's consider ending the path as another jump to the root. Let's count the contribution of each edge to the answer. Let $x$ be the number of selected leaves in the subtree of $u$. Notice that if $x = 0$, the edge between $u$ and $p_u$ will be traversed 2 times; otherwise it will be traversed $x$ times. This simplifies into the above formula.

This can be calculated with dp. After guessing that the answer is convex for increasing $k$, we can apply Aliens trick. This reduces the complexity from $\mathcal{O}(n^2k)$ to $\mathcal{O}(n^2 \log n)$.

Let's now notice that $d(\mathrm{lca}(b_i, b_k)) \leq d(\mathrm{lca}(b_j, b_k))$ for $i \leq j \leq k$. This property allows us to apply CHT, and to find intersections we can use binary search. This makes the final complexity $\mathcal{O}(n \log^2 n)$, which is fast enough.

Submission
•  » » 3 months ago, # ^ |   0 I find this solution quite educative, haha. I mean, combining 2 dp optimizations looks unbelievable.
 » 3 months ago, # |   0 I think the following mental picture makes the proof for F a lot better:

Firstly, consider the sequence of all vertices visited in the optimal route. Observe that $w$ must occur between $u_i$ and $u_{i + 1}$, since $u_i$ must not exist in $u$'s subtree (or else our proposed solution wouldn't pick $v_{i + 1}$). Similarly, $w$ must occur again between $u_i$ and $v_{i + 1}$. The path then looks something like $u_i, ..., w, ..., u_{i + 1}, ..., w, ..., v_{i + 1}, ...$.

Case 1: If $w$ occurs again sometime after $v_{i + 1}$, swap the blue and the red: $w, \color{blue}{..., u_{i + 1}, ...,} w, \color{red}{..., v_{i + 1}, ...,} w, ...$

Otherwise, let $r$ be the root of the tree. Then the path looks something like $w, ..., u_{i + 1}, ..., w, ..., v_{i + 1}, ..., r, ...$. Here, the particular $r$ we care about is either the $r$ teleported to after the last leaf in $v$'s subtree is visited, or an imaginary "extra" jump if that last leaf was the last leaf visited in the entire tree (just like in the editorial).

Case 2.1: If $r$ occurs between $u_{i + 1}$ and the following $w$: $w, \color{blue}{..., u_{i + 1}, ...,} r, ..., w, \color{red}{..., v_{i + 1}, ...,} r, ...$

Case 2.2: It doesn't. Then we can swap the following parts: $w, \color{blue}{..., u_{i + 1}, ...,} w, \color{red}{..., v_{i + 1}, ...,} r, ...$, and do the thing described in case 2 of the editorial (crawl back to $w$ from the last leaf in the red, and jump to $r$ from the deepest leaf in the blue). This also relies on it being true that the optimal route won't ever enter $u$'s subtree more than once, but it's much more obvious now. The proof for that is also restricted to this case, and didn't motivate the casework to begin with, unlike the lemma in the editorial — I don't think it's obvious at all why someone would start by proving that lemma.
 » 2 months ago, # |   +10 Here's a completely different solution for problem F. This is just a general idea, not an attempt to prove its correctness or explain in detail.

Let's slightly modify the problem. We'll require returning to the root using teleportation after visiting all nodes, but we'll also increase $k$ by 1. Note that this problem is equivalent to the original one. Let's now solve this problem instead.

If we choose a set of leaves $S$ from which we'll teleport to the root, and visit all of them in optimal order, it can be shown that the total cost of the path is equal to $2n-2-(2 |E_S| - \sum_{v \in S} depth(v))$, where $E_S$ is the set of all edges leading from the root to the nodes in $S$.

Let $f(x)$ be the optimal cost we can get if $|S| \leq x$. Notice that the function $f(x)$ is increasing and concave, so we can apply the slope trick to find $f(k)$ without explicitly setting the limit for the size of the set $S$. We can do this by adding a penalty function to $f$, so instead, we're trying to optimize $f(x)-p|S|$ for various values of $p$. We binary search for the smallest value of $p$ which results in us taking no more than $k$ elements. This arrangement is then the optimal solution. Code: 253393999
 » 3 months ago, # |   0 I could not get the editorial for problem B. How come sorting the first array makes the total inversions minimum? I have tried some examples manually, but I couldn't understand it. Any mathematical proof would be great. Can anyone help with that?
 » 7 weeks ago, # |   0 Please help for problem C, I keep getting WA on test 6. https://mirror.codeforces.com/contest/1918/submission/255018640
 » 4 weeks ago, # |   0 In the editorial of problem C:"Let's consider the bitwise representation of numbers a, b, x. Let's look at any 2 bits at the same position in a and b; if they are the same, then regardless of what is in x at this position, the number |(a⊕x)−(b⊕x)| will have a 0 at this position."I think that this isn't always true. Let us suppose a = 13 and b = 7; their binary representations are 1101 and 0111, respectively. The bits at position 3 from the end are equal for both, but let us take x = 0. Then $|(a \oplus x) - (b \oplus x)| = 6 = 0110_2$. The bit at position 3 from the end is still one, not zero. Please check, ace5 ooaa
 » 4 days ago, # |   0 In Problem D, how do we know that the answer is binary searchable? Can anyone prove it? How is a monotonic search space being generated?