Thanks everybody for participating in the round!
A. Shortest Increasing Path
Author: BernatP Preparation: BernatP
Hint1
Hint2
Solution
Code
B. Multiple Construction
Author: danx Preparation: danx
Hint1
Hint2
Hint3
Hint4
Solution
Code
C. Rabbits
Author: misteg168 Preparation: misteg168
Hint1
Hint2
Hint3
Solution
Code
D. Game on Array
Author: BernatP Preparation: BernatP
Hint1
Hint2
Hint3
Solution
Code
E. Maximum OR Popcount
Author: Pablo-No Preparation: Pablo-No
Hint1
Hint2
Hint3
Hint4
Hint5
Hint6
Solution
Code
Bonus
F. Exchange Queries
Author: BernatP Preparation: misteg168
Hint1
Hint2
Hint3
Solution
Code
G. Modular Tetration
Author: BernatP Preparation: BernatP
About cheaters
Hint1
Hint2
Hint3
Hint4
Hint5
Hint6
Hint7
Hint8
Solution
Code
H. Maxflow GCD Coloring
Author: FelixMP Preparation: FelixMP
Hint1
Hint2
Hint3
Hint4
Hint5
Solution
Code
I1. Longest Increasing Path (Easy Version)
Author: BernatP Preparation: BernatP
Hint1
Hint2
Hint3
Hint4
Solution
Code
I2. Longest Increasing Path (Hard Version)
Author: BernatP Preparation: BernatP
Hint1
Hint2
Solution
Code








Auto comment: topic has been updated by misteg168 (previous revision, new revision, compare).
DP Code for C pls...
I had a recursion + memoization solution for C. It's contest code, so it's pretty ugly, but if you want to have a look: 339543843
scroll down to the third function
Iterative Solution : 339620267
Iterative dp: https://mirror.codeforces.com/contest/2147/submission/339545811
State info:
- 0: rabbit can look either way
- 1: flower
- 2: rabbit looking left
- 3: rabbit looking right
Initially I have added two rabbits looking to the left to simplify the implementation.
Hey kid, want to try some enum? 339535771

339625988
Problem C
IF ELSE SOLUTION o_O
You know, I always knew about enums and that they could be used for array indexing, but I had never used them, so they just slipped my mind. Now that you have mentioned it, I see the lost potential; they truly would make the code so much more readable.
Thank you very much.
Redpanda_x Can you explain why this update is needed?
This is the case when the previous index has a flower and the current index has an empty cell. Now, what would force the rabbit in the current cell to look toward the right?
That would happen if the [-2] index is either a flower or a rabbit that cannot look right, i.e., a rabbit looking left.
I see.
I was confused about what happens if i+1 has a rabbit; the rabbit at i+2 can still look to the left. I found that I had missed one check before that: when you skip it, the rabbit at i+2 can look both ways.
Thanks a lot!
Thanks for this. Just went through it. Although yeah, after a good night's sleep I finally managed to beat C as well: https://mirror.codeforces.com/contest/2147/submission/339648107
Our implementation ideas are slightly different, but I am glad to get closure on my nightmare since last night.
Iterative DP: 339684336
For the solution for $$$D$$$, how does this show that it's never optimal to merge numbers?
Link not working for Code D
I'm not sure this is fully rigorous, but consider these small cases:
2 2 2 4 4 4 4: it is obvious that the answer will be equal for both.
3 3 3 4 4 5 5: here, after one move from Alice and one from Bob, the array will look like 2 2 2 4 4 4 4,
same as above, so the contribution of even elements is always distributed equally.
Merging can happen if the current player chooses an even x and x-1 exists. This is always bad for the current player, as the count of x-1 goes up and it then becomes advantageous for the other player to play using x-1.
The other merging, odd into even, is good for the current player, as they get the best extra share possible (provided they choose the odd term with maximum frequency, because you get to make this choice only once), and the merged terms are evenly distributed between the two afterwards.
I'm not sure if this cleared much up; if there is anything I missed, do tell!
339648400
BernatP pls respond
~123gjweq2
So basically the game is equivalent to solving it assuming merges won't happen, and if merges don't happen, both Alice and Bob get their best possible bounds.
Gotcha, thanks. So if you merge into an even number, it could look helpful, but the game plays out the same as if the numbers weren't merged, since the merged pile is evenly distributed. And if you merge into an odd number, that just gives Bob (or the next player) the advantage, because you hand them a bigger frequency to take advantage of (since the odd will only help them). If you take the odd number, you leave the other player with two evens separated by $$$2$$$, in which case, no matter what they do, you can split them evenly; so you gain the frequency of the odd number. But if you took the even, you actually lose out on some numbers (your score is lower than Bob's): when the game is evenly distributed, you have only gained the frequency of the even number, while Bob has gained the frequency of both numbers because of the merge.
I can't view the codes that are linked elsewhere
It seems like we sent the submissions in admin mode, we will fix this.
please add rate problem section
Just write a comment with your opinions, we'd love to get some constructive feedback
I would have loved it if problem E had either t <= 100 or a time limit of 3-4 seconds.
If only problem B hadn't made me spend 1.5 hours on it and still go unsolved... but perhaps this is my skill issue.
Auto comment: topic has been updated by Pablo-No (previous revision, new revision, compare).
Can anyone help me find the issue with my logic for Problem C? I used a greedy: first fix the direction of the rabbit in those pots where there is no other option, like 11010 or 01011, then check whether I can find a pair for them, and then for the rest of the rabbits which have options, choose left or right greedily. I left alone those on the border or next to consecutive 0s, as they can always find a direction so as not to jump: https://mirror.codeforces.com/contest/2147/submission/339602741. Please suggest a simple test case where it might fail; I am not able to find any. It also failed on the 1705th test case, which I can't view.
A testcase:
10
0101010101
Thanks, very helpful.
D should have been placed before C, imo... I had to think for a decent bit to come up with the critical observations in C, whereas in D it was very easy to guess that even numbers are split, well, evenly.
Thanks for the awesome contest! Had a lot of fun solving and implementing problem C and (unexpectedly) D.
For problem C, there's another DP with just one parameter. Whenever we put a rabbit, we make sure it can be placed, either looking at an adjacent rabbit or paired with the next rabbit in a $$$\texttt{010}$$$ pattern.
Expressed as forward DP:
Backward DP is similar.
Was it intended in problem F to cut the $$$O((n + q) \sqrt{n})$$$ solution?
We have an $$$O((n+q)\sqrt{n})$$$ solution that I don't remember how it works, and an $$$O((n+q)\log^2(n))$$$ one using the dynamic connectivity trick.
How does the dynamic connectivity trick solution work?
Add timespans to every range and build the segment tree over time with these ranges. Maintain a data structure while doing a DFS in this tree; you can either use a persistent segment tree or a segment tree with rollbacks.
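A generic sketch of that trick (illustrative only, not the authors' solution): give every edge its alive-time interval, insert it into $$$O(\log T)$$$ nodes of a segment tree built over time, then DFS the tree with a DSU that supports rollback.

```python
# Hypothetical illustration of the "dynamic connectivity trick" described above:
# every edge is alive during a time interval [t_add, t_rem); we attach it to
# O(log T) nodes of a segment tree over time, then DFS with a rollback DSU.

class RollbackDSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.history = []        # stack of merged roots, for undoing unions
        self.components = n

    def find(self, x):           # no path compression, so unions can be undone
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            self.history.append(None)
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.components -= 1
        self.history.append(rb)

    def rollback(self):          # undo the most recent union
        rb = self.history.pop()
        if rb is not None:
            ra = self.parent[rb]
            self.parent[rb] = rb
            self.size[ra] -= self.size[rb]
            self.components += 1

def offline_connectivity(n, timeline, edges):
    """edges: (u, v, t_add, t_rem); returns component count at each time."""
    tree = [[] for _ in range(4 * timeline)]

    def attach(node, lo, hi, l, r, e):
        if r <= lo or hi <= l:
            return
        if l <= lo and hi <= r:
            tree[node].append(e)
            return
        mid = (lo + hi) // 2
        attach(2 * node, lo, mid, l, r, e)
        attach(2 * node + 1, mid, hi, l, r, e)

    for u, v, t_add, t_rem in edges:
        if t_add < t_rem:        # ignore empty lifespans
            attach(1, 0, timeline, t_add, t_rem, (u, v))

    dsu = RollbackDSU(n)
    answer = [0] * timeline

    def dfs(node, lo, hi):
        for u, v in tree[node]:
            dsu.union(u, v)
        if hi - lo == 1:
            answer[lo] = dsu.components
        else:
            mid = (lo + hi) // 2
            dfs(2 * node, lo, mid)
            dfs(2 * node + 1, mid, hi)
        for _ in tree[node]:     # rollback this node's unions on the way out
            dsu.rollback()

    dfs(1, 0, timeline)
    return answer

print(offline_connectivity(4, 3, [(0, 1, 0, 3), (1, 2, 1, 2), (2, 3, 0, 1)]))
# → [2, 2, 3]
```

The rollback (rather than path-compressed) DSU is what makes the DFS work: unions applied on the way down a segment-tree node are undone on the way back up.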
My idea was to use two segment trees, one to keep the SCC and the other to keep the contribution of each component.
To maintain the components, we have an array $$$a$$$, where $$$a_i = a_j$$$ if $$$i$$$ and $$$j$$$ are in the same SCC. For example, if $$$a = [1, 1, 3, 3, 3]$$$, the first two numbers are in the same SCC and the last three are also in the same SCC.
To merge two or more components when adding a new range, we look for the smallest value in the range we are adding and set every value in the SCCs that intersect with our range to the new smallest value we have found. For example, if $$$a = [1, 1, 3, 3, 3]$$$ and we add the range [2, 4], the minimum here is 1 at position 2, and the range intersects with components 1 and 3.
Updating the contribution when merging two components is straightforward.
For C, we can also solve by generating a bipartite graph. You can check this comment for more details: Comment
Anyone who is looking for D:
see my submission 339613047, you will get an idea of the even/odd simulation.
For those who want to solve it on their own, take this idea:
e.g. n=4
1 1 2 3
You can just calculate the frequencies:
1 → 2, 2 → 1, 3 → 1
Sort the (value, frequency) pairs in decreasing order of frequency. The main idea is that if 1 has count 2 and 2 has count 1, you should always choose the number with the maximum count: if you choose 1, then all the ones get decreased and you score points equal to the count of 1, i.e. 2, while for the number 2 with count 1 you get only 1 point for the decrease operation. I think that is pretty much the hint required to solve or implement this one!
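The even/odd bookkeeping described in this thread can be sketched as follows (a hedged reading of the comments, not the setters' code): the total score over the whole game is sum(a), since every pick scores one point per element it decrements; points from even values split evenly, while the frequency of each distinct odd value is an extra that the players grab alternately, largest frequency first.

```python
# A sketch of the even/odd greedy for D, under the assumptions stated above
# (merges assumed never to happen, per the discussion in this thread).

from collections import Counter

def game_on_array(a):
    total = sum(a)
    # frequencies of distinct odd values, largest first
    odd_freqs = sorted((f for v, f in Counter(a).items() if v % 2 == 1),
                       reverse=True)
    diff = 0  # Alice's score minus Bob's score
    for i, f in enumerate(odd_freqs):
        diff += f if i % 2 == 0 else -f
    alice = (total + diff) // 2
    return alice, total - alice

print(game_on_array([1, 1, 2, 3]))  # → (4, 3)
print(game_on_array([2, 2, 1]))     # → (3, 2)
```

Both outputs match the hand simulations given in the comments above and below (4 3 for "1 1 2 3", 3 2 for "2 2 1").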
For C, only the runs of the form 0101...1010 matter, i.e. those with more than one 1 on both ends. For these, if there is an odd number of 0s, we don't have an answer; otherwise, we always have one.
Proof: it's trivial that runs with more than one consecutive 0 always have an answer. For single 0s with at most one 1 on a side, if we have a group of more than one 0 on a side (otherwise your run is incomplete), then we can imagine this run to always be of even parity, so the answer exists (if it's odd, simply make the rabbit near the group face the group). This covers all possibilities.
There's a pretty simple solution for $$$I_2$$$:
Make $$$46$$$ blocks of $$$301$$$ vertices each, the center of the $$$i$$$-th block being at [ $$$2^{\,{i+12}} - 2^{13}$$$ ] and $$$150$$$ vertices just to the left of the center and $$$150$$$ to the right.
That is, if the center is $$$c_i$$$, then there are points at [ $$$c_i - 150, \; c_i - 149, \; \dots, \; c_i + 150$$$ ]
Now take $$$2$$$ pairs of points such that the first pair of points are in the $$$a$$$-th and $$$b$$$-th block (with $$$a \lt b$$$), and the second pair of points are in the $$$c$$$-th and $$$d$$$-th block (with $$$c \lt d$$$).
It can easily be shown that if $$$b \gt d$$$ or if $$$b=d$$$ and $$$a \lt c$$$, then the distance of the first pair is more than that of the second pair.
So the main idea is: choose $$$2$$$ blocks.
Say the points of the first block are [ $$$p_{1} \lt p_{2} \lt \dots \lt p_{301}$$$ ] and the points of the second block are [ $$$q_{1} \lt q_{2} \lt \dots \lt q_{301}$$$ ]
Then jumping like this will ensure the differences are increasing: [ $$$p_{301},\; q_{1},\; p_{300},\; q_{2},\; \dots,\; p_{1},\; q_{301}$$$ ]
Basically, we are going back and going ahead $$$1$$$ extra step, then going back $$$1$$$ extra step, and so on.
We will do this procedure between the blocks in this order:
$$$(1,2),\; (2,3),\; (1,3),\; (3,4),\; (2,4),\; (1,4),\; \dots$$$
With some careful transitions between changing blocks, we can ensure the differences remain increasing.
We use $$$46 \times 301 = 13846$$$ points here, and an additional $$$990$$$ extra points to handle transitions, keeping the number of distinct points within the limit.
This solution works up to $$$n = 622935$$$ and can be extended to around $$$700{,}000$$$--$$$800{,}000$$$ by tweaking the numbers used.
Submission : https://mirror.codeforces.com/contest/2147/submission/339611687
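The interleaving step described above can be demonstrated on tiny made-up blocks (much smaller than the 301-point ones): each jump is exactly one longer than the previous.

```python
# Tiny demo of the zigzag between two far-apart blocks: visit the left block
# from its right end and the right block from its left end, interleaved, so
# every jump gets longer by exactly 1.

def zigzag(p, q):
    # p, q sorted ascending, same length; order: p[-1], q[0], p[-2], q[1], ...
    order = []
    for k in range(len(p)):
        order.append(p[len(p) - 1 - k])
        order.append(q[k])
    return order

p = list(range(0, 10))      # a small "block" at positions 0..9
q = list(range(100, 110))   # a far-away "block" at positions 100..109
path = zigzag(p, q)
gaps = [abs(b - a) for a, b in zip(path, path[1:])]
assert gaps == sorted(gaps) and len(set(gaps)) == len(gaps)  # strictly increasing
print(gaps[:5])  # → [91, 92, 93, 94, 95]
```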
And I think using I2's intended idea, instead of doing $$$p_{301}, q_1, p_{300}, q_2, \dots, p_1, q_{301}$$$, you can add two pivots in the middle, $$$u_1, u_2$$$, and do $$$p_{301}, u_2, q_1, u_1, p_{300}, u_2, q_2, \ldots$$$ to get double the number of jumps with $$$O(1)$$$ more points and reach 1M+. Crazy.
There is a solution for E that takes $$$O(nb + b^3)$$$ time instead of $$$O(nb^2)$$$. At each bit position $$$i$$$ we need to sort all numbers restricted to bits $$$0..i$$$ and choose the largest number out of the ones that we didn't pick for previous bits. Note that we picked at most $$$b-1$$$ numbers for previous bits, so for every $$$i$$$ we only need to store the top $$$b$$$ largest values. You can obtain those in $$$O(nb+b^2 \log b)$$$ time using std::nth_element. After this preprocessing stage we can forget about the initial array, and the whole algorithm takes just $$$O(b^3)$$$ time.
P.S. This $$$O(b^3)$$$ factor could be dangerous if we had $$$t \le 10^5$$$ instead of $$$t \le 10^3$$$, but one can observe that this stage of the algorithm actually takes $$$O(b^2 \cdot \min(b, n))$$$ time, so we're good.
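The "top $$$b$$$ per bit prefix" preprocessing might be sketched like this (a simplified reading; heapq.nlargest stands in for std::nth_element, so this sketch is $$$O(nb\log b)$$$ rather than the stated bound):

```python
# Sketch of the preprocessing step above: for each prefix of i low bits, keep
# only the b largest values of a_j mod 2^i.  At most b-1 of them were already
# used by higher bits, so one unused candidate always survives.

import heapq

def top_b_per_prefix(a, b):
    return [heapq.nlargest(b, (x % (1 << i) for x in a))
            for i in range(1, b + 1)]

tops = top_b_per_prefix([5, 3, 12, 7], b=3)
print(tops[0])  # values mod 2, three largest → [1, 1, 1]
```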
Congratulations! You got the first part of the bonus problem at the end of the editorial of E.
Now you only have left to improve the $$$\mathcal{O}(b^2\log(b)+b^2\min(b,n))$$$ part to $$$\mathcal{O}(b^2)$$$ per testcase.
(Considering $$$t$$$ testcases this improves $$$\mathcal{O}(tb^2\log(b) + tb^2\min(b,\frac{n}{t}))$$$ to $$$\mathcal{O}(tb^2)$$$)
whoops. i am sorry :) i did not see that there is a bonus.
I feel this is a pretty clean solution for E : 339628766
I like this implementation for C rather than a flag/counter-based implementation, since it runs on the intuitive idea of removing what's unnecessary, like 11→1 or 00→0. Had a lot of fun solving this: 339628766
I submitted a greedy solution for D where the player always picks the available number with the highest frequency and I assumed NO merging happens but it passed. Can someone prove why it works?
Example
2 2 1
Alice picks 2 to get +2 points → 1 1 1. Then Bob picks 1, but I assumed that no merging happens, so he also gets +2 → 0 0 1.
Alice picks 1 and gets +1. So the final answer is 3 2.
This method is wrong, but it passed somehow. I don't know why.
I did this too. For $$$a_i$$$ and $$$a_j$$$ having the same parity, even after merging, the final answer is the same as without merging (imagine making a bar graph and colouring alternate horizontal lines). This means that in this round, both get equal points (0 difference).
For different parities, the first person to play can do an operation on the smaller element; now both have the same parity, and it reduces to the above logic (0 difference). The only difference in this round comes from the first operation, so it's optimal to be the first one to do it. If we go from large to small, we take turns being the first one alternately, so it gives a valid solution.
I wrote a problem very similar to B before.
523552D - A + B = Cookie
My version is actually harder since the distance between the elements of $$$x$$$ should be exactly $$$x$$$. Still took me 40 minutes to solve B though..
That sounds pretty similar to the Langford sequence... although I guess solving your version might (?) be easier
Today's contest had some really high-quality problems. I absolutely loved C, but I couldn't solve it during the contest.
For B I did something greedy: there are more pair-placement choices for small numbers than for larger numbers, so if we start fixing the larger numbers first, we can fill the smaller numbers into the "holes".
D was a good problem. I noticed that for a player, if the highest-frequency element is odd then it's best to choose it, and if it's even, then it's better to take some element which occurs an odd number of times.
After seeing potato167's code I understood C (so beautiful). The main observation in C is that two consecutive equal characters split the problem into independent parts, so within one part there is nothing consecutive, and the only thing to deal with is a sequence 1010101010101. A 0 before and after doesn't matter, as it can look L and R respectively. Now this sequence can only be good if the number of zeros is even, so the condition is that the length of this sequence is 1 (mod 4).
Problem D code TLE https://mirror.codeforces.com/contest/2147/submission/339616363 If someone could share why I would really appreciate it
Replace unordered_map with map, like in https://mirror.codeforces.com/contest/2147/submission/339636773, and it passes.
you're my hero!
The solution for problem D demonstrates that merging numbers is never the best strategy.
See my submission 339613047; you will get the greedy even/odd idea.
You can also read my comment on D above, where I give an idea/hint about it.
The idea of problem F is to find all positions $$$p$$$ with $$$\max_{1\dots p}a_p=p$$$, so another solution is using a segment tree that maintains the max of $$$a_i$$$ and the SCC info of its right son, while the answer of a node can be calculated in $$$O(\log n)$$$ time. (I can't come up with the name of this data structure.) It's $$$O(n\log^2 n)$$$ (because the pushup function takes $$$O(\log n)$$$ time) but enough to pass.
C can be solved using 2-SAT as well. I feel like it is kind of an overkill, but it was the first thing that came to my mind so I just went with it :))
Can you explain how, and give a brief intro to 2-SAT? I do have a general idea from my university course on algorithms, but how is it used in CP problems, and particularly in C? I wasted a lot of time on my DP solution :(
If there is a rabbit with no neighbours, print "No" directly. Otherwise:
$$$\land_{\text{all rabbits}} ((\text{rabbit on left looks right}) \lor (\text{rabbit on right looks left}))$$$ (empty clauses are always false; that's why having no neighbour implies unsat)
If we set looking right to be true and looking left to be false, and $$$b_i$$$ as the valuation of 0 on the i-th index, then this becomes:
$$$\land_{\text{i:a[i]=0}} (b_{\text{left 0}} \lor \neg b_{\text{right 0}})$$$, which is the same as $$$\land_{\text{i:a[i]=0}} (b_{\text{right 0}} \implies b_{\text{left 0}})$$$; replace "left 0" and "right 0" with the indices of the closest rabbits at a distance of at most 2. For rabbits with only one neighbour, we can write $$$(\neg b_{\text{left}} \implies b_{\text{left}})$$$ or $$$(b_{\text{right}} \implies \neg b_{\text{right}})$$$ depending on which neighbour is present. Then build the implication graph described here, find the SCCs, and check whether the formula is satisfiable.
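For reference, a generic 2-SAT check along these lines (implication graph + SCC; a standalone sketch, not tailored to problem C) could look like:

```python
# Generic 2-SAT via implication graph and Kosaraju's SCC algorithm.
# Variable x is node 2*x, its negation is node 2*x + 1.

def two_sat(n, clauses):
    """clauses: list of (lit_a, lit_b); literal = 2*var for x, 2*var+1 for !x.
    Returns a satisfying assignment (list of bool) or None if unsatisfiable."""
    N = 2 * n
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:            # (a or b) == (!a -> b) and (!b -> a)
        adj[a ^ 1].append(b); radj[b].append(a ^ 1)
        adj[b ^ 1].append(a); radj[a].append(b ^ 1)

    order, seen = [], [False] * N
    for s in range(N):              # pass 1: iterative post-order
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    comp = [-1] * N
    c = 0
    for s in reversed(order):       # pass 2: components on the reversed graph,
        if comp[s] != -1:           # numbered in topological order
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    if any(comp[2 * x] == comp[2 * x + 1] for x in range(n)):
        return None                 # x and !x in one SCC: unsatisfiable
    # x is true iff comp[x] comes later in topological order than comp[!x]
    return [comp[2 * x] > comp[2 * x + 1] for x in range(n)]
```

Usage: `two_sat(2, [(0, 2), (1, 2)])` encodes $$$(x_0 \lor x_1) \land (\neg x_0 \lor x_1)$$$ and returns a satisfying assignment; forcing both $$$x$$$ and $$$\neg x$$$ returns None.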
orz
In the solution of problem G, should all the $$$ord_a(m)$$$ be $$$ord_m(a)$$$?
It was a swap(C, D) contest. Anyways, thanks for the round.
Although many people solved C with DP and basic if-else's there is a bit easier solution via DSU. First of all, we can see that two sections of rabbits are independent if the size of border in between is bigger or equal to 1. But, if it's equal to one, we can solve them together. Let's merge all of those. Now, we consider a group of rabbits, when are they solvable?
1. If there's any rabbit section of size >= 2, it is solvable (we can show this).
2. If there's any rabbit at the beginning or the end of the whole string, it is solvable.
3. If 1-2 are false, the total number of rabbits in this group must be even.
4. Else, the answer is false.
We can manage all those via DSU easily. (We just need a flag for checking, and sizes.)
My code (although a bit rushed in the contest): 339545851
proving E was diabolical
Seems that there's a typo in the solution of E, it should be $$$2^{30} \leq 2 \cdot 10^{9} \lt 2^{31}$$$. (~.~)
Thanks! Fixed.
The editorial of H explains nothing, could you write a more detailed one instead of forcing me to guess everything?
In problem E, it should be $$$2^{30} \leq 2*10^{9} \lt 2^{31}$$$.
Thanks! Fixed.
I love problems B, C, E and F, but I hate problem D. I rather quickly guessed the solution but then failed to prove it for quite a long time :( I was really surprised to see it get accepted. E was great because once you had the correct guess/observation, it was easy to prove its correctness. F was absolutely perfect, but there was no time to code it :( So overall, thank you for the great contest!
In E what if we had to minimize the popcount instead?
Can anybody help with why my code is giving the wrong answer? I was very convinced that it should work, but it's failing somewhere on test case 2 for Problem C (Rabbits), and I badly want to see where it's breaking.
What I am doing is basically seeing where the current rabbit is facing; if it faces left, then I will need some right counterpart, meaning a rabbit facing left.
If the current rabbit can face either left or right, or it's not decided yet, I will mark it undecided so that I can resolve it based on the next rabbit. If the next rabbit is undecided too, I'll pass the counter as undecided until it's resolved.
Solution Link
I think the official editorial is too complex for problem I. I found a much better solution that can solve I2 even if $$$M=7182$$$: 339845253. I didn't use any DP or randomness. Is there an even better solution?
Problem H.
The constructive proof for $$$d=2$$$ was surprising to me.
My question: Is it difficult to prove this problem using only general facts about the linear algebra of graphs?
For example, the problem can be restated as constructing a vector $$$x = (x_v)_v$$$ such that $$$Lx = (\deg(v))_v$$$, where $$$L$$$ is the Laplacian matrix. Let $$$B$$$ be the incidence matrix, then it can be written as $$$BB^{T}x=B \cdot \mathbf{1}_E$$$.
I tried to derive the conclusion from this kind of reformulation, together with well-known facts about the cycle space and the cut space, but I couldn't manage to find a proof.
If there is a neat proof along these lines, I'd love to see it.
Yes, the existence of an even degree partition can be proved using linear algebra. See, for example, the solution of USAMO 2008/6 here.
I'm struggling to understand the proof presented in the editorial for D. More specifically I don't get the 'by induction' step in the first two of the three cases presented. Can someone help clarify please?
I get the logic behind removing the most frequent odd element first; even if there are more frequent even numbers after that step, it doesn't matter, because the players are going to get equal points for those anyway, and so it's more beneficial for the second player to go for the next most frequent odd. I'm guessing that's what the inductive proof is trying to say, because it's now a smaller case, but it's got a lot more detail and I'm struggling with that.
Also, is there an alternative inductive (or other) proof that is a little simpler? Thanks.
How will they know (for problem G) which submissions are genuine and which are from cheaters? Past data or something else?
I find the editorial for F to be totally unintelligible (maybe it's because I don't know what a tournament graph is), so I'll explain my solution here (btw, orz to the author for creating such a beautiful problem in this dark era of anti-gpt problems).
Let's sort $$$p$$$ and $$$s$$$ in decreasing order of values to get permutations $$$a$$$ and $$$b$$$ respectively, and then relabel every value with its position in the original permutation. For example, $$$[3, 4, 2, 1, 5]$$$ upon sorting becomes $$$[5, 4, 3, 2, 1]$$$, and upon relabelling becomes $$$[5, 2, 1, 3, 4]$$$. This simplifies analysis because now we can swap element $$$i$$$ for $$$j$$$ if $$$i$$$ occurs before $$$j$$$ in $$$a$$$ or $$$b$$$.
Define $$$v_a(i)$$$ to be the position of $$$i$$$ in $$$a$$$, and $$$v_b(i)$$$ to be the position of $$$i$$$ in $$$b$$$.
Let's consider an item $$$i$$$, and analyse the set $$$S(i)$$$ of all the items that $$$i$$$ is at least as valuable as.
Observation 1: The set of elements $$$S(i)$$$ lies on the contiguous range $$$a[l(i), n]$$$ where $$$l(i) \leq v_a(i)$$$.
The answer is then simply $$$\sum_{i = 1}^{n} n - l(i) + 1 = n \cdot (n + 1) - \sum_{i = 1} ^ {n} l(i)$$$, and all that remains is to figure out how to maintain $$$\sum_{i = 1} ^ {n} l(i)$$$ quickly.
Firstly, what even is $$$l(i)$$$?
Observation 2: We define an index $$$j$$$ to be "good" if the set of elements in the subarray $$$a[j, n]$$$ is equal to the set of elements in the subarray $$$b[j, n]$$$. Then, $$$l(i)$$$ is the greatest $$$j \leq i$$$ such that $$$j$$$ is a good index.
Exercise for the reader (outline: consider the path from $$$i$$$ to $$$l(i)$$$).
Now, how do we maintain $$$\sum_{i = 1} ^ {n} l(i)$$$ efficiently? Notice that to preserve the definition of $$$a$$$ after swapping $$$p[i]$$$ and $$$p[j]$$$, we only need to swap the values $$$i$$$ and $$$j$$$ in $$$a$$$ (and the same is symmetrically true for $$$b$$$ and $$$s$$$).
The task now reduces to this:
We'll maintain $$$G$$$ using a segment tree, and store $$$G_i \cdot (G_{i + 1} - G_{i})$$$ at location $$$G_i$$$ in the segment tree. How can we do this? Let's start off with an array of 0s, and for every $$$i$$$, add $$$1$$$ on the segment $$$(\min(v_a(i), v_b(i)), \max(v_a(i), v_b(i))]$$$. It's not difficult to see that the value at position $$$i$$$ in this array will now be $$$0$$$ iff $$$i \in G$$$. So we simply maintain this array using a segment tree, and at every position with a value of $$$0$$$ (a good index), we store $$$G_i \cdot (G_{i + 1} - G_{i})$$$. A swap will simply require removing the $$$2$$$ affected ranges by pushing a range update of $$$-1$$$, and then including the new ones by pushing a range update of $$$1$$$. The answer at any given point of time is simply the sum over all good indices (the entire range). Here's my code.
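The "value is 0 iff the index is good" bookkeeping can be done with the classic min-plus-count lazy segment tree. Here is a generic sketch (not the commenter's exact tree): range add, plus a query for how many positions hold value 0, relying on all values staying non-negative.

```python
# Lazy segment tree tracking (minimum, count of positions attaining it).
# With all values >= 0, the number of zeros is the min-count when min == 0.

class MinCountSegTree:
    def __init__(self, n):
        self.n = n
        self.mn = [0] * (4 * n)
        self.cnt = [0] * (4 * n)
        self.lazy = [0] * (4 * n)
        self._build(1, 0, n)

    def _build(self, v, lo, hi):
        if hi - lo == 1:
            self.cnt[v] = 1
            return
        mid = (lo + hi) // 2
        self._build(2 * v, lo, mid)
        self._build(2 * v + 1, mid, hi)
        self.cnt[v] = self.cnt[2 * v] + self.cnt[2 * v + 1]

    def add(self, l, r, delta, v=1, lo=0, hi=None):
        """Add delta to every position in [l, r)."""
        if hi is None:
            hi = self.n
        if r <= lo or hi <= l:
            return
        if l <= lo and hi <= r:
            self.mn[v] += delta      # mn[v] includes this node's pending lazy
            self.lazy[v] += delta
            return
        mid = (lo + hi) // 2
        self.add(l, r, delta, 2 * v, lo, mid)
        self.add(l, r, delta, 2 * v + 1, mid, hi)
        left = self.mn[2 * v] + self.lazy[v]
        right = self.mn[2 * v + 1] + self.lazy[v]
        self.mn[v] = min(left, right)
        self.cnt[v] = (self.cnt[2 * v] if left == self.mn[v] else 0) + \
                      (self.cnt[2 * v + 1] if right == self.mn[v] else 0)

    def zeros(self):
        return self.cnt[1] if self.mn[1] == 0 else 0

t = MinCountSegTree(5)
t.add(1, 4, 1)     # cover positions 1..3
print(t.zeros())   # → 2 (positions 0 and 4 remain zero)
t.add(1, 4, -1)
print(t.zeros())   # → 5
```

In the solution above one would additionally hang the weight $$$G_i \cdot (G_{i+1} - G_i)$$$ at zero positions and sum it; this sketch only shows the counting skeleton.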
I also implemented a square-root decomposition solution with XOR hashing (to trivially compare suffix sets) just to see if a mindless cheese could be squeezed within the TL, and it's unfortunately a bit slow (runs in 2.7 seconds when tested with a higher TL on a custom mashup).
nvm, square-root decomposition also easily passes (in <1s) upon switching to a better hashmap (code)
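The XOR-hashing trick for comparing suffix sets, sketched (an illustration, not the actual submission): give each value a random 64-bit key; two sets are equal with high probability iff the XORs of their members' keys are equal.

```python
# XOR hashing for suffix-set comparison: since a and b here are permutations
# (no duplicates), set(a[j:]) == set(b[j:]) iff the XORs of the keys agree,
# with high probability over the random keys.

import random

def good_indices(a, b, seed=12345):
    rng = random.Random(seed)
    key = {v: rng.getrandbits(64) for v in set(a) | set(b)}
    good, ha, hb = [], 0, 0
    for j in range(len(a) - 1, -1, -1):   # build suffix hashes right to left
        ha ^= key[a[j]]
        hb ^= key[b[j]]
        if ha == hb:                      # suffix sets match w.h.p.
            good.append(j)
    return sorted(good)

print(good_indices([5, 2, 1, 3, 4], [5, 1, 2, 4, 3]))  # → [0, 1, 3]
```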
Interesting problem D
please help me understand the proof of problem E
I can give a proof, but first let me explain the previous approach, in case some parts differ from what you did:
The approach to the problem is to enumerate a value of $$$j$$$ such that the final bitwise OR result has its first $$$j$$$ bits all equal to $$$1$$$, and the remaining bits are the same as in the original OR result.
After enumerating $$$j$$$, we adjust the set $$$\{a_i \bmod 2^j\}$$$ so that their bitwise OR result becomes $$$2^j - 1$$$.
Here, we enumerate $$$k$$$ from back to front, i.e., $$$k = j - 1, j - 2, \dots, 0$$$.
If the final result already has a $$$1$$$ in the $$$k$$$-th bit, we do nothing;
if it has a $$$0$$$, then we must choose one element from $$$\{a_i \bmod 2^j\}$$$ and increase it so that its $$$k$$$-th bit becomes $$$1$$$.
We simply apply a greedy strategy here: we choose the operation with the minimum cost and remove the number that was operated on. If no number can be operated on, we add a contribution of $$$2^k$$$.
The correctness of enumerating $$$k$$$ from back to front is proved as follows:
Suppose in some solution, when handling a bit position $$$t$$$ whose result is $$$0$$$, we chose a non-optimal element $$$a_k$$$, while the optimal choice should have been $$$a_j$$$. Then we have $$$a_j \bmod 2^t \ge a_k \bmod 2^t$$$.
If we operate on $$$a_k$$$, during the increment process we must pass through the initial value of $$$a_j \bmod 2^t$$$.
This is equivalent to first turning $$$a_k$$$ into $$$a_j$$$, and then applying the remaining operations on $$$a_j$$$.
Clearly, directly operating on $$$a_j$$$ is never worse: we avoid unnecessarily increasing $$$a_k$$$, and the total number of operations is smaller. Moreover, if we use $$$a_k$$$, we may increase it more than needed, and that extra increase is not good.
So exchanging $$$a_j$$$ and $$$a_k$$$ yields a new solution that is no worse than the old one.
Isn't D way too easy for 1700???
absolute genius
Can someone explain why the transition in G where we group terms by their rad is correct? The process was that you pick a divisor of $$$p_i - 1$$$ for all primes of $$$m$$$ and then multiply their $$$\phi$$$, then add this value to the lcm of the divisors. But then it somehow changes to picking divisors of $$$\phi(m)$$$ which are coprime with $$$m$$$, and I just don't understand why that works.
The editorial solution itself is failing on testcase 1!
nvm I copied the code wrong
We want visualiser again
In the problem G's solution, the notation $$$[t]$$$ appears, but I searched through the entire explanation and couldn’t find any definition of it. Is this a commonly known symbol that I just don’t know about?
edit: I know what it is now, sorry. G is a very good problem and hit my weakness in math theory; thanks to the problem preparer.
I think div2C editorial has some inaccuracies: "We should also take care of boundary conditions. If the string starts or ends with a zero, we can ignore that position by placing the rabbit looking towards the border; these cases were present in the sample tests." Counter-example — 01011 , here first 0 should look to the right (the only possibility).
"The last observation is that if we have two consecutive 0 's, we can split the problem using them in a way that every subproblem looks like this alternating pattern (101010…0101 ) with an odd number of 0 's. Thus, the only thing we need to look for is a substring with this alternating pattern that has an odd number of zeros." Can't say I fully understand what's meant by this statement, but I imagine idea is (according to code) that if we find 00 pattern, then we can discard 0 counts from before as it's always possible to make last fragment counting from 1|1 / _|1 / _|0 valid and continue validating only from the next occurence of 1|1.
The code given in question 3 gives YES for case 100101 but in reality the answer should be NO. Can anyone tell me the logic i can apply, as we cannot divide the string if there are 2 00 together.
1RR1L1
ohh thanks
C was pretty good.
In the D explanation, I think there are slight inaccuracies. It states "if k>0 choose f0, otherwise choose any x". From the equation before, $$$f_0 \ge f_1 \ge \dots \ge f_k$$$, so k>0 means there are at least 2 odds, which is not true in the case of the "2 1 1" test (1 necessarily needs to be taken first for the maximum).
In the block which mentions 3 cases: for 1), it's not specified which odd $$$i$$$ player 2 takes; the difference equals $$$S$$$ at the end only if the odd $$$i$$$ has the maximum currently available frequency among the odd indices.
For 2), I think the formula should be $$$S-2f_i+2f_{i-1}$$$.
In 3) (even $$$x$$$ for P2), the editorial only discusses the assumption that one $$$f_i$$$ is removed; what happens with two or more (I believe it can be proved that it's still $$$\ge S$$$)? Also, the proof of why the strategy of merging numbers is not optimal is not covered, unless that sort of overlaps with the "Assume you remove one..." part?
For problem C: as per the editorial, for the last part, if curr = false then the answer is always Yes.
String: ..... 1 1 0 1 0 1 0 1 0 0 1 0 1 0 1
Last part: 1 0 1 0 1 0 1 0 0 1 0 1 0 1 (we have 0 0, so curr = false)
For this part, isn't the answer No? Can somebody explain this to me?
1 0 1 0 1 0 1 0 0 1 0 1 0 1 answer is Yes:
1 R 1 L 1 R 1 L L 1 R 1 L 1; the idea is that you can choose to make the second zero L or R as you wish, to fix the parity.