Thanks for participating!

1754A - Technical Support was authored by KAN and prepared by DishonoredRighteous

1754B - Kevin and Permutation was authored and prepared by KLPP

1753A1 - Make Nonzero Sum (easy version) and 1753A2 - Make Nonzero Sum (hard version) were authored and prepared by Artyom123

1753B - Factorial Divisibility was authored and prepared by sevlll777

1753C - Wish I Knew How to Sort was authored and prepared by TheOneYouWant

1753D - The Beach was authored by Tikhon228 and prepared by Ormlis

1753E - N Machines was authored and prepared by Tikhon228

1753F - Minecraft Series was authored and prepared by teraqqq

I don't know why, in 1D, the answer with $$$\le 1$$$ operation for each sunbed is optimal. Can somebody show me why?

My test cases are passing for 1753B - Factorial Divisibility

https://mirror.codeforces.com/contest/1753/submission/177676671

but for

1 2

1000

It returns

`No` instead of `Yes`. I applied an inverse modular arithmetic approach.

EDIT: I checked the constraint $$$a_i < x$$$, because of which, after dividing the number, the quotient will not cross the primes; the above test case doesn't fulfil that constraint.

May I ask, how in the world does the solution of Div1 E run in ~150ms? I mean, the worst case is ~5k * 60 * 20 * 20 = 120kk, and there is a lot of jumping through memory in the binary search. When I first thought about the solution, I expected it would get TLE, or at least be very close to that. The conclusion would be that you can do $$$10^9$$$ jumps through an array per second, which I think is not realistic.

Can anyone explain Div1 C in more detail?

You want to get an array where the zeros are at the beginning and ones are at the end. Let's say there were $$$k$$$ zeros and $$$(n-k)$$$ ones in the initial array.

Look at the array prefix of size $$$k$$$. Note that if it had $$$j$$$ ones at any point, that number cannot increase after an operation, since we only swap two numbers if the former is greater than the latter.

Therefore, we can define a random variable $$$R_j$$$ to be the number of operations it takes for the number of ones in the prefix of size $$$k$$$ to decrease from $$$j$$$ to $$$(j-1)$$$. If we find a way to compute that, then the expected number $$$R$$$ of operations for the number of ones in the prefix of size $$$k$$$ to decrease from $$$j$$$ to 0 is the sum of the expectations of $$$R_j$$$:

$$$E\left[R\right] = \sum_{j} E\left[R_j\right]$$$

Now we know that in order for the number $$$j$$$ of ones in the prefix of size $$$k$$$ to decrease, the two selected indices have to be those of a one from the prefix and of a zero from the rest of the array (suffix of size $$$(n-k)$$$). Since having $$$j$$$ ones in the prefix implies exactly $$$j$$$ zeros in the suffix, the probability of that happening is:

$$$p_j = \frac{j \cdot j}{\binom{n}{2}} = \frac{j^2}{\binom{n}{2}}$$$

Therefore we can define $$$E\left[R_j\right]$$$ as follows:

$$$E\left[R_j\right]=p_j\cdot 1+(1-p_j)\cdot\left(1+E\left[R_j\right]\right)$$$

Eventually you get:

$$$E\left[R_j\right]=\frac{1}{p_j}=\frac{\binom{n}{2}}{j^2}$$$

Add them up and you get the answer.
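Numerically, this checks out; here is a small sketch (assuming the process as stated: each operation picks a uniform random pair $$$i<j$$$ and swaps only when $$$a_i > a_j$$$, and every pick counts as one operation) that computes $$$\sum_j \binom{n}{2}/j^2$$$ exactly and cross-checks it against a Monte Carlo simulation:

```python
import random
from fractions import Fraction

def expected_ops(arr):
    """Exact answer: sum of C(n,2) / j^2 for j = 1 .. (ones initially
    inside the prefix of size k, where k is the number of zeros)."""
    n, k = len(arr), arr.count(0)
    m = sum(arr[:k])                       # ones inside the prefix
    pairs = n * (n - 1) // 2
    return sum(Fraction(pairs, j * j) for j in range(1, m + 1))

def simulate(arr, trials=20000, seed=1):
    """Monte Carlo estimate: pick a random pair i<j, swap iff a_i > a_j,
    count every pick as one operation, repeat until sorted."""
    rng = random.Random(seed)
    n, total = len(arr), 0
    for _ in range(trials):
        a = list(arr)
        while any(a[i] > a[i + 1] for i in range(n - 1)):
            i, j = sorted(rng.sample(range(n), 2))
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]
            total += 1
    return total / trials

print(expected_ops([1, 1, 1, 0, 0, 1]))    # 75/4, i.e. 18.75
```

The simulation hovers around 18.75 for `[1, 1, 1, 0, 0, 1]`, matching the exact value.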

Thanks for a brilliant explanation! Can anyone explain this part: $$$E[R_j]=p_j\cdot 1+(1-p_j)\cdot(1+E[R_j])$$$?

Now we have a probability of $$$p_j$$$ to turn $$$j$$$ into $$$j-1$$$; otherwise (i.e., with probability $$$1-p_j$$$), $$$j$$$ will not change.

So, there's a probability of $$$p_j$$$ that $$$R_j=1$$$, and otherwise $$$R_j=E[R_j]+1$$$ since the state does not change and we have wasted an operation.

So we get the equation.

Forgive my poor English.

$$$E[R_j]=p_j⋅1+(1−p_j)⋅(1+E[R_j])$$$

In the above equation, the only part that I don't understand is $$$(1+E[R_j])$$$. Why do we add the same expectation, since it seems to lead to infinity?

I think, it is just E[1 + Rj] (due to the one wasted swap) = 1 + E[Rj] (since E[a + b] = E[a] + E[b])

Actually, you do add the same expectation infinitely. If you think about it, this makes sense, because you have the chance to keep wasting moves forever. In the author's case, I think they just rearranged their equation to account for this. I suck at math, so I can't explain why this works; however, I think I have a logical way to explain the result. Suppose you have $$$n$$$ possibilities and $$$m$$$ of them are good. What is the expected number of tries before you pick a good element? The answer is $$$\frac{n}{m}$$$.

Why? Here we have a $$$\frac{m}{n}$$$ chance to pick a good element. Thus, if we picked $$$n$$$ elements, we are expected to pick $$$m$$$ good ones. Which on average means we will pick a good one every $$$\frac{n}{m}$$$ times. This also corresponds to the final equation, which is $$$\frac{1}{p_j}$$$.
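That $$$\frac{n}{m}$$$ value is just the mean of a geometric distribution with success probability $$$\frac{m}{n}$$$; a quick numerical sanity check (the values $$$n = 15$$$, $$$m = 4$$$ below are arbitrary, chosen for illustration):

```python
def expected_tries(n, m, cutoff=10000):
    """Truncated sum of t * P(first success on try t) for a geometric
    distribution with success probability p = m/n."""
    p = m / n
    return sum(t * p * (1 - p) ** (t - 1) for t in range(1, cutoff))

# e.g. n = 15 pairs, m = 4 good ones: the mean is 15/4 = 3.75
print(round(expected_tries(15, 4), 6))     # 3.75
```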

I have not read nhtrnm's solution. But the transitions described in editorial are fairly straightforward if you think in the following way :

If there are $$$g$$$ zeros (the total number of zeros in the array) in the first $$$g$$$ positions, then the array is sorted.

Let's assume there are $$$k$$$ zeros in the first $$$g$$$ positions. Now we make a move (swap). By making this move, we either end up with $$$k+1$$$ zeros (by exchanging a $$$1$$$ from the first $$$g$$$ positions with a $$$0$$$ in the positions after that), or we end up with $$$k$$$ zeros (by any other type of exchange). The probability that we end up with $$$k+1$$$ zeros is given by $$$p$$$. You can easily calculate $$$p$$$.

If the expected number of moves to sort the array when there are $$$k$$$ zeros in the first $$$g$$$ positions is $$$dp[k]$$$, we have $$$dp[k] = p \cdot dp[k+1] + (1-p) \cdot dp[k] + 1$$$.

First term: we reach the state with $$$k+1$$$ zeros with probability $$$p$$$, contributing $$$p \cdot dp[k+1]$$$ expected further moves.

Second term: we stay in the state with $$$k$$$ zeros with probability $$$1-p$$$, contributing $$$(1-p) \cdot dp[k]$$$ expected further moves.

Third term: the move we make to reach the above states has been added.
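Rearranging gives $$$dp[k] = dp[k+1] + \frac{1}{p}$$$, which can be evaluated backwards from $$$dp[g] = 0$$$. A minimal sketch, assuming $$$p = \frac{(g-k)^2}{\binom{n}{2}}$$$ (a good swap needs one of the $$$g-k$$$ misplaced ones in the prefix together with one of the $$$g-k$$$ misplaced zeros after it):

```python
from fractions import Fraction

def expected_moves(n, g, k0):
    """dp[k] = expected moves when k of the first g positions hold zeros.
    dp[g] = 0 and dp[k] = dp[k+1] + 1/p with p = (g-k)^2 / C(n,2)."""
    pairs = Fraction(n * (n - 1), 2)
    dp = Fraction(0)
    for k in range(g - 1, k0 - 1, -1):     # walk down from g-1 to k0
        p = Fraction((g - k) ** 2) / pairs
        dp += 1 / p
    return dp

# n = 6, g = 2 zeros, initially 0 zeros among the first 2 positions:
print(expected_moves(6, 2, 0))             # 75/4
```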

If we are moving from $$$k$$$ to $$$k+1$$$ with probability $$$p$$$, then why are we multiplying $$$dp[k+1]$$$ by $$$p$$$? Isn't the term $$$dp[k] \cdot p$$$ added to $$$dp[k+1]$$$?

Why don't you understand the editorial?

Perfect! Thanks!!

A nice way to think about Div. 1 D is that the sunbeds are a matching of the bipartite graph and we want to increase the size of the matching by one. To do this, we find an augmenting path in the corresponding flow network. In such an augmenting path, each edge that corresponds to an edge currently in the matching needs to be matched with the edge before it or after it in the path, indicating that the sunbed is moved to the position corresponding to that edge. This should have cost p or q depending on whether the sunbed moves or rotates. We can encode this by modifying the graph in a suitable way, so that a shortest path in the graph corresponds to an optimal augmenting path in the flow network.

Can someone please explain the mathematics behind Div2E/Div1C?

If the probability for

`1 1 1 0 0 1`

getting sorted in 3 operations would be `(1/15)·(1/15)·(14/15)`,

then the answer should be `(Z - 1)/X^Z`,

where `X = nC2`

and `Z` is the number of zeroes which are not in the positions where they would be if the array were sorted.

Why, in problem C, does the answer depend only on the number of zeros in the first $$$g$$$ positions?

Symmetry. Moves that don't change the number of zeroes there also don't change the transition probabilities between states with different $$$k$$$: the transition probability from $$$dp[k]$$$ into $$$dp[k-1]$$$ only depends on the number of 0's and 1's in each subarray and is invariant to the particular permutation of elements within the subarrays. So we can just partition states by the number of zeroes and only have to calculate a single value $$$dp[k]$$$ for each partition.

The first $$$g$$$ positions are important because they're a kind of progress marker towards the fully sorted end state.

Important for the symmetry is that we are considering all $$$n(n-1)/2$$$ pairs at each step. If instead, for example, we chose uniformly only among pairs with inversions, that would break the symmetry, as the denominators and probabilities would change at each step.

Consider the array in two parts: positions $$$1 \sim n_0$$$ and $$$n_0+1 \sim n$$$, where $$$n_0$$$ is the number of 0s. We want to swap all 1s into the right part.

Call an operation a "good swap" if a pair of 0 and 1 in different parts is swapped (the 1 goes right, the 0 goes left); other operations can be ignored, since they make no progress in our accounting.

The array is sorted after, and only after, $$$m$$$ good swaps are performed, where $$$m$$$ is the initial number of 1s in the first $$$n_0$$$ (your $$$g$$$) positions.

The expected number of operations to perform the next good swap, $$$\frac{C_n^2}{m'^2}$$$, only depends on the remaining number $$$m'$$$ of good swaps. Summing over $$$m'=1,2,\dots,m$$$, we get the desired rational number.

For Div.1 D, the editorial said

I'm curious about how to prove it.

If this situation exists, there will be at least two free cells around the sunbed. After some casework, you will find that we can always spend less to achieve the target.

In most combinations of ops it's easy to prove that they are either replaceable by a less costly one or require two contiguous free cells, but I can't prove why the $$$\pi\operatorname{-rad}$$$ rotation (i.e., fixing the same end during two rotations in the same direction) is replaceable by a better one.

It can be proved that you don't need to fix the same end during two rotations in the same direction. If you want a cell to be empty, you only have to rotate at most once: if the cell itself is empty, you don't have to rotate at all, and otherwise you only have to rotate the sunbed on that cell once. It can be proved that the cell newly occupied after the rotation does not contribute to the neighbors of the current cell, so you don't have to rotate that sunbed again.

I'm sorry for my poor English.

This situation will never happen because we've run Dijkstra's algorithm twice. The first time, we only visited cells $$$(x,y)$$$ satisfying $$$(x+y)\bmod 2=0$$$. The second time, we only visited cells $$$(x,y)$$$ satisfying $$$(x+y)\bmod 2=1$$$. Obviously, no cell can satisfy both restrictions simultaneously.

Assume the map of the beach is painted in a chess coloring, as the official editorial says, but in red and blue.

What you have already ruled out is operating twice on two differently colored cells.

Another assumption is freeing one red cell first and then a neighboring blue one, where the destination of a move is already free (or else freed by solving the subproblem recursively; UPD: it can be shown that we can't operate on a sunbed moved during the first recursion either, since the $$$\pi\operatorname{-rad}$$$ rotation you mention is impossible for it, as it would change the arrangement of the red cells twice without affecting the blue ones, and the other operations would move it twice, which you have already ruled out).

Firstly, it's obvious that we can't free the other side of the same sunbed (the only moved one), for the destination is already another two contiguous free cells, as you say.

When freeing a blue cell, a free red one (other than the previously freed one) can be useful only for placing another red end; but we want to change the arrangement of the blue ones, so it could only be used for another operation on the blue ones. That's the contradiction: the destination of the last move is two free cells again (we don't need the first rotation at all). So a second free red cell is useless except for freeing another red one.

You rotate twice to free the first destination, but we have a better way.

The further proof of the original problem: thus, it can be shown that the corresponding red occupied cell of a neighboring blue occupied one (but not the free ones) is the only needed info about the blue cells when we consider freeing a red one, and vice versa.

So all the needed information about the red ones has already been fixed, and the moved sunbed doesn't affect the following process. So we don't enumerate which red cell we free before applying the second run of Dijkstra's algorithm; we only apply Dijkstra's algorithm twice.

Sorry for my broken English and logic.

A better logic of this proof:

(Here, operations on sunbeds have the same meaning as the problem says, and operations on cells mean moving the free cell, the same as the editorial says.)

Theorem: any sequence of 2 or more operations on the same sunbed is replaceable. There's a simpler way to prove it, so the following proof needn't be read unless you wish to make the problem more complicated.

Another proof:

Pr: it's easy to prove they are replaceable, except for the reversion (called the $$$\pi\operatorname{-rad}$$$ rotation above), which needs tons of casework without the lemmas below.

Lemma 1: 2 or more operations on differently colored ends of the same sunbed are replaceable (i.e., there exists a sequence of operations that is not worse than the current one).

Proof: they are either replaceable by a less costly one or require two contiguous free cells (as Dotswana said). We divide the original problem into freeing a red cell and freeing a blue one below (we can prove that they can be solved separately, according to the propositions below).

Corollary 1: we can't go through the free red cell when we're freeing a blue one, and vice versa.

Pr: we couldn't move the blue end (assume we're freeing the blue one in this proof) of the correspondingly moved red end again. So we could only operate on the red cells, not the blue cells, and those operations don't affect blue movements and can be deleted.

Corollary 2: the useful information about red cells when freeing a blue one consists only of the corresponding red end of each sunbed (to determine which type of operation we're doing), and vice versa.

Lemma 2: movements freeing two red cells simultaneously are replaceable by operating on only one of them.

Pr: easier to prove with the Additional Proposition below, but here's another way: when we're freeing a blue one, we can't move any of them, according to Corollary 1. So it's replaceable by keeping only the one in the final answer free (which is a better answer).

The final proof: if we operate on a red end twice, it's replaceable, for it either moves the cell back to its original place or is replaceable by operating only once.

Additional Proposition: all the needed and irreplaceable information about the red ones has already been fixed, and the moved sunbed doesn't affect the following blue process (assume we free the red one first).

Pr: the sunbed we moved in the red turn couldn't be freed again without violating the Theorem, and the others remain unchanged. Therefore, we can consider the red and blue parts independently.

Clearer and easier to understand than mine. Thanks!

[UPD: Please ignore, I was wrong]

Alternative solution to A2 (that doesn't require checking parity): like A1, consider pairs of values, except now we look at pairs of non-zero values. Specifically, we have some non-zero value $$$a$$$, followed by $$$k$$$ 0s (possibly no 0s), and then a non-zero value $$$b$$$.

My submission: 177658418 (I used a 0-indexed vector, which might be confusing, sorry)

Thanks. Also, how did you get to Expert in 3 months? orz. Can you DM me tips?

I didn't actually climb up in 3 months. I started competitive programming about ten years ago during my undergrad, joining ICPC contests. I wasn't very active in Codeforces back then. I did take a big break after I graduated, but I eventually came back and decided to code seriously on Codeforces this time, leading to the sudden growth.

Nevertheless, there are some tips that I hope would help:

- Join every contest, but don't compare yourself with others. Joining actual contests is a great way to train yourself to solve problems while maintaining the competitive, time-restricted tension. However, don't be discouraged by your actual ranking in comparison with others; just focus on your own learning experience. Even if you spend the entire 2+ hours on a Div1 contest and still fail to solve Problem A, it can still be a valuable learning experience, especially if you are able to solve A later on after reading discussions from others and/or the editorial.
- Practice intelligently outside contests. Avoid wasting time on problems that are too easy or too hard for you. Instead, try to find problems that suit your current level. A simple way to find such problems is to use the difficulty filter on Codeforces, e.g., set it to 800-800 and solve a few problems; when you feel they're too easy, change to 900-900, solve, etc. Stick to a difficulty level at which you are able to solve problems eventually, but are not too happy with how much time you needed to figure them out and/or the number of failed attempts before the AC. Practice problems at this level to improve.
- Learn new topics when they start becoming common. If you're not able to solve a problem after agonizing over it for quite a while, you can read discussions on it and/or part of the editorial, or eventually even the entire editorial, with the intention of learning from it. Sometimes the solution requires some specific topic that you're not familiar with. Just note such instances down and move on. If you find that some topics arise often enough that you shouldn't ignore them, look up an external resource to read up on them. I recommend either the Competitive Programmer's Handbook or the cp-algorithms site.
- Keep tags hidden unless you want to practice a specific area. Codeforces has an option to keep problem tags hidden for unsolved problems, which I recommend, since actual contests do not reveal such tags. However, when you learn a new topic or feel weak in a particular area, it can be helpful to filter problems by tags relating to that topic to practice more on them. This is good, but you should not depend too much on this kind of tag-aware practice, and you should soon switch back to general tag-hidden practice.

Hey Andalus, can you help me with my code? It's failing with WA on test 2. Here is my submission ID.

First, let me teach you how to identify the problematic test-case. Please examine the following submission: 189156699. The code is copied from yours, but I changed the main function and added burn and reveal functions to show the problematic test case. Please use this technique in the future to reveal test-cases that your solution fails at.

As for the actual issue with your code, it does not handle the case of $$$n = 1$$$ correctly.

Sorry, but I don't understand how the burn and reveal functions work in finding the wrong test case (can you please explain more, so that I can adapt those functions to my own problems in the future?). Also, although I handle the case of $$$n=1$$$, it still gives the wrong answer.

In the submission you linked (189085095), the test details indicated that your code failed in the second test suite, at test case 9576. So what I did was modify your code so that when $$$t == 9840$$$ (second test suite), the first 9575 cases will simply have their input read with no output being printed. Then, for the 9576th case, the program should read the input and then print the same input. When submitting this code, the program output for the second test suite will display the 9576th test-case, so you can identify exactly what the input is that caused your program to fail.
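For reference, the idea can be sketched like this (a hypothetical Python analogue; the real `burn`/`reveal` helpers are in submission 189156699, and each test case is assumed here to be a single input line):

```python
def reveal_failing_case(lines, target):
    """Given the raw input lines (the number of cases t on the first line,
    then one case per line -- an assumption for this sketch), skip all
    cases before `target` and return case number `target`, so that it
    shows up in the judge's displayed program output."""
    t = int(lines[0])
    assert 1 <= target <= t
    # cases occupy lines[1] .. lines[t]; everything before `target`
    # is read ("burned") without printing anything
    return lines[target]

print(reveal_failing_case(["3", "5 1", "7 2", "9 9"], 2))   # 7 2
```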

Anyway, regarding your latest submission (189217586), you still did not correctly handle the case where the array is simply $$$[0]$$$. Here, the output is supposed to be:

indicating that there is only 1 segment, and its range is from index 1 to index 1. However, your output is:

which does not follow the correct format of specifying the number of segments first.

In D1C, the answer for the third test case is $$$\frac{75}{4}$$$ = 18.75. I tried finding the expected number of moves by $$$10^6$$$ random experiments. I am getting values in the range [29.5, 30.05], which is too far from the editorial's answer. Where is my code wrong? I have tried seeding with the start time and also with the experiment number; in both cases, I get the same result. Here is the code: https://pastebin.com/yBMgahy4 . It takes around 4 seconds to run.

In the original problem, it only swaps when $$$a_i>a_j(i<j)$$$.

Thank you.

I am not so good at math; help! In problem 1753C, in the recurrence equation $$$dp[k]=1+dp[k]\cdot(1-p)+dp[k+1]\cdot p$$$, I kind of understand $$$dp[k]\cdot(1-p)$$$ and $$$dp[k+1]\cdot p$$$, but I don't know what the value $$$1$$$ means. Please explain :>.

$$$dp[k]$$$ is defined as the expected number of moves to reach $$$n$$$ from $$$k$$$, recursively: after making 1 move, what is the expected number of moves to get to $$$n$$$? That one move is the $$$1$$$ in the formula.

Ohhh o:! Thanks a lot :).

Div2 C1 and C2 have DP as one of the tags; can someone suggest how to use DP in this problem?

I solved it using DP, but I think it can be optimised in terms of lines of code.

Submission: https://mirror.codeforces.com/contest/1754/submission/223733429

If you failed 1D/2F on test 13 with your answer

`3000000016`

you may try this.

Thank you, kind stranger! I just posted a question asking why the graph needs to be directed and then found this comment.

Thank you, it is a wonderful test case!

Glad that it helped you :)

A different way to think about Div2 E.

The initial array with $$$1$$$'s and $$$0$$$'s has four scenarios:

- $$$S_{ones}$$$: count of $$$1$$$'s already at sorted positions (don't need any swaps).
- $$$S_{zeros}$$$: count of $$$0$$$'s already at sorted positions (don't need any swaps).
- $$$NS_{ones}$$$: count of $$$1$$$'s not at sorted positions (they need a swap with some $$$0$$$).
- $$$NS_{zeros}$$$: count of $$$0$$$'s not at sorted positions (they need a swap with some $$$1$$$).

Try to figure out yourself: swapping an $$$NS_{ones}$$$ with an $$$NS_{zeros}$$$ is the only valid swap that helps in sorting the array, and $$$NS_{ones}$$$ is always equal to $$$NS_{zeros}$$$. So when we randomly select $$$2$$$ indices $$$i<j$$$, the elements $$$arr[i]$$$ and $$$arr[j]$$$ will belong to these $$$4$$$ types.

Possible cases-

Only the $$$10$$$th type of swap is beneficial in helping to sort the array.

Let the probability of each type of case $$$1,2,3,\dots,10$$$ be represented by $$$A,B,C,D,\dots,J$$$.

Now let's come to the formula, because we know the only thing that matters is the number of $$$1$$$'s (or $$$0$$$'s) we haven't fixed yet.

Let $$$X = NS_{ones}$$$

Let $$$E(X)$$$ = Expected number of moves when we have $$$X$$$ $$$1's$$$ not at sorted positions.

$$$E(X) = A(1+E(X)) + B(1+E(X)) + C(1+E(X)) + D(1+E(X)) + E(1+E(X)) + F(1+E(X)) + G(1+E(X)) + H(1+E(X)) + I(1+E(X)) + J(1+E(X-1))$$$

$$$E(X)(1-A-B-C-D-E-F-G-H-I) = (A+B+C+D+E+F+G+H+I+J) + J(E(X-1))$$$

$$$E(X)J = 1+J(E(X-1))$$$

$$$E(X) = \frac{1+J(E(X-1))}{J}$$$

Also $$$ J = \frac{X*X}{\frac{N*(N-1)}{2}}$$$.
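Since the judge wants the answer modulo $$$998244353$$$, the division by $$$J$$$ turns into multiplication by a modular inverse; a sketch of unrolling $$$E(X) = \frac{1}{J} + E(X-1)$$$ in that form:

```python
MOD = 998244353

def solve(arr):
    """E(X0) = sum over x = 1..X0 of C(n,2)/x^2 (mod MOD), where X0 is
    the count of 1's not at their sorted positions (NS_ones)."""
    n = len(arr)
    zeros = arr.count(0)
    x0 = sum(arr[:zeros])                  # 1's inside the zero region
    pairs = n * (n - 1) // 2 % MOD
    ans = 0
    for x in range(1, x0 + 1):
        inv = pow(x * x % MOD, MOD - 2, MOD)   # Fermat inverse of x^2
        ans = (ans + pairs * inv) % MOD
    return ans

# [1,1,1,0,0,1] has expectation 75/4, i.e. 75 * inverse(4) mod MOD
print(solve([1, 1, 1, 0, 0, 1]) == 75 * pow(4, MOD - 2, MOD) % MOD)   # True
```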

Code for your reference -> 177811759

The second possible case seems to be wrong, as there will be no $$$NS_{ones}$$$ after an $$$S_{ones}$$$. And I guess there will be 12 possible cases. I may be wrong; can you please check it once?

Although the final formula for the answer is absolutely correct :). Thanks, your explanation helped me!

After reading the Div1 C task, I was kind of surprised that it was not well known. It seems like something someone must have thought about already, but in actuality, nobody has. The problem was rather beautiful and elegant, yet fresh. Kudos to the problem author :)

:)

For D1E, it's probably worth noting that this idea is the Alien trick in disguise. In fact, for most of those problems where the naive approach is "take the first X elements from this heap", you can optimize it by binary searching on the last thing that will be selected from the heap (i.e., the Alien trick).

In Div1 C, my recurrence formula is

The number of zeros in the array is $$$g$$$, and $$$o$$$ is the initial number of zeros in the first $$$g$$$ positions.

$$$dp[i]$$$ means the expected number of operations needed so that the first $$$g$$$ positions of the original array contain $$$i$$$ zeros.

So $$$dp[o] = 0$$$ and the final answer is $$$dp[g]$$$, but it seems wrong; I don't know why.

Hi, I still don't understand problem 1753B - Factorial Divisibility. I don't understand the case when there are leftovers: why can we say that, if that happens, the number is not divisible by $$$x!$$$? For example, how do we know that $$$3\cdot 7! + 4\cdot 5!$$$ is not divisible by $$$8!$$$?

Could someone help me please?

$$$3\cdot 7! + 4\cdot 5! < 7\cdot 7! + 5\cdot 5! < 1\cdot 1! + 2\cdot 2! + \dots + 7\cdot 7! < 8!$$$ (the proof of the last transition is in the editorial)

If a number is less than $$$8!$$$, it can be divisible by $$$8!$$$ only if it is zero.
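The bound used here is the telescoping identity $$$\sum_{i=1}^{n} i \cdot i! = (n+1)! - 1$$$, so even the largest possible leftover (each $$$i!$$$ appearing at most $$$i$$$ times) stays below $$$(n+1)!$$$; a quick check:

```python
from math import factorial

def max_leftover(n):
    """Largest sum with each i! (i = 1..n) used at most i times."""
    return sum(i * factorial(i) for i in range(1, n + 1))

# telescoping: i * i! = (i+1)! - i!, so the sum collapses to (n+1)! - 1
for n in range(1, 10):
    assert max_leftover(n) == factorial(n + 1) - 1

# the example above: 3*7! + 4*5! is strictly below 8!
print(3 * factorial(7) + 4 * factorial(5) < factorial(8))   # True
```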

In problem Div1 E, what does "continue estimating this value fairly" mean? I think $$$4608$$$ is the proper upper bound. What is that $$$15000$$$? Is that a simpler (but rougher) estimate of $$$F(C)$$$?

Div1 E.

The $$$profit_j$$$ should be $$$(lmul - 1) \cdot rmul \cdot a_j$$$.

In Div1 C, how do we find $$$x$$$ such that $$$pq^{-1} \equiv x \pmod M$$$ if we have $$$p$$$ and $$$q$$$?
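Since $$$M = 998244353$$$ is prime, Fermat's little theorem gives $$$q^{-1} \equiv q^{M-2} \pmod M$$$, so $$$x = p \cdot q^{M-2} \bmod M$$$; a sketch:

```python
M = 998244353                      # prime modulus used by the problem

def mod_div(p, q):
    """Return x with q * x congruent to p (mod M), via Fermat's little theorem."""
    return p * pow(q, M - 2, M) % M

x = mod_div(75, 4)                 # represents the fraction 75/4 mod M
print(x * 4 % M == 75)             # True
```

In Python 3.8+ you can also write `pow(q, -1, M)` for the inverse; for a non-prime modulus you'd use the extended Euclidean algorithm instead.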

In the last problem, there are no tests countering 16-bit storage (see my submissions; it's not really an optimisation, just a bug: I misread the bounds at first). One such test:

Btw, my solution is a pretty dumb semi-brute force: for each diagonal, I do a 2-pointers-like algorithm to find maximal bad squares, with a bitset for the MEX; time $$$O(\min(N,M) K + NMT/\omega)$$$.

Can someone explain why linearity of expectation can't be used in Div1 C?

My explanation:

Let $$$M$$$ be the number of ones and $$$K$$$ the number of zeros in the $$$[N-M, N]$$$ region. Expected moves for all zeros = (moving the first 0 out of the places to the left of $$$N-M$$$) + (moving the second zero) + ... + (moving the subsequent zeros).

My understanding from https://www.youtube.com/watch?v=LX2q356N2rU video is.

It applies if all the events happening in a set contribute individually to the quantity of concern.

For example, if we have 3 green and 4 red balls and we pick 2 balls at random one after another, then the expected number of red balls = $$$\frac{4}{7} + \frac{4}{7}$$$.

Since in Div1 C we can't decompose the process into events such that each event contributes individually to a given zero's expected number of moves, we can't use linearity of expectation.

For C, does anyone know why they use $$$dp[k]$$$ on the right side of the recurrence relation?