The official implementations of all the problems are here.
Author: Kaey
Preparation: akifpatel
What's the most efficient way to make the sad students happy?
In most cases, you can make $$$2$$$ sad students happy in $$$1$$$ move.
Let $$$s$$$ be the number of sad students at the beginning. The answer is $$$\lceil \frac{s}{2} \rceil$$$.
In one move, you can make at most $$$2$$$ sad students happy (because you can change the position of at most two students), so you need at least $$$\lceil \frac{s}{2} \rceil$$$ moves.
In fact, you can make everyone happy in exactly $$$\lceil \frac{s}{2} \rceil$$$ moves:
- while there are at least $$$2$$$ sad students, you can swap them and both of them will be happy;
- if there is exactly $$$1$$$ sad student left, you can swap them with any other student (both end up happy, because $$$p$$$ is a permutation: $$$p_i = i$$$ implies $$$p_j \neq i$$$ for every $$$j \neq i$$$).
Complexity: $$$O(n)$$$
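A minimal sketch of this counting argument in C++ (assuming the usual input format: $$$t$$$ test cases, each with $$$n$$$ and the permutation $$$p$$$, where student $$$i$$$ is sad when $$$p_i = i$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, s = 0;
        scanf("%d", &n);
        for (int i = 1; i <= n; i++) {
            int p;
            scanf("%d", &p);
            if (p == i) s++;         // student i is sad
        }
        printf("%d\n", (s + 1) / 2); // ceil(s / 2)
    }
    return 0;
}
```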
1855B - Longest Divisors Interval
Author: TheScrasse
Preparation: TheScrasse
What's the answer if $$$n$$$ is odd?
Try to generalize Hint 1.
What's the answer if $$$n$$$ is not a multiple of $$$3$$$?
If the answer is not a multiple of $$$x$$$, the answer is $$$< x$$$. If the answer is a multiple of $$$1, \dots, x$$$, the answer is $$$\geq x$$$.
Suppose you find a valid interval $$$[l, r]$$$. Note that the interval $$$[l, r]$$$ contains at least one multiple of $$$x$$$ for each $$$1 \leq x \leq r-l+1$$$ (you can find it out by looking at the values in $$$[l, r]$$$ modulo $$$x$$$). Then, the interval $$$[1, r-l+1]$$$ is also valid and has the same length.
So, it's enough to check intervals with $$$l = 1$$$, i.e., find the smallest $$$x$$$ that does not divide $$$n$$$. The answer is $$$x-1$$$.
Complexity: $$$O(\log(\max n))$$$
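In code, the whole solution is one loop; the body runs at most about $$$43$$$ times, because $$$n$$$ would have to be a multiple of $$$\operatorname{lcm}(1, \dots, 43) > 10^{18}$$$ for it to run longer (a sketch, assuming $$$t$$$ test cases):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        long long n;
        scanf("%lld", &n);
        long long x = 1;
        while (n % x == 0) x++;  // smallest x that does not divide n
        printf("%lld\n", x - 1);
    }
    return 0;
}
```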
Author: TheScrasse
Preparation: akifpatel
There are several solutions to the easy version. In any case, how to get $$$a_i \leq a_{i+1}$$$?
For example, you can try making $$$a_{i+1}$$$ bigger using a positive element.
What to do if all the elements are negative?
If all the elements are negative, you can win in $$$n-1$$$ moves.
If there is a positive element, you can try to make $$$a_1 \leq a_2$$$, then $$$a_2 \leq a_3$$$, etc.
Make a big positive element using moves $$$(i, i)$$$, then make $$$a_2$$$ bigger.
If all the elements are negative, you can make suffix sums (they are nondecreasing) with moves $$$(n-1, n), (n-2, n-1), \dots$$$
If there is at least one positive element $$$a_x$$$,
- make $$$a_x > 20$$$ using $$$5$$$ moves $$$(x, x)$$$;
- make $$$a_2$$$ the biggest element using $$$2$$$ moves $$$(2, x)$$$;
- make $$$a_3$$$ the biggest element using $$$2$$$ moves $$$(3, 2)$$$;
- ...
This strategy requires $$$5 + 2(n-1) \leq 43$$$ moves.
Complexity: $$$O(n)$$$
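A sketch of this strategy (assuming $$$t$$$ test cases and that the output is the number of moves followed by one pair $$$(i, j)$$$ per move):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, x = -1;
        scanf("%d", &n);
        vector<int> a(n + 1);
        for (int i = 1; i <= n; i++) {
            scanf("%d", &a[i]);
            if (a[i] > 0) x = i;                    // any positive element works
        }
        vector<pair<int, int>> ops;
        if (x == -1) {
            // all elements <= 0: suffix sums are non-decreasing
            for (int i = n - 1; i >= 1; i--) ops.push_back({i, i + 1});
        } else {
            for (int k = 0; k < 5; k++) ops.push_back({x, x});  // make a[x] >= 32 > 20
            for (int i = 2; i <= n; i++) {
                int j = (i == 2 ? x : i - 1);       // position of the current maximum
                ops.push_back({i, j});              // two additions make a[i] the new maximum
                ops.push_back({i, j});
            }
        }
        printf("%d\n", (int)ops.size());
        for (auto [i, j] : ops) printf("%d %d\n", i, j);
    }
    return 0;
}
```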
Author: TheScrasse
Preparation: akifpatel, dario2994
The hints and the solution continue from the easy version.
You can win in $$$n-1$$$ moves if all the elements are negative, but also if all the elements are positive.
Again, make a big positive / negative element, then use it to make everything positive / negative.
In how many moves can you either make everything positive or make everything negative?
Assume the positive elements are at least as many as the negative elements. Can you win in $$$34$$$ moves?
The bottleneck is making the big positive element. Is it always necessary? Can you find a better bound on the "best" number of moves?
If you either make everything positive or make everything negative, you can win in $$$n-1 \leq 19$$$ moves by making prefix sums or suffix sums, respectively. So, you have to reach one of these configurations in $$$12$$$ moves.
How to make everything positive? First, you create a big element with the maximum absolute value (this requires $$$x_1$$$ moves), then you add it to every negative element (this requires $$$x_2$$$ moves). $$$y_1$$$ and $$$y_2$$$ are defined similarly in the negative case.
So,
- one of $$$x_1$$$ and $$$y_1$$$ is $$$0$$$ (because the element with the maximum absolute value at the beginning is either positive or negative), and the other one is $$$\leq 5$$$ (because you can make $$$|a_i| \geq 32$$$ in $$$5$$$ moves);
- $$$x_2$$$ is the number of negative elements, $$$y_2$$$ is the number of positive elements. So, $$$x_2 + y_2 \leq n \leq 20$$$.
Therefore, you need additional $$$\min(x_1 + x_2, y_1 + y_2)$$$ moves. Since $$$x_1 + x_2 + y_1 + y_2 \leq 25$$$, $$$\min(x_1 + x_2, y_1 + y_2) \leq \lfloor \frac{25}{2} \rfloor = 12$$$, as we wanted. Now you can simulate the process in both cases (positive and negative), and choose one that requires $$$\leq 31$$$ moves in total.
Complexity: $$$O(n)$$$
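A sketch of the full two-case simulation (same assumed I/O format as the easy version; by the argument above, the shorter of the two produced move lists has length at most $$$31$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Try to make sgn*a[i] >= 0 for every i, then sort with prefix/suffix sums.
// Returns the move list, or nullopt if there is no element of the wanted sign to grow.
optional<vector<pair<int, int>>> attempt(vector<ll> a, int sgn) {
    int n = (int)a.size() - 1;
    vector<pair<int, int>> ops;
    auto apply = [&](int i, int j) { a[i] += a[j]; ops.push_back({i, j}); };
    int big = 1;
    ll mxAbs = 0;
    for (int i = 1; i <= n; i++) {
        if (sgn * a[i] > sgn * a[big]) big = i;
        mxAbs = max(mxAbs, abs(a[i]));
    }
    if (sgn * a[big] <= 0 && mxAbs > 0) return nullopt;
    while (sgn * a[big] < mxAbs) apply(big, big);         // x1 (or y1): at most 5 doublings
    for (int i = 1; i <= n; i++)
        if (sgn * a[i] < 0) apply(i, big);                // x2 (or y2): fix wrong-signed elements
    if (sgn > 0)
        for (int i = 2; i <= n; i++) apply(i, i - 1);     // prefix sums: non-decreasing
    else
        for (int i = n - 1; i >= 1; i--) apply(i, i + 1); // suffix sums: non-decreasing
    return ops;
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<ll> a(n + 1);
        for (int i = 1; i <= n; i++) scanf("%lld", &a[i]);
        auto p = attempt(a, +1), q = attempt(a, -1);
        auto& best = (!q || (p && p->size() <= q->size())) ? p : q;
        printf("%d\n", (int)best->size());
        for (auto [i, j] : *best) printf("%d %d\n", i, j);
    }
    return 0;
}
```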
Author: TheScrasse
Preparation: akifpatel
The order of used cards doesn't matter. So, you can assume you always use the card on the top.
Suppose that you unlock $$$x$$$ cards in total. How many points can you get?
If you unlock $$$x$$$ cards, it means you end up making moves with the first $$$x$$$ cards. So, you know the total number of (cards + points) that you get.
If you unlock $$$x$$$ cards, the number of points is uniquely determined. It's convenient to assume that $$$x \leq 2n$$$ and the cards $$$n+1, \dots, 2n$$$ have value $$$0$$$.
Now you have to determine the unlockable prefixes of cards (i.e., the values of $$$x$$$ you can reach). It looks similar to knapsack.
You can optimize the solution using a bitset. Be careful not to use locked cards.
First, note that the order of used cards doesn't matter. If you ever use a card that is not on top of the deck, you can prove that using the cards in order (from the top) would give the same number of victory points.
Let's add $$$n$$$ cards with value $$$0$$$ at the end of the deck. Then, it's optimal to unlock $$$x \leq 2n$$$ cards, and use cards $$$1, \dots, x$$$, getting $$$a_1 + \dots + a_x - x + 1$$$ points.
Let's find the reachable $$$x$$$. Let $$$dp_i$$$ be a bitset that stores the reachable $$$x$$$ after using the first $$$i$$$ cards.
Base case: $$$dp_{0,1} = 1$$$.
Transitions: first, $$$dp_{i} \mathrel{|}= dp_{i-1} \ll a_i$$$. If $$$dp_{i,i} = 1$$$, you can update the answer with $$$a_1 + \dots + a_i - i + 1$$$, but you can't unlock any other card, so you have to set $$$dp_{i,i} = 0$$$ before transitioning to $$$i+1$$$.
Complexity: $$$O(n^2/w)$$$
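For concreteness, a sketch of the whole solution described above. The I/O format (a single test: $$$n$$$, then the array) and the bitset size are assumptions based on $$$n \le 10^5$$$; unlocking more than $$$2n$$$ cards is never useful, so bits shifted past the end can be dropped.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 200001;  // 2n <= 2 * 10^5
bitset<N> dp;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(2 * n + 1, 0);                 // cards n+1..2n are the appended zeros
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
    vector<long long> pref(2 * n + 1, 0);
    for (int i = 1; i <= 2 * n; i++) pref[i] = pref[i - 1] + a[i];
    dp[1] = 1;                                   // one card is unlocked at the start
    long long ans = 0;
    for (int i = 1; i <= 2 * n; i++) {
        dp |= dp << a[i];                        // option: spend card i on unlocking
        if (dp[i]) {                             // exactly i cards unlocked, all used: game over
            ans = max(ans, pref[i] - (i - 1));   // a_1 + ... + a_i - i + 1
            dp[i] = 0;                           // a finished game cannot continue
        }
    }
    printf("%lld\n", ans);
    return 0;
}
```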
Author: TheScrasse
Preparation: akifpatel
Consider $$$n$$$ blocks in positions $$$S_1, S_2, \dots, S_n$$$. After how much time does block $$$x$$$ disappear? It may be convenient to put a fake "static" block in position $$$m+1$$$.
Block $$$x$$$ disappears when it reaches block $$$x+1$$$. But what if block $$$x+1$$$ disappears before block $$$x$$$?
From the perspective of block $$$x$$$, it's convenient to assume that block $$$x+1$$$ never disappears: when it touches another block $$$y$$$, it's $$$y$$$ that disappears.
When you consider the pair of blocks $$$x, x+1$$$, the other blocks don't really matter, and you can use linearity of expectation to calculate the contribution of each pair independently.
A reasonable interpretation is given by considering an $$$(n+1) \times (m+1)$$$ grid, where the $$$i$$$-th row initially contains a block in column $$$S_i$$$. Then, you are calculating the expected time required for the blocks $$$1, \dots, m$$$ to have another block immediately below them (in the same column).
Blocks $$$x, x+1$$$ both move with probability $$$1/2$$$, unless block $$$x+1$$$ has reached position $$$m+1$$$.
$$$dp_{i,j} =$$$ expected number of moves of block $$$x$$$ before it disappears, if the block $$$x$$$ is in position $$$i$$$ and the block $$$x+1$$$ is in position $$$j$$$.
Consider an $$$(n+1) \times (m+1)$$$ grid, where the $$$i$$$-th row initially contains a block in column $$$S_i$$$, and row $$$n+1$$$ contains a block in column $$$m+1$$$.
- The set is empty if all the blocks are in column $$$m+1$$$; i.e., if every block has reached the block in the following row.
- Every "connected component" of blocks (except the last one) represents an element in the set. These components move equiprobably.
Let's calculate the expected time required for the block in row $$$x$$$ to "reach" the block in row $$$x+1$$$. If you consider a single pair of blocks, every block moves with probability $$$1/2$$$, unless block $$$x+1$$$ is in column $$$m+1$$$.
So, you can calculate $$$dp_{i,j} =$$$ expected number of moves of the block $$$x$$$ before it reaches the block $$$x+1$$$, if the block $$$x$$$ is in position $$$i$$$ and the block $$$x+1$$$ is in position $$$j$$$.
The base cases are $$$dp_{i,m+1} = (m+1)-i$$$ (because only the block $$$x$$$ can move) and $$$dp_{i,i} = 0$$$ (because block $$$x$$$ has already reached block $$$x+1$$$). In the other cases, $$$dp_{i,j} = ((dp_{i+1,j} + 1) + dp_{i,j+1})/2$$$.
By linearity of expectation, the answer is the sum of $$$dp_{S_i, S_{i+1}}$$$ over $$$1 \leq i \leq n$$$, where $$$S_{n+1} = m+1$$$ is the fake static block.
Complexity: $$$O(m^2)$$$
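A sketch of this DP, assuming the expected value is requested modulo $$$998244353$$$ and that the positions $$$S_i$$$ are given in increasing order:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll MOD = 998244353;       // assumption: answer requested modulo 998244353
const ll INV2 = (MOD + 1) / 2;  // modular inverse of 2

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<int> s(n + 2);
    for (int i = 1; i <= n; i++) scanf("%d", &s[i]);
    s[n + 1] = m + 1;           // the fake static block
    // dp[i][j]: expected number of moves of the left block before it reaches the
    // right one, when they are in positions i < j (a block at m+1 never moves)
    vector<vector<ll>> dp(m + 2, vector<ll>(m + 2, 0));
    for (int i = m; i >= 1; i--) {
        dp[i][m + 1] = m + 1 - i;                // only the left block can move
        for (int j = m; j > i; j--)              // dp[j][j] = 0 is the other base case
            dp[i][j] = (dp[i + 1][j] + 1 + dp[i][j + 1]) % MOD * INV2 % MOD;
    }
    ll ans = 0;
    for (int i = 1; i <= n; i++) ans = (ans + dp[s[i]][s[i + 1]]) % MOD;
    printf("%lld\n", ans);
    return 0;
}
```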
Author: TheScrasse
Preparation: akifpatel
You can find any $$$a_i$$$ in $$$9$$$ queries.
Find the nodes in the cycle in the component with node $$$1$$$. What happens if you know the whole cycle?
Suppose you already know some nodes in the cycle. Can you find other nodes faster?
Can you "double" the number of nodes in the cycle?
The component with node $$$1$$$ contains a cycle. If you know the whole cycle (of length $$$x$$$), you can win in $$$n-x$$$ queries by asking for each node if it ends up in the cycle after $$$n$$$ moves.
You can get a node in the cycle in $$$9$$$ queries, doing a binary search on the $$$n$$$-th successor of node $$$1$$$.
How to find the rest of the cycle?
- First, find $$$k$$$ nodes in the cycle, doing a binary search on the successor of the last node found. These nodes form a set $$$C$$$.
- Then, check for each node if it's "sufficiently close" to $$$C$$$, by asking $$$(i, k, C)$$$. Now, you know either $$$2k$$$ nodes in the cycle, or the whole cycle.
- Repeat until you get the whole cycle.
If you choose $$$k = 63$$$, you spend at most $$$9 \cdot 63 + (500 - 63) + (500 - 126) + (500 - 252) + (500 - 252) = 1874$$$ queries.
Author: dario2994
Preparation: akifpatel, dario2994
Go for a randomized approach.
Many ones are useful.
Either you go for a greedy or for a backpack.
We describe a randomized solution that solves the problem for $$$m$$$ up to $$$10^{11}$$$ (and, with some additional care, may be able to solve also $$$m$$$ up to $$$10^{12}$$$). We decided to give the problem with the smaller constraint $$$m\le 10^{10}$$$ to make the problem more accessible and because there may be some rare cases below $$$10^{11}$$$ for which our solution is too slow (even though we could not find any). We don't know any provably correct solution; if you have one, we would be curious to see it. We expect to see many different solutions for this problem.
Main idea: Choose suitably the values $$$a_1, a_2, \dots, a_h$$$ that belong to $$$[1,29]$$$ and then find $$$a_{h+1}, a_{h+2},\dots,a_k$$$ in $$$[31,60]$$$ by solving a backpack-like problem.
Let us describe more precisely the main idea. Assume that $$$a_1, a_2, \dots, a_h\le 30$$$ are fixed and they satisfy $$$a_1+a_2+\cdots+a_h<60$$$. For any $$$s=0,1,2,\dots,29$$$, let $$$f(s)$$$ be the number of subsets $$$I\subseteq\{1,2,\dots,h\}$$$ such that $$$\sum_{i\in I}a_i=s$$$. If we can find some values $$$0\le s_1,s_2,\dots,s_{k-h}\le 29$$$ so that $$$f(s_1)+f(s_2)+\cdots+f(s_{k-h})=m$$$, then by setting $$$a_{h+i} = 60-s_i$$$ for $$$i=1,2,\dots, k-h$$$ we have found a valid solution to the problem. (Indeed, a subset with sum $$$60$$$ must contain exactly one of the large values: the small values alone sum to less than $$$60$$$, and any two large ones exceed $$$60$$$.)
There are two main difficulties:
- How can we find $$$s_1, s_2,\dots, s_{k-h}$$$?
- How should we choose $$$a_1, a_2,\dots, a_h$$$?
Since it is important to get a good intuitive understanding of the computational complexity of the algorithm, let us say now that we will choose $$$h\le 44$$$ and (accordingly) $$$k-h=16$$$. These values are flexible (the solution would still work with $$$h\le 45$$$ and $$$k-h=45$$$, for example). We will say something more about the choice of these values when we describe how $$$a_1,a_2,\dots, a_h$$$ shall be chosen.
The backpack problem to find $$$s_1, s_2,\dots, s_{k-h}$$$.
The naive way to find $$$s_1,\dots, s_{k-h}$$$ would be to try all of them. There are $$$\binom{k-h + 29}{29}$$$ possible ways (up to order, which does not matter). Since $$$k-h=16$$$ this number is $$$\approx 2\cdot 10^{11}$$$ which is too much to fit in the time limit.
To speed up the process, we will do as follows. Randomly partition $$$\{0,1,\dots, 29\}$$$ into two sets $$$A$$$ and $$$B$$$ of size $$$15$$$. We iterate over all possible $$$s_1, s_2, \dots, s_{(k-h)/2}\in A$$$ and over all possible $$$s_{(k-h)/2+1},\dots, s_{k-h}\in B$$$ and check whether the sum of one choice from the first group and one choice from the second group yields the result. This is a standard meet-in-the-middle optimization for the subset sum problem. What is its complexity? It can be implemented in linear time in the size of the two groups we have to iterate over, which have size $$$\binom{(k-h)/2+15}{15}\approx 5\cdot 10^5$$$. Notice that in this faster way we will not visit all the $$$\binom{k-h+29}{29}$$$ possible choices of $$$s_1,s_2,\dots, s_{k-h}$$$, because we are assuming that exactly half of them belong to $$$A$$$ and exactly half to $$$B$$$. This is not a big deal because, with sufficiently high probability, we will find a solution in any case.
The choice of $$$a_1, a_2,\dots, a_{h}$$$.
It remains to decide how we should select $$$a_1, a_2, \dots, a_{h}$$$. The following choice works:
- Approximately the first $$$\log_2(m)$$$ values are set equal to $$$1$$$.
- Five additional values are chosen randomly from $$$[1, 6]$$$ so that the total sum stays below $$$60$$$.
One should repeat the whole process until a solution is found.
Some intuition on the construction. The choice of $$$a_1, \dots, a_{h}$$$ may seem arbitrary; let us try to justify it. The goal is to generate a set of values $$$f(0), f(1),\dots, f(29)$$$ that are simultaneously "random enough" and of size smaller than, but comparable to, $$$m$$$. These two conditions are necessary to expect that the backpack problem finds a solution with high enough probability.
If $$$a_1=a_2=\cdots=a_{h}=1$$$, then $$$f(s) = \binom{h}{s}$$$ and these numbers have size comparable to $$$m$$$ if $$$2^{h}$$$ is comparable to $$$m$$$. This observation explains why we start with approximately $$$\log_2(m)$$$ ones. The issue is that we need some flexibility in the process, as we may need to repeat it many times; this flexibility is provided by the addition of some extra random elements, which don't change the magnitude of the values $$$f(0), f(1), \dots, f(29)$$$ but modify them as much as possible (if we added a large number, it would not affect many $$$f(s)$$$ and thus would not be very useful).
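For illustration, a sketch of how the table $$$f(0), \dots, f(29)$$$ can be computed for a fixed choice of $$$a_1, \dots, a_h$$$; the cap at $$$m+1$$$ is our own safety assumption (counts above $$$m$$$ can never be part of a valid answer, and capping avoids overflow). The meet-in-the-middle step described above then searches for $$$s_1, \dots, s_{k-h}$$$ with $$$f(s_1) + \cdots + f(s_{k-h}) = m$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// f[s] = number of subsets of {a_1, ..., a_h} with sum exactly s
// (only s <= 29 is ever queried, but computing up to 59 costs nothing).
vector<ll> subsetCounts(const vector<int>& a, ll m) {
    vector<ll> f(60, 0);
    f[0] = 1;                                    // the empty subset
    for (int v : a)
        for (int s = 59; s >= v; s--)            // backwards: each element used at most once
            f[s] = min(f[s] + f[s - v], m + 1);  // cap: counts above m can never help
    return f;
}
```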
Author: dario2994
Preparation: akifpatel, dario2994
Solve the 2d version first.
The 4d version is not too different from the 2d one.
Find all the points such that the expected number of necessary moves is wrong.
The $$$2$$$-dimensional case.
Let us begin by considering the $$$2$$$-dimensional version of the problem. The solution to this simpler version provides the idea of the approach for the $$$4$$$-dimensional version.
We want to reach $$$(a, b)$$$. Can we do it with exactly $$$k$$$ moves? Two simple necessary conditions are:
- $$$|a|+|b|\le 1 + 2 + \cdots + k$$$,
- $$$a+b$$$ and $$$1 + 2 + \cdots + k$$$ shall have the same parity.
It turns out that these two conditions are also sufficient! One can prove it by induction on $$$k$$$ as follows. If $$$k=0$$$ or $$$k=1$$$ or $$$k=2$$$ the statement is simple, thus we may assume $$$k\ge 3$$$.
Without loss of generality we may assume $$$0\le a\le b$$$. If $$$|a|+|b-k| \le 1 + 2 + \cdots + (k-1)$$$, then the statement follows by the inductive hypothesis. Assume by contradiction that this inequality is false. If $$$b\ge k$$$ then we have a contradiction because $$$|a|+|b-k| = |a|+|b|-k \le (1 + 2 + \cdots + k) - k$$$. Otherwise $$$b < k$$$ and the contradiction is $$$|a|+|b-k| = a + k-b \le k \le 1 + 2 + \cdots + (k-1)$$$.
Hence, we have shown:
Lemma 1: The point $$$(a, b)$$$ is reachable with exactly $$$k$$$ moves if and only if $$$|a|+|b| \le 1 + 2 + \cdots + k$$$ and $$$a+b$$$ has the same parity of $$$1+2+\cdots + k$$$.
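Lemma 1 translates directly into a constant-time checker (a sketch):

```cpp
#include <cstdlib>

// Direct translation of Lemma 1: is (a, b) reachable with exactly k moves?
bool reachable2d(long long a, long long b, long long k) {
    long long t = k * (k + 1) / 2;  // 1 + 2 + ... + k
    return llabs(a) + llabs(b) <= t && (a + b - t) % 2 == 0;
}
```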
The $$$4$$$-dimensional case.
One may expect a statement analogous to the one of Lemma 1 to hold also when there are $$$4$$$ coordinates. It does not, but it almost does, and this is the crucial idea of the solution. More precisely, the number of counterexamples to such a statement is rather small and we can find all of them. This is the intuition behind the following definition.
Definition: For $$$k\ge 0$$$, let $$$A_k$$$ be the set of points $$$(a, b, c, d)$$$ such that $$$|a|+|b|+|c|+|d|\le 1 + 2 + \cdots + k$$$ and $$$a+b+c+d$$$ has the same parity of $$$1 + 2 + \cdots + k$$$ but $$$(a, b, c, d)$$$ is not reachable with exactly $$$k$$$ moves.
As an immediate consequence of the definition, we have
Observation: The point $$$(a, b, c, d)$$$ is reachable with exactly $$$k$$$ moves if and only if $$$|a|+|b|+|c|+|d| \le 1 + 2 + \dots + k$$$ and $$$a+b+c+d$$$ has the same parity of $$$1+2+\cdots + k$$$ and $$$(a, b, c, d)\not\in A_k$$$.
Thanks to this observation, if one is able to efficiently find $$$A_k$$$ for all interesting values of $$$k$$$, then solving the problem is (comparatively) easy. The following lemma is our main tool for this purpose.
Lemma 2: Assume that $$$(a, b, c, d) \in A_k$$$ with $$$0\le a\le b\le c\le d$$$. Then, either $$$k\le 6$$$ or $$$(a, b, c, d - k) \in A_{k-1}$$$.
Proof: The strategy is the same as the one adopted to show Lemma 1. In some sense, we are saying that the inductive step works also in dimension $$$4$$$, but the base cases don't.
If $$$|a|+|b|+|c|+|d-k|\le 1 + 2 + \cdots + (k-1)$$$, then it must be that $$$(a, b, c, d-k)\in A_{k-1}$$$, because if $$$(a, b, c, d-k)$$$ were reachable with $$$k-1$$$ moves then $$$(a, b, c, d)$$$ would be reachable with $$$k$$$, and we know that this is not true.
Assume by contradiction that $$$|a|+|b|+|c|+|d-k|> 1 + 2 + \cdots + (k-1)$$$. If $$$d\ge k$$$ then we reach the contradiction $$$|a|+|b|+|c|+|d-k| = a+b+c+d-k \le (1 + 2 + \dots + k) - k$$$. Otherwise, $$$d < k$$$ and thus we reach the contradiction $$$|a|+|b|+|c|+|d-k| = a+b+c+k-d\le a+b+k\le 3k-2\le 1 + 2 + \dots + (k-1)$$$ (for $$$k\ge 7$$$).
We can now describe the solution. Assume that we know $$$A_{k-1}$$$. First of all, notice that it is then possible to determine in $$$O(1)$$$ whether a point belongs to $$$A_k$$$ or not. To generate a list of candidate elements for $$$A_k$$$ we proceed as follows:
- If $$$k\le 6$$$, we simply iterate over all points with $$$|a|+|b|+|c|+|d|\le 1 + 2 + \cdots + k$$$.
- Otherwise, we iterate over the points in $$$A_{k-1}$$$ and we consider as candidate elements for $$$A_k$$$ the points that can be obtained by changing the value of one coordinate by $$$k$$$.
Thanks to Lemma 2, we know that this process finds all the elements in $$$A_k$$$. Once $$$A_0, A_1, A_2, A_3,\dots$$$ are known, the problem boils down to a (relatively) simple counting argument that we skip.
One can verify that to handle correctly all points with coordinates up to $$$1000$$$ it is necessary to compute $$$A_k$$$ for $$$0\le k \le 62$$$.
One additional cheap trick is required to make $$$A_k$$$ sufficiently small and get a sufficiently fast solution. Given $$$(a, b, c, d)$$$, the instance of the problem is equivalent if we change the signs of the coordinates or we change the order of the coordinates. Hence we shall always "normalize" the point so that $$$0\le a \le b\le c\le d$$$. If we do this consistently everywhere in the process, the solution becomes an order of magnitude faster. In particular, this trick guarantees $$$|A_k|\le 5000$$$ for all $$$0\le k\le 62$$$.
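The normalization itself is tiny (a sketch):

```cpp
#include <algorithm>
#include <array>
#include <cstdlib>

// The instance is invariant under negating and permuting coordinates,
// so canonicalize every point to 0 <= a <= b <= c <= d.
std::array<int, 4> normalize(std::array<int, 4> p) {
    for (int& x : p) x = std::abs(x);
    std::sort(p.begin(), p.end());
    return p;
}
```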
Bonus question: Find an explicit closed form for the elements in $$$A_k$$$ for any $$$k$$$. (in this way one can solve the problem also with larger constraints on $$$A, B, C, D$$$; but it is tedious)
really nice problems and fast editorial, thanks for the contest
bitset in the author's solution, why????????????????????????????????????????
it's something new
Has to be one of the most bullshit problems lately. Alright, I guess it's not that bad, I just got mad from having the right knapsack approach and not seeing bitsets lol
it's way easier to think of n^2 dp without bitsets
Then something could have been changed so that the trivial DP solution didn't pass. Bitset is just a constant optimization and not a complexity optimization.
So is problem A. Constant optimizations are fine if they cause a big difference in runtime; cp is not purely theoretical...
That's different. A pragma can make code run 2x faster. That doesn't mean there should be pragma problems on contests. Plus, 10^5 is usually intended to be solved in at most $$$N\sqrt N$$$, not quadratic time.
Just because the bound is in a certain range does not mean the solution should be a certain subset of complexities. All that matters is whether it runs fast or not, and preferably the intended solution runs significantly faster than worse approaches (32x is significant, 2x is not).
but i am bitset-phobic
Bitsets have already been used in problems. There was a problem with a solution that runs in $$$O(N^3 / 64)$$$, where $$$N$$$ was up to $$$1000$$$
n <= 2000 as well: AGC 20 C, set by tourist
It is different because in problem A, we can get the exact number of operations, while in problem B we can only get a rough number. The exact number depends on the language (I think this is the key problem) and on how we implement the same algorithm.
by the way, bitset is acceptable for me. But I still think n=10^5 is a bad idea. Someone told me that w can be 256 on Codeforces; then n=10^5 makes sense. But that seems like a hardware feature? There are many ways to speed up programs with parallel techniques, and most of them need the support of the compiler, language, OS and hardware. I think it is far beyond the scope of Codeforces.
If you assume the values of the problem $$$(0 \dots n)$$$ fit into an integer, then implicitly you assume that the length of the integer data type you are using is $$$\geq \log(n)$$$. That means that if you apply bitsets, you are guaranteed to at least divide the runtime by a $$$\log(n)$$$ factor. So actually the solution is $$$O(n^2 / \log(n))$$$ or better. Another justification for this lower bound of $$$\log(n)$$$ on the word size is here: https://en.wikipedia.org/wiki/Word_RAM.
This means that it is better than quadratic.
You cannot assume a bound for n when calculating time complexity. Following that logic, since the code will do at most 10^10 operations, theoretically it's O(1) with an extremely high constant. Sure, in practice you can optimize quadratic by 32 or 64 times, but it's still quadratic.
Edit: Lol
It's not that you bound the maximum $$$n$$$; the actual logic is: assuming you are going to run testcases with huge values of $$$n$$$, you will also upgrade your computer to have a bigger word size, otherwise you can't even store the input inside integers.
That's fair, but the optimization is still limited by your computer's architecture. Your compiler may fit 128 or 256 bits into an integer on a 32-bit architecture, meaning you're still getting a x32 optimization.
Complexity depends on what you count as a single operation. The word RAM model is probably the most common model, where you consider an integer operation as one operation, which is why you are able to divide by a factor of lg n in the complexity.
If you instead want to say every bit flip counts as a single operation, you have to multiply in the lg n cost of every addition/multiplication every time you do big-O complexity, which is almost always not very insightful and annoying.
How is it possible to do this, as n was given as 10^5?
lol
I disagree. If $$$O(n^2)$$$ passes, then people could formulate a dp without noticing the connection to knapsack.
the bitset is still O(n^2)...
it's at least O(n^2/log n) as pointed out by jeroenodb
no, the time complexity remains O(n^2); it's simply been optimized by a constant factor of 32-64, which doesn't affect the actual complexity? if my understanding is correct
that 32/64 is coming from the log of the integer size (log 10^9 or log 10^18), so it's a log factor.
yea it's the log of the int size, which is a constant, so it's still a constant factor. comparatively, it wouldn't be accurate to say that it runs in O(n^2/cbrt(n)) just because of the fractional constant factor.
based on your arguments, won't log(max(a[i])) be constant as well? then isn't the complexity O(n log(max(a[i]))) just O(n)?
n <= 10^5, so n^2 <= 10^10, which is constant, hence our complexity is actually O(1)
that's fair, but i think i've always just considered the complexity to be based solely on the constraints given in the problem. with a bitset, the optimization isn't actually based on the max size of the constraint, but rather the max size of our fixed data structure, so it's not counted as part of the complexity in terms of n. that's why it's always represented as O(n^2/w) and not O(n^2/log(n)), as the w does not actually come from taking the log of n, but from the fixed size of the container. ig another thing is that O(n^2/log(n)) for n=1e5 comes around to 6e8, which is far too high regardless.
to expand on this, big O notation is defined as the worst case, in terms of n, regardless of how it's constrained in the problem. As we scale up n, the worst case time complexity becomes equivalent to O(n^2/w), where w is a constant, rather than O(n^2/log(n)), so it makes sense to me to call it O(n^2).
shouldn't it be O(n^2/w)? and in bitset w is always 32 or 64? qwq
Does this mean it cannot be solved in any language other than C++?
Welcome to codeforces
Sometimes it's difficult to make a problem suitable for other languages while not nerfing it for C++ enjoyers. When there are problems with an $$$O(N\sqrt{N})$$$ solution, there are people who do magic and get AC with an $$$O(N^2)$$$ solution just because of AVX and because stupid solutions have incredible constants while smart solutions have a big constant
No; see this Python solution by misorin. (Congrats on the first Python AC!)
You can technically write your own bitset in most languages (like this witchcraft: https://mirror.codeforces.com/contest/1854/submission/216290667)
Tried to implement a BitSet in Java but got TLE (just as expected): submission.
Your solution seems to be N^2/8 instead of N^2/32; why didn't you use all 32/64 bits of your int/long?
Alright, so there were some bugs in your implementation (plus I didn't know a few things about Java, like unsigned right shift), but ultimately I managed to get AC with your code in 2947ms, by modifying your Java BitSet implementation.
Should be possible to optimize it further easily. (i % 64 can be written as i & 63, along with some other minor adjustments and optimizations)
EDIT: AC in 1762ms with some minor optimizations such as i % 64 -> (i & 63) and i / 64 -> (i >> 6) and i * 64 -> (i << 6)
You can optimize it even more to 967ms if you simplify and inline the getWord function: 216865905.
That is amazing!
I thought what more could be done here, and got 779ms by moving the branching outside the critical loop. I doubt if anything more can be done here.
You can solve it in C!
Simply use bitset.
can't agree more!!!!!!!!!!!!!!!!!!
No, no one is retarded and your post is offensive. There is nothing inherently bad/wrong with bitsets.
By the way, if any author of a problem whose solution uses bitsets were retarded, then you are offending someone unexpected.
p.s. There is the possibility that you were sarcastic and I am a boomer.
p.p.s. The first problem I linked is the problem that taught me bitsets.
For people who didn't understand from the links: the author of the editorial is Lord Tourist.
https://img.atcoder.jp/agc020/editorial.pdf
Sorry bro, didn't mean to be offensive, I was too emotional when I wrote this (because of my skill issue). I changed my comment to what I actually meant
to answer your question: why not? it's a perfectly valid optimization, and the operation count is only around 1.5 x 10^8, which is very reasonable for 3 seconds.
There are at least 2 observations that would not be required to solve the problem if n <= 5000 or similar:
1) you can find the victory points as (sum of unlocked prefix) - (prefix size - 1), instead of storing the max victory points in a dp state
2) you only ever need to open at most 2n things
for instance, the solution dp[index][prefix opened] = maximum victory points needs neither observation and is much easier than the current intended solution.
Since, bitset is used, I think it is worth bringing up how powerful bitsets really are :p
https://touristfacts.dikson.xyz/?fact=43
Excellent, insanely fast editorials!
Can someone explain the bitset part for Earn or Unlock? What is $$$dp_{i, j}$$$? How to update it?
$$$dp_{i,j}$$$ is similar to a knapsack $$$dp$$$. $$$dp_{i,j}$$$ says if it's possible to unlock exactly $$$j$$$ cards after the first $$$i$$$ moves (in knapsack it says if the sum of weights can be $$$j$$$ if we look at the first $$$i$$$ elements). In knapsack you can see that to count all values of $$$dp_i$$$ you only need the values of $$$dp_{i-1}$$$, and it can be further optimised to a one-layer dp. So, in other words, the knapsack $$$dp$$$ can be written as $$$dp_i \mathrel{|}= dp_{i-a_j}$$$, where we iterate $$$j$$$ from $$$0$$$ to $$$n - 1$$$. Consider this $$$dp$$$ as a giant bitmask. Then, when we update the dp with $$$a_j$$$, this is equivalent to updating $$$dp$$$ like this: $$$dp \mathrel{|}= (dp \ll a_j)$$$. In this problem $$$dp$$$ is calculated in a similar (not the same) way.
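For reference, the classic bitset subset-sum the comment above describes looks like this (a sketch; the bound on the sum is an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int S = 100001;  // assumed bound on the target sum

// Classic subset-sum with a bitset: can some subset of `a` sum to exactly s?
// The whole dp is one giant bitmask; `dp |= dp << x` adds element x to every
// reachable sum at once, processing w sums per machine word.
bool subsetSum(const vector<int>& a, int s) {
    bitset<S> dp;
    dp[0] = 1;                      // the empty subset sums to 0
    for (int x : a) dp |= dp << x;
    return dp[s];
}
```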
It was curious but cool enough, thank you :)))
But why was n,m <= 500 in task C if a solution works in $$$O(nm)$$$?
there exist some cubic solutions (like mine) which they wanted to pass as well ig
nice problems and rly quick editorial with hints (best kind of editorial). thanks for the contest
I submitted a solution to B https://mirror.codeforces.com/contest/1854/submission/216340662 at the end of the contest which I knew wouldn't get AC (I had nothing to lose at this point)
can't believe the author's accepted solution is the same but using bitset :\ . I can already see a day when the author's solution is gonna require some #pragma optimiza_some_bs. Really didn't like this question at all.
unlike pragma, bitsets are not constant time optimizations in some sense
When using pragmas in different situations you can get different results, sometimes RE, or the time even increases. However with bitset you know that everything works $$$64$$$ times faster
I really liked div2C2, it is a well-rounded problem, and discovering why k=31 is the worst case was really fun!
I was stuck at 34 moves; how did you come to the 31-move solution?
The solution when every value is either non-negative or non-positive takes at most 19 moves.
For C1 my idea was to convert negatives to positives when there are fewer of them, and vice versa. The converting is done by adding the biggest positive to every negative value. However, if the value of the maximum positive is smaller than the absolute value of the smallest negative, you must waste at most 5 moves by adding it to itself; that is when maxPositive = 1 and minNegative = -20. The worst case scenario is when there is an equal number of negative and positive values. So for C1 my solution takes at most 19+5+10=34 moves. I guess you discovered that already.
Now, for C2, notice that if the maximum absolute value is positive, you need at most 19+countOfNegatives moves. This strategy only takes 10+19 when the number of negative and positive values is equal. And it is better than our first strategy as long as the difference between the number of negative and positive values is smaller than 5. This means that the turning point is when the number of negatives is 12 and the number of positives is 8. The second strategy then takes 19+12=31 moves! And when the number of negatives is 13, positives 7, the first strategy is better, taking 19+5+7=31 moves!
your pfp offends me
damn, the explanation was too good. kudos Zandler
Helped me a lot!!
This was an amazing round, thank you for the nice problems :D
Feedback on the Div. 1 round:
tl;dr ACD are great problems; BE are mid. The problem statements were great, but there were numerous FSTs that probably could have been avoided. Overall, the round was pretty good.
A: Great problem, although I think the n <= 50 subtask should have been worth a bit less since the observation that the problem is easy when all numbers are positive/negative was a lot more obvious to me than the observation that we can make all numbers positive/negative in 12 moves and then finish in 19.
B: Not an awful problem, but coming up with the DP is pretty trivial and the hardest part is just confirming that bitset will be fast enough. Not my favorite task.
C: Nice little EV problem. n <= 500 baited me into thinking the intended solution may be O(n^3), but the DP is cute and it's a nice linearity application.
D: Fun interactive; I really enjoyed trying different ideas to puzzle this one out. Probably my favorite problem of the round.
E: The idea of trying to start with a bunch of small numbers and then add in some big ones to make the desired answer felt pretty standard, so the main new part here is that even if any individual starting set might not work for all N, we can quickly try a bunch of different starting sets until we find one that's valid. (fwiw, my implementation is a lot simpler than what the editorial describes: I literally just brute force starting sets in a reasonable order and end up ACing in 15ms.) This felt a bit like a guessing game and didn't feel satisfying or clever to come up with. I think I (mildly; it's not a terrible problem by any means) dislike this problem for the same reason I dislike B: the solution is basically just to tweak something stupid rather than to make a clever observation.
F: Didn't read. (fwiw, I think this and the last round are maybe a reason to have Div. 1.5 rounds rated for, say, 1600-3000 or something, and/or to have five-problem Div. 1 rounds; problems like this one are relevant to only a handful of people in the contest pool, and it might allow for more contests if not every round needs to have a really hard problem at the end.)
Feedback on preparation: I liked that pretests = systests on D/E (I think, anyway), and the problem statements were all clear, short, and easy to parse. However, I think A should have had a higher test limit (I don't see why 1-2k tests per input would be infeasible?) and B and C should have been multitested. (I don't think multitesting makes the complexity calculation weird for either of those problems or allows unintended solutions to pass.)
Thanks to the authors for the round!
Thanks for the feedback! :)
Problem B does not have multitests because there is no easy way to declare variable-length bitsets.
Wouldn't the complexity stay the same if you declare one bitset of size 2e5 and use it for every test case (setting $$$O(n)$$$ elements to zero after each TC), assuming the sum of $$$n$$$ was bounded by $$$10^5$$$?
That's true, I missed it. So it would have made sense to make multiple testcases, sorry.
No worries, thanks for the response and for preparing the contest!
Well, a bitset would still work in $$$O(\mathrm{maxN} / w)$$$, so multitests could be difficult to include here
Yes, but that's ok, because the complexity becomes $$$O(n \cdot \text{MAXN}/w)$$$ per testcase and $$$O(n^2/w)$$$ in total.
can someone explain what the factor "w" is in the complexity of 1854B?
$$$w$$$ is the system word length (basically, the number of bits the processor stores in one unit; generally either 32 or 64).
Thank you, but how could that complexity pass the n<=1e5 constraint?
With $$$n \leq 10^5$$$, we have $$$\frac{n^2}{32} \approx 3 \cdot 10^8$$$, suggesting that we're fine given the 3s time limit. In practice, bitsets have a pretty good constant factor, and my code passes in about 1s.
I mean, when seeing the constraints and the time limit, I knew the complexity would be really weird, but I would never have thought of that. Thank you!
tourist's solution for E seems not to be randomised, although I don't really understand what it does and why there is some magic constant defined in it (216302036)
fwiw, mine is also not randomized. I iterate over the possible "base sets" in an order that's basically just counting in base 60. For a given base set, I check if an answer exists by computing the number of ways $$$dp[sum]$$$ to get each sum up to 60, then adding additional values. In particular, I iterate over $$$i$$$ from $$$60 - hi$$$ to $$$60$$$, where $$$hi$$$ is the number we can get in the most distinct ways. Then, while $$$dp[60-i] + dp[60] \leq m$$$, I add one copy of $$$i$$$ to the solution. If this process generates a valid solution, we can output it.
Checking a given base set takes $$$O(60^2)$$$ time. The expected number of base sets we must check is small (intuitively there is no reason any given number will fail for all base sets), and my solution passes in 15ms.
Thanks! It makes sense. tourist's solution seems similar.
What could be the difficulty ratings for all the problems?
CLIST usually gives quite accurate problem ratings
My solution for E:
Use $$$a$$$ $$$c_a$$$ times and $$$b$$$ $$$c_b$$$ times, where $$$1\leq a<b\leq 30$$$. And for every $$$x$$$ from $$$31$$$ to $$$60$$$, we'll put an $$$x$$$ in if the total count does not exceed $$$m$$$. Because $$$x>30$$$, we can't include $$$2$$$ numbers of the latter type in one set, so the total count will be trivial to calculate. The time complexity is $$$O(60^5)$$$.
I don't know why this is correct but it actually got Accepted. Submission: 216320776.
216743005
Funny but it seems like setting a=2 and b=3 would still be enough
Can you help me with d2D? I have a slightly different approach with dp: dp[i].first = minimal cost of cards to unlock the ith card, dp[i].second = max j such that the jth card can be unlocked using the same set of cards as the ith card. I recalc the dp with a segtree. Why can this idea lead to WA?
https://mirror.codeforces.com/contest/1855/submission/216348756
Take a look at Ticket 17022 from CF Stress for a counter example.
Don't know why this worked, so please can someone explain how I got this lucky to pass all system tests with this solution 216290096
Your solution basically checks all ranges between $$$[1, 10\,000]$$$ and finds the longest one.
We can prove that this will always find the correct answer.
Fact 1: We can show that the range $$$[1, r]$$$ will be optimal for some $$$r$$$. In other words, if (for given $$$n$$$) the longest good range has length $$$x$$$, the range $$$[1, x]$$$ must also be good. For proof of this fact, check my comment here.
Fact 2: We can show that the answer never exceeds $$$42$$$ (for $$$n \le 10^{18}$$$).
Proof: Suppose the answer for some $$$n$$$ is $$$x$$$. What do we know about this $$$n$$$? We know that $$$n$$$ is a multiple of each of $$$1, 2, \dots, x$$$, according to Fact 1 (and this is a sufficient condition for such an $$$n$$$ to exist with answer $$$x$$$). This means that $$$n$$$ is a multiple of $$$\text{lcm}(1, 2, \dots, x)$$$. But notice that $$$\text{lcm}(1, 2, \dots, 43) = 9\,419\,588\,158\,802\,421\,600 \approx 9 \cdot 10^{18} \ge 10^{18}$$$, so such $$$n$$$ doesn't exist for $$$x = 43$$$ within the constraints of the problem. This means that the maximum answer will be $$$42$$$, and your solution will find it. $$$\square$$$
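For the curious, the bound is easy to verify: $$$\operatorname{lcm}(1, \dots, 43) \approx 9.4 \cdot 10^{18}$$$ still fits in an unsigned 64-bit integer, so a sketch like this works:

```cpp
#include <cstdio>
#include <numeric>  // std::gcd (C++17)

int main() {
    unsigned long long l = 1;  // lcm(1..43) ~ 9.4e18 fits in 64 unsigned bits
    for (unsigned long long x = 2; x <= 43; x++) {
        l = l / std::gcd(l, x) * x;
        if (l > 1000000000000000000ULL)  // first (and only) hit is at x = 43
            std::printf("lcm(1..%llu) = %llu > 1e18\n", x, l);
    }
    return 0;
}
```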
My approach for C2 was a bit different. Let's suppose the maximum absolute value is mx. If mx is from a positive, we'll first modify the array like this: v3-v1 >= mx, v5-v3 >= mx, v7-v5 >= mx ... For this, we'll use roughly 2 moves for each position, so we'll need n/2*2 = 20 moves.
After that, for each of v2, v4, v6... we can find a valid value (v[i-1] <= x <= v[i+1]) using only one move, except if the last unmodified value is at the nth index; for that we'll use 2 moves. So total = 20 + 9*1 + 2 = 31
We'll do the same if mx is from a negative value, but the other way around.
How to solve div2D in python?
There is likely no way
Can you help me with 216358050? Why is it failing on pretests?
How is a formal proof to the model solution of a problem in a competitive programming contest NOT required?
This is the next codeforces phase.
From Mathforces to Guessforces to Hopeforces.
you forgot about animeforces
Can anyone please explain the solution for problem Div2-D? I read the editorial but didn't quite understand it.
Can someone please explain div2 B? I saw a few codes in which brute forcing <= 100 works, but I am unable to understand what is going on and how this thing works. Kindly explain please.
If you found an interval from l to r of length s such that every number in the interval divides n, there is an interval starting from 1 with the same length such that every number in it divides n. Proof: analyzing the remainders mod i (1 <= i <= s) of all the numbers in the interval [l,r], you can see that for every i there is at least one number x in [l,r] such that x ≡ 0 (mod i), which implies that i also divides n.
So it is enough to find the largest interval starting from one that holds the condition. That interval will not be so long, because n is divisible by lcm(1,2,3,...,s) and hence n must be at least lcm(1,2,3,...,s) (where s is the length of the interval), and if s=100, lcm(1,2,...,s) would be greater than the product of every prime less than 100, which is greater than 10^18 (you can implement the sieve of Eratosthenes in order to check that last thing).
Does anyone know how to apply the solution for A? I'm trying with C++ but it doesn't work.
Idk what you're trying to do in your solution, but the answer is just the number of i such that p_i = i, divided by two and rounded up.
I was trying something like this
You need to learn how division works in C++. In particular, int divided by int is int; the result is rounded down. If you want to use the ceil function, (int)ceil(sad/2.0f) will give you the right answer. But you can use the formula for integers, (a + b - 1) / b. In this case, (sad + 1) / 2.
Anyone to explain this more, please? (div2 hard C)
Therefore, you need additional $$$\min(x_1+x_2,y_1+y_2)$$$ moves. Since $$$x_1+x_2+y_1+y_2 \leq 25$$$, $$$\min(x_1+x_2,y_1+y_2) \leq \lfloor 25/2 \rfloor = 12$$$, as we wanted. Now you can simulate the process in both cases (positive and negative), and choose one that requires $$$\leq 31$$$ moves in total.
x1 + y1 <= 5, x2 + y2 <= 20, so x1 + x2 + y1 + y2 <= 25. Let X be x1 + x2, Y be y1 + y2, X + Y <= 25. If a + b = c, then min(a, b) <= c/2, because if min(a, b) > c/2, then c - min(a, b) < c/2, we got a contradiction. min(X, Y) <= 25/2
Here's an alternate explanation without so many equations:
As in the model solution, we want to make everything positive or everything negative in at most 12 queries. If there are at most 12 positive numbers and at most 12 negative numbers, this is easy: pick the number with the largest absolute value and add it to every number (at most 12) of the opposite sign.
If there are at least 13 positive numbers (the solution is the same with at least 13 negative numbers), pick a positive number and add it to itself five times. The resulting number will be at least $$$32$$$ (the worst case is if we start with $$$1$$$), and we can then add that to the negative numbers to make them positive. There are at most seven negative numbers, so this takes at most $$$5 + 7 = 12$$$ queries.
My solution to Div. 1 A2 / Div. 2 C2 differs from the editorial and uses at most 30 moves:
https://mirror.codeforces.com/contest/1855/submission/216373889
an amazing method!
B's solution is too abstract. Can anyone prove it?
Note that the interval $$$[l,r]$$$ contains at least one multiple of $$$x$$$ for each $$$1 \leq x \leq r-l+1$$$.
This is the only observation you need, and it's trivial to prove.
I never thought of bitset as the official answer. When I came up with bitset, I rather thought there was an $$$O(n \log n)$$$ solution and got stuck for hours. Upset when reading the editorial :(
For 2C: can someone explain why this code needs to include a special judgment for sorting?
https://mirror.codeforces.com/contest/1855/submission/216393543 (AC)
https://mirror.codeforces.com/contest/1855/submission/216393849 (WA)
Take a look at Ticket 17026 from CF Stress for a counter example.
The editorial of D uses the term "type" when talking about each move (it said use moves of type 1, 2, ..., x), but I don't get it. Could anyone explain what the "type" mentioned here actually is?
I've updated the editorial.
Thanks, that's much better now
In Div 2 problem B, can you prove the statement that there is a multiple of each x from 1 to r - l + 1? Thanks
Video editorial for this round. Problems (A, B, C1, C2): Link
thanks man!
Some clarifications, some doubts (regarding Problem B, Div. 1). Clarifications:
In case you are trying the greedy method (I did too, during the contest): the approach would be that if you can move forward with an increase in your total points, you should do it. While this is fine when you are traversing the array, once the array is finished you would need the minimum points spent to open all the cards (which could be more than the no. of cards), and then calculate the score (sum of all cards - (used points - 1)). So this doesn't work (the greedy approach will not necessarily give you the minimum points, and there are also more arguments as to why this approach can't work).
Now, according to the model solution: say we arrive at the card with index i; if we could find a way to make i points using the cards with index less than i, then that means we can update the answer with (sum(till i) - (i)).
Doubt :
If $$$dp_{i,i} = 1$$$, it means there is a way to unlock exactly $$$i$$$ cards using the first $$$i$$$ cards. This means the game is finished, because all the unlocked cards have already been used. In that case, you can't use the following cards, so you have to discard that case from the DP before processing card $$$i+1$$$.
Oooh, thanks! Got it!
TheScrasse, could you provide a proof of the first claim the card problem is making, that the order of taking cards will not affect the answer? For example, suppose I choose the first card, which says to take 4 more cards; then I choose the 3rd card after the 1st card is removed, take it, and claim I will get, say, 6 points. I didn't consider the 2nd card, but it had 10 points; I then take it, but this time choose to draw 10 cards. So will this not affect the final points I am getting?
Suppose you don't use the cards in order. If you use them in order instead (but you don't change the type of move you perform with each single card $$$i$$$), both the number of victory points and the number of unlocked cards don't change.
but why 2*n cards? i can't come up with anything. upd: i get it, whatever the sum of the first n numbers is, any such sum will lie between 0 and 2*n
My solution to Div.1 A2 uses $$$1+\lceil1.5n\rceil$$$ operations.
Assume that $$$\max a_i + \min a_i\ge 0$$$, and $$$a_p=\max a_i$$$. First, set $$$a_p\gets 2a_p$$$. Then, iterate $$$i$$$ from $$$1$$$ to $$$\lfloor 0.5n\rfloor$$$, for each $$$i$$$, we can do $$$\le 3$$$ operations that will only change $$$a_{2i-1}, a_{2i}$$$ to make $$$a_{2i}\ge a_{2i-1}\ge a_{p}$$$ (after these operations, we set $$$p\gets 2i$$$).
let $$$x=[a_{2i-1}<0], y=[a_{2i}<0]$$$.
Finally, if $$$n$$$ is odd, do $$$2$$$ times $$$a_n\gets a_n+a_p$$$. The case $$$\max a_i+\min a_i<0$$$ is symmetric.
My solution for Div1 E:
Let $$$dp_i$$$ be the number of ways to sum up to $$$i$$$ with $$$a_1, ..., a_k$$$, then adding a number $$$x$$$ will let $$$dp_j \leftarrow dp_j+dp_{j-x}$$$. So we can construct $$$a_1, ..., a_k$$$ by letting $$$dp_i$$$ form a geometric progression in the very beginning and non-decreasing afterwards (also be aware to keep $$$dp_{60} <= m$$$). Setting the geometric progression factor around $$$3$$$ seems to be a good choice in practice.
The code is very short: 216442010
couldn't understand why the EV of the 1st element reaching the 2nd + the EV of the 2nd reaching the 3rd = the EV of the 1st reaching the 3rd in div2E/div1C (I need some intuition on linearity of expectation)
For some intuition, consider $$$n = 2$$$ (so, there are $$$3$$$ blocks, and the last one is in position $$$m+1$$$). The general case is similar.
Here, the expected number of moves required for block $$$1$$$ to reach block $$$3$$$ is equal to the final answer, which is also equal to the expected total number of moves, which is the sum of the expected number of moves of blocks $$$1$$$ and $$$2$$$.
I really liked the contest. Tasks A-D were very pleasant to solve. True, I didn't have time to solve them all, but still. I think the gap between C and D is too big, but still tolerable. I also don't understand why many people are against using bitset in the author's solution. After all, such a solution, with an adequate implementation, fits in 0.6 seconds, which is very good.
How does "One can verify that to handle correctly all points with coordinates up to 1000 it is necessary to compute Ak for 0≤k≤62." work?
Why is the following true: "If you consider a single pair of blocks, every block moves with probability 1/2"?
The other blocks affect neither the position of the two considered blocks nor the number of moves that we are calculating, so we can ignore them. Since the two remaining blocks move equiprobably, they move with probability $$$1/2$$$.
Since x1+x2+y1+y2≤25, min(x1+x2,y1+y2)≤⌊25/2⌋=12
this is not math, this is magic. really awesome
Is there any way to solve div1B without bitset?
No, because there is no way to solve that type of knapsack without bitset.
What happened?
I'm a huge fan of dario2994. He created so many smart, innovative and elegant ad-hoc problems. Just curious: what does the id mean? Was he born in 2994?
Thank you, you made me happy.
Dario is my (unofficial) second name, 29 is the day of my birthday, '94 is the year.
For the solution of problem 1854D, where it says "Now you know $$$2k$$$ nodes in the cycle". Shouldn't that be "Now you know $$$2k$$$ nodes in the component with node $$$1$$$"?
I don't think getting a "yes" response from querying $$$(i,k,C)$$$ guarantees that node $$$i$$$ is in the cycle $$$C$$$. It just guarantees it's in the same component.
I've updated the editorial.
After a batch of queries $$$(i, k, C)$$$, you know all the nodes whose distance from $$$C$$$ is at most $$$k$$$.
How do you know which nodes are in the cycle?
You don't need to know which nodes are in the cycle. But you will be sure that you have found the whole cycle if the number of nodes in $$$C$$$ isn't at least doubled.
i think the tutorial of the problem "1854B Earn or Unlock" has some problems (maybe i'm wrong and stupid): the dp transition "dp[i] = dp[i-1] << a_i" should be "dp[i] |= dp[i-1] << a_i"? add an "or" operation to it? qwq
Yes, fixed.
Update: Just figured it out. I misunderstood the editorial.
I was not able to understand what the hints mean in problem Expected Destruction. Can you please explain?
The main idea was to think of it as blocks moving on the number line (removing and adding X+1 is like moving right).
Suppose we limit the number line to m (cannot move beyond it). What is the condition for block X to disappear? (Note that block X doesn't correspond to the number X, just some block that moves to the right)
The trick is to notice that when block X touches block X+1 (i.e. block X corresponds to some number k, block X+1 is k+1, and we try to remove block X), this is equivalent to block X just disappearing. This is why having a static block at M+1 in hint 1 makes it easier to reason about.
But what if block X+1 "disappears" first?
Notice, this doesn't matter. When block X+1 "disappears", it must have touched some block >= X+2; from the perspective of block X, however, we can view it as the block >= X+2 disappearing and block X+1 just moving right.
Why are we looking at the perspective of each block? Because here we can bring in the linearity of expectations. If we know the expected number of steps for Block X to disappear, then we can sum them up for the answer.
hello, i am confused about "1854B - Earn or Unlock"
the fourth testcase goes as follows:
10
4 3 1 5 5 3 4 9 7 0
the answer should be $$$33$$$.
the first card has to be used to unlock or we end up with a value of $$$4$$$. if we don't unlock with one of the fives, we can only get the next $$$4$$$ cards with values $$$3+1+5+5=14$$$; if we use one of them to unlock, we can "earn" the rest that follows afterwards, so we get a total of:
have i missed something or is the testcase wrong?
i would be very thankful for help.
We can use the first three cards (with values $$$4$$$, $$$3$$$ and $$$1$$$) to unlock $$$4+3+1=8$$$ more cards (all cards except for the last card with value $$$0$$$) and use all remaining cards to get a score of $$$5+5+3+4+9+7=33$$$
In problem div2 E, the Russian statement says "What is the expected number of seconds until S is NOT empty" in line 6.
Edited: Got it.
Interesting that in some sense both div1B and div1E are problems about subset sum.
UPD: Nevermind, I assumed that "using" was using the card to unlock, but it's about using it for either option
218706810
Can anyone help here?
getting MLE in testcase 2
my solution was also giving MLE initially. Try this test: -1 -1 -1 0 0
I must say Dual (Hard Version) is a pretty good problem
Dual (Easy Version) -> 220885517, Dual (Easy + Hard Version) -> 220968157
can someone tell me why $$$dp[i][i] = 0$$$ in problem D?
TheScrasse, where is the proper justification for problem B? Why does an interval starting from 1 always give the answer? This is what happens when high school students become problemsetters. They probably don't know the importance of a proof of correctness while stating something. Please don't state something in the open air without justification.
Fortunately I'm not a high school student anymore.
Anyway, the proof is in the above paragraph. If the answer is $$$x$$$ because there is a valid interval of length $$$x$$$, that interval contains one multiple of $$$1$$$, ..., one multiple of $$$x$$$, so $$$[1, x]$$$ is also valid. In other words, any interval which does not start from $$$1$$$ is useless because it can be replaced with an interval which starts from $$$1$$$.
In question B, I'm unable to understand the intuition "find the smallest x that does not divide n; the answer is x-1". I tried reading the editorial, but didn't understand. Please help, anyone. Thanks :))
Abs guy, I love the abs guy