wangmarui's blog

By wangmarui, history, 3 months ago, In English

We'd like to thank you all for participating in the contest, and hope you enjoyed it. Any feedback would be appreciated!

Rate The Contest!

2183A — Binary Array Game

Idea & Preparation: wangmarui

Rate The Problem!
Hint 1
Hint 2
Solution
Implementation

2183B — Yet Another MEX Problem

Idea & Preparation: wangmarui

Rate The Problem!
Hint 1
Hint 2
Solution
Implementation

2183C — War Strategy

Idea & Preparation: Network_Error

Rate The Problem!
Hint 1
Hint 2
Hint 3
Claim 1
Proof
Claim 2
Proof
Solution
Implementation
Bonus

2183D1 — Tree Coloring (Easy Version)

Idea & Preparation: jiazhichen844

Rate The Problem!
Hint 1
Hint 2
Solution

2183D2 — Tree Coloring (Hard Version)

Idea & Preparation: jiazhichen844

Rate The Problem!
Hint 3
Solution 1
Solution 2
Implementation (Solution 1)

2183E — LCM is Legendary Counting Master

Idea & Preparation: Gold14526

Rate The Problem!
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Implementation

2183F — Jumping Man

Idea & Preparation: Neil_Qian

Rate The Problem!
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Implementation

2183G — Snake Instructions

Idea & Preparation: xvchongyv & wangmarui

Rate The Problem!
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Implementation

2183H — Minimise Cost

Idea & Preparation: Hoks_. Special Thanks: Sorasaki_Hina

Rate The Problem!
Hint 1
Hint 2
Hint 3
Hint 4
Solution

2183I1 — Pairs Flipping (Easy Version)

Idea & Preparation: Network_Error. Special Thanks: N_z__

Rate The Problem!
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Implementation

2183I2 — Pairs Flipping (Hard Version)

Idea & Preparation: Network_Error

Rate The Problem!
Solution
Implementation
Hello 2026 Editorial
Vote: +146

»
3 months ago, # | Vote: +29

Auto comment: topic has been updated by wangmarui (previous revision, new revision, compare).

»
3 months ago, # | Vote: +6

Really awesome contest; this is like my favourite contest ever, and I really enjoyed the problems. I got stuck on B because I wanted to speed through A and B to try C, so I ended up stuck on B for almost the whole contest...

»
3 months ago, # | Vote: 0

Awesome contest, thank you!

»
3 months ago, # | Vote: 0

This contest was considerably harder for me. Is it div. 2 level, or div. 2 + div. 1? In any case, good job on the problems; they were well defined and I liked them.

»
3 months ago, # | Vote: 0

Really amazing contest. I wish I could have solved D1; I did get the intuition of pairwise-connected components belonging to different subsets, but couldn't quite finish it. Loved the contest though!

»
3 months ago, # | Rev. 2 | Vote: +58

H: The QI condition is definitely false if we only group together the negative prefix, e.g. take A = -100 1 1. Then, cost [-100, 1, 1] + cost [1] << cost[-100, 1] + cost[1, 1]. Instead, we need to extend our prefix until the QI condition holds (equivalently, extend it to the global optimum), and then we can run the Knuth-optimization code.
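A quick numeric check of this counterexample (assuming, as in the dp formulas quoted later in the thread, that the cost of one group is its sum times its size):

```python
def cost(group):
    # assumed cost of a single group: (sum of elements) * (size of group)
    return sum(group) * len(group)

# grouping only the negative prefix vs. extending it past the negatives:
short_prefix = cost([-100, 1]) + cost([1, 1])   # -198 + 4 = -194
long_prefix = cost([-100, 1, 1]) + cost([1])    # -294 + 1 = -293
print(long_prefix < short_prefix)               # True: extending the prefix wins
```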

  • »
    »
    3 months ago, # ^ | Vote: +10

    Sorry, you're right. I just found a mistake in my proof: $$$s_b$$$ could be a huge negative number. I will fix it soon.

  • »
    »
    3 months ago, # ^ | Vote: +8

    Actually, I wanted to hack this wrong code at first. To my relief, the data is still right.

  • »
    »
    3 months ago, # ^ | Rev. 2 | Vote: +3

    There's a way to make the solution work while only excluding negatives. Note that all transitions that start at positive values are Monge. Then, we can exclude the $$$a=0$$$ transitions from the monotonicity logic and run LARSCH on the later start points, and then min with the $$$a=0$$$ transition outside the LARSCH/Monge/monotonicity.

    • »
      »
      »
      3 months ago, # ^ | Vote: +3

      You can also run the dp backwards, so that only positive values are considered in the dp. To get the final answer, iterate through all $$$i$$$ as the break point of the segment containing negative values. Since the dp gets the answer for every prefix (or suffix in the original sense), it's easy to get the answer in $$$O(n)$$$.

»
3 months ago, # | Vote: 0

Can someone please explain, in B, why anything greater than or equal to $$$k-1$$$ is invalid? Is it because, to maximize the MEX, we need to keep only elements less than the sequence length in the array?

»
3 months ago, # | Rev. 3 | Vote: +11

I used a different thought process for D1, although I believe it's more or less the same as what the editorial describes:

It is easy to observe that for any particular depth $$$d$$$, no two nodes can belong to the same set (as per the problem statement). Therefore, if we let $$$N_d$$$ be the count of nodes at depth $$$d$$$, we must make at least $$$N_d$$$ moves. This leads to our first lower bound being

$$$\text{Ans} \ge \max_{d} N_d$$$

Now consider a specific node $$$u$$$. We know that:

  • $$$u$$$ cannot be in the same set as any of its children (due to the edge constraint).
  • None of the children of $$$u$$$ can share a set with each other (since they are all at the same depth, $$$depth_u + 1$$$).

This implies that $$$u$$$ and all its children form a clique where every node should be part of a unique set. This gives us our second relation

$$$\text{Ans} \ge \max_{u} (\text{children}[u] + 1)$$$

Combining these two necessary conditions, the optimal answer is simply the maximum of these two values. Hence we arrive at the final relation

$$$\text{Ans} = \max \left( \max_{d} N_d, \ \max_{u} (\text{children}[u] + 1) \right)$$$
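For what it's worth, both bounds can be computed in a single pass. The sketch below uses a hypothetical parent-array input (`parent[v]` for `v = 2..n`, root `1`), not the problem's actual input format:

```python
from collections import Counter

def lower_bound(n, parent):
    # parent[v] = parent of vertex v for v = 2..n (1-indexed, root = 1);
    # assumes parent[v] < v so depths can be filled in one pass
    depth = [0] * (n + 1)
    children = [0] * (n + 1)
    per_depth = Counter({0: 1})          # the root is the only node at depth 0
    for v in range(2, n + 1):
        p = parent[v]
        children[p] += 1
        depth[v] = depth[p] + 1
        per_depth[depth[v]] += 1
    # max over depths of N_d, and max over u of children[u] + 1
    return max(max(per_depth.values()), max(children) + 1)

# star: root 1 with leaves 2, 3, 4 -- the clique {1, 2, 3, 4} forces 4
print(lower_bound(4, {2: 1, 3: 1, 4: 1}))   # 4
```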

I hope this helps! :)

Submission Link

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by wangmarui (previous revision, new revision, compare).

»
3 months ago, # | Rev. 18 | Vote: +21

Problem C can be solved faster in $$$O(t)$$$ time ($$$O(1)$$$ time per test case):

Solution
»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by wangmarui (previous revision, new revision, compare).

»
3 months ago, # | Vote: +48

I didn't find E as easy as its 1200+ ACs suggest.

»
3 months ago, # | Rev. 2 | Vote: +12

Alternative solution for C using binary search:

Let $$$F(S)$$$ be a boolean function that returns true if it is possible to fortify $$$S$$$ forts within the given time limit $$$M$$$. It is easy to observe that if a valid configuration of $$$S$$$ forts exists, we can trivially achieve a configuration of $$$S-1$$$ forts by simply removing the soldier at the furthest fortified node. Thus, the function $$$F(S)$$$ is monotonic, allowing us to use Binary Search for the maximum $$$S$$$.

Now consider how to optimally distribute $$$S$$$ fortified forts. We know that:

  • The starting fort $$$K$$$ is always fortified (it contains a soldier from $$$t=0$$$).
  • We need to distribute the remaining $$$S-1$$$ soldiers to the left and right of $$$K$$$.

To minimize the time required, we should balance the expansion as much as possible. Let $$$L_{cap} = K-1$$$ and $$$R_{cap} = N-K$$$ be the capacities of the left and right sides. We attempt to assign $$$P = \lfloor \frac{S-1}{2} \rfloor$$$ soldiers to each side.

The actual number of forts we can fill on each side is bounded by the capacity:

$$$l = \min(L_{cap}, P)$$$
$$$r = \min(R_{cap}, P)$$$

The optimal way to fortify these is to prioritize the side that requires more soldiers, to maximize the utility of the spawner while traveling. Let:

$$$h = \max(l, r)$$$
$$$w = \min(l, r)$$$
  1. We wait for soldiers and travel to the end of the heavy side ($$$h$$$).
  2. We return to the source fort to collect the soldiers that accumulated for the weak side ($$$w$$$).
  3. We travel to the end of the weak side.

This leads to our fundamental relation for the number of days used:

$$$\text{Moves} \approx 2 \cdot h + w$$$

We must also account for specific edge cases:

  • If $$$S$$$ is even (i.e., $$$S-1$$$ is odd), we need 1 extra soldier on one side, costing $$$2$$$ extra time units (wait + travel).

  • If $$$P \gt \text{capacity}$$$ for any side, we cannot extend further on that side. This forces us to send those soldiers to the other side (or wait), adding a penalty cost of $$$2 \cdot (P - \text{capacity})$$$.

Combining these conditions, we simply check if the calculated $$$\text{Moves} \le M$$$ to update our binary search boundaries.

I hope this helps! :)

Submission Link

»
3 months ago, # | Vote: +1

Guys, in the editorial's approach for C, $$$a$$$ can take values in $$$[0, k-1]$$$ and $$$b$$$ can take values in $$$[0, n-k]$$$, so shouldn't we check all $$$k \cdot (n-k+1)$$$ possibilities? What's the reason behind incrementing $$$a$$$ and $$$b$$$ in turns, and how does that work?

  • »
    »
    3 months ago, # ^ | Vote: 0

    Checking $$$k \cdot (n-k+1)$$$ possibilities is too slow, as it can easily become $$$O(n^2)$$$ (take $$$k=n/2$$$; then we have about $$$(n/2) \cdot (n/2) = n^2/4 = O(n^2)$$$ possibilities to check).

    • »
      »
      »
      3 months ago, # ^ | Vote: +1

      Yeah, I got that it's too slow, but we do need to prove that the faster way is correct, because there are problems where an exponential solution is the fastest. I was thinking of a generalization of this problem: say we are trying to find the largest subarray containing a given element which satisfies some condition evaluable in constant time (in this problem that element is the k-th position, and the condition is a + b + max(a,b) - 1 <= m). My doubt is whether it's possible to solve this in O(n). If so, can you please explain it with a proof of correctness? Thanks.

      • »
        »
        »
        »
        3 months ago, # ^ | Vote: +10

        There are multiple solutions. First of all, you could speed up the brute force by fixing b and then binary searching for the optimal a in O(n log n), which would probably already pass.

        The next improvement is our turn-based approach. The main assumption is that you always want to keep |a-b| as small as possible. I find it hard to formalize this, but maybe this idea helps: for a given sum a+b (our "score"), let's try to keep the term a+b+max(a,b)-1 as small as possible. The part a+b-1 is constant, so we have to minimize max(a,b), which is achieved exactly when |a-b| is minimal (this is fairly intuitive, I hope). Now, in each step we can always try to increase the lower of a and b if possible, so |a-b| stays minimal (this is basically what the editorial approach does). You can also do it in O(1) per query with some casework. I hope this helps.
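A minimal sketch of the turn-based greedy described above. The capacities and the condition a + b + max(a, b) - 1 <= m are taken from this discussion; treat the function as an illustration of the idea, not a verified solution to C:

```python
def max_forts(n, k, m):
    # a = forts fortified to the left of k, b = to the right (fort k is free)
    cap_a, cap_b = k - 1, n - k
    a = b = 0
    while True:
        # try to extend the currently smaller side first, to keep |a - b| small
        order = [(1, 0), (0, 1)] if a <= b else [(0, 1), (1, 0)]
        for da, db in order:
            na, nb = a + da, b + db
            if na <= cap_a and nb <= cap_b and na + nb + max(na, nb) - 1 <= m:
                a, b = na, nb
                break
        else:
            return a + b + 1    # no side can be extended; +1 for fort k itself

print(max_forts(5, 3, 100))   # 5: enough time to fortify everything
print(max_forts(5, 3, 0))     # 1: only the starting fort
```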

»
3 months ago, # | Vote: 0

Is it somehow possible to solve problem E with the Cauchy–Schwarz inequality?

»
3 months ago, # | Vote: +141

I was super confused about D1. I couldn't see how one solves D1 without figuring out the solution for D2, and the editorial seems to suggest that D1 should just be guessed. I think this problem promotes bad problem-solving practices, and I don't understand why this problem needed to have subproblems.

  • »
    »
    3 months ago, # ^ | Vote: 0

    Since problems A and B both rely heavily on guessing, I tried to keep using some kind of greedy approach in D as well, hoping to simulate the answer of D1 from D2. However, the acceptance rates between D1 and D2 really confuse me.

    Does D1 actually have some obvious conclusion about the number of operations, but one that’s hard to turn into a concrete solution for D2? I asked myself, but the answer was no.

    I feel like anyone who can solve D1 should also be able to solve D2 — maybe it’s just a matter of implementation time? But it doesn’t even seem that hard to implement.

    During the contest, in order to move fast, I relied a bit too much on some unjustified guesses in problems A and B. Anyway, thanks to the problem setters for a great contest!

  • »
    »
    3 months ago, # ^ | Vote: +28

    Although I agree with you and I used D1 just to confirm that my Hall's theorem argument was correct, that's the current cf meta.

    • »
      »
      »
      3 months ago, # ^ | Vote: 0

      Oh, please tell me about your Hall's theorem argument! Because in that case such a solution would indeed not pass D2 but would be enough to pass D1. I just went straight for a lower bound and a constructive upper bound.

      • »
        »
        »
        »
        3 months ago, # ^ | Rev. 2 | Vote: +3

        Start the solution from knowing that (bipartite) matching solves the problem.

        Construct the matching layer by layer. It's obvious that maximizing the number of matched nodes is optimal, and we never need to unmatch anything, because when going to the next layer each unmatched node is connected to all nodes on the other side.

        Now, to find the max matching we use the min-cut max-flow theorem. By using middle edges with infinite capacity, a candidate for the min cut can be obtained by fixing a set of nodes from the upper layer. After fixing such a set, the cut has capacity (sum of unchosen nodes from the upper layer) + (sum of nodes from the lower layer that are connected to chosen nodes). The trick in this sort of common "complement-graph bipartite matching" setting is that if we take 2 different nodes, the "sum of nodes from the lower layer" is the whole layer, so there are 3 types of cuts to consider: the empty set, the full set, and sets of size 1. For the set of size 1, we take the node in the upper layer that has the most children in the tree.

        After confirming that I didn't make a dumb mistake on D1 it was a matter of finding a greedy construction for the D2 solution.

        • »
          »
          »
          »
          »
          3 months ago, # ^ | Vote: 0

          Could you please explain how you transformed this problem into bipartite matching?

          I've never seen such a nice use of bipartite matching before, so I'm finding it difficult to relate.

          • »
            »
            »
            »
            »
            »
            3 months ago, # ^ | Vote: +3

            You can reduce the problem to "find the minimum number of paths in a DAG that cover all vertices", and that's a very well-known bipartite matching problem that I must have seen 50+ times.

      • »
        »
        »
        »
        3 months ago, # ^ | Rev. 2 | Vote: 0

        Actually, I think you can also use Dilworth's theorem to get the result more directly. Evidently, any antichain either contains only nodes from layer $$$i$$$, or some nodes from layers $$$i$$$ and $$$i + 1$$$. In the second case, it can only contain one node on layer $$$i$$$, and all the chosen nodes on layer $$$i + 1$$$ must be children of that node. From this, you can get the desired bound.

  • »
    »
    3 months ago, # ^ | Vote: 0

    Yeah, I think D1 is the worst problem I've seen in a Codeforces contest in at least the past few months, probably longer, because this was an intentional decision... I don't understand how it was decided that this was fine. Did a problem need replacing at the last minute, leaving no choice but to slot a hard problem in at D and find some way to split it into two subtasks?

  • »
    »
    3 months ago, # ^ | Vote: +8

    I think figuring out the solution for D2 is easier than actually implementing it. Maybe that’s why the author decided to include subproblems, so people who understand the logic but can’t implement it fully can at least solve D1.

»
3 months ago, # | Vote: +103
»
3 months ago, # | Vote: +10

Amazing contest

»
3 months ago, # | Vote: 0

Great contest, I think; especially problem D2.

»
3 months ago, # | Vote: +3

A typo in the Tutorial (zh), page 11, Problem E, Hint 4.

It should be:

If $$$y \gt x$$$ and $$$\gcd(x,y)=y-x$$$

but it currently reads:

If $$$x \gt y$$$ and $$$\gcd(x,y)=y-x$$$

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Network_Error (previous revision, new revision, compare).

»
3 months ago, # | Vote: 0

I liked H. Been practicing dp tasks. Loved it. Hoks_ !!

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Network_Error (previous revision, new revision, compare).

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Network_Error (previous revision, new revision, compare).

»
3 months ago, # | Vote: +8

There is a way to avoid any case analysis in G. Any two consecutive positions that are still alive after LR shouldn't be too far apart. So just brute-force through all possible scenarios between two consecutive elements and precompute the results. When answering a query, use those precomputed results to find the actual speeds from the values calculated before.

»
3 months ago, # | Vote: 0

How to do C in $$$O(1)$$$?

»
3 months ago, # | Vote: 0

I misread B in the virtual contest and thought the final sequence had length k rather than k-1, which makes it a lot harder. I'm curious whether there is a solution to this harder variant.

»
3 months ago, # | Vote: 0

Can anybody explain, in the solution of E, how the last term gcd(an, a1)/(a1*an) becomes a1/(an*a1)? Shouldn't it be (an-a1)/(an*a1)?

»
3 months ago, # | Vote: 0

Amazing problems! Thanks for the great round! Really enjoyed it!

»
3 months ago, # | Vote: +44

You will solve C quickly if you've played generals.io.

»
3 months ago, # | Vote: 0

Alternative solution for problem F. It comes more from intuition than from strict math.

Notice:

  1. Since a[i + 1] > a[i], lcm(a[i], a[i + 1]) grows very fast, so 1 / lcm decreases very fast.
  2. Try to solve the problem by hand with all a[i] = 0 and n = 2, 3, 4, 5. Notice that it's hard to even reach 1 as a sum of 1/lcm's; reaching more than 1 is impossible.
  3. Check the samples too and convince yourself that the max possible sum of 1/lcm's is exactly 1.
  4. If a[1] > 1, the sum becomes very small and it's impossible to reach 1. Thus, always set a[1] to 1 if it's zero; if it's > 1, the answer is 0.

Since constraints are <= 3000, try a DP with N * M states. The most obvious state is the pair (current index i, current value a[i]). Because the sum is at most 1, just store 2 numbers per DP state:

  1. Max reachable sum of 1/lcm's.
  2. Number of ways to construct exactly the above-mentioned sum of 1/lcm's.

There are O(N * M) states, but the complexity is O(N * M^2), because each state has up to M transitions (iterating over a[i + 1] possibilities for the current i and a[i]). When solving small samples by hand, we noticed that as a[i + 1] increases, the final sum decreases very fast. Thus, there are not many good a[i + 1] for a given a[i]. Say a[i + 1] = a[i] + x; if x is not a divisor of a[i], then lcm(a[i], a[i + 1]) grows a lot and we likely never reach sum 1. So, iterate only over x's which are divisors of a[i].
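To make the divisor observation concrete: if $$$x \mid a_i$$$ and $$$a_{i+1} = a_i + x$$$, then $$$\gcd(a_i, a_{i+1}) = \gcd(a_i, x) = x$$$, so the transition contributes $$$1/\mathrm{lcm} = x/(a_i \cdot a_{i+1})$$$. A small sketch of the candidate enumeration; the chain $$$1, 2, 3, \dots$$$ illustrates the telescoping that lets the sum approach 1:

```python
from fractions import Fraction
from math import gcd

def candidate_steps(v):
    # next values v + x where x divides v; then gcd(v, v + x) = x
    return [v + x for x in range(1, v + 1) if v % x == 0]

def inv_lcm(v, w):
    return Fraction(1, v * w // gcd(v, w))

# the chain 1 -> 2 -> 3 -> 4 -> 5 gives 1/2 + 1/6 + 1/12 + 1/20,
# which telescopes to 1 - 1/5
chain = [1, 2, 3, 4, 5]
total = sum(inv_lcm(a, b) for a, b in zip(chain, chain[1:]))
print(total)                 # 4/5
print(candidate_steps(6))    # [7, 8, 9, 12]
```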

  • »
    »
    3 months ago, # ^ | Vote: 0

    That's exactly what the editorial solution does; it just also contains a proof that this works.

    • »
      »
      »
      3 months ago, # ^ | Vote: 0

      The editorial contains a lot of math. I've just shown how to arrive at a valid solution with less math; the path to the solution is different.

      The implementation is also different, actually. My DP produces 2 outputs:

      1. The max reachable sum of 1/lcm's.
      2. The number of ways to construct exactly that sum.

      I calculate the first of them straightforwardly in a double variable. The editorial solution seems to calculate the number of ways without computing the max 1/lcm sum.
    • »
      »
      »
      3 months ago, # ^ | Vote: -8

      Editorial does math to come up to:

      The equality holds if and only if $$$\gcd(a_i, a_{i+1}) = a_{i+1} - a_i$$$ for all $$$1 \le i < n$$$, and $$$a_1 = 1$$$. Thus, the problem is transformed into counting the number of sequences that satisfy these conditions.

      I don't do anything like this, but operate directly with LCM.

»
3 months ago, # | Vote: +1

Problems C and D1 would be much harder without the test cases.

»
3 months ago, # | Vote: +5

Is there a more intuitive proof of the necessity condition in C?

»
3 months ago, # | Vote: 0

Shouldn't A be Alice and Bob, not Yes and No?

»
3 months ago, # | Vote: 0

For problem B:

I understand that in the end the array will have k-1 elements, so the final MEX <= k-1; if any number from 0 to k-2 isn't in the array, then that's the final MEX.

But this still doesn't feel satisfactory to me. Can anyone kindly suggest an intuitive way to think about B?

  • »
    »
    3 months ago, # ^ | Vote: 0

    The main observation is that you can end up with any $$$k-1$$$ numbers in the array, because at every step you have the option to remove one number out of $$$k$$$, and since you're only trying to protect $$$k-1$$$ numbers, you always have the option to remove a number you don't care about. So the answer is just the maximal MEX among any $$$k-1$$$ numbers. Then it's clearly optimal to first take a $$$0$$$, then $$$1$$$, $$$2$$$, etc., until you either reach a number not in the starting array (in which case the answer is the MEX of the starting array) or you use all $$$k-1$$$ spots (in which case the answer is $$$k-1$$$).
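If this reasoning is right, the whole solution collapses to a few lines. A hypothetical sketch (the exact input format of B may differ):

```python
def solve_b(k, a):
    # per the reasoning above: any k-1 values can be kept,
    # so the answer is min(mex(a), k - 1)
    present = set(a)
    mex = 0
    while mex in present:
        mex += 1
    return min(mex, k - 1)

print(solve_b(3, [0, 1, 2, 5]))   # 2: only k-1 = 2 spots, so the answer caps at 2
print(solve_b(10, [0, 1, 3]))     # 2: the array's own MEX caps the answer
```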

»
3 months ago, # | Vote: -77

Auto comment: topic has been updated by jiazhichen844 (previous revision, new revision, compare).

»
3 months ago, # | Vote: -77

Auto comment: topic has been updated by jiazhichen844 (previous revision, new revision, compare).

»
3 months ago, # | Vote: +10

Can anyone explain problem B? I didn't understand...

»
3 months ago, # | Vote: 0

Can anyone tell where I am going wrong ? My submission

»
3 months ago, # | Vote: -14

Hard :)

»
3 months ago, # | Vote: +10

E is such an amazing problem! I can't help but admire the person who set it.

»
3 months ago, # | Vote: 0

I solved 3 problems, but my question is: how could one correctly give a solution with a proof during the contest? Most of the time I don't know whether my solution is correct, and the proofs in the editorial look very non-trivial. Please tell me what to do.

»
3 months ago, # | Vote: 0

Also, is there some other solution for problem E besides the one given in the editorial? And how would one intuitively come up with the solution that was given?

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Hoks_ (previous revision, new revision, compare).

»
3 months ago, # | Vote: 0

Can anyone give a better or more easily comprehensible solution for F?

  • »
    »
    3 months ago, # ^ | Rev. 2 | Vote: +8

    dp[u][v] = answer for the problem and the current "heads" of the paths are in u, v.

    $$$dp[u][v] = \sum_{a \space child \space of \space u} dp[a][v] + \sum_{b \space child \space of \space v} dp[u][b] - [str[u] \neq str[v]] \sum_{a \space child \space of \space u} \sum_{b \space child \space of \space v} dp[a][b]$$$

    This straightforward dp (think of the count-common-subsequences dp on arrays to see why the minus term handles double counting) is quadratic, because for state (u, v) we do one transition per pair of edges going down from u and v, so the total number of transitions is the number of pairs of edges, which is O(N^2).

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Hoks_ (previous revision, new revision, compare).

»
3 months ago, # | Rev. 4 | Vote: 0

In the div2C explanation there's some confusion regarding Claim 1. I think the correct version should go like this:

"Consider a soldier starting from k and moving to x; it must pass through base y along the way. Consider making this soldier stop exactly at y. Then soldier x ends up at y." Also, for clarity, it's worth mentioning that k always ends up containing a soldier in the end.

By the way, how can this remark

(If the move is not to the leftmost positions but to some intermediate positions, we can show through adjustment that such a move is suboptimal. Therefore, an optimal solution will not contain such moves.)

be proven? Is it really suboptimal, or does the swap merely not make the answer worse? I can't find a counterexample showing that.

One more doubt I have: the proof assumes it's optimal to fill the leftmost contiguous interval and doesn't consider the case where there might be gaps (I know it's intuitive, but still, there is no proof of that). I'm also not sure that h(S1)-h(S0)>=a-t is correct. Considering we start from t soldiers at k (the only occupied base), after a-t moves we'll have 2 occupied bases (k and k-a+t, although that's not part of our goal and it could have been occupied before) instead of 1 (the k-th), while the soldier count will indeed grow by a-t. So I think the h delta should be >= a-t-1 (and I don't think we need the > sign here, as we constantly have to move to k-a+t during this phase, and each turn the soldier count grows by 1?).

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Hoks_ (previous revision, new revision, compare).

»
3 months ago, # | Vote: 0

It was an awesome contest. Since I am a beginner I felt overwhelmed, but I still managed to solve one of the problems. I want to know exactly how I should approach these types of contests; kindly guide me through this journey, it would be a great help.

»
3 months ago, # | Vote: 0

Great contest, I was only able to solve the A problem but I hope to get better :)

»
3 months ago, # | Rev. 2 | Vote: 0

I didn’t really like the official explanation for H, so here is an alternative one.

Observations
As shown in the editorial, there exists an optimal solution where the partition consists of contiguous subarrays after sorting $$$a$$$. We work on the sorted array.

Let $$$m$$$ be the number of non-negative elements.

Case 1: $$$m \gt k$$$:

We can put each non-negative element into its own group and keep all negatives in one prefix.

The optimal partition is: one group containing the first $$$(n-k+1)$$$ elements, and the remaining elements as singleton groups.

Case 2: $$$m \le k$$$:

All negative elements should belong to a single group, so there is at most one prefix segment containing negatives (and possibly some positives).

We apply the Aliens trick. Introduce a penalty $$$\lambda$$$ per segment and define
$$$ dp_\lambda[i] = \min_{j \lt i}\left(dp_\lambda[j] + (s[i]-s[j])\cdot(i-j) + \lambda\right), $$$ where $$$s$$$ is the prefix sum of the sorted array.

For the transition cost
$$$ f(l,r) = dp_\lambda[l] + (s[r]-s[l])\cdot(r-l), $$$ we have the Monge property for indices $$$l, r \ge n-m$$$:
$$$ \forall a \le b \le c \le d:\quad f(a,c) + f(b,d) \le f(a,d) + f(b,c), $$$ since all involved values are non-negative and the $$$dp_\lambda$$$ terms cancel.

Negative elements are handled separately. Since they appear in at most one prefix segment, we start transitions from index $$$n-m+1$$$ and additionally relax
$$$ dp_\lambda[i] \leftarrow \min(dp_\lambda[i], f(0,i) + \lambda). $$$

Thus, the DP transitions satisfy the Monge property and can be optimized with the standard 1D/1D optimization trick to $$$O(n \log n)$$$, which you can read about online.
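A brute-force numeric check of the claimed Monge inequality on a small sorted non-negative array (the $$$dp_\lambda$$$ terms are omitted since they cancel in the inequality):

```python
from itertools import combinations_with_replacement

vals = sorted([0, 1, 1, 2, 5, 9])        # sorted, all non-negative
s = [0]
for v in vals:
    s.append(s[-1] + v)                  # prefix sums

def f(l, r):
    # cost of grouping elements (l, r]: segment sum times segment length
    return (s[r] - s[l]) * (r - l)

n = len(vals)
ok = all(f(a, c) + f(b, d) <= f(a, d) + f(b, c)
         for a, b, c, d in combinations_with_replacement(range(n + 1), 4))
print(ok)   # True
```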

»
3 months ago, # | Vote: +8

The first 2 hints for H are kinda useless, since $$$n=k$$$ is just the sum. Better to go for

Spoiler
»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Hoks_ (previous revision, new revision, compare).

»
3 months ago, # | Vote: 0

I would like to respectfully express my frustration at the implementation for D2. Looking at the code makes me want to look away. Upsolving is hard enough when I'm this sleepy.

»
3 months ago, # | Vote: 0

Auto comment: topic has been updated by Network_Error (previous revision, new revision, compare).

»
3 months ago, # | Vote: 0

Brilliant problems! However, I still have a question regarding the proof for Problem H.

In the part proving the QI, why does the inequality $$$bs_b-(b-1)s_{b-1}\ge 0$$$ hold even when only $$$a_1$$$ is negative? If $$$a=[-100,1]$$$ and $$$b=2$$$, for instance, the condition does not seem to be satisfied.

  • »
    »
    3 months ago, # ^ | Vote: 0

    I remember I've already changed the proof to the new, correct version. In the current version, we define $$$k$$$ as the largest $$$k$$$ such that the partition is optimal for the preceding part. So in your example, $$$-100$$$ and $$$1$$$ must be in the same part, so we don't consider position 2, and the QI holds. In fact, even if the QI in the previous version was incorrect, the dp within it can still be optimized to less than $$$O(n^2)$$$. But for me, without the QI it's just hard to prove that the Aliens trick can be used.

»
3 months ago, # | Vote: 0

I was upsolving and found some $$$O(1)$$$-time solutions to problem C. Can someone explain how the math works?

»
2 months ago, # | Vote: 0

there is a mistake in problem E: LCM is not the legendary counting master, it's actually me (LegendaryCandidateMaster)

jk jk this was a great contest thank you!

»
2 months ago, # | Vote: 0

Thank you wangmarui for the contest and problems.

»
2 months ago, # | Vote: 0

Problem I: I haven't even proved that the final score is $$$\leq 7$$$, but my solution still got accepted even with $$$\leq 3$$$. https://mirror.codeforces.com/contest/2183/submission/363500243