TheScrasse's blog

By TheScrasse, history, 19 months ago, In English

All the Polygon materials (including the official implementations of all the problems) are here.

2019A - Max Plus Size

Author: TheScrasse
Preparation: TheScrasse

Hint 1
Hint 2
Solution

2019B - All Pairs Segments

Author: TheScrasse
Preparation: TheScrasse

Hint 1
Hint 2
Hint 3
Solution

2018A - Cards Partition

Author: TheScrasse
Preparation: TheScrasse

Hint 1
Hint 2
Hint 3
Hint 4
Solution

2018B - Speedbreaker

Author: TheScrasse
Preparation: TheScrasse

Hint 1
Hint 2
Solution

2018C - Tree Pruning

Author: wksni
Preparation: TheScrasse

Hint 1
Hint 2
Solution

2018D - Max Plus Min Plus Size

Author: TheScrasse
Preparation: TheScrasse

Hint 1
Hint 2
Hint 3
Solution

2018E1 - Complex Segments (Easy Version), 2018E2 - Complex Segments (Hard Version)

Authors: lorenzoferrari, TheScrasse
Full solution: Flamire
Preparation: francesco, lorenzoferrari

Hint 1
Hint 2
Hint 3
Hint 4
Solution

2018F1 - Speedbreaker Counting (Easy Version), 2018F2 - Speedbreaker Counting (Medium Version), 2018F3 - Speedbreaker Counting (Hard Version)

Author: TheScrasse
Full solution: Flamire
Preparation: TheScrasse

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Hint 6
Solution
Tutorial of Codeforces Round 975 (Div. 1)
Tutorial of Codeforces Round 975 (Div. 2)
+131

»
19 months ago, # | +6

1 minute late editorial?

»
19 months ago, # | -14

:)))) Too fast

»
19 months ago, # | 0

E was such a good problem, great problemset. I'm so mad I didn't get C earlier though.

»
19 months ago, # | +3

The goddamn C...

»
19 months ago, # | +15

Is it just me, or was this Div 2 a little too difficult?

»
19 months ago, # | 0

To solve the third problem [2018A Cards Partition], can we use binary search?

If yes, how do we implement the checker function that decides whether a given deck size is possible?

  • »
    »
19 months ago, # ^ | +23

I think not, because the checker is not a monotonic function.

Think of prime numbers and k = 0: a total of 6 cards can be divided, 7 can't, and 8 can. So the binary search can fail.

  • »
    »
19 months ago, # ^ | +2

Consider how you would make the decks: put all the cards with the highest frequency first, and then just greedily put each card on top of the lowest deck you currently own, without making any new decks. If in the end the last "row" you filled wasn't completely filled, you can try filling it with the k coins you have; you can do that if sum % x <= k, where x is the size you're checking. I didn't do it with binary search, since I couldn't prove that if x doesn't work then x+1 surely won't work. You also have to check that the partition you made actually uses all cards; you can check this by seeing whether the number of decks you'd have is at least the frequency of the most frequent card. I hope I explained it well; if it's not clear, just ask. You can also check my solution for details, but it's very simple.

  • »
    »
19 months ago, # ^ | +3

    A binary search on all numbers from $$$1$$$ to $$$n$$$ doesn't work, because the function isn't monotonic, so if some deck size fails, it's still possible for a bigger one to succeed. Consider a case where you have $$$n=5$$$, $$$k=0$$$, and $$$1$$$ card of each type. Clearly, the only possible deck sizes are $$$1$$$ and $$$5$$$, while $$$2$$$, $$$3$$$ and $$$4$$$ fail, but $$$5 \gt 4$$$.

    It might be possible to do a binary search on all numbers that divide at least one possible number of cards you can get. I'm not sure about that. In any case, as of now, I'm not aware of any reasonably fast checker function that isn't $$$O(1)$$$ (or easily possible to turn into $$$O(1)$$$ by some precomputation), so this probably isn't a great way to think about the problem.

  • »
    »
19 months ago, # ^ | +1

    with binary search here

»
19 months ago, # | +3

How much practice do I need? I was completely stuck at B.

»
19 months ago, # | +3

E1: O(n * sqrt(n) * log^2(n)) got a TLE... sad

»
19 months ago, # | 0

How do you implement E?

»
19 months ago, # | +7

Today's contest just ruined my day (though the contest was good)... My submission on C, 283251847, is the biggest blunder I have made till now... If I had not tried binary search and had just run a loop, my code would have been accepted and I might be cyan today...

When I first started to solve C, I went for an O(n log n) approach and wrote the function 'f', but I did not notice that the function is literally O(1), so I could have run an O(n) loop...

When I found this out 1 minute after the contest, I realized this CP thing is not for me...

(apologies for my poor English)

»
19 months ago, # | +16

The intervals in the editorial of F should be $$$[i - a_i + 1, i + a_i - 1]$$$?

»
19 months ago, # | +31

An alternative solution for Speedbreaker (Div2D):

There are 4 cases:

  1. $$$a_0 \lt n$$$ and $$$a_{n-1} \lt n$$$: no solution is possible.
  2. $$$a_0 \geq n$$$ and $$$a_{n-1} \lt n$$$: This means that $$$a_0$$$ must be the last element that is conquered. We now check how many solutions exist in the interval $$$[1, n-1]$$$.
  3. $$$a_0 \lt n$$$ and $$$a_{n-1} \geq n$$$: Similar to second case.
  4. $$$a_0 \geq n$$$ and $$$a_{n-1} \geq n$$$: We now want to check if $$$a_0$$$ and $$$a_{n-1}$$$ are valid solutions. $$$a_0$$$ is a valid solution only if we can conquer every city going from left to right. To check if $$$a_0$$$ is a valid solution, create a segment tree $$$b$$$ with $$$b_i = a_i - i$$$. Now do a query on the range $$$[0, n-1]$$$. $$$a_0$$$ is a valid starting city if and only if the result of the query $$$\geq 1$$$. For checking if $$$a_{n-1}$$$ is a valid starting city, do something similar. After that, count the number solutions in the range $$$[1, n-2]$$$.

Submission: 283247155

  • »
    »
19 months ago, # ^ | 0

    cool

  • »
    »
19 months ago, # ^ | 0

    nice one!

  • »
    »
19 months ago, # ^ | +1

    The above observation actually makes the solution much easier,

    Observation 1 : All solutions would lie in a contiguous segment

    Observation 2 : If a segment [l, r] has a solution then a[l] >= r-l+1 and a[r] >= r-l+1

    My proposed solution :

    set l = 0, r = n-1. if exactly one of a[l] or a[r] is >= r-l+1 then reduce the segment by removing that element i.e. segment either becomes [l+1, r] or [l, r-1]. If both a[l], a[r] are >= r-l+1 decrement r. If both a[l] and a[r] < r-l+1 then no solution.

    Claim : The above iteration ends in the leftmost solution.

We reach a solution because we could just perform the above steps in reverse order and cover the entire array. By Observation 1, since we always decrement r, we must have reached the smallest such l.

    Similarly perform the above with incrementing l at each iteration, this gives the largest r such that we can start and conquer all indexes.

    Code for this solution : https://mirror.codeforces.com/contest/2019/submission/283302596

  • »
    »
19 months ago, # ^ | 0

Hi, I don't get how to check whether a city is a valid starting city. I see some implementations doing l = max(l, i-a[i]+1), r = min(r, i+a[i]-1), which gives a range of starting cities, but I don't get that either. What is the strategy to pick cities if you make i the starting city? I can't understand the editorial. If you don't mind, can you please independently explain a solution to a beginner, in your own words, as an editorial? Take some time for us, bro. I solved A, B, C, E and could not understand D till now. Please explain the WHY behind each step: how you build the solution and how you proved to yourself that it works. Please help, dear friend!

»
19 months ago, # | +4

Wasn't div2 E a lot easier for its position?

»
19 months ago, # | 0

C was tooo gorgeous... Gave it the attempt of my life on Codeforces... Learnt a lot, it was like an adventure... Thank you contest×codeforces

»
19 months ago, # | 0

For div2 E I used ternary search on depth... and got a wrong answer on test 3. I cannot find any wrong case; can anyone help me? 283252293. Update: maybe this problem can't be solved using ternary search; I found a counterexample.

»
19 months ago, # | +8

Oh wow, my solution to F is completely different. The canonical strategy that I use for an array with a marked interval of possible starting cities is to always go to the city with the shortest deadline and break ties by going left. It seems that the resulting dp is very different (and in particular I don't need to use any division)

»
19 months ago, # | +16

Obligatory "thanks mr Radewoosh" comment.

div1E is https://mirror.codeforces.com/blog/entry/61331

»
19 months ago, # | -17

Why is this code incorrect? My intuition was: first find the maximum element of the array, then add n/2 if n is even and (n+1)/2 if n is odd.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t; 
    while (t--){
        int n;
        cin>>n;
        vector<int>arr(n);
        for(int i=0;i<n;i++){
            cin>>arr[i];
        }
        int max_value= *max_element(arr.begin(),arr.end());
        if(n==3){
            cout<<arr[0]+((n+1)/2)<<endl;
        }
        if(n%2==0){
            cout<<(n/2)+max_value<<endl;
        }
        else if (n!=3){
            cout<<((n+1)/2)+max_value<<endl;
        }
    }
 
    return 0;
}
  • »
    »
19 months ago, # ^ | 0

First of all, your n == 3 case should be reconsidered, as on the test case:

    1
    3
    1 2 3
    

Your code prints 3, but the answer is max_ele(3) + 2 = 5: in 1 2 3 you can colour 3 and 1 red, having 2 red elements + max_ele(3), which gives the optimal answer 5. Moreover, you have to consider whether the max element is at an odd position (counting all odd-indexed elements) or at an even position (counting all even-indexed elements).

    • »
      »
      »
19 months ago, # ^ | 0

What about this solution?

      #include<bits/stdc++.h>
      using namespace std;
      typedef long long ll;
      
      int main() {
          int t;
          cin >> t; 
          while (t--) {
              int n;
              cin >> n;
              vector<int> arr(n);
              for (int i = 0; i < n; i++) {
                  cin >> arr[i];
              }
              int max_value = *max_element(arr.begin(), arr.end());
              
              if (n % 2 == 0) {
                  cout << (n / 2) + max_value << endl;
              } else {
                  cout << ((n + 1) / 2) + max_value << endl;
              }
          }
      
          return 0;
      }
      
      
»
19 months ago, # | 0

It's shocking that so many people in Div1 solved problem C.

  • »
    »
19 months ago, # ^ | +45

I found D1C to be the easiest div1 problem today; looking at my friend list, most people had the same feeling. It was obvious to me what to do upon reading the problem. Same for D too: I took less time to mind-solve C and D combined than either of A or B.

    • »
      »
      »
19 months ago, # ^ | 0

D might be *2200, but C is surely under 1900.

    • »
      »
      »
19 months ago, # ^ | 0

I am interested to know your opinion on this. After reading your comment, I spent some more time on problem C; unfortunately, I still can't come up with the solution. I haven't looked at the editorial, but so far my idea is to iterate over the levels of the tree and take the minimum: for a particular level, the answer is N minus the number of distinct vertices on the paths from the root to all the vertices on the current level. (I actually figured all this out in about ten minutes, but I can't think of a way to calculate it in O(n); maybe LCA / inclusion-exclusion to avoid double counting, I don't know.) What do you think one should ideally do in such a case? Try for more time, or just look at the editorial and be done with it?

      Also, I solved DIV1 A in about 30 minutes and got the idea even fairly quick and spent time only to fix a stupid typo. However, if I didn't already know the fact that we only need to know the max element and total sum to figure out if we can arrange the cards, then I don't think I would have been able to solve the problem. So, for DIV1A I would recommend someone to look at the editorial quite early if they weren't able to solve it. So, my general question is how long do you think should one spend time trying to solve a problem before looking at editorial and what was your strategy when you were at specialist/expert level ?

      • »
        »
        »
        »
19 months ago, # ^ | +9

You are close on C, you just overcomplicated the latter part. Try to think of when a vertex will be deleted instead of its opposite.

As for when I used to read editorials, it somewhat depends on my progress. If I felt I was close to the answer, I would hold off, or read only after a long time. Otherwise, if I did not make substantial progress, I would read after, say, 30 minutes to 1 hour (now it's more, of course).

        • »
          »
          »
          »
          »
19 months ago, # ^ | +1

Thanks for the response! With your hint for C, I managed to solve it myself. I would have solved it in 30-40 minutes, I guess, if I had thought of this :( Is there a way to count the answer if you look at it my initial way (counting the distinct vertices among all the paths from the root to all the vertices at the current level)? Assuming that was what you thought of first, is there any way to recognize that the other way of looking at it is easier and will lead to the solution, or once you are stuck do you simply switch to looking at when the vertex will be deleted instead of the opposite? I'm asking because I tend to get stuck on one approach like this and fail to solve many solvable problems, and I would love to get better.

          • »
            »
            »
            »
            »
            »
19 months ago, # ^ | +2

            Is there a way to count the answer if you look at it my initial way(counting the distinct vertices among all the paths from root to all the vertices at the current level

Yes, but it will end up being the same thing. We want to characterize such vertices to easily count them. It's not hard to see that you want to count vertices that satisfy dep_u <= k and max_dep_in_subtree_u >= k.

            So, for all k in range [dep_u, max_dep_in_subtree_u], increase the count of saved vertices by 1.

»
19 months ago, # | -8

Can someone please tell me what's wrong with this solution for Div2 B? It gave WA on pretest 9.

#include<bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin>>t;

    while(t--){
        int n,q;
        cin>>n>>q;
        vector<int> v(n);
        vector<long long> queries(q);

        for(auto &x:v) cin>>x;
        for(auto &x:queries) cin>>x;

        map<long long,int> mp;

        for(int i=0; i<n; i++){
            long long cnt = 0;  // number of segments this point is part of

            if(i==0 || i==n-1) cnt += n-1;
            else{
                cnt += (n-1+(i*(n-1-i)));
            }

            if(mp.find(cnt)!=mp.end()) mp[cnt]++;
            else mp[cnt]=1;
        }

        for(int i=0; i<n-1; i++){    // considering the points that lie on the axis apart from the ones in the array
            long long cnt = (i+1)*(n-1-i);

            if((v[i+1]-v[i]-1)>0){
                if(mp.find(cnt)!=mp.end()) mp[cnt]+=(v[i+1]-v[i]-1);
                else mp[cnt]=(v[i+1]-v[i]-1);
            }

        }

        for(int i=0; i<q; i++){
            if(mp.find(queries[i])!=mp.end()) cout<<mp[queries[i]]<<" ";
            else cout<<0<<" ";
        }
        cout<<endl;


    }

    return 0;
}
»
19 months ago, # | 0

My O(n^5) solution (with a small constant) passed F1. However, it's hard to optimize to O(n^4) or better (because I solved d1B with a suboptimal solution, which misled my thinking on F1).

»
19 months ago, # | +54

EDIT: you can actually do this in $$$O(n)$$$: 283287968. Same idea, but using path counting instead of the first PIE step.

For F you don't really have to think about paths at all, inclusion-exclusion on the arrays is enough (submission 283272424, ignore the unused variable).

Idea one: it's always sufficient to take the "working" interval first.

Idea two: to count the number of arrays of length $$$k$$$ with elements in $$$[1,n]$$$ and every index works, one only needs to make sure the endpoints work. You can just lower bound the elements for a count of

$$$ \frac{\left(n-\left\lfloor\frac{k}{2}\right\rfloor\right)!\left(n-\left\lceil\frac{k}{2}\right\rceil\right)!}{(n-k)!^2} $$$

Now iterate over $$$k$$$. Count the above and consider how to count extensions to arrays of length $$$n$$$ that do not break any of the elements that were supposed to work. You can do this with PIE; to get extensions of length $$$i$$$ get extensions of length $$$i-1$$$ and extend them on either side, then subtract out extensions of length $$$i-2$$$ where you can take the next two elements in either order. (see my solution for the push dp)

Now you have

$$$ ans[k] = \sum_{\text{arrays $$$a$$$}}\#\{\text{intervals $$$I$$$ of working indices in $$$a$$$ with $$$|I|=k$$$}\} $$$

Idea three: the PIE to finish is by subtracting out

$$$\#\{\text{intervals $$$I$$$ of working indices in $$$a$$$ with $$$|I|=k$$$}\}$$$

for each $$$a$$$ with a working interval larger than $$$k$$$. If you know the size of the working interval, you can count this.

»
19 months ago, # | 0

too fast!!

»
19 months ago, # | 0

D is a tremendous problem

»
19 months ago, # | +3

Good contest. I explained the entire process of how to do D (div 2) and GPT-o1-mini still couldn't solve it.

Prompt explanation + problem (separately):

In case anyone is wondering, I have written code with the exact same logic and it is AC. You can check it out in my submissions.

The good news is, despite several attempts and demands for a direct conversion of the logic to code without adding its own logic, it could not produce code that solved even TC 1.

This implies GPT-o1 really isn't something we should worry about for now, for div 2 and above. Not only can it not think critically, it cannot even follow basic logical instructions.

»
19 months ago, # | 0

Can anyone tell me which test case fails in this solution of Problem C, 283238912? Can anyone give me a test case where the binary search predicate isn't monotonic?

»
19 months ago, # | +2

I have a quite different solution for Div2 E. I think it's quite enriching and gives some insight into an alternative way of thinking, so I will try to present it here. Let's try to calculate the cost for some fixed final depth (say d) of all the leaves. What do we need to do to make the depth of all the leaves equal to d? Try to think about it before looking at the spoiler below.

Spoiler

Do the two cases hint at something? How are they related? Do they seem quite similar?

Spoiler

We define two arrays:

1) less[d]: the cost of case 2 for depth d. More formally, for some depth d it represents the cost of removing all leaves with depth less than d, until the remaining tree has no leaf with depth less than d.

2) more[d]: the cost of case 1 for chosen depth d (defined formally in a similar way).

How do we calculate these for all possible d (0 <= d <= n)?

Spoiler

Now the cost for a final depth of d is less[d] + more[d]. Hence take the minimum across all values.

You can view my solution for implementation details here : 283261930

»
19 months ago, # | +3

Div2E can be solved using BFS: iterate level by level, and at each level compute the number of nodes not removed. When a node becomes a leaf, remove from the leaf up to its parent recursively.

»
19 months ago, # | 0

What is the "x" in the tutorial of problem C?

»
19 months ago, # | +10

Here's a "troll" way to solve F in linear time.

As with some other approaches to this problem, we want to answer the following question: given $$$n$$$ and a width $$$w$$$, compute the sum over all $$$n-w+1$$$ intervals of width $$$w$$$ of the number of ways to pick numbers outside that interval so that all cities in that interval are good, given that the numbers inside that interval are already chosen to make it possible. Now observe, either through a bijective argument or by printing the results from a slower solution, that this quantity only depends on $$$n-w$$$, i.e. it is the sequence $$$1$$$, $$$2$$$, $$$7$$$, $$$34$$$, $$$209$$$, $$$\dots$$$ found in column 1 of the samples. Finally, observe that this is OEIS A002720 which provides a recurrence that computes these numbers in linear time. The rest is straightforward.

  • »
    »
19 months ago, # ^ | 0

    The combinatorial interpretation of the sequence is really nice, though. You don't really need to biject with anything new. Basically you have

    $$$ f(n) = \sum_{i=1}^n (i+1)*(n-1)_{(i-1)}f(n-i). $$$

    (the subscript is falling factorial).

Here $$$f(n)$$$ is the number of ways of expanding by $$$n$$$. Consider this to be picking the values enumerated by the path as in the editorial. Pick the first fixed point in the sequence to be index $$$i$$$, and choose how much of the suffix is on the right/tight side (the options are $$$0$$$ through $$$i$$$). Pick the non-fixed points and recurse.

»
19 months ago, # | +11
»
19 months ago, # | +21

Thank you for sharing polygon materials! For future setters, can we normalize this?

»
19 months ago, # | +8

Could someone explain the problem div2D for me, as I couldn't understand the editorial.

  • »
    »
19 months ago, # ^ | 0
  1. Query 1: is it possible to even complete?

Maintain a lo and hi pointer, initially set to 0 and n-1. Check if either a[lo] or a[hi] >= the size of the remaining segment; then increment lo or decrement hi, depending on whether it was a[lo] or a[hi]. Repeat until lo > hi.

If this process fails at some point, then for every query, 0 starting locations is the answer.

Otherwise, maintain a minStart and maxStart position, set to 0 and n - 1.

For each item: minStart = max(i - a[i] + 1, minStart), maxStart = min(i + a[i] - 1, maxStart).

ans = maxStart - minStart + 1.

»
19 months ago, # | 0

For problem D (div2), can someone explain this line from the editorial: "At some time t, consider the minimal interval [l,r] that contains all the cities with ai≤t (let's call it "the minimal interval at time t"). If this interval has length >t, the answer is 0."

  • »
    »
19 months ago, # ^ | 0

    you got something??

    • »
      »
      »
19 months ago, # ^ | +1

I still didn't get the editorial's solution. I solved it in an alternative way.

      An alternative solution for Speedbreaker (Div2D): Solution Courtesy: asdasdqwer

      There are 4 cases:

1. a0 < n and an−1 < n: no solution is possible.

2. a0 ≥ n and an−1 < n: this means that a0 must be the last element that is conquered. We now check how many solutions exist in the interval [1, n−1].

3. a0 < n and an−1 ≥ n: similar to the second case.

4. a0 ≥ n and an−1 ≥ n: we now want to check whether a0 and an−1 are valid solutions. a0 is a valid solution only if we can conquer every city going from left to right. To check this, create a segment tree b with bi = ai − i and query the range [0, n−1]; a0 is a valid starting city if and only if the result of the query is ≥ 1. Do something similar for an−1. After that, count the number of solutions in the range [1, n−2].

»
19 months ago, # | +10

It's my birthday and I get -180 as a gift :p sadge

»
19 months ago, # | -11

Could the editorial writers consider providing some useful thinking process for reaching the solution, instead of a formal proof of correctness, which I don't think is very useful for most readers of the editorials?

At least for Div2 A, B, C, D, make it simple, intelligible, and actually useful for PROBLEM-SOLVING.

For example, I find the editorial for problem C very challenging to follow and honestly not very useful. I solved this problem before reading the editorial with very different thinking.

I suggest that the editorial writer for such problems not be a red coder, but maybe expert level, or a red coder who has some teaching background.

»
19 months ago, # | 0

Great contest. I was able to solve A, B, C, and I got the idea of E during the contest but didn't know how to find the LCA of two nodes in O(log n) time (though I knew that binary lifting is used); after the contest I learned LCA using binary lifting. Basically, I calculated the distance from the root to every node. Say we fix the level of all the leaf nodes in the tree: the total number of edges to remove is (n-1) minus the total number of unique edges that the nodes on that level pass through. To calculate the number of unique edges on a particular level, I iterated through all the nodes in that level using a queue, and added (distance of the current node from the root − distance of the LCA of the previous and current nodes from the root). The answer is the lowest number of operations over all levels.

»
19 months ago, # | 0

can anyone help me understand D?

»
19 months ago, # | +8

:O I see a lot of geometry dash songs in the problem statements

»
19 months ago, # | 0

For problem E, can we solve it with ternary search? If not, why?

»
19 months ago, # | 0

For 2018B - Speedbreaker, I have another solution which seems simpler. Consider [1,n] first: we observe that if max(a[1],a[n]) < n, then there isn't a valid starting point, because the 1st or the nth city must be the last city visited. If a[1] == n, we know that if we start in [2,n], then the 1st city is the last one, and we only need to consider cities [2,n]. This implies we can first check whether the 1st city is a valid starting point, and then decrease the problem size by one. When considering [l,r] with a[l] >= r-l+1 (we want to set l to l+1), l is a valid starting point if and only if a[l] >= 1, a[l+1] >= 2, a[l+2] >= 3, ..., a[r] >= r-l+1, which is equivalent to a[x]-x >= 1-l for every x in [l,r]; this can be maintained using a sparse table. A similar condition holds for r when we want to set r to r-1. This leads to an O(n log n) solution.

»
19 months ago, # | 0
Div2 E / Div1 C Tree Pruning
»
19 months ago, # | 0

Maybe I'm missing something, but once you get the arrays a and b for Div2E/Div1C as mentioned in the editorial, how do you calculate the count of intervals overlapping each 1 <= i <= n, at least without using a max segment tree with range updates or something similarly heavy? I saw a lot of answers using prefix sums, and Shayan's video editorial also mentioned prefix sums, but I'm absolutely not able to understand what we are accomplishing with the prefix sum.

  • »
    »
19 months ago, # ^ | +7

    Problem : find the intersection of the segments.

    Problem Reduction
    Hint 1
    Hint 2
    Solution
»
19 months ago, # | 0

Can someone explain the div2D "Speedbreaker" problem? I am not able to understand its solution.

»
19 months ago, # | +8

I have a strict $$$\mathcal{O}(n)$$$ approach to the div1C problem, which uses long-chain partitioning to optimize dynamic programming on trees.

283907902

  • »
    »
16 months ago, # ^ | 0
Can you explain it, please?
    • »
      »
      »
16 months ago, # ^ | 0

      First, we have an obvious dynamic programming (DP) approach. Let $$$ f_{x,i} $$$ represent the minimum number of operations required to make all the leaf nodes in the subtree of node $$$ x $$$ have depth $$$ i $$$.

      It turns out that the second dimension of the state is bounded by the subtree depth, and we can optimize the transitions using long-chain decomposition: let the child with the largest subtree depth be the heavy child, and the others be light children. The chain formed by heavy children is called the heavy chain.

      It is not difficult to observe that for any node $$$ x $$$, the depth of all the heavy chains corresponding to the light children of $$$ x $$$ will not exceed the depth of the heavy chain that contains $$$ x $$$. Let $$$ s_x $$$ be the heavy child of $$$ x $$$, and separate the light and heavy children during the merge. When merging heavy children, we directly inherit the results:

      $$$ \begin{aligned} f_{x,i}\ &\leftarrow\ f_{s_x,i-1},\quad i \gt 1 \\ f_{x,1}\ &\leftarrow\ |\mathrm{sub}(x)|-1 \end{aligned} $$$

      where $$$ |\mathrm{sub}(x)| $$$ denotes the size of the subtree of $$$ x $$$.

      When merging light children, we add the corresponding depths together. Let the depth of the heavy chain where child $$$ y $$$ is located be $$$ l $$$:

      $$$ \begin{aligned} f_{x,i}\ &\leftarrow\ f_{x,i}+f_{y,i-1},\quad 1 \lt i\leq l \\ f_{x,i}\ &\leftarrow\ f_{x,i}+|\mathrm{sub}(y)|,\quad l \lt i \end{aligned} $$$

      Now, let's analyze the time complexity of the transitions. Inheriting from $$$ s_x $$$ can be done in $$$ \mathcal{O}(1) $$$. The transition for a light child $$$ y $$$ is divided into two parts. The first part, the brute-force transition, costs the subtree depth of $$$ y $$$, which equals the depth of the heavy chain starting at $$$ y $$$. The heavy chain where $$$ y $$$ is located is never longer than the heavy chain extending from $$$ s_x $$$, so each heavy chain is merged only once, at the top of the chain. A single merge costs $$$ \mathcal{O}(l) $$$, and the total time complexity is $$$ \mathcal{O}\left(\sum l\right) = \mathcal{O}(n) $$$.

      The second part of the transition requires supporting range updates, which can be maintained with a segment tree. However, we do not care about the range information, so we can directly tag the nodes in the heavy chain. Since the nodes on each heavy chain are consecutive in DFS order, we can perform the DP directly on the DFS order.

      The time complexity is $$$ \mathcal{O}(n) $$$, and the code is very easy to write with minimal details.
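      To make the recurrences above concrete, here is a brute-force version of the same DP in Python, without the long-chain optimization (so it runs in O(n · depth), not O(n); a sketch under the stated state definition, not the commenter's implementation):

      ```python
      def min_removals(n, edges, root=1):
          # f[x][i] = minimum number of removals so that every leaf in the
          # subtree of x sits at relative depth i (x itself at depth 1).
          adj = [[] for _ in range(n + 1)]
          for u, v in edges:
              adj[u].append(v)
              adj[v].append(u)
          # Iterative DFS to get a root-first order and parents.
          parent = [0] * (n + 1)
          parent[root] = -1
          order, stack = [], [root]
          while stack:
              x = stack.pop()
              order.append(x)
              for y in adj[x]:
                  if y != parent[x]:
                      parent[y] = x
                      stack.append(y)
          size = [1] * (n + 1)
          height = [1] * (n + 1)  # subtree height; a leaf has height 1
          f = [None] * (n + 1)
          for x in reversed(order):  # children before parents
              children = [y for y in adj[x] if y != parent[x]]
              for y in children:
                  size[x] += size[y]
                  height[x] = max(height[x], height[y] + 1)
              fx = [0] * (height[x] + 1)  # index 0 unused
              fx[1] = size[x] - 1         # delete everything below x
              for i in range(2, height[x] + 1):
                  total = 0
                  for y in children:
                      # Keep y's subtree at depth i-1, or delete it entirely.
                      total += f[y][i - 1] if i - 1 <= height[y] else size[y]
                  fx[i] = total
              f[x] = fx
          return min(f[root][1:])
      ```

      For example, on the 7-node tree with edges 1-2, 1-3, 2-4, 2-5, 4-6, 4-7 this returns 2 (prune nodes 3 and 5 so all leaves are at depth 3 from the root).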

»
19 месяцев назад, скрыть # |
 
Проголосовать: нравится +1 Проголосовать: не нравится

Can anyone elaborate on Div1.D? I cannot understand the editorial. Thanks in advance!

  • »
    »
    19 месяцев назад, скрыть # ^ |
     
    Проголосовать: нравится 0 Проголосовать: не нравится

    Hey, did you find an explanation?

  • »
    »
    17 месяцев назад, скрыть # ^ |
    Rev. 2  
    Проголосовать: нравится 0 Проголосовать: не нравится

    This is the solution as I understand it.

    That the optimal subsequence should contain at least one occurrence of the maximum element is probably intuitive. We just need to worry about the minimum, and for that we can iterate over all the values in decreasing order and, for each candidate minimum, calculate the maximum possible size of the subsequence.

    The two queries we need to support are "insert pick-able elements" and "calculate score". Suppose we have the array $$$[4,1,3,5,4,1]$$$. Then we will iterate over these minimums in order: $$$5,4,3,1$$$. For $$$5$$$, we insert every occurrence of $$$5$$$ as a pick-able element (which will be represented in red): $$$[4,1,3,{\color{red}{5}},4,1]$$$. The query "calculate score" would return $$$5+5+1=11$$$. Then, you move on to $$$4$$$: $$$[{\color{red}{4}},1,3,{\color{red}{5,4}},1]$$$.

    Notice that this creates two connected components, namely $$$4$$$ and $$$5,4$$$. In each component, you can choose any elements without worrying about decreasing the minimum. If the component's size is $$$s$$$ then you can select $$$\lceil s/2 \rceil$$$ elements from it. You do, however, have to worry about selecting at least one occurrence of the maximum, so you store for each component whether there exists a maximum value and, if yes, whether it's in an odd or even position. With this information, you can determine if you can select the most elements possible while also selecting the maximum $$$(*)$$$. Your score would be $$$max(a)+i+size$$$, where $$$i$$$ is the current minimum you're evaluating and $$$size$$$ is the maximum number of elements you can take, which is the sum of $$$\lceil s/2 \rceil$$$ over all connected components. If $$$(*)$$$ isn't possible, your score would be the same as above, just decreased by $$$1$$$. The components can be efficiently maintained using a DSU.

    Let's simulate this process to the end:

    $$$1.$$$ Insert pick-able $$$3$$$: $$$[{\color{red}{4}},1,{\color{red}{3,5,4}},1]$$$. "Calculate score" would return $$$5+3+3-1=10$$$. Max is $$$5$$$, min is $$$3$$$, and you can pick three elements. You can't choose both the element $$$5$$$ and also pick three elements at the same time however ($$$5$$$ is at an even position in an odd-number-sized component), so the score is decremented by $$$1$$$

    $$$2.$$$ Insert pick-able $$$1$$$: $$$[{\color{red}{4,1,3,5,4,1}}]$$$. "Calculate score" would return $$$5+1+3=9$$$.

    The result you should return is the maximum score you calculate over all minimums you enumerate through. In this case, $$$11$$$.

    Some notes about implementation: for the first type of query, you should store the indices of the occurrences of each value in a map, to avoid iterating over the array multiple times, which may blow up to $$$O(n^2)$$$ time. You should also do the second type of query as you're doing the first, to avoid recalculation (use the last minimum's result and update it only when you're unifying two components or creating a new one).
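    Ignoring the parity bookkeeping for the maximum element (the $$$(*)$$$ condition above), the DSU part of this explanation can be sketched as follows; `activation_sizes` is a made-up helper that returns, for each distinct value taken as the minimum, the maximum number of pickable elements:

    ```python
    def activation_sizes(a):
        # Process values in decreasing order; "activate" their positions,
        # union adjacent active positions with a DSU, and maintain
        # total = sum of ceil(size / 2) over the active components.
        n = len(a)
        parent = list(range(n))
        comp = [0] * n  # component size at the root; 0 means inactive

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        total = 0
        results = []  # (current minimum value, max number of pickable elements)
        order = sorted(range(n), key=lambda i: -a[i])
        i = 0
        while i < n:
            j = i
            # Activate every position holding the current value.
            while j < n and a[order[j]] == a[order[i]]:
                p = order[j]
                comp[p] = 1
                total += 1  # a new singleton contributes ceil(1/2) = 1
                for q in (p - 1, p + 1):
                    if 0 <= q < n and comp[find(q)] > 0:
                        rp, rq = find(p), find(q)
                        # Replace the two old contributions by the merged one.
                        total -= (comp[rp] + 1) // 2 + (comp[rq] + 1) // 2
                        parent[rq] = rp
                        comp[rp] += comp[rq]
                        total += (comp[rp] + 1) // 2
                j += 1
            results.append((a[order[i]], total))
            i = j
        return results
    ```

    On the example $$$[4,1,3,5,4,1]$$$ above, this yields pickable counts 1, 2, 3, 3 for the minimums 5, 4, 3, 1, matching the walkthrough (before the $$$-1$$$ parity correction at minimum 3).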

»
19 месяцев назад, скрыть # |
 
Проголосовать: нравится +1 Проголосовать: не нравится

For problem Div2E, I understand that for all leaves to have depth d, the nodes that remain alive need to satisfy the properties below:

  1. Their depth (ai) <= d
  2. The maximum depth in their subtree (bi) >= d

But how does this translate to this line in the tutorial: "So every node is alive in the interval of depths [ai, bi]"? How do the above properties let us form an interval of depths [ai, bi]?

»
19 месяцев назад, скрыть # |
 
Проголосовать: нравится 0 Проголосовать: не нравится

Sorry, for D1B: you say one possible solution is to intersect all $$$[i-a_i+1, i+a_i-1]$$$, but doesn't the second sample case fail?

6
5 6 4 1 4 5

$$$[-3, 5]$$$ $$$[-3, 7]$$$ $$$[0, 6]$$$ $$$[4,4]$$$ $$$[2,8]$$$ $$$[2,10]$$$

The intersection is $$$[4,4]$$$, right?

»
19 месяцев назад, скрыть # |
 
Проголосовать: нравится 0 Проголосовать: не нравится

Fun fact: now I understand why problem D was named Speedbreaker.

Understood?
»
19 месяцев назад, скрыть # |
 
Проголосовать: нравится 0 Проголосовать: не нравится

I have trouble with problem E: why is the complexity $$$\mathcal{O}(n \sqrt{n}\, \alpha(n))$$$? I think it should be $$$\mathcal{O}(n \sqrt{n \log n}\, \alpha(n))$$$.

»
9 месяцев назад, скрыть # |
Rev. 3  
Проголосовать: нравится 0 Проголосовать: не нравится

In Div2 C, how do you formally prove the following?

"If the two conditions above hold, you can make a deck containing the s types of cards with maximum frequency. You can show with some calculations that the conditions still hold after removing these cards. So you can prove by induction that the two conditions are sufficient to make decks of size s."

I saw in https://mirror.codeforces.com/blog/entry/134424 the constructive proof that the card types with maximum frequency should be distributed in sequential order among decks, in a pattern similar to

3 4 4 5

2 2 2 3

1 1 1 1

where the number denotes the i-th most frequent card type, and it would require x + 1 occurrences of the same card type for it to be placed twice in the same deck; but it's not a formal proof.

Also, why do these statements hold?

"Otherwise, for any choice of number of cards to buy, you can buy them without changing x. ... if you already have x*s or more cards at the beginning, you have to check if you can make m a multiple of s."

Is it always possible to add extra cards in a way that x does not change? How can this be proven?