### Pyqe's blog

By Pyqe, history, 19 months ago,

## 1740A

Author: Pyqe
Developer: Pyqe

Tutorial

## 1740B. Jumbo Extra Cheese 2

Author: Pyqe
Developer: errorgorn, Pyqe

Tutorial

## 1740C

Author: Pyqe
Developer: Pyqe

Tutorial

## 1740D. Knowledge Cards

Author: Nyse
Developer: steven.novaryo

Tutorial

## 1740E

Author: Pyqe
Developer: yz_

Tutorial

## 1740F. Conditional Mix

Author: Pyqe
Developer: errorgorn

Tutorial

## 1740G

Author: Pyqe
Developer: Pyqe

Tutorial

## 1740H. MEX Tree Manipulation

Author: steven.novaryo
Developer: steven.novaryo, Pyqe

Tutorial

## 1740I. Arranging Crystal Balls

Author: NeoZap
Developer: errorgorn, Pyqe

Tutorial

 » 19 months ago, # |   +9 I solved E without using dp. If someone could prove why it works, that would be great, or maybe provide a counter testcase.

Approach
•  » » 19 months ago, # ^ |   0 Interesting. You constructed the array s, right?
•  » » » 19 months ago, # ^ |   +9 Yes
•  » » 19 months ago, # ^ |   0 Can you explain your approach? I tried building the array the same way you did. I noticed that if we take a given node X, then the nodes on the longest path from X to a leaf in X's subtree can all be given the same value. So I came up with an approach like this: for each level, for each node Y in that level, I added up the length of the longest path from Y to a leaf. After getting the answer for all levels, I took the maximum of them, but I am missing something.
•  » » » 19 months ago, # ^ |   -8 My approach is: instead of descending into an arbitrary subtree, I choose the one containing the deepest node; once we have visited all vertices in a subtree, I assign a value, push the subtree minimum into the sequence, and finally compute its LNDS.
•  » » » » 19 months ago, # ^ |   +8 your explanation is not clear. whose subtree? what do you do with the subtree you choose? what value do you assign?
•  » » » » » 19 months ago, # ^ | ← Rev. 6 →   0 Sorry if my explanation wasn't clear enough; I will explain it more thoroughly. I maintain a timer variable, initialized to 1, that assigns a value to each node. We traverse the tree in dfs order, but we assign a value to a node only once we have traversed all nodes in its subtree; the value that gets pushed into s is the subtree minimum of that node, and then we increment the timer. For the traversal, say we are at a node x: instead of visiting its children in arbitrary order, we first visit the subtree with the deepest node (maximum height). For that, I computed the depth of each vertex, took the maximum depth (height) over each subtree, and sorted each adjacency list by height in non-increasing order. Finally, I compute the LNDS of the sequence s that we created. 178446226
•  » » » » » » 19 months ago, # ^ | ← Rev. 2 →   0 It works the same as the dp solution. Let a[v] be the LNDS of v's subtree. a[v] is either the concatenation of a[c_i] over all children c_i of v, or the path from v to its deepest descendant, whichever is longer. The length of a[v] equals dp[v].
•  » » » » » » » 19 months ago, # ^ |   0 Oh got it thank you.
•  » » » » » » » 19 months ago, # ^ |   0 How are we deciding the permutation a?
•  » » » » » » 19 months ago, # ^ |   +1 Can you please write the full form of LNDS?
•  » » » » » » » 19 months ago, # ^ |   0 Longest non-decreasing subsequence.
 » 19 months ago, # |   +40 How to prove the first statement in the editorial for problem F?
•  » » 19 months ago, # ^ |   +38 Well, the first thing that helps in this problem is to note that we can assume all multisets have the same size, namely $n$: simply add zeros representing empty sets until the multiset has $n$ numbers.

One way to realize that the condition is equivalent, and to prove it, is to note the following: if we are able to create a multiset of sizes $(M_i)_{i \in [0, n)}$, we can always move elements from a bigger set $M_i$ to a smaller one $M_j < M_i$ and still maintain a valid configuration. The reason is that $M_i$ contains at most $M_j$ elements that already occur in $M_j$, so the remaining $M_i - M_j$ can be moved to $M_j$ if necessary.

This implies that if we consider the multisets sorted, which is what makes more sense, we can always move elements from $M_i$ to $M_{i+1}$. Thus, if a configuration $(M_i)_{i \in [0, n)}$ is valid, so is any configuration $(N_i)_{i \in [0, n)}$ with $\sum_{j = 0}^i N_j \le \sum_{j = 0}^i M_j$ for all $i$. And this is the key to the problem: we do not need to consider all multisets, only those that are maximal in this sense, and then count how many configurations lie below them.

By the way, once we have that formula, it makes no sense to keep representing the multisets the way we were. Instead, use the sequence of accumulated sums to represent a multiset. Then the condition becomes $N_i \le M_i$ for all $i$, which reads much better. Hence, our multisets are represented as increasing sequences $(M_i)_{i \in [0, n)}$ with decreasing differences $M_{i+1} - M_i$. Or even better, as increasing sequences $(M_i)_{i \in [0, n]}$ with $M_0 = 0$, $M_n = n$ and decreasing $M_{i+1} - M_i$.

The question is: which multisets are maximal? Two multisets are incomparable if $N_i < M_i$ and $M_j < N_j$ for some indices $i$ and $j$. But... well, this was of little help for me. So I thought: at the very least I know that the lexicographically biggest one is maximal.

Which $(K_i)_{i \in [0, n]}$ is the lexicographically biggest? Well, if we try to fit as many numbers as possible into the first set, we will have $K_1 = \sum_{j = 0}^n \min(\operatorname{cnt}_j, 1)$. Then we subtract $1$ from every $\operatorname{cnt}$ and repeat. In the end, we get the multiset $K_i = \sum_{j = 0}^n \min(\operatorname{cnt}_j, i)$. (And the right-hand side is precisely the formula in the statement. This, together with our previous observation, proves that the condition is sufficient.)

However, now that I know the formula, I also know that no other maximal multiset exists, because for $M_i$ we can only use at most $\min(\operatorname{cnt}_j, i)$ occurrences of each number $j$. That means $N_i \le \sum_{j = 0}^n \min(\operatorname{cnt}_j, i) = K_i$ for all multisets $(N_i)_{i \in [0, n)}$. Thus, the lexicographically biggest multiset is in fact the only maximal one.

Ending note: yes, we can prove necessity from the very beginning, and sufficiency by considering the lexicographically biggest multiset together with the first observation, without any notion of maximal elements. However, I am not sure one would get there magically. Instead, I believe the first observation naturally leads to thinking about maximal elements, and then we can either guess that there is just one, or simply deduce it as shown.
•  » » » 18 months ago, # ^ |   0 Why do we get $\min(\operatorname{cnt}_j, i)$ in the formula for $K_i$? I get the $\operatorname{cnt}_j$ part, but not the reason for the $i$. Is it like saying we repeat an element in one of the sets we built? But that should be invalid.
•  » » » » 18 months ago, # ^ |   +1 Well, if you want to make each prefix as large as possible, you first take one element from each nonempty set, which is why $K_1 = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, 1)$. Then you subtract one from each set and get $K_{i+1} = K_i + \sum_{j \in [0,n)} \min(\max(\operatorname{cnt}_j - i, 0), 1)$, because in the first $i$ rounds you already took $\min(\operatorname{cnt}_j, i)$ elements of each value $j$. Then, by induction, if you suppose $K_i = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i)$, you get $K_{i+1} = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i) + \sum_{j \in [0,n)} \min(\max(\operatorname{cnt}_j - i, 0), 1)$, which equals $\sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i + 1)$.
•  » » » » » 18 months ago, # ^ |   0 I see, I think what I didn't get is that $K_i$ is cumulative. Thanks!
•  » » 19 months ago, # ^ |   +3 You can conclude and prove this lemma with mincut maxflow — if you try to find a maximum matching between a chosen multiset of sizes, and the frequencies of the elements, then try to find the condition by which all cuts are of size at least $n$.It's pretty lengthy (I can elaborate if you want), but you can arrive at this condition without magically guessing it (the magic here comes from the magic of MCMF).
 » 19 months ago, # |   0 My logic is exactly the same as the editorial, but it gives WA on test 2. Can someone please help me... thanks
•  » » 19 months ago, # ^ |   0 $>$ or $\geq$ ?
•  » » » 19 months ago, # ^ |   0 I think it should be $>$: even if we have filled all slots, we can remove some in the next iteration, and if we can't, we report that the answer doesn't exist.
•  » » » » 19 months ago, # ^ |   0 $\geq$ is right. try it and I'll explain it to you.
 » 19 months ago, # |   0 What is $k$ and $z$ in the first statement in the editorial for problem F?
•  » » 19 months ago, # ^ |   +10 Now edited to make it clearer.
 » 19 months ago, # |   0 In problem E, can anyone explain why we would ever do this: "If card i is used in the longest non-decreasing subsequence, then the maximum answer is the maximum value of $\operatorname{dist}(j, i)$ for all $j \in D_i$." Can someone give a small counterexample where doing only the other case ("If card i is not used in the longest non-decreasing subsequence, then the maximum answer is the sum of dp values of all of the children of i.") fails?
•  » » 19 months ago, # ^ |   0 You can consider a path tree, e.g. 1 -> 2 -> 3 -> 4. There it always makes sense to select the root node, as we can never get a better answer by excluding the root.
•  » » » 19 months ago, # ^ | ← Rev. 3 →   0 I did consider this case in my dp, with `if (adjList[u].size() == 1) dp[u] += 1`. My code failed anyway. (Below is the dfs:)

```cpp
void dfs(int u, vector<vector<int>> &adjList, vector<int> &dp) {
    int answer = 0;
    if (adjList[u].size() == 0) {
        dp[u] = 1;
        return;
    }
    for (int v : adjList[u]) {
        dfs(v, adjList, dp);
        answer += dp[v];
    }
    dp[u] = answer;
    if (adjList[u].size() == 1) dp[u] += 1;
}
```

Thanks
•  » » » » 19 months ago, # ^ |   0 Take a look at Ticket 16403 from CF Stress for a counter example.
•  » » » » » 19 months ago, # ^ |   0 Thanks! Got it. I made a mistake with my reasoning.
 » 19 months ago, # |   +68 In my opinion, the editorial gives conclusions rather than proofs or explanations for them, with many sentences like "We can observe/see/obtain that ...".
•  » » 19 months ago, # ^ |   +11 Agreed. Now, some of the editorials have been edited to give more explanations about the claims and conclusions. I hope they are more helpful now <3
•  » » » 19 months ago, # ^ |   +10 Thanks! Truly more helpful.
 » 19 months ago, # |   +22 I think the time complexity of problem C should be O(n log n); the writer forgot that he assumed the elements are sorted. (Though O(n log n) will do the work, but still.)
•  » » 19 months ago, # ^ |   +10 It is fixed now. Thanks for pointing it out! <3
•  » » 4 months ago, # ^ |   0 Yes, it is O(n) apart from the O(n log n) for the sort.
 » 19 months ago, # |   +10 Imho, the complexity of problem D is O(n). There is just an iteration over an array.
•  » » 19 months ago, # ^ |   +10 Yea, my solution also runs in O(n). However, I think the priority queue solution is more intuitive to explain and understand. Thanks for mentioning that!
 » 19 months ago, # | ← Rev. 4 →   +25 In D, let's say we have the grid:

```
x 1 2
x 3 4
5 6 x
```

where x are empty cells. If we can move any card to any adjacent empty cell, as long as there is an empty cell, could someone explain how we could move 4 to the cell where 5 currently is, i.e. the cell with coordinates (3, 1)?
•  » » 19 months ago, # ^ |   +10 You can't, but you don't need to. Cards only move if they are the next card going to the exit, or as part of a rotation to let another card past (so you'd never need to move from one 2x2 square into the other). As long as there is an empty square, any card can reach the exit, which is all that matters.
•  » » » 3 months ago, # ^ | ← Rev. 3 →   0 Can you please explain how you concluded that it is impossible to move 4 to the place of 5 following the rules? I tried a lot, but I am not able to get 4 and an empty space together in the lower-left 2x2 square; maybe that is the reason it's impossible? Edit: I got it. It's because the cycle used for the rotation has odd length, so we cannot shift along it; had it been even, we could have.
•  » » 19 months ago, # ^ |   +15 Nice catch! We actually did not realise that possibility. The tutorial has been edited to make the claim correct.
 » 19 months ago, # | ← Rev. 2 →   +20 Very pleasant round to participate in. Liked E and F a lot; C and B were also pretty nice. Looking forward to seeing more rounds from you!
 » 19 months ago, # | ← Rev. 3 →   +10 Did anyone solve problem I like this? Start from the editorial's O(nm^2) knapsack. Let's say you have dp[i] after considering $f_0, \dots, f_{i-1}$. Divide $f_i$ into $K$ line segments; let the $k$-th segment be $a_k x + b_k$ for $l_k \le x \le r_k$. Then $dp[i+1][j] = \min_{0 \le k < K} \min_{l_k \le x \le r_k} (dp[i][j - x] + a_k x + b_k)$.
•  » » 19 months ago, # ^ |   +3 Yes, it can be done in $O(nm)$.
 » 19 months ago, # | ← Rev. 2 →   0 In problem E, why is $a_i > \max_{j \in D_i}(a_j)$ true in the optimal solution? Edit: it's clear now, thanks for editing the editorial.
 » 19 months ago, # |   0 For question 5, I thought of a solution where the answer is (number of nodes − number of nodes with more than one child). It fails on test case 6. Can anyone please provide a counterexample? I am unable to think of one.
•  » » 19 months ago, # ^ |   0
6
1 2 2 2 2
 » 19 months ago, # |   +5 Animations like the one in D help understanding a lot.
 » 19 months ago, # |   +8 In problem F, I got AC in $O(n^3 \log n)$ (though it can be optimized to $O(n^2 \log n)$ easily). Too weak system tests, or high enough efficiency? Submission
 » 11 months ago, # |   0 For A it was really hard to believe it's a CF question.
 » 6 months ago, # | Rev. 4   0
##### Explanation for problem C
The first approach that comes to mind is to fill two of the three bags with the heaviest and the lightest bricks, then put all of the remaining bricks in the third bag. Eg 1: $B_1$ = {heaviest}, $B_2$ = {lightest}, $B_3$ = {all the rest}. Bu Dengklek will automatically select the lightest brick of $B_3$ (which is the second lightest globally), so $ans = (heaviest - lightest) + (second\;lightest - lightest)$. Alternatively, Eg 2: if we set $B_1$ = {lightest}, $B_2$ = {heaviest} and $B_3$ = {all the rest}, then Bu Dengklek will select the heaviest brick of $B_3$ (which is the second heaviest globally), so $ans = (heaviest - lightest) + (heaviest - second\;heaviest)$. This approach fails, and the scenario where it fails is the intuition for the correct solution.
What if we have $B_1$ = {heaviest}, $B_2$ = {lightest, second lightest}, $B_3$ = {all the rest}? Again, Bu Dengklek will select the lightest brick from $B_3$ as before (which is now the third lightest globally), and she will pick the second lightest from $B_2$. Compare this to Eg 1: although $(heaviest - lightest)$ is certainly greater than $(heaviest - second\;lightest)$, maybe the second term of this new configuration makes up for the loss in score. That is, it is possible that $(third\;lightest - second\;lightest)$ is much greater than the term $(second\;lightest - lightest)$ of Eg 1. Consider the following input if this is not clear: $\{51, 386, 2159, 2345, 2945\}$

Basically, the catch is to notice and exploit the non-uniform differences between consecutive elements in the sorted order of the brick weights.

Thinking of how to handle these possibilities is all that's left now. (Left to the reader :P)

•  » » 6 weeks ago, # ^ |   0 Amazing explanation bro, I was stuck at the same approach and wondering why I was getting WA, so thanks for the clarification. I wish editorials would explain the thought process and the pitfalls too.
•  » » » 6 weeks ago, # ^ |   0 Glad it helped you out. About editorials, I swear sometimes I feel like making a blog titled: "Tutorial: How to write Tutorials"
 » 5 months ago, # |   0 I didn't understand the editorial explanation for C at all, so I'll outline how I thought of it here. At first I wasted about 6 hours trying the "fill two bags with the smallest and largest" approach, but after seeing someone's code I thought of this. Think of the sorted array as points on a number line; we need to maximize the length between the smallest and largest chosen points. We can only fix/choose two of the points, and the third one is chosen for us. Now think about how the differences between points vary: we can simply pick the first and last point and then brute-force the second point; the third point will be the point just before the brute-forced one. This is a very bad explanation (so is the editorial lol) but it is what it is.