By Vladosiya, history, 10 months ago, translation

1790A - Polycarp and the Day of Pi

Idea: MikeMirzayanov

Tutorial
Solution

1790B - Taisia and Dice

Idea: Gornak40

Tutorial
Solution

1790C - Premutation

Idea: MikeMirzayanov

Tutorial
Solution

1790D - Matryoshkas

Idea: MikeMirzayanov

Tutorial
Solution

1790E - Vlad and a Pair of Numbers

Tutorial
Solution

1790F - Timofey and Black-White Tree

Idea: molney

Tutorial
Solution

1790G - Tokens on Graph

Idea: DmitriyOwlet

Tutorial
Solution

Tutorial of Codeforces Round 847 (Div. 3)

Comments (102)
 » I think F is interesting; there are many ways to solve it.
•  » » And it's very similar to E. Xenia and Tree
•  » » » For F I would recommend tourist's solution.
•  » » Is its TC O(n log n)? Edit: its TC is O(n log n).
•  » » » It actually is O(n log n). Without going into the details of the solution, it's easy to see from the code that its time complexity is O(sum of answers over all operations), i.e. the sum of all numbers that we are supposed to output.

Claim: after k operations (k > 0), the minimum length of a path between black nodes is at most 2 * (n / k).

Proof by contradiction: suppose the minimum path length is greater than 2 * (n / k). Then for every black vertex v there are at least (n / k) nodes u with dist(u, v) <= n / k, and by the assumption all these u's must be white, because (n / k) < 2 * (n / k). Let cnt(v) denote the exact number of these white nodes for vertex v, and let S be the sum of cnt(v) over all black v. No white vertex is counted twice in S: if one were, it would have distance at most (n / k) to 2 different black nodes, so the distance between those 2 black nodes would be at most 2 * (n / k), contradicting the assumption. Thus S counts each white vertex at most once, and is therefore bounded by (n - k). But we also know that each of the k black vertices contributes at least (n / k) to S. Then

S <= n - k  -->  k * (n / k) <= n - k  -->  k <= 0,

which is a contradiction. This finishes the proof.

Now the TC of the algorithm is the sum of 2 * (n / k) over all k from 1 to n. But this is just a harmonic series, whose value is O(n log n).
•  » » » » Finally understood the proof completely after trying to upsolve this problem for 1 day :)
•  » » » » Another way to prove your claim is by thinking about the Euler tour of the tree.

Intuition: it's straightforward to calculate the maximum value of the minimum distance between any two elements after some $K$ operations in the case of a list instead of a tree. So let's try to represent the tree like a list.

Your claim would have been trivial if we were given a list instead of a tree in the problem. You can "transform" the tree and represent it by the list of nodes visited during its Euler tour. The difference of positions between any two elements in the list is always greater than or equal to the actual distance between the corresponding nodes in the tree. For instance, in the attached image the difference in position between nodes 6 and 1 is 4, whereas the actual distance is 3. [Image]

Now we have a list of $2N - 1$ elements, and we know that even if these elements were independent of each other (no node appeared in the list twice), the answer would be bounded by $\lceil\frac{2N - 1}{K}\rceil$.

Note: the bound works for the tree as well, because whatever the answer for our list is, the actual answer for the tree is less than or equal to it.
•  » » » » » This is indeed very simple, thanks a lot for the alternative approach!
•  » » Agreed. tourist's DFS is only used to find the parent of each vertex, so it is easier to understand. He then uses a while loop to update the answer. It's really ingenious.
 » The round was great, thank you so much!
 » when will system tests take place?
 » Is this contest unrated?
•  » » Looks like it.
•  » » No. (Maybe the system test hasn't started because someone is still hacking.)
•  » » it isn't, updating rating for div3/4/edu rounds takes a while
•  » » » when will rating changes be seen?
 » All problems are interesting! I like them. But a little difficult for Div 3 :)
•  » » I thought that ABCDE were pretty easy, but FG were too hard
•  » » Will you please check if my solution for problem E is correct? I did it in O(1) complexity: 190841912
•  » » » Bruh it's accepted
•  » » » » Yeah I Know . Just wanted to confirm if this approach was also correct
•  » » » » » It's correct.
 » On task B, the values of the elements are bounded from $1$ to $6$. So, even if you set everything to $s-r$ and then subtract manually, the number of operations in the worst case is $5n-5$, which is still $O(n)$.
•  » » The vector res is sorted as well, so it should be O(n log n).
•  » » » Nope, my solution takes an entirely different approach. I just start with $s-r$ at every index, then loop through $[1,n)$ (0-indexed), subtracting $1$ each time until we get sum $s$ in total. The worst-case time complexity is $O(n)$ due to the limit on the elements' values.
•  » » » » Okay, I thought you were talking about the solution given in the tutorial, whose complexity is given as O(N*N). I went with a similar approach: print s-r until the sum becomes less than s-r, then divide the remaining sum (if any) over the remaining number of elements to be printed. So the TC was O(N) in my approach.
•  » » » My solution for task B: 190928632 complexity: O(n)
•  » » » » Hi, I am kind of confused by your solution and am having difficulty with problem B. Could you explain your code to me, please?
•  » » » » » First I store the maximum value, which is (s - r). After that we have (n - 1) indexes left, and the minimum value we can store in each of them is 1, so storing 1 everywhere makes the sum of those (n - 1) numbers equal to (n - 1). Then I check whether that sum equals r. If it does, we don't have to change any number. If it doesn't, we have to convert some 1's into bigger numbers. We can't convert a 1 into a number bigger than the maximum of the entire array, so we can only raise 1's up to the biggest number in the array. If we need value x to make the sum equal to r and x is less than or equal to the maximum number of the array, we change a 1 to x and reduce the needtoadd variable. We could also just store (s - r), but it may be larger than the maximum number of the array, so I take the minimum of the two.
 » Thank you for the contest ,we were glad to see Div 3 and was waiting for it!
 » D is harder than E; E was quite simple.
•  » » what??
•  » » » 190841912 Check out my submission.
•  » » » » Can you pls explain how you arrived at your solution?
•  » » » » » Wrong approach, sorry!
•  » » » » » » Hey, I don't think XOR is distributive over addition. For example, take a = 10 and b = 11 (in binary). Then a+b = 101 and (a+b)^a = 111. Now take a^a + a^b = 0 + 01 = 01. So (a+b)^a is not the same as a^a + a^b.
•  » » » » » » » Yes My bad!!! Sorry.
•  » » True, the solution was possible with TC O(1) as well.
•  » » » I also did it with Complexity O(1)
•  » » » » Yes I saw your submission
 » F is a standard problem for centroid decomposition: build the centroid tree and process two types of requests, adding a new black vertex and finding the closest one. When we insert a new black vertex, we push to all centroid ancestors the distance to that centroid. To get the distance, we iterate over all centroid ancestors and relax ans with min(ans, d(v, cent) + mn[cent]).
•  » » what does relax ans mean?
•  » » » If $ans_i$ is the answer on the i-th iteration, we take the minimum of the previous $ans_{i - 1}$ and the currently calculated minimum distance.
•  » » » » ohk thx
•  » » https://mirror.codeforces.com/contest/1790/submission/190946802 Can you tell me the time complexity of this F solution and how to optimize it? My solution: initially I find the minimum distance of every node from a black node and store it in an array. Then, on each query, I find the minimum distance between two black nodes in O(1), and then update the nodes' minimum distances with a BFS that continues only while the new distance is smaller than the stored minimum.
 » thanks for interesting problems
 » Can someone help me understand why my solution to F TLEs? My idea is to go through each black node and do a DFS of up to "answer" steps (where "answer" is the minimum distance so far) to see if I can find another black node closer than "answer". If I find one, I update "answer" so that next time the DFS goes less deep. However, this TLEs at test case 18. I thought the time complexity was O(n^1.5); please point out my mistake. My submission: 190916688. Thank you for your help.
 » Can someone explain tourist's solution for problem F in detail?
 » To anyone who has understood tourist's solution, please correct me if I am wrong. Whenever we color a node N1 black, we consider its ancestors in hierarchical order and recompute their best distance as the min of the distance from N1 and their previous best distance, until: we pass the root of the tree; or we meet an ancestor whose best distance is already better than its distance from N1, and break (because none of the remaining ancestors can possibly benefit from N1); or we reach an ancestor whose distance from N1 is greater than the current ans (it can never contribute to the actual answer, as the answer only decreases or stays the same). I don't know about its TC, though.
 » There's a different solution for problem C. link
 » Please start the system testing. Why does it take so long, when other platforms give the new ratings 15 minutes after the contest ends?
•  » » Now it finally starts.
 » Can someone please explain the F solution to me in easy steps? I didn't get anything from the tutorial. Much appreciated!
 » E can be done with binary search too.
•  » » Here is my implementation: 191020389
•  » » For binary search, doesn't f(x) have to be monotonic? Sorry, I'm not understanding how BS won't get stuck somewhere. In this case $\frac{x}{2}$ is a solution for a, so BS will always try it, but how do you prove that without knowing the solution beforehand?
•  » » » We have a fixed number of bits, i.e. of x. We always assign a <= b such that a + b = 2*x. Now if we increase a, then b decreases, and the XOR of the two also decreases, which means the XOR value is monotonically decreasing with respect to a; hence we apply binary search with l = 1 and r = x for the value of a.
•  » » » » If x = 6, then a = 2 and b = 10 give a^b = 8. Now increase a by 1: a = 3, b = 9, and a^b = 10, so increasing a increased a^b. Now increase a by 2: 5^7 = 2. Therefore a^b is not monotonic, so it is not a candidate for binary search.
•  » » » » » » Let me think of a better explanation; give me 5 minutes.
•  » » » » » I am applying binary search on a
•  » » » » » If a pair (a, b) exists: for a = mid, let y = mid ^ (2*x - mid); when x = y we are done. If x > y, then we have to increase the XOR, which can only be done by decreasing a (as I said earlier, we look for a solution with a <= b, and for the XOR of a and b to increase, a should decrease and b should increase), and vice versa when x < y.
 » Can someone tell me in detail where the $\sqrt{n}$ bound in problem F comes from?
 » Problem E is similar to a GFG article; I think it will be helpful for all: Find two numbers from their sum and XOR
 » I was asked to give more details on my solution of 1790E - Vlad and a Pair of Numbers, so I will also share it here.

Solution for problem E

This is a constructive problem. It has many different approaches and solutions.

We have $2$ operations, sum of the numbers and $\mathrm{xor}$ of the numbers. To be precise, we want this equation to hold:

$A + B = 2 \cdot (A\, \mathrm{xor}\, B) = 2 \cdot x$

First, let's think about how fast a naive iterative solution can be. For a given $x$ we can iterate over all $A$ and get $B = A \, \mathrm{xor}\, x$. This can be done in $\mathcal{O}(x)$. We will not solve the problem this way (TLE would be the result), but we can implement this iterative solution for all small tests, generate answers, and use them to find patterns and properties.

Take a look at this solution: https://mirror.codeforces.com/contest/1790/submission/190992078

Even from its cerr output we can see that, for all numbers that have answers, the first way to build an answer is to take

$A = x/2$, $B = 3 \cdot x/2$

Actually this is enough to solve the problem. But that will not always be the case, and generally you also want to know why it works and how to prove it.

It makes sense to try analyzing individual bits for bitwise operations. Let's consider all possible combinations for bits in position $i$ of numbers $A$ and $B$:

• If $A[i] = 0, B[i] = 0$, they have no effect on the sum or on the $\mathrm{xor}$.
• If $A[i] = 0, B[i] = 1$, they simultaneously add bit $i$ to the sum and to the $\mathrm{xor}$.
• If $A[i] = 1, B[i] = 0$, this completely matches the previous case; no need to consider it separately.
• If $A[i] = 1, B[i] = 1$, they have no effect on the $\mathrm{xor}$, but they add bit $(i+1)$ to the sum.

We want to set bits in $A$ and $B$ in such a way that it balances out $A + B$ and $2 \cdot (A\, \mathrm{xor}\, B)$ and also matches $x$.

There is only one way to add the same bit $i$ simultaneously to $A + B$ and to $2 \cdot (A\, \mathrm{xor}\, B)$: make $A[i] = 0$, $B[i] = 1$, $A[i-1] = 1$, $B[i-1] = 1$. This way we get $2$ binary carries in the sum, and the $\mathrm{xor}$ has only bit $i$ set:

010 $x$
001 $A$
011 $B$
100 $A+B$
010 $A\, \mathrm{xor}\, B$
100 $(A\, \mathrm{xor}\, B) \cdot 2$

This can be clearly seen on the small example $x=2, A=3, B=1$. The only way to set bit $i$ in $x$ is to use two bits in numbers $A$ and $B$: bit $i$ and bit $(i-1)$. So if we want bit $i$ of $x$ to be $1$, then bit $(i-1)$ of $x$ must be $0$.

Thus an answer exists exactly for the numbers $x$ in which each bit equal to $1$ is preceded by a bit equal to $0$. In other words, for every $i$ with $x[i] = 1$ we must also have $x[i-1] = 0$. This also applies to the lowest bit: it has no preceding bit, so it must be $0$.

All possible answers for the same number $x$ differ from each other only in the arrangement of bits in positions where the bits of $A$ and $B$ must differ ($A[i] = 1, B[i] = 0$ or $A[i] = 0, B[i] = 1$).
•  » » It can be easily done with binary search in same complexity
•  » » the most understandable tutorial, thank you
 » Is it possible to do F using a BFS? If I understand the tutorial correctly, we skip vertices v that have d[v] > ans. Similarly, we could do a BFS where the number of levels visited does not exceed ans. Will this be as efficient as the DFS approach? Why or why not?
 » Can someone please explain the solution of F more clearly? I didn't understand from the editorial why it works, nor its time complexity.
 » Taisia and Dice: I am getting wrong answer on test 1, but the same code works fine in my IDE. Here is my code:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, s, r;
        cin >> n >> s >> r;
        int sum = 0;
        vector<int> ans;
        int th = 6;
        while (n > 0 && sum < s) {
            if (sum + th <= s) {
                sum += th;
                n--;
                if (sum == s && n != 0 && th > 0) {
                    sum = sum - th;
                    n++;
                    th--;
                } else if (th == 0) {
                    th++;
                } else {
                    ans.push_back(th);
                }
            } else {
                // cout << n << " " << th << endl;
                th--;
            }
        }
        for (int i = 0; i < ans.size(); i++) {
            cout << ans[i];
        }
        cout << endl;
    }
}
```

 » 190863984 E O(1)
 » I am really disappointed because it's my first contest and I solved only two problems, getting only one accepted. But I think there is something wrong with problem B (Taisia and Dice), because the statement says "in any order / print any", and I got it right. Like here: for 3 12 6 my answer was 6 3 3, since 3 + 3 = 6 and 6 + 6 = 12, and that is 3 dice. I am really disappointed, but hey, it is my first one. Thank you :) <3
•  » » Here is one test case that your code fails, if you want to check it. Input: 1 / 6 21 17. A possible answer: 4 4 4 4 1 4. Your output: 4 4 4 4 4 3. As you can see, your answer's total goes above 21.
•  » » » Oh boy, you are right! Thank you, bro, I really appreciate it <3. I will try to fix my code next time before yelling at people :))
 » So, I was thinking about when to show my way of solving problems, and I think it's best if I do it only when I can solve at least three problems. As always, don't focus too much on my code, because it has less value than what I was thinking during the contest.

A

This problem I believe everyone can solve, but something curious happened to me: while writing the code, for some reason I was doubting whether the 30 digits come after the first "3" or whether the "3" counts towards the 30. This problem is very simple and is about speed, so I decided to handle the worst case, that the count begins after the first three: I looked up the first 30 digits after the point, and for the string comparison I used the length of the string the problem gives, to make sure I don't overflow or access something that doesn't exist.

code

```cpp
#include <bits/stdc++.h>
using namespace std;

void solve() {
    string s;
    cin >> s;
    string y = "3141592653589793238462643383279";
    int n = s.size();
    int ans = 0;
    for (int i = 0; i < n; i++) {
        if (s[i] == y[i]) { ans++; }
        else { break; }
    }
    cout << ans << '\n';
}

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int t = 0;
    cin >> t;
    for (int p = 0; p < t; p++) {
        solve();
    }
}
```

B

From this problem onward I began to use the best weapon beginners have, in my opinion: don't overcomplicate. Try to solve it as if you were doing it by hand, notice the way your brain thinks through it, and later turn that into code. This is really useful when the given variables are small; for example, in this problem $n = 50$, which means you only have to fill at most 50 dice values, and $t = 1000$, so even in the wildest case there are only 50000 dice to fill, which is a pretty small number for a problem.

My way of solving it was to think in the simplest steps:

• Make one die of value $s-r$, which is the difference. In my code I considered the possibility that the difference is more than six and more dice are needed, but the statement says an answer always exists and removing one die turns $s$ into $r$, so I had a bit of luck.
• Fill the rest of the dice with whatever is needed to make the total value $s$, in the most brute-force way possible: one by one, adding one each time I pass over a die.

code

```cpp
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, tots, r;
    cin >> n >> tots >> r;
    vector<int> a(n);
    int i = 0;
    int diff = tots - r;
    while (diff > 0) {
        if (diff > 6) { a[i] = 6; i++; diff -= 6; }
        else { a[i] = diff; i++; break; }
    }
    tots -= (tots - r);
    int m = i;
    while (tots > 0) {
        for (i = m; i < n && tots > 0; i++) {
            a[i]++;
            tots--;
        }
    }
    for (auto x : a) { cout << x << " "; }
    cout << '\n';
}

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int t = 0;
    cin >> t;
    for (int p = 0; p < t; p++) {
        solve();
    }
}
```

C

Same as problem B: start trying to solve it yourself and you will notice that if, over all iterations, you sum the positions at which a number appears, the number with the smallest total is the first one in the original permutation and the one with the biggest total is the last one. More generally, the smaller a number's total position sum, the more to the left it is in the original array, and the bigger the sum, the more to the right it is.

code

```cpp
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<pair<int,int>> a;
    map<int,int> b;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n - 1; j++) {
            int x;
            cin >> x;
            b[x] += j + 1;
        }
    }
    for (int i = 1; i <= n; i++) {
        a.push_back({b[i], i});
    }
    sort(a.begin(), a.end());
    for (int i = 0; i < n; i++) {
        cout << a[i].second << " ";  // reconstructed: the original comment was garbled here
    }
    cout << '\n';
}

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int t = 0;
    cin >> t;
    for (int p = 0; p < t; p++) {
        solve();
    }
}
```

D

For this problem, the reasons I see why someone couldn't do it are three:

• not having the confidence to submit something that might exceed the time constraints
• not knowing how to use binary search
• not reading carefully that the sets are of the form $s, s+1, \ldots, s+m-1$

For the first one, as a beginner I think you must realize that you have nothing to lose, and use that in your favor by always pushing yourself to make at least one submission to prove you tried your best.

For the second one, to be honest I don't understand binary search very deeply, but I know how to use the functions lower_bound() and upper_bound(); this is my way of passing problems that need a low time complexity.

And the third: for example, a nest of dolls that doesn't work is ${1,2,4}$, but maybe you didn't read carefully that the difference between adjacent elements is at most 1. So my tip is to read a bit more slowly, the closer you get to the end of the contest.

code

```cpp
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (auto &x : a) { cin >> x; }
    sort(a.begin(), a.end());
    int ans = 0;
    int h = -1;
    while (a.size() > 0) {
        if (h == -1) {
            h = a[0];  // reconstructed index: the original comment was garbled here
            a.erase(a.begin());
        }
        if (a.size() == 1) {
            if (a[0] == h + 1) { ans++; }
            else { ans += 2; }
            break;
        }
        if (a[0] == a[a.size() - 1]) {
            if (a[0] == h + 1) { ans += a.size(); }
            else { ans += a.size() + 1; }
            break;
        }
        auto it = upper_bound(a.begin(), a.end(), h);
        if (it == a.end()) { ans++; h = -1; }
        else {
            if (*it == h + 1) {
                h = *it;
                a.erase(it);
            } else {
                ans++;
                h = -1;
            }
        }
    }
    cout << ans << '\n';
}

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int t = 0;
    cin >> t;
    for (int p = 0; p < t; p++) {
        solve();
    }
}
```

Hope that my way of thinking can help you in some way in your next contest.
 » Alternate approach for D: divide and conquer. https://mirror.codeforces.com/contest/1790/submission/190924408 Like merge sort, but instead the merge function counts the matryoshka sets that can be combined across the left and right intervals.
 » Can somebody explain how we get the values of the other dice, besides the maximum one, in problem B?
•  » » You can initially make all numbers the max value, then decrease all elements except the last one until the sum of all numbers except the last equals r.
 » Could someone tell me why using unordered_map in D TLEs while map does not? unordered_map code: 191060893; map code: 191061543
•  » » Hi, it's probably due to collisions; check this post for reference: https://mirror.codeforces.com/blog/entry/62393
•  » » » Thanks~
 » 190861950 A sorting-based solution for D; maybe easy to follow.
 » The problem expects 'NO' for this particular test case, and I am not sure why: 1 4 4 2 2 2 1 2 1 1 2 1 3 1 4 2 3. As a token is already present at vertex 1, it should be considered a 'WIN', shouldn't it? Or am I missing something?
•  » » it is not the 16th TC due to the empty line when b = 0
 » Problem F: it seems that the author's solution is too slow for any language other than C++ (3.3 s out of the 4 s limit): link
 » Can someone please help me understand problem E? I don't understand what this if statement in the author's solution does: if 2 * x - a - b >= (2 << i). I understand the statements inside the if, but not the condition itself. Thank you.
 » I really don't understand the tutorial of problem F! Can anyone explain it in more detail and prove its time complexity $O(n\sqrt{n})$?
 » I wonder how the bound of $O(n \sqrt{n})$ with DFS/BFS is proven. Suppose that we only update nodes with distance smaller than $2$. Even in such a case, we can visit all $n$ nodes on certain graphs. Even if we know that $dist \le \sqrt{n}$, how can we bound the number of visited nodes?
•  » » Exactly, I have the same question. Even if the max distance is √n, for every vertex we have to visit √n times the edges connected to it, and I don't know how to bound the overall sum (how can you say it is of order n√n?). Also, I think the DFS happens to work because it visits vertices in an arbitrary order while shrinking ans, which in turn reduces the total number of iterations; I say this because I remember someone complaining that their code TLE'd at test 7 with BFS but got AC with DFS.
 » I can't understand the purpose of this particular line in the editorial for problem F: "This DFS will visit only the vertices v with d[v] < ans". If we stop our DFS after finding a node with d[v] >= ans, then we will not update the d[] values of the remaining vertices. This means the initial claim that "d[] holds the minimum distances of the vertices to the nearest black ones at all points in time" fails. For example, take a list-like tree with vertices numbered 1, 2, ... If we color vertex 1 first, then d[1] = 0, d[2] = 1, d[3] = 2, and so on. Now, if we color vertex 2 next, then ans = 1, d[1] = 0, d[2] = 0, d[3] = 1, but we stop our DFS and do not update the values of the remaining vertices. Won't this create a problem?
 » I have a question about G. Why is the result of the test case below "NO"? 1 10 9 3 3 3 9 10 6 7 8 1 2 2 3 3 4 4 8 8 9 8 10 3 5 5 6 6 7. I think vertex 3 can reach vertex 6, and vertex 9 or vertex 10 can reach vertex 1.
 » With regard to my comment above, I would like to request a thorough check of the given solution of problem 1790F. I am fairly confident that the solution isn't correct (i.e. it may not run in $O(n \log n)$ or $O(n \sqrt{n})$ for all trees). Thanks a lot!
 » Dude I am struggling to understand the solution of B. Am I an IDIOT??
 » Can someone explain the solution of E in more detail?
 » Why am I getting MLE for problem D on the first test case? CODE
 » Hi, could someone help me with my solution to 1790B? Test case 1 says that my solution doesn't pass because it doesn't add up to 15, but my solution outputs 6 3 3 3, which does add up to 15?
 »

#### DFS-ish solution for problem D. Matryoshkas

First, we build a frequency map, as any approach would need to take care of the counts of each toy. Each key represents a toy size, and the associated value represents the number of toys having that size.

If we define a directed graph such that:

• The available sizes of the toys are the nodes.
• There only exists either zero or one directed edge outgoing from any node.
• Any and every outgoing edge from a node always goes from a node representing a toy of size $S$ to the one representing a size of $S+1$.
• If we run a DFS, we only have one neighbour to try out per node: simply look up in the frequency map whether a toy of size $source+1$ is available, i.e. whether the frequency of $source+1$ is nonzero.

We sort the available sizes (the input array) and start the DFS with the smallest size as the source. Obviously the smallest toys are the ones that don't have any smaller neighbour to "try out", so it makes sense to start at them. Since this is a directed graph and there is only one neighbour to "try out" per node, we don't need the usual visited array of DFS implementations. Instead, every time we visit a node, we put one toy of that size into a set and update the frequency map by subtracting 1.

Ultimately, the number of connected components found by the DFS is the minimum number of sets that can be formed with the toys.

Sample code: 223730733

 » I don't understand why G expects NO for this test case: 1; 5 4; 1 5; 3; 1 2 3 4 5; 1 2; 1 4; 2 3; 3 5. There is a path from 3 to 1 where every vertex has a bonus, so we can reach 1 following the rules. Can someone explain? Also sorry for necroposting.