We hope you've enjoyed the problems! (Certain problem editorials below may appear a bit late due to Polygon lag)
first.
yes, this is a YouTube video
Lol
123gjweq2 Can you please explain your D solution logic _/_
Sure, so imagine, instead of finding the longest good subsequence, you try constructing a good sequence from the ground up. Let's say you put down a $$$7$$$. Then you put down a few more $$$7$$$'s and you get an array like $$$[7, 7, 7, 7, 7]$$$ or so.
Now let's say you want to put down a $$$6$$$. You cannot put any $$$6$$$ before any $$$7$$$, cuz that would violate the condition. So the only place to put the $$$6$$$ would be after the $$$7$$$'s. Now you put some amount of $$$6$$$'s down, and your array must look like $$$[7, 7, 7, 7, 7, 6, 6, 6, 6]$$$ or something. And if you try to put down some amount of $$$5$$$'s, the same thing happens to you: the $$$5$$$'s must be appended to the array.
Now say you have the array $$$[7, 7, 7, 7, 7, 6, 6, 6, 6, 5, 5]$$$, and you want to put down a $$$3$$$ or even some value less than $$$3$$$. It doesn't actually matter where you put the $$$3$$$ (or the element lower than $$$3$$$), cuz the $$$3$$$ does not interact with the $$$5$$$'s (or anything above $$$5$$$). So basically, you can splice together your array with, say, $$$[3, 3, 3, 2, 2, 1]$$$ in any way you want, and it won't cause an error.
This means that any good array will basically consist of many subsequences, each one being of the form $$$[x, x, \dots, x, x - 1, x - 1, \dots, x - 1, x - 2, x - 2, \dots, x - r, x - r]$$$.
Also, if the highest value in one of these subsequences is $$$x$$$ and the lowest is $$$x - r$$$, then any value in the range $$$[x - r, x]$$$ must be found in that nonincreasing subsequence. Furthermore, we cannot have the values $$$x + 1$$$ or $$$x - r - 1$$$ (the values just beyond the bounds of the subsequence) in the array, because, if these values are not included in the subsequence, they will also cause the condition to fail (if we can include these values in the subsequence, we do).
Now we can define two $$$dp$$$ arrays to solve this problem. Let $$$dp_1[val] = $$$ the maximum length of a good array we can have if we only consider the elements with values $$$\le val$$$. The answer is just $$$n - dp_1[max(a)]$$$.
Now let $$$dp_2[val][i]$$$ be the max length of a good array given that we have considered all values $$$ \lt val$$$ and we have considered all values $$$= val$$$ such that they are the $$$\ge i^{th}$$$ occurrence of $$$val$$$ in the array, and we include the $$$i^{th}$$$ occurrence of $$$val$$$ in our good array. So like in the array $$$[1, 2, 1, 3]$$$, $$$dp_2[1][2]$$$ would only consider the second $$$1$$$, and $$$dp_2[1][1]$$$ would consider both the first and second $$$1$$$'s (and it must include the first $$$1$$$).
To calculate $$$dp_2[val][i]$$$, you can consider $$$3$$$ cases, but perhaps these can be simplified to just $$$2$$$ cases.
Let's consider the case where our subsequence including $$$val$$$ only includes values in the range $$$[val, val]$$$ (and no other distinct values). Then we can say that $$$dp_2[val][i] = cnt[val] - i + 1 + dp_1[val - 2]$$$, where $$$cnt[val]$$$ is the number of occurrences of $$$val$$$ in nums.
Now there's the case where our subsequence includes $$$val - 1$$$ directly after the $$$i^{th}$$$ occurrence of $$$val$$$. In this case, $$$dp_2[val][i] = max(dp_2[val - 1][j]) + 1$$$, for all $$$j$$$ such that the $$$j^{th}$$$ occurrence of $$$val - 1$$$ comes after the $$$i^{th}$$$ occurrence of $$$val$$$. Basically, we try to prepend $$$val$$$ to the best subsequence starting with $$$val - 1$$$ that comes after our current occurrence of $$$val$$$.
There's one final case: our $$$dp_2[val][i]$$$ contains the $$$(i + 1)^{th}$$$ occurrence of $$$val$$$ but it contains at least one $$$val - 1$$$. To cover this case, we can just say that $$$dp_2[val][i] = dp_2[val][i + 1] + 1$$$ — we just prepend $$$val$$$ to the best subsequence including the $$$(i + 1)^{th}$$$ occurrence of $$$val$$$.
$$$dp_2[val][i]$$$ is just the max over all three cases, and $$$dp_1[val] = max(dp_1[val - 1], dp_2[val][i])$$$ for all $$$1 \le i \le cnt[val]$$$. You could probably simplify these transitions a bit, but I hope that this helped.
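In case it helps, here is a minimal sketch of the transitions above. It is my own unoptimized implementation (the case-2 loop is naive, so it can be quadratic in the worst case, unlike the optimized editorial solution), and the function name `minRemovals` is just for illustration:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of erasures = n minus the longest good subsequence.
// Assumes all values are >= 1. Occurrence indices are 0-based here.
int minRemovals(vector<int> a) {
    int n = a.size(), mx = *max_element(a.begin(), a.end());
    vector<vector<int>> pos(mx + 2);
    for (int i = 0; i < n; i++) pos[a[i]].push_back(i);
    vector<int> dp1(mx + 2, 0);       // dp1[v]: best length using values <= v
    vector<vector<int>> dp2(mx + 2);  // dp2[v][i]: as defined above
    for (int v = 1; v <= mx; v++) {
        int c = pos[v].size();
        dp2[v].assign(c, 0);
        for (int i = c - 1; i >= 0; i--) {
            // case 1: keep occurrences i..c-1 of v, erase all of v-1
            int best = (c - i) + (v >= 2 ? dp1[v - 2] : 0);
            // case 3: also keep the (i+1)-th occurrence of v
            if (i + 1 < c) best = max(best, dp2[v][i + 1] + 1);
            // case 2: chain into an occurrence of v-1 that comes later
            for (int j = 0; j < (int)pos[v - 1].size(); j++)
                if (pos[v - 1][j] > pos[v][i])
                    best = max(best, dp2[v - 1][j] + 1);
            dp2[v][i] = best;
        }
        dp1[v] = dp1[v - 1];
        for (int i = 0; i < c; i++) dp1[v] = max(dp1[v], dp2[v][i]);
    }
    return n - dp1[mx];
}
```

For example, `minRemovals({7, 7, 6, 6, 5})` is `0`, since that array is already good, while `minRemovals({1, 1, 2, 3, 4, 4})` is `2`.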
Thanks 123gjweq2 for the detailed explanation.
I'll come back to this problem after solving 1400-1600 level dp-tagged problems; maybe I can understand it then _/_
Time to go back to pupil again XD
Man just saw your submissions
For A you got 2 WA, I can understand that, but for B you made 5 WA submissions.
Man, why? Those 5 WAs are doomed, damn
I knew that I wasn't going anywhere with B, so I just tried solutions hoping for them to pass, and those WAs would only have mattered if I had gotten B correct. Which isn't the case :)
You need to see mine then: 8 WA on A, and I got AC after an hour and a half, but I was still able to get under 2.5k rank after solving B and C too. C->A->B
Man, really, B was much more difficult than C. I literally took 1 hr to think about B but only 5 mins to get the idea for C and solve it.
I think you should do this next time:
swap(B, C); It would be fun
For problem C I used a deque to solve it! Any better way?
I used 90 min to solve B and 10 min to solve C.
I was still not able to solve B. But my intuition about it was correct! I moved on from B
B's black cells' pattern wasn't that hard to find after some paperwork, but its implementation was quite difficult :-):-)
Just sort and use 2 pointers. Add the smaller ones to make the sum as close as possible to x while staying < x, then add the largest one to get max points, and update your sum = (sum + a[j]) % x
I also thought about using 2 pointers but I didn't think much about the base condition to break the loop. But now when I think about it, it was very simple as well: just loop while left < right.

Lesson to learn: think a bit more before disregarding a concept!
Pupil I am here
got saved.
I couldn't figure out how to implement B
This is the allowed black-cell pattern if we consider the given condition that no 3 consecutive blacks are allowed horizontally or vertically:
1. Either right-down-right-down ...
2. Or left-down-left-down ...
3. Or else just a single 2x2 black-cell region.
So we just need to take any one of the existing black cells from the given grid and trace pattern 1/2 above, and if anything outside this pattern is black, then the answer is NO.
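As a rough illustration of the tracing step (a sketch only, under my assumptions: it checks whether every black cell lies on a single right-down or left-down staircase started from a chosen cell; the 2x2-region case and the choice of starting cell are left out, and `allBlacksOnStaircase` is a made-up name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Walk the staircase from (r, c): step sideways by dc (+1 = right-down,
// -1 = left-down), then step down, until the border, collecting its cells.
// Then verify that every '#' in the grid lies on the collected path.
bool allBlacksOnStaircase(const vector<string> &g, int r, int c, int dc) {
    int n = g.size();
    set<pair<int,int>> path;
    int i = r, j = c;
    path.insert({i, j});
    while (true) {
        if (j + dc < 0 || j + dc >= n) break;  // cannot step sideways
        j += dc; path.insert({i, j});
        if (i + 1 >= n) break;                 // cannot step down
        i++; path.insert({i, j});
    }
    for (int x = 0; x < n; x++)
        for (int y = 0; y < n; y++)
            if (g[x][y] == '#' && !path.count({x, y})) return false;
    return true;
}
```

For example, a 4x4 grid whose black cells form a right-down staircase from (0, 0) passes this check, while adding one stray black cell makes it fail.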
yeah actually we did that shit of implementing zigzags in all 8 ways. got AC too.
but it has a very simple solution (just saw it) and apparently idk how it works.
I thought I went too far with the implementation; yours is also pretty cracked hehe
Can you tell me what is wrong in my code? [submission:346831649]
Here's my implementation of problem B:
Thanks, this is a cool solution. I never realized that i + j and i - j are constants
*Not rigorously constant; they just take a small set (at most 2) of values.
cute
How did you come up with this kind of thinking and inductive method? It’s truly amazing. Could you tell me how you did it? Where did you get the inspiration from? What was your thought process when creating this code? Could you share it with me? Thank you so much!!!
Think more to code less.
nice contest it's very beautiful problems i am happy in here
First time I have solved 3 problems in a div2!
I really enjoyed B. Very fun question, brought me so much joy
Actually, it is easy if you know about diagonals.
Okay
For D, a simple hint: only consecutive value pairs (i, i+1) can violate the constraint. Store the positions of each value; then for each pair, check whether value i appears before i+1. When found, greedily remove the rightmost occurrence of i+1 to maintain maximum flexibility. This greedy approach works in O(n) time.
This is what I tried. It means I missed something in my implementation :D I assumed greedy was just wrong after 3 WA
Man same I also thought about it. I guess I got something wrong in implementation!
Can you explain more the approach and implementation?
How does this work with {1,2,2,2,2,2}?
Edit: Oh, I think you mean the same thing that peltorator mentions a few posts further down.
Can you add spoilers for each problem
orz Lever you became Expert
Well, that's one fast editorial, thanks for the effort!
Boring B but happy to be able to solve it
for D there is a very simple greedy solution 346684700. consider the graph where numbers are connected if they form a violating pair. we need to delete a minimum vertex cover in this graph. as the graph is bipartite, the minimum vertex cover size is equal to the maximum matching size by König's theorem. after that, it is pretty easy to see that the maximum matching in this graph can be found greedily. consider the smallest value $$$x$$$ in the array. if there are multiple, consider the rightmost one. say, its position is $$$i$$$. this vertex is only connected to vertices with value $$$x+1$$$ that have indices $$$j \gt i$$$. it is now easy to see that if we consider the maximum matching, we can adjust it slightly to ensure that $$$i$$$ is connected to the maximum $$$j$$$ with value $$$x+1$$$. therefore, we just repeat this greedy procedure of taking the rightmost occurrence $$$i$$$ of value $$$x$$$ and matching it to the rightmost occurrence of $$$x+1$$$ if possible.
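here is a sketch of how i read this procedure (my own code, not the linked submission; `minErasures` is a made-up name). the matching size equals the minimum number of erasures by the vertex-cover argument above:

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each value x in increasing order, repeatedly try to match the
// rightmost surviving occurrence of x with the rightmost surviving
// occurrence of x+1; if no x+1 lies to the right, that x is skipped.
int minErasures(vector<int> a) {
    map<int, vector<int>> pos; // value -> positions, ascending
    for (int i = 0; i < (int)a.size(); i++) pos[a[i]].push_back(i);
    int matching = 0;
    for (auto &[x, P] : pos) {
        auto it = pos.find(x + 1);
        if (it == pos.end()) continue;
        auto &Q = it->second; // surviving positions of x+1
        while (!P.empty() && !Q.empty()) {
            if (Q.back() > P.back()) {      // rightmost x+1 lies after rightmost x
                P.pop_back(); Q.pop_back(); // match the pair
                matching++;
            } else {
                P.pop_back();               // no x+1 after this x: skip it
            }
        }
    }
    return matching; // maximum matching = minimum vertex cover size
}
```

for example, `minErasures({1, 1, 2, 3, 4, 4})` returns 2 (erase the 2 and the 3).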
Can you explain your procedure for forming the graph before finding the maximum matching size? Naively it would be n^2.
i considered the graph only for the proof. the actual algorithm is described at the end of my message.
I think the problem was created with this theorem in mind and then there was an adhoc solution, similar to some slope trick problems
I had a similar base idea of converting it to a graph. I created a graph where $$$a_i$$$ would be connected to the "first" $$$a_j$$$ where $$$j \gt i$$$ && $$$a_j = a_i + 1$$$, and it would also be connected to the "last" $$$a_j$$$ where $$$j \lt i$$$ && $$$a_j = a_i - 1$$$
then every component would have consecutive numbers and would be solvable independent of others. now for each component, answer would be the minimum between count of odd numbers and count of even numbers. final answer being the sum of answers across all components.
I got WA on 2, I can't seem to figure out why this fails as most of the cases i built it was passing, could you tell me if my idea has any faults ?
Consider an array [1, 1, 2, 3, 4, 4]. This forms just 1 connected component in your graph, but the answer is just 2 (remove 2 and 3), despite there being 3 evens and 3 odds.
I saw your code, but I have a question: is there any possibility that if you start choosing i randomly and doing the operation on it with the i+1 ... won't it still give you the optimal answer (maximum matching)?
Can you explain why/how to prove that taking the first elements in that specific order (from smallest to largest, then from right to left) works?
ok. sure. this is a classical optimality argument for optimization problems. say, the smallest value is $$$x$$$, and the rightmost position where $$$x$$$ occurs is $$$i$$$. we match it to value $$$x+1$$$ (as $$$x$$$ is the smallest value, no value $$$x-1$$$ exists) at the largest position $$$j$$$. if $$$j \lt i$$$, there is nothing we can do with $$$a_i$$$ so we just skip it. otherwise we match $$$i$$$ to $$$j$$$.

if there exists an optimal solution in which $$$i$$$ is matched to $$$j$$$, then taking this pair is valid (we can still achieve the optimal objective). otherwise, for contradiction assume that no optimal solution contains the pair $$$(i, j)$$$. consider an arbitrary optimal solution. what are $$$i$$$ and $$$j$$$ matched to in this solution? if $$$i$$$ is not matched to anything, we can rematch $$$j$$$ to $$$i$$$, and we will still have an optimal solution. this time it contains the pair $$$(i, j)$$$, thus leading to a contradiction with our assumption that no optimal solution contains this pair. analogously if $$$j$$$ is not matched to anything. otherwise, say $$$i$$$ is matched to $$$k$$$ and $$$j$$$ is matched to $$$\ell$$$. i claim that rematching to $$$(i, j)$$$ and $$$(k, \ell)$$$ always works. i will leave this for you to verify. this requires considering the two cases of what $$$a_{\ell}$$$ is equal to.
I came to a similar conclusion but broke it down in two parts. I'll post it here in case others find it useful to have a more detailed proof.
It follows the strategy of first solving a restricted version of the problem, and then generalizing it, which is a very useful strategy to learn especially for lower ranked coders, since it works for many medium difficulty problems.
Part 1
Consider the simpler subproblem where the array $$$a$$$ has only two different values (w.l.o.g. $$$1 ≤ a_i ≤ 2$$$). There is a simple greedy solution to find a maximum matching in this graph:
Let $$$i$$$ be the highest index where $$$a_i = 1$$$, then either $$$i = n$$$ so we can erase $$$a_i$$$ because it cannot match anything, or $$$i \lt n$$$ which implies $$$a_j = 2$$$ for all $$$i \lt j ≤ n$$$ and we choose some j, add (i, j) to the matching, and erase $$$a_i$$$ and $$$a_j$$$. Repeat until the array contains no more 1s.
The proof that this greedy approach works is by contradiction: consider a maximum matching that does not contain (i, j). Then j must be matched with some vertex h ($$$h \lt j$$$, $$$a_h = 1$$$), since if j were unmatched we could add (i, j) for free, contradicting the assumption that the matching was maximum. But we can also replace (h, j) with (i, j) without changing the cardinality of the matching, showing that there is some maximum matching including (i, j).
This explains the inner loop, though note that the argument above shows that it's not actually necessary to pick the last 2 every time. Any 2 following the last 1 would work, but picking the last 2 (i.e. choosing $$$j = n$$$ every time) is easiest to implement efficiently.
Part 2
Now consider the general case where $$$1 ≤ a_i ≤ n$$$. The crucial insight is this: if we consider the subgraph restricted to edges between values 1 and 2, then for any maximum matching in the subgraph, there is a maximum matching in the full graph that contains it as a subset!
Suppose edge (i, j) is in the maximum matching of the subgraph but not in the maximum matching of the full graph. That means $$$a_i = 1$$$ and $$$a_j = 2$$$, and j must be matched with some vertex k in the full graph (otherwise, we could just add (i, j) for free, again contradicting the assumption that the matching was maximum), and we can replace (j, k) with (i, j) without changing the cardinality of the matching.
This directly leads to the greedy algorithm (your outer loop): find a maximum matching involving only edges between 1 and 2 (using the solution from part 1), then discard all 1s (which cannot be matched anymore) and all the matched 2s. Then the lowest remaining value is 2 and we can do the same with edges between 2 and 3, and so on, $$$n - 1$$$ times in total.
but can't we make the same argument for the approach of choosing the smallest value x, the leftmost index i with value x, and matching it to the leftmost j with value x+1 having j > i, and so on? the optimality argument can be applied the same way, can't it? what's the reason behind choosing the rightmost i and rightmost j, and not the leftmost i and leftmost j?
i don’t see why this works. also, it is less obvious how to find minimum j>i without ignoring the smaller j’s.
Wow! That’s a beautiful approach.
for B, how is the answer YES for this ?
[# . #]
[# . #]
[# . #]
For this grid answer is NO.
Hey, thanks. Can you take a look at my second submission? I just do not understand what is wrong.
I'm not proficient in Java, so don't make fun of me if I'm wrong. I'm guessing you're taking the smallest (x, y) and then trying to go diagonally from that point. But for this test case:
none of the 8 half diagonals you have implemented contain all the black squares, I'm guessing. Combining with the other half should give you AC (don't forget to not double-count (x, y)).
i think that is it, thanks for going through java code :)
tourist always come back
there is basically no editorial for the first three problems. this is a big issue in general on codeforces. easy problems seem obvious to the contest authors, and it is hard for them to empathize with lower-rated participants. by saying "it is easy to see" you are robbing the beginner contestants of learning how to prove their solutions. people see this phrase being used all the time and just assume that problemsolving is about guessing the solution rather than proving it. the worst example is problem B. it is not easy to see that these are all possible shapes! it requires a proof. and reading such a proof would be extremely beneficial for some.
The thing I hate the most, and it comes very frequently in Div.3, and easy Div.2, is the assumption that the problem is easy because it's a Div.3C or Div.2B for example, not the other way around!
Anyways, I say to myself that it's a skill issue and I need to get better.
you literally wrote "it is easy to see" in one of your comments under this very blog lmao
(it was indeed not easy to see for a newb like me, pls clarify)
https://mirror.codeforces.com/blog/entry/147941#comment-1322316
i think we should distinguish between official editorials for contests and random users putting their thoughts in comments. i had neither the time nor the intention to give a comprehensive problem editorial there. the solution i presented is conceptually harder than the editorial solution, so my comment was directed at higher-rated participants who would be interested and able to fill in the gaps themselves. furthermore, i gave a general direction for proving this fact.
tourist on top again.
In problem B, in my first submission (WA on test 2, 150th input) I checked the conditions for the first black cell in column-major order; in my second submission (AC) I checked the conditions for every cell (white or black) and it got accepted. Can someone help me find which test case my first submission fails?
on both test cases, your code should output 'YES' but it fails.
Got it, thanks.
Though problem D is tagged with DP, data structures, and binary search, the solution by ksun48 was genius! He solved it using a clever greedy approach https://mirror.codeforces.com/contest/2161/submission/346675405. So simple and impressive!
B is much harder than C....
For D solution, I think it is worth mentioning that $$$dp[i]$$$ means max we can keep until $$$i$$$ while keeping the element $$$b[i]$$$ itself.
thanks for the superfast editorial, but still struggling to understand the DP for D!!! :(
If you understand the logic of the DP and don't know how to optimize, then you can see this code to understand the optimizations needed to find dp[i].
The following optimizations are used:

- A global maximum among such $$$j$$$ that $$$b_{j,1} \le b_{i,1} - 2$$$
- Suffix maxima for such $$$j$$$ that $$$b_{j,1} = b_{i,1} - 1$$$
- The maximum among such $$$j$$$ that $$$b_{j,1} = b_{i,1}$$$
Thanks for the detailed explanation. Finally understood the approach for D. Really good question, learnt a lot from it.
thank you so much for your reply.
I was able to upsolve this question because of your comment and the very helpful comments in your code which explain some ideas around sorting.
things which I was confused with:

- `b[j].first == b[i].first - 1`: if we take `j` then we also take previous values, so if we sorted in non-decreasing order, then we might pick up some indices which are smaller than `b[i].second`, as `dp[j]` would have used them; but in this reverse sort, `dp[j]` only consumes greater indices, so all of them are valid.
- `b[j].second = b[i].second - 1`: in a sense we take indices which are greater than `b[j].second`, so it is a suffix max; but actually, since `b` is sorted in non-increasing order, in a range of `b` we are storing a prefix max, which I figured out while coding.

thanks again.
...
In C, it would’ve been funnier if array a weren’t bounded by X. The solution is simply to take modulo X and do the same thing.
I don't think this was an intended solution, but I think it's a good one nonetheless, and it passed in 200 ms (maybe faster, but I use extra stuff).
https://mirror.codeforces.com/contest/2161/submission/346714483
For problem $$$D$$$. Claim: for each value $$$x$$$, in the optimal solution we only remove a prefix and a suffix from $$$pos[x]$$$ (where $$$pos[x]$$$ stores all indices having value $$$x$$$).
Proof: Removing an element $$$x$$$ from the array is equivalent to deleting its index from $$$pos[x]$$$. Consider the smallest value $$$x$$$ in the array. Suppose it appears $$$k$$$ times and we remove $$$r$$$ of them in the optimal solution. Note that we should remove the smallest positions: if we removed any non-prefix elements, a smaller index of $$$x$$$ could still occur before some occurrence of $$$x + 1$$$, hence greedily we remove only a prefix of them, i.e. the smallest $$$r$$$ positions from $$$pos[x]$$$.
Now for $$$x + 1$$$, to keep all remaining $$$x$$$ after it, we must remove some largest positions (a suffix) from $$$pos[x + 1]$$$. From here all the violations between $$$x$$$ and $$$x + 1$$$ are taken care of. Now we need to take care of the values from $$$x + 1$$$ upward, and again by similar reasoning we would remove a prefix of $$$pos[x + 1]$$$ and continue in the same manner. Thus, each $$$pos[x]$$$ loses elements only from its prefix and suffix.
one of the toughest contests for me till now...
https://mirror.codeforces.com/contest/2161/submission/346696450 why is this submission wrong for B? Can someone please point out the flaw in this.
Please someone check my code for problem B. It gives a wrong answer on test 3, but I can't find my mistake. Is it a logic error or a syntax error?

~~~~~
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define endl "\n"
#define fast_io ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL);
#define pb push_back
using vl = vector<ll>;
#define vvl vector<vl>
#define asort(v) sort(v.begin(), v.end())
#define dsort(v) sort(v.rbegin(), v.rend())
#define pll pair<ll,ll>
#define vpl vector<pll>
#define rev(v) reverse(v.begin(), v.end())
#define upb(vec, val) ((upper_bound((vec).begin(), (vec).end(), (val))) - (vec).begin())
#define lwb(vec, val) ((lower_bound((vec).begin(), (vec).end(), (val))) - (vec).begin())
#define sl set<ll>
#define spl set<pll>
#define in insert
#define c1 __builtin_popcountll

vpl dir = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};
ll n;

void dfs(vector<string> &g, vvl &vis, ll i, ll j) {
    vis[i][j] = 1;
    for (ll k = 0; k < 4; k++) {
        ll x = i + dir[k].first, y = j + dir[k].second;
        if (x >= 0 && x < n && y >= 0 && y < n && !vis[x][y] && g[x][y] == '#') dfs(g, vis, x, y);
    }
}

// 1 iff every black cell of g is covered by the pattern tmp
ll check(vector<string> &g, vector<vector<char>> &tmp, ll n) {
    for (ll i = 0; i < n; i++)
        for (ll j = 0; j < n; j++)
            if (g[i][j] == '#' && tmp[i][j] == '.') return 0;
    return 1; // this return was missing, so the result was undefined
}

int main() {
    fast_io;
    ll t; cin >> t;
    while (t--) {
        cin >> n;
        vector<string> g(n);
        for (auto &d : g) cin >> d;
        ll f = 1;
        for (ll i = 0; i < n && f; i++) { // no 3 consecutive blacks in a row
            ll cnt = 0;
            for (ll j = 0; j < n; j++) { if (g[i][j] == '.') cnt = 0; else if (++cnt >= 3) f = 0; }
        }
        for (ll j = 0; j < n && f; j++) { // no 3 consecutive blacks in a column
            ll cnt = 0;
            for (ll i = 0; i < n; i++) { if (g[i][j] == '.') cnt = 0; else if (++cnt >= 3) f = 0; }
        }
        if (!f) { cout << "NO" << endl; continue; }
        ll com = 0;
        vector<vl> vis(n, vl(n, 0));
        for (ll i = 0; i < n; i++)
            for (ll j = 0; j < n; j++)
                if (g[i][j] == '#' && !vis[i][j]) { com++; dfs(g, vis, i, j); }
        if (com < 2) { cout << "YES" << endl; continue; }
        ll ans = 0;
        vector<vector<char>> tmp(n, vector<char>(n, '.'));
        for (ll i = 0; i < n; i++) { tmp[i][i] = '#'; if (i + 1 < n) tmp[i][i + 1] = '#'; }
        if (check(g, tmp, n)) ans = 1;
        for (auto &d : tmp) rev(d); // mirror each row
        if (check(g, tmp, n)) ans = 1;
        rev(tmp);                   // reverse the order of the rows
        if (check(g, tmp, n)) ans = 1;
        for (auto &d : tmp) rev(d);
        if (check(g, tmp, n)) ans = 1;
        cout << (ans ? "YES" : "NO") << endl;
    }
    return 0;
}
~~~~~
Please, someone check my code for problem B. It gives a wrong answer on test 3, but I can't find my mistake. Is it a logic error or a syntax error?
https://mirror.codeforces.com/contest/2161/submission/346779118
Bro, your code gives the wrong answer for this test case:
1
20
....................
....................
....................
....................
....................
.#..................
..#.................
....................
....................
....................
....................
....................
....................
....................
....................
....................
....................
....................
....................
....................
The correct answer is "YES", but your code outputs "NO". I think you shouldn't check only the two main diagonals; you need to check all adjacent diagonals according to your logic.
Here's my code (it's a bit messy, but the logic is the same — only the checking part is different): code
thanks bro
i thought that B can be solved with dfs :(
DFS can solve B :)
Check my solution. Tell me if you don't get it
oh. thank you very much
No problem
For problem D, there is another solution using DP. dp[i][j] is the answer for values 1..i in the array, given that the first occurrence of i that we do not delete is i's j-th position. We can keep an occurrence of i when there is no i - 1 behind it. So for every a[i] we will keep the first a[i] - 1 after it. And if the first a[i] - 1 is at position j or later, we can use a[i]. This is my implementation: 346766931
For problem C:
This is wrong, consider $$$a = [4, 1],\ X=2$$$. I guess this mistake could've been caught if the solution description was more formal itself (as mentioned above by peltorator). Would be great if someone can fix it.
Can you also take a look at $$$/cdot$$$ in the first formula of problem F, please?
Each element satisfies $$$a[i] \le X$$$.
oops... Thank you, I missed it and juggled a bit in my implementation.
how much time it gonna take to reach up to that thinking level ;)
Endagorion For problem $$$E$$$, the period length is supposed to be $$$K-1$$$. It is mentioned as $$$K$$$ in the editorial. Subsequently, the conditions should be changed to $$$s_j = s_{j-k+1}$$$ for all $$$j \gt l + k - 1$$$.
I think Problem B was much harder than regular problem B's
Struggling to see where the inspiration for D's solution might come from. Perhaps I haven't solved enough hard DP problems to build the intuition yet?
I am also unable to understand the solution for D. There are no hints or intuition or even a proof.
Can you please explain problem D solution. Tried to understand others solution as well, unable to understand the logic of dp.
Intellegent can you please explain Problem D dp approach.
The editorial just focuses on the implementation without explaining why or what we are doing. I'll explain what the DP states are and why they're sufficient in this comment; you can probably follow the editorial or find your own implementation from there.

Forget about DP states temporarily: our problem wants to maximize the number of elements that we can retain without erasing. Let's analyze a case where only the values x - 1 and x exist (the generalization will follow). You'll note that if you keep the j-th index with value x in your final array, then it is also optimal to keep all indices less than j with this value (since, to include the j-th index, we need to erase all indices with value x - 1 and index less than j, but that means we erase all the elements which could bother the lower-indexed x's, so we add those to our final array to maximize elements).

This yields a "brute force" DP solution, where our states are the current value x (n possibilities) and the position of the last index with this value we are including in our answer (naively also n possibilities), and we can make transitions considering the last index with value x - 1 that was taken.

This can be reduced to a linear solution by noting that you don't need to consider all possibilities for the last index taken with a certain value, since the total number of such states across all values is obviously n, so you can speed this up with a two-pointer + suffix-max solution like the one that the editorial describes (you can also consult my submission if that helps).
peltorator has a very neat greedy solution in a comment above that is very insightful, highly recommend checking it out
In E, there is this line "s[j]=s[j-k] for all j>l+k". However, I believe it should be s[j]=s[j-k+1] for all j>=l+k, because this ensures that d[j+1,j+k-1]=0 for all j>=l so the substring [j,j+k-1] is always good. I am not sure if my belief is wrong.
Does anyone know what test case 76 in 'test 2' is? I'm failing only because of that specific test case, for problem B.
For problem F, after reading it I feel a little awkward. Is the editorial discussing only the weighted tree version, which is strictly harder than the unweighted tree version?
An alternative solution for Problem F.
First, you need to have solved the linear-time version of CODE FESTIVAL 2017 Final J — Tree MST (note that the official editorial is not the linear version). I will first explain that version here (Chinese solution).
This problem is similar to one from The 3rd Universal Cup, Stage 33: Yet Another MST Problem:
The intended solution is to perform a multi-source BFS with all key vertices inserted into the queue. Each vertex records which key vertex it is closest to, and the corresponding distance. For every edge $$$(u,v,w)$$$, if their nearest key vertices are different (say $$$s$$$ and $$$t$$$), we add an edge $$$(s,t, \text{dist}(u,s) + \text{dist}(v,t) + w)$$$ into a candidate edge set $$$E$$$. Finally, sort $$$E$$$ and run Kruskal’s algorithm. The complexity is $$$O(n(\log n + \alpha(n)))$$$.
The correctness can be argued informally: for any path between two different key vertices $$$p$$$ and $$$q$$$, there must exist an edge whose endpoints belong to different nearest key vertices. So it suffices to consider all such edges.
If you understand that problem, then:
For each vertex $$$i$$$, create a virtual vertex $$$i+n$$$ and connect them with an edge of weight $$$x_i$$$. Use the virtual vertices as the key vertices and apply the above algorithm. You just need to replace the multi-source BFS with a multi-source Dijkstra.
Reference implementation still runs in $$$O(n(\log n + \alpha(n)))$$$.
However, since the graph is actually a tree, we can compute shortest distances with two DFS passes—one upward, one downward—achieving the same result.
Furthermore, note that there are exactly $$$n-1$$$ edges satisfying “the two endpoints have different nearest key vertices.”
This is because if we treat the “nearest key vertex” as a color, then each vertex has one of $$$n$$$ colors, each color appears at least once, and the set of vertices of the same color forms a connected component.
It can be proven that there are exactly $$$n-1$$$ edges connecting vertices of different colors (consider contracting each color component).
Since there are only $$$n-1$$$ such edges, all of them appear in the MST. Thus, we can directly sum their contributions to the answer without running Kruskal. Note that the implementation computing “nearest key vertex” must ensure that vertices of the same color form a connected component.
In the reference implementation, key vertices are not explicitly constructed, but the idea is the same—when accumulating the final answer, add the contribution for cases where the colors of $$$i$$$ and $$$i+n$$$ differ.
The time complexity is $$$O(n)$$$.
Now, inspired by this idea, consider this problem.
We enumerate all pairs of vertices $$$(u,v)$$$ in the tree. Since all edge weights are $$$1$$$, any edge $$$x \leftrightarrow y$$$ satisfying “the nearest key vertices of its two endpoints are $$$u$$$ and $$$v$$$” must lie around the center of the path between $$$u$$$ and $$$v$$$:
Ignoring for now the case where the path length is even, we see that for this to happen, every vertex closer to $$$x$$$ than $$$u$$$ cannot be chosen as a key vertex. Similarly for $$$y$$$ and $$$v$$$. That’s the rough idea.
However, there is an issue: for a given vertex, there may be multiple vertices at the same distance from it. How do we handle that?
We can treat each vertex as a pair $$$( \text{dist}(u,x), u )$$$. If two vertices have the same distance, we sort by the second element as a tie-breaker. Thus, we can always determine a total order, and the “nearest key vertex” is the one with the smallest such pair.
We can then process all pairs in the subtree rooted at $$$x$$$ (with parent $$$y$$$), obtaining all pairs $$$(\text{dist}(u,x), u)$$$, sorted via radix sort. Similarly for $$$y$$$ with respect to $$$v$$$.
Now, for each case where $$$\text{dist}(x,u) = \text{dist}(y,v)$$$, let $$$c_0$$$ be the number of pairs smaller than $$$u$$$, and $$$c_1$$$ be the number of pairs smaller than $$$v$$$. These vertices are fixed as non-selectable. All others are free to choose, so the contribution is $$$2^{n-2-c_0-c_1} \times \text{dist}(u,v)$$$.
When $$$\text{dist}(u,v)$$$ is even, since we always choose the “nearest and smallest-index” key vertex, we simply shift the midpoint $$$x \leftrightarrow y$$$ slightly toward the larger of $$$u,v$$$. (When the midpoint is equidistant from both sides, the smaller of $$$u,v$$$ is chosen as the key vertex, so the edge $$$x \leftrightarrow y$$$ effectively moves one step toward the larger side.)
347219078
Can someone explain their code for problem B? Also, it would be great if you could explain how you thought of the solution; after reading the editorial I wrote a code which passed test case 1, but after that there was no progress.
Would someone explain how to compute the binomial coefficient sums in E? I managed to solve it, but it was a bit tedious.
EDIT: Nvm, I figured it out.
qp