+2
Oh right. So it's not too bad. 
+2
I ended up with WA7 so I may be wrong, but I believe most of it is correct (and I probably have a bug):
Assume now that $$$s$$$ has no asterisks, and $$$t$$$ has at least one. Remove the equal prefixes and suffixes, so that $$$t$$$ begins and ends with an asterisk. If all of $$$t$$$ is asterisks, the answer is "Yes". Now you need to take every substring of $$$t$$$ between a pair of asterisks, and greedily match it to the first place in $$$s$$$ where it matches. It's a classical algorithm to determine whether a string can match a substring of another, when both may contain '*' symbols, using FFT in $$$O(n \log n)$$$ where $$$n$$$ is the larger of the two lengths. Now, to find the first position at which a substring of $$$t$$$, of length $$$m$$$, matches in $$$s$$$, starting from a given index $$$i$$$, you can do an exponential search: check whether the substring appears in the substring of $$$s$$$ starting at $$$i$$$ of length $$$m$$$. If it isn't found, increase the length to $$$2m$$$, and so on until it is found. The total complexity should be $$$O(n \log^2 n)$$$. 
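As a sanity check, here is a minimal sketch of the greedy step, with Python's naive `str.find` standing in for the FFT matcher and exponential search (so it runs in quadratic time rather than $$$O(n \log^2 n)$$$), and with '*' as the only special symbol; the function name is mine:

```python
def wildcard_match(s, t):
    """Does pattern t (where '*' matches any sequence, possibly empty)
    match the whole string s?  Greedy sketch: each chunk of t between
    asterisks is matched to its earliest occurrence in s; the real
    solution replaces str.find with an FFT matcher + exponential search."""
    chunks = t.split('*')
    if len(chunks) == 1:               # t has no asterisk at all
        return s == t
    head, *middle, tail = chunks
    # the fixed prefix/suffix of t must line up with s exactly
    if not s.startswith(head) or not s.endswith(tail):
        return False
    if len(s) < len(head) + len(tail):
        return False
    i = len(head)
    end = len(s) - len(tail)
    for chunk in middle:
        j = s.find(chunk, i, end)      # earliest match -- the greedy choice
        if j == -1:
            return False
        i = j + len(chunk)
    return True
```

The greedy choice is safe because taking the earliest occurrence of each chunk leaves the largest possible suffix of $$$s$$$ for the remaining chunks.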
+16
Suppose $$$v[0] = \lbrace 0 \rbrace$$$, so that it is of length $$$2^n$$$. Solve recursively, on a given $$$v$$$ of length $$$2^m$$$:
Then solve recursively for both $$$u$$$'s (both of length $$$2^{m-1}$$$), while carrying your decision. While this may seem naive, the time complexity is $$$O(n \cdot 2^n)$$$. 
On
abcsumits →
Reading problem and not submitting any solution should be counted as participation., 4 weeks ago
0
Create a temporary account to view the problems and always register with both the temporary and the real. View the problemset from the temporary and if you decide to participate, open the contest from the real one. 
On
abcsumits →
Reading problem and not submitting any solution should be counted as participation., 4 weeks ago
+49
It doesn't prevent someone from viewing the problems from another user (or even without being logged in). If you try to solve this by only allowing someone to view the problems if they registered, then someone can just register with two accounts and let one of them take the hit (always). I thought of multiple ideas, and the best of them seems to be allowing registration only 1 hour before the round begins, and whoever is registered is counted (and people can unregister). This way:

On
ko_osaga →
Anarchy in the APSP: Algorithm and Hardness for Incorrect Implementation of Floyd-Warshall, 6 weeks ago
0
I'm a bit confused about why Theorems 1.2 and 1.3 imply that solving the incorrect variant in subcubic time leads to solving APSP in subcubic time. I see why it is correct the other way around. 
+8
What do you mean by consecutive intervals? Their lengths can only be even. If you mean consecutive when we only look at even values, then this fails if the string is "abcd" repeated. Also, if both $$$l, l+2$$$ are good lengths for a fixed $$$r$$$, then so is every even length from $$$2$$$ to $$$l+2$$$. Which means the consecutive intervals are only $$$2[1, k]$$$ and other singletons, so idk why it would be logarithmic. Did I get the question correctly? 
+34
Order the nodes by some topological order; suppose WLOG that the order is $$$1 \rightarrow 2 \rightarrow \ldots \rightarrow n$$$ (any edge $$$i \rightarrow j$$$ satisfies $$$i < j$$$). Compute $$$r_i$$$ as the smallest vertex reachable from vertex $$$i$$$, which is simply its smallest direct successor (if none exists, $$$r_i = \infty$$$). Symmetrically compute $$$l_i$$$ as the largest vertex that can reach $$$i$$$, which is also its largest direct predecessor (if none exists, $$$l_i = -\infty$$$). Lemma: a vertex $$$v$$$ is a "super delivery point" iff $$$r_u \leq v$$$ for all $$$u < v$$$, and $$$v \leq l_u$$$ for all $$$v < u$$$. Proof: if the condition isn't met, WLOG there exists $$$u < v$$$ such that $$$r_u > v$$$; then by definition $$$u$$$ cannot reach $$$v$$$, and because of the topological ordering, $$$v$$$ also cannot reach $$$u$$$. If the condition is met, then for every $$$u < v$$$ you will reach $$$v$$$ if you keep walking along the path $$$u \rightarrow r_u \rightarrow r_{r_u} \rightarrow \ldots \rightarrow v$$$, and symmetrically if $$$v < u$$$ then the reverse of $$$u \rightarrow l_u \rightarrow l_{l_u} \rightarrow \ldots \rightarrow v$$$ is a path. $$$\blacksquare$$$ Therefore, compute $$$l_i, r_i$$$ in $$$O(n)$$$, then compute the prefix maxima of $$$r_i$$$ and the suffix minima of $$$l_i$$$, after which each vertex can be checked in $$$O(1)$$$, so overall $$$O(n)$$$. 
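The whole procedure fits in a few lines; here is a sketch (the function name and the $$$-\infty$$$ convention for a missing predecessor are mine):

```python
import math

def super_delivery_points(n, edges):
    """Vertices of a DAG (1-indexed, already topologically ordered:
    every edge u->v has u < v) that are reachability-comparable with
    every other vertex.  O(n + m), following the lemma: v is special
    iff max_{u<v} r_u <= v and v <= min_{u>v} l_u."""
    r = [math.inf] * (n + 2)    # smallest direct successor (inf if none)
    l = [-math.inf] * (n + 2)   # largest direct predecessor (-inf if none)
    for u, v in edges:
        r[u] = min(r[u], v)
        l[v] = max(l[v], u)
    suff = [math.inf] * (n + 2)  # suff[v] = min of l[v..n]
    for v in range(n, 0, -1):
        suff[v] = min(l[v], suff[v + 1])
    ans = []
    pref = -math.inf             # running max of r[1..v-1]
    for v in range(1, n + 1):
        if pref <= v and v <= suff[v + 1]:
            ans.append(v)
        pref = max(pref, r[v])
    return ans
```

On the path $$$1 \rightarrow 2 \rightarrow 3$$$ every vertex qualifies; drop the edge $$$2 \rightarrow 3$$$ and none does.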
0
A counterexample: $$$[3,4,5,8]$$$, with $$$K = 10$$$. But I want to emphasize that this is an NP-hard problem, so an efficient solution cannot be this simple. One can solve the partition problem with a reduction to this, by choosing $$$K = \frac{\text{sum}}{2}$$$ (as I did above). 
+53
There's been no official statement regarding the round for 5 days now. Even if Mike does not think it is a serious issue, the community obviously does. Maybe Mike is having a hard time lately and is too busy to deal with us. At the very least we must have a clarification on what type of rounds should become unrated going forward, because there seems to be a huge misunderstanding. I think that if a round has a problem with a semi-googlable solution, another problem whose model solution is incorrect, and another problem (F) which appears to be almost stolen from another source (according to what some people posted), then it should become unrated. For me, this silence means that such rounds still count as rated, which worries me very deeply about the quality of Codeforces rounds going forward and how the rating is less trustworthy. It is still just my opinion; maybe you think that such rounds are completely valid and of proper quality — you make the rules, after all... Just make the rules clear, please. Maybe going forward I'll begin every round by copy-pasting the complete problemset into ChatGPT, asking it whether any of the problems looks familiar, and having it provide me some sources, maybe even scripting it ahead of time. Then I'll read the sources and implement what I read. Sounds like a shitty experience. I value my enjoyment of competitive programming, yet I value my rating as well; that seems to be an issue. 
0
I used the fact that 31, 13 and 14 work for $$$n = 3$$$ and extended it with zeros, explained by the following example for $$$n = 7$$$: $$$1300, 1030, 1003, 3100, 3010, 3001, 1400$$$ (all squared) 
0

On
SamuraiBebop →
Need help with the problem "Icy Roads of Nomel" regarding convex hulls, 11 months ago
+33
The two observations are:
When you find the first observation, it may seem like it's sufficient to solve the problem — repeatedly find a minimum intersection, and split into subproblems (from start to intersection, and from intersection to end). The only issue is when the minimum intersection is the start or the end, which is where the second observation helps: At first, set $$$r = \infty$$$ as above. Now the minimum intersection is $$$(1, 1)$$$. Now imagine slowly decreasing $$$r$$$ until the minimum intersection isn't unique. This will happen when some $$$a_i + ir = a_1 + r$$$ (or vice versa for some $$$b_j + jr = b_1 + r$$$), and when this happens, you know that there's an optimal solution that goes from $$$(1, 1)$$$ to $$$(1, i)$$$ (or vice versa for some $$$(j, 1)$$$). So the idea is that if you keep decreasing $$$r$$$, the optimal path remains while you can build it edge by edge (either advancing horizontally or vertically, depending on the next event). The final observation is that, if you map the set of points $$$(i, a_i + ir)$$$ on the plane, then its lower convex hull consists of the same set of points regardless of $$$r$$$. So instead of "decreasing $$$r$$$ slowly", you can compute the lower hull for both $$$a$$$ and $$$b$$$, and from this you can simulate decreasing $$$r$$$ (which means advancing with two pointers over the hulls, depending on the slopes of the hull edges). 
+34
I'd like to use this opportunity to mention another thing that, in my opinion, is important in an editorial: I much prefer when the solution explains the steps the author took to solve the problem, rather than arriving at sudden observations whose relation to the problem only becomes clear later. Many authors like to provide the "shortest path" to the solution, resulting in a few lines that generally require me to read them at least 3 times to understand (some university professors also like to provide such unintuitive proofs, without the thought process behind them). I believe that if a person fails to solve a problem, it is best to read the editorial only after they have spent a sufficient amount of time stuck with their ideas. In a way, I like to think about problem-solving as a tree of approaches: when someone is stuck with all of their approaches, it requires a large amount of work to extend any of the nodes into more ideas. Thus I think the best editorial is one where you can comprehend the approaches the author took, and compare the text you read to your tree of approaches, until at some point the editorial shows you how to extend one of your ideas into a complete solution, increasing your capability for the next problem. Compared to that, "shortest path" solutions can come up with ideas that don't relate to anything in your tree of approaches, and you (or me, personally) learn less about how to generalize the solution and more about a single trick (that you may forget). It may not be perfectly possible with every problem, but I think this is what authors should aim for. 
0
Idk if this relates to your question; we qualified but neither participated nor are on the list. There's a multitude of reasons, essentially due to politics (our funding was cancelled, we didn't get visas in time, etc). 
On
Pyqe →
Codeforces Round #831 (Div. 1 + Div. 2, based on COMPFEST 14 Final) Editorial, 19 months ago
+3
You can conclude and prove this lemma with min-cut max-flow — if you try to find a maximum matching between a chosen multiset of sizes and the frequencies of the elements, then try to find the condition under which all cuts are of size at least $$$n$$$. It's pretty lengthy (I can elaborate if you want), but you can arrive at this condition without magically guessing it (the magic here comes from the magic of MCMF). 
0
Your comment is better directed as feedback to the people who prepared the competition, that they should have prepared it otherwise. And the standings are nothing like "nearly random"; the relative positioning did not change much for most people. In fact it's similar to a usual CF round, with the exception of the duration. So in my opinion you don't present a proper complaint, as I mentioned in my other replies. Edit: I didn't see that you talked specifically about GP1 (I thought you meant P1). In that case you're right, but idk how the authors could have predicted this (unless they really put unrealistic testcases, but I can't compare between them). Still, the MLE should be more predictable than simply "non-correlated testcases". 
0
We should differentiate between what is unfair and what the contest should have been. I do agree that the contest would have been more valuable to Huawei if there were no secret cases, and furthermore it would be more realistic to give us the entirety of the testcases and make the problem "output only", without a small time limit (though this poses the problem of people having different computers). Your case is extremely unfortunate and I'm sorry for that. But in my opinion, once all the contest rules have been set as they are (even if we disagree with them), no rules were changed, and by participating you are aware of all the possible outcomes — you can't call this unfair. 
+32
I'm sorry for your decrease of a single position, but I don't think it's unfair. If anything, I think it was predictable that the final positioning was bound to have some deviation from the standings throughout the contest. In the "Scoring" section you can see it's stated that there are additional secret tests (and furthermore, that the pretests are not counted toward the final score). I don't want you to feel down, but it should be clear that overfitting solutions to the pretests (whether overfitting on time, memory or specific instances that appear in the pretests) is a bad idea. It definitely relied partially on luck; the point was to approximate your average position throughout the contest. 600k vs 650k is unfortunate, but if you sit tight against the 5-second time limit on pretests, you can't expect to pass everything for sure. I think it's an exaggeration to call this a "competition for luck" or a "huge mistake to participate", and you should be grateful for your 2nd position and considerable prize. 
0
Well if I recall correctly, it's sufficient for 200k+ on both subproblems. Of course you need to use other approaches to nail more difficult cases (on GP2). 
0
If you fix some upper bound $$$B$$$ on the component size you allow, and fix some spanning tree, you can try to split the tree into the minimum number of connected components of size up to $$$B$$$. This dp problem is solvable in $$$O(n)$$$, so you can try multiple values of $$$B$$$ and take the best result found. For instance, I tried iterating on bounds starting from 1 up to $$$n$$$, each time multiplying by 1.7. There isn't really any point in trying small bounds, but it barely hurts (the actually useful attempts are those around $$$\sqrt{N}$$$). You can independently try this on multiple spanning trees. This works well on GP1 since the weights don't matter, and on GP2 it is decent to try this on the MST (as a heuristic). 
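One round of this procedure on a single spanning tree might look like the sketch below — here a plain greedy cut (merge a child's piece unless that would exceed $$$B$$$) rather than the full dp, so treat it as a heuristic; all names are mine:

```python
def split_tree(n, tree_edges, B):
    """Split a tree with vertices 0..n-1 into connected pieces of size
    at most B, returning the number of pieces.  Greedy sketch: DFS,
    and whenever merging a child's piece would push the current piece
    above B, cut the child edge instead."""
    adj = [[] for _ in range(n)]
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    # iterative DFS order, to avoid Python's recursion limit
    order, parent, seen = [], [-1] * n, [False] * n
    stack = [0]
    while stack:
        u = stack.pop()
        seen[u] = True
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                parent[w] = u
                stack.append(w)
    pieces = 1                  # the root's piece; each cut adds one
    size = [1] * n
    for u in reversed(order):   # process children before parents
        for w in adj[u]:
            if parent[w] == u:
                if size[u] + size[w] <= B:
                    size[u] += size[w]
                else:
                    pieces += 1  # cut the edge (u, w)
    return pieces
```

On a 6-vertex path with $$$B = 2$$$ this produces 3 pieces, which is optimal there; in general the greedy is only an approximation of the minimum.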
+20
Right, it might make it asymptotically less than $$$k^2 \log k$$$, my bad. Not sure if this invalidates the construction but I feel like it does. 
+55
Here's a construction that does $$$\omega(n \log n)$$$ for infinitely many $$$n$$$, disproving the lemma: choose some $$$k$$$ and consider the set of elements $$$i\cdot j$$$ for $$$1 \leq i,j \leq k$$$ — sorted, after removing duplicates, and with a $$$0$$$ in the beginning. This sequence has cost $$$\Omega(k^2 \log k)$$$: any starting element $$$ij$$$ has a sequence of length at least $$$\frac{k}{i}$$$, which is enough to show this bound. However, after removing duplicates we have $$$o(k^2)$$$ elements in our sequence — as shown here: https://mathoverflow.net/questions/108912/number-of-elements-in-the-set-1-cdots-n-cdot-1-cdots-n/108939#108939 Choosing $$$k = \sqrt{n}$$$ gives the contradiction. We still don't know the exact bound though. 
0
I only know persistent BST from that list, but I guess you're right — there is something I know that's not in the library. I guess we can also add some faster max-flow algorithms, since the fastest there is $$$O(EV^2)$$$ Dinic's. 
+64
Incredible library. After a quick scan I still can't find anything I know that could be added. What I think you can add is whether a data structure / algorithm assumes 0-indexing or 1-indexing, and, if you work with ranges, whether you represent them with the right end inclusive or exclusive (for example, segment tree) — to generalize, above each struct or function, I think it would be nice to detail how it takes input and provides output in a short comment. And just to make sure — do you permit other people to use your library? With credit, of course. EDIT: I can't seem to find the Aliens trick. Also, I found even my little contribution to the library (queue undo trick), so I feel very honored; however, it does require a small fix. In my blogpost I explain that we must maintain the bottom pointer of the stack so that we never pass it — so at the current state of the code, it may run in $$$O(n \sqrt n)$$$ operations. Just a few lines of code to add (whenever you feel like it). 
+34
I don't think there's any benefit in providing such tough constraints; the crux of the problem is essentially to solve it in polynomial time. If anything, small bounds on an interactive problem probably help the system testing (and we can upper-bound the number of interactions by $$$nm$$$, so even very bad solutions can't take too many turns). 
+36
For simplicity, I'll denote their (constant) sum as $$$s$$$ and then the recurrence you mentioned as: In terms of computation, as you've mentioned the immediate approach is dp but since the states form cycles, it doesn't quite work. However the equalities are linear in the values $$$f(k)$$$ so the next idea should be to solve these linear equations, in time $$$O(s^3)$$$. Now, I can't quite find a closed form, but I can give an $$$O(s)$$$ algorithm to compute $$$f$$$: first we can reorder the equality in the following way: This is a recurrence on the adjacent differences, with conditions $$$f(0) = f(s) = 0$$$. If we denote $$$f(1) = c$$$, all the values of $$$f$$$ are then a function of $$$c$$$, where by the condition $$$f(s) = 0$$$ the answer is unique. Specifically, if my calculations are right: And then it must hold that $$$f(s) = 0$$$, but this looks disgusting. As for the algorithm, we can define the homogeneous version of the equation above: and with the single condition $$$g(0) = 0$$$, it's easy to find one solution: choose $$$g(1) = 1$$$, and then: Observe that for any $$$f$$$ which is a solution to the nonhomogeneous equation, with the condition $$$f(0) = 0$$$ (but without $$$f(s) = 0$$$), and for any constant $$$c$$$, the function $$$f + cg$$$ is also a solution. So we can find any single $$$f$$$ and then shift it using $$$g$$$ so that $$$f(s) + cg(s) = 0$$$. To find a single $$$f$$$, just choose $$$f(1) = 0$$$ and then compute the adjacent differences in $$$O(s)$$$, and then prefix sum. 
+117
damn 
+10
It's worth mentioning that, because all "updates" are offline, and we know from the start exactly how the updates and undo's are ordered, then there's also a D&C solution with equal complexity (but probably shorter code). Still, I'm happy to hear that. 
+10
Thank you! And your implementation really is nicer. 
0
Indeed, because the proposed algorithm is already provided a data structure that can support normal undo's, so its complexity is already taken into account. 
0
In the case you've provided, regardless of my blogpost, DSU with just path compression has a bad time complexity; any user of your data structure can first merge $$$n-1$$$ times, undo all of them and execute them in reverse order. Then, for $$$n$$$ more times, the user can merge and undo, thus achieving $$$\mathcal{O}(n^2)$$$ time. So your proposed DS takes up to $$$\mathcal{O}(n^2)$$$ time for any sequence of $$$n$$$ operations, and thus the new DS takes up to $$$\mathcal{O}(n^2 \log n)$$$ time for any sequence of $$$n$$$ operations. 
0
What I was trying to express is that, regardless of whether the original data structure worked in amortized time or not, we still apply to it a sequence of $$$\mathcal{O}(U \log U)$$$ operations. Basically, we're an algorithm outside the data structure that just calls a bit more updates than what we were given. So for example, if the original data structure worked in amortized $$$\mathcal{O}(q \log q)$$$ time for any sequence of $$$q$$$ operations, then we would make it work in $$$\mathcal{O}(q \log^2 q)$$$ for any sequence of $$$q$$$ operations. As for your example, if there exists a scenario with a heavy update that takes $$$\mathcal{O}(n)$$$ each time it is called, and our algorithm just so happens to call it $$$\mathcal{O}(n)$$$ times, then even without this algorithm the original data structure is not very efficient; any user can call that heavy update/undo $$$\mathcal{O}(n)$$$ times. If you're wondering what we should do in case some updates consume more time than others, I put it in the "Open Questions" section, as I still haven't thought about it. Hope this clarifies. 
0
Thank you! 
+10
It's been pointed out that I should give this trick a name. Does anybody have a name better than "Queue Undo Trick"? 
+17
It is correct. In group terms, every operation transforms P -> P^2, so any number of operations gives you P^(2^k) for some k. This can become the identity iff the order of P is a power of 2, iff every cycle size is a power of 2. Then the answer is log2(biggest cycle size). 
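In code, the cycle argument looks like this (0-indexed permutation; the function name is mine):

```python
def min_squarings_to_identity(p):
    """p is a 0-indexed permutation; each operation replaces P by P^2.
    Return the minimum k with P^(2^k) = identity, or -1 if impossible
    (i.e. some cycle length is not a power of two)."""
    n = len(p)
    seen = [False] * n
    best = 0
    for i in range(n):
        if seen[i]:
            continue
        length, j = 0, i
        while not seen[j]:               # walk one cycle
            seen[j] = True
            j = p[j]
            length += 1
        if length & (length - 1):        # not a power of two
            return -1
        best = max(best, length.bit_length() - 1)  # log2(cycle size)
    return best
```

A 4-cycle squares into two 2-cycles and then into the identity, so it needs 2 operations, while a 3-cycle never reaches the identity.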
0
The decision problem of clique is: given a graph G and integer k, does there exist a clique of size k? The decision problem of longest path is: given a graph G and integer k, does there exist a simple path of length k? It is easy to solve maximum clique or longest path using the above decision problems a polynomial number of times, so you can think of these as the same problem. Still, by definition, we only talk here about decision problems. Edit: as for your second question, perhaps we can formulate the maximum clique problem you describe as: given a graph and a clique, is this clique maximum? (Note that this is a decision problem now.) I don't know if we already know a polynomial verifier for this, but I feel like it would imply P = NP. Edit2: actually, the above-mentioned decision problem has a polynomial nondeterministic solver that just tries all subsets... so it's NP. What is the definition of maximum clique you have in mind? 
+16
I agree with your first two points, as in, this is a possible scenario — if too many people mark such problems as their favorites. I find it unlikely though, if enough experienced users like you think otherwise, then you would like other problems that suit you. Still, it's very possible that new people will be directed to specific areas (not something I want). Hopefully this can be taken care of with, for example, weighing the votes of experienced users more (a comment above suggested to put more weight on the votes of people who solved more problems, which sounds like a good heuristic). I'm not sure I understand what you're saying in the 3rd point... is it that you believe right now people encounter "the average problem", compared to facing only handpicked problems (if the feature would be implemented)? If this is what you meant, I personally think that practicing on handpicked problems is a better situation. As for your 4th point, I don't understand where that's coming from... do you mind explaining? 
+35
This problem really asks you to use Pick's theorem, which states that for a polygon with integer coordinates on the plane, its area equals $$$i + \frac{b}{2} - 1$$$, where $$$i$$$ is the number of integer points inside the polygon and $$$b$$$ is the number of integer points on its border. Compute $$$p = 1 + \gcd(A_x - B_x, A_y - B_y)$$$, the number of points on the line from A to B, including endpoints. We want the triangle to have $$$i = 0, b = p + 1$$$, which means we want it to have area $$$\frac{p-1}{2}$$$. But this also goes the other way: if our triangle has this area, since $$$b \geq p + 1$$$, we must have $$$i = 0, b = p + 1$$$. So we solve for its area now instead. We want to satisfy $$$A_xB_y - A_yB_x + B_xC_y - B_yC_x + C_xA_y - C_yA_x = p-1$$$, where the sign of the left side just depends on the orientation of the triangle, so we can try to solve it for both $$$p-1$$$ and $$$-(p-1)$$$, without the absolute value. Either way, move the first two (constant) terms to the RHS and reorder the LHS, and we get $$$(A_y - B_y) \cdot C_x + (B_x - A_x) \cdot C_y = D$$$ for some constant D. You can find whether this has 0 or infinitely many solutions; this is a standard equation ("extended gcd"). 
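The final solvability check can be sketched as follows, using the standard fact that $$$ax + by = D$$$ has integer solutions iff $$$\gcd(a,b) \mid D$$$; the function name is mine and this only decides solvability, without producing a concrete $$$C$$$:

```python
from math import gcd

def has_valid_C(Ax, Ay, Bx, By):
    """Is there an integer point C whose triangle with A, B has no
    lattice points besides the p points on segment AB?  Equivalent to
    (Ay-By)*Cx + (Bx-Ax)*Cy = D being solvable for D = +-(p-1) - const,
    a linear Diophantine equation solvable iff gcd(Ay-By, Bx-Ax) | D."""
    p = 1 + gcd(abs(Ax - Bx), abs(Ay - By))   # lattice points on AB
    a, b = Ay - By, Bx - Ax
    g = gcd(abs(a), abs(b))
    const = Ax * By - Ay * Bx                 # the two constant terms
    for target in (p - 1, -(p - 1)):          # both orientations
        D = target - const
        if (g == 0 and D == 0) or (g != 0 and D % g == 0):
            return True
    return False
```

(Since $$$g = p - 1$$$ divides both $$$A_x - B_x$$$ and $$$A_y - B_y$$$, one can check it always divides the constant term too, so for non-degenerate A, B the equation is in fact always solvable — consistent with the "0 or infinitely many solutions" dichotomy in the comment.)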
+11
Can you be more accurate about what you want or give an example? To me, "sorting subarrays" is pretty vague (assuming you don't want to literally list all subarrays sorted per query) 
+13
This has been asked before here. 
+19
The problem is equivalent to set cover, with universe size log(max) and number of subsets simply N. So unless P = NP, your solution will be either exponential in N (just no...) or exponential in log(max), which could be more promising but seems difficult. It's nothing new that GFG spreads around wrong solutions. 
+39
Consider the problem on the prefix sum array: $$$1 \leq A_i - A_{i-1} \leq M$$$, while all pairwise differences are distinct (let's ignore the 0 that the sequence should start with; it affects the answer by at most a constant additive factor). Denote by $$$f(M)$$$ the maximum $$$n$$$ such that there exists an array of length $$$n$$$ with $$$1 \leq A_i - A_{i-1} \leq M$$$ and all pairwise differences distinct. Observe that if there exists a pair of equal differences, $$$A_y - A_x = A_j - A_i$$$, then there also exists a pair of pairs with equal sums: $$$A_y + A_i = A_x + A_j$$$. Similarly, if there exists a pair of equal sums, then there exists a pair of equal differences. So it's equivalent to consider distinct pair sums instead of differences. Apparently it's a studied sequence, link. There seems to be a conjecture that for some $$$n$$$, the largest number in the array is about $$$\frac{n^3}{\log^2{n}}$$$. If we follow approximations, we want the derivative to be less than $$$M$$$; so (about) $$$\frac{3n^2}{\log^2{n}} \leq M$$$, so we can approximate (yet again) at $$$\mathcal{O}(\sqrt{M}\log{M})$$$. With this in mind, you can run a brute force to find $$$f(M)$$$ almost exactly, quite quickly; then for any $$$n \leq f(M)$$$ naively check all pairs (otherwise you know the answer is negative). Honestly I'm a bit disappointed at the amount of approximations, but that's all I got. 
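The brute force for $$$f(M)$$$ might look like this exhaustive backtracking (exponential, so only for small $$$M$$$; the function name and the choice to anchor the sequence at 0 are mine):

```python
def f_bruteforce(M):
    """Longest sequence starting at 0 with consecutive gaps in [1, M]
    and all pairwise differences distinct (a Sidon-like condition).
    Exhaustive search over all gap choices -- exponential, small M only."""
    best = [1]

    def extend(seq, diffs):
        best[0] = max(best[0], len(seq))
        last = seq[-1]
        for step in range(1, M + 1):
            x = last + step
            new = [x - y for y in seq]       # differences x introduces
            if any(d in diffs for d in new): # a repeated difference
                continue
            extend(seq + [x], diffs | set(new))

    extend([0], set())
    return best[0]
```

For example, $$$f(2) = 3$$$ is witnessed by $$$[0, 1, 3]$$$ (differences 1, 2, 3), and no fourth element within gap 2 avoids a repeat.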
+10
If you're still interested, I have a solution in $$$\mathcal{O}(n^3 + m\log m)$$$ (or $$$\mathcal{O}(n^3 + m)$$$, negligible). It passed on oj.uz in 96ms, so it's probably a fine (not too tight) solution: Solution As expected, for each edge we will compute the minimum path from 1 to n, and backwards, if this edge were reversed. Let's focus just on the path from 1 to n. First, we run Dijkstra (or Floyd-Warshall) to find the shortest path on the given graph; let this path be $$$(v_1, \dots , v_k)$$$, where $$$k \leq n$$$. We will then run two Floyd-Warshalls: $$$D_{i,j}$$$ denotes the minimum path from $$$i$$$ to $$$j$$$ on the given graph, and $$$D'_{i,j}$$$ denotes the minimum path from $$$i$$$ to $$$j$$$ on the given graph but without any of the edges on the minimum path found by Dijkstra. For each edge $$$(u, v)$$$ with cost $$$c$$$ that is not on the minimum path: if we decide not to use it, then it is best to go along the minimum path we found (as it's just like removing the edge from the graph). If we decide to use it, then we can safely use the value $$$D_{1, v} + c + D_{u, n}$$$ (and compare it with the minimum path); even though $$$D_{1, v}$$$ might use the edge unflipped, if it did then the resulting value will be larger than the minimum path. Same with $$$D_{u, n}$$$. For each edge $$$(v_i, v_{i+1})$$$ with cost $$$c$$$ on the minimum path: if we decide not to use it, then, viewing the path as one that begins at $$$v_1$$$, ends at $$$v_k$$$ and sometimes leaves the minimum path (then returns), we can claim it leaves the minimum path exactly once (it can't leave fewer times, since the minimum path is not a path after the flip). First, for all $$$x < y$$$, we won't leave the minimum path at $$$v_y$$$ and return at $$$v_x$$$; depending on whether $$$y \leq i$$$, $$$x \leq i < y$$$, or $$$i < x$$$, we can get rid of one of these vertices (and this detour around the minimum path) and instead use a part of the minimum path (while the path still doesn't use the flipped edge). 
So now we know that all those jumps around the minimum path just cover contiguous, disjoint segments of edges in the minimum path. So any jump around a contiguous segment which doesn't contain the flipped edge can be replaced with that segment of the minimum path. This yields the simple solution: for each $$$1 \leq l \leq i, i+1 \leq r \leq k$$$, we consider the path with cost $$$D_{1,v_l} + D'_{v_l, v_r} + D_{v_r, n}$$$. We do this in $$$\mathcal{O}(k^2)$$$ for each edge on the minimum path, so $$$\mathcal{O}(k^3)$$$ total. And let's not forget the case where we decide to use the flipped edge on the minimum path: well, actually, it will never be optimal. Take any such path, and right before using the flipped edge to go from $$$v_{i+1}$$$ to $$$v_i$$$, just use $$$D_{v,n}$$$ instead, which is guaranteed to be shorter. So it's a very light cubic complexity. My submission (which is not meant to be readable, just for reassurance): https://oj.uz/submission/201505 
+42
Probably Hirschberg's algorithm. The idea is originally in 2D: view the problem through a weighted 2D grid (with very specific moves), and find the best path from the top-left corner to the bottom-right. Split the grid in half horizontally; compute the best result from the start to every cell of the middle row, and from every cell of the middle row to the end, without maintaining the actual path. Find the best breakpoint in the middle and solve recursively; it ends up at $$$\mathcal{O}(nm)$$$ time and linear memory, since you only need to remember a constant number of rows per dp. Here it is 3D: we can view it as a cuboid, where we also go up and down depending on bracket balance. However, we should be able to break the cuboid in the same way (not breaking by balance) and still obtain $$$\mathcal{O}(n^3)$$$ time, $$$\mathcal{O}(n^2)$$$ memory. 
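For reference, here is the 2D scheme instantiated for LCS — split one string in half, pick the column where the forward and backward DP rows sum to the optimum, and recurse; only two DP rows are ever in memory (a sketch, not tuned for speed):

```python
def lcs_row(a, b):
    """Last row of the LCS-length DP between a and all prefixes of b,
    using O(len(b)) memory."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(cur[-1], prev[j]))
        prev = cur
    return prev

def hirschberg_lcs(a, b):
    """Hirschberg's divide and conquer: recover an actual LCS of a and b
    in O(len(a)*len(b)) time while the DP itself uses linear memory."""
    if not a or not b:
        return ""
    if len(a) == 1:
        return a if a in b else ""
    mid = len(a) // 2
    left = lcs_row(a[:mid], b)                       # forward half
    right = lcs_row(a[mid:][::-1], b[::-1])[::-1]    # backward half
    # best breakpoint: column where the two halves' scores add up best
    k = max(range(len(b) + 1), key=lambda j: left[j] + right[j])
    return hirschberg_lcs(a[:mid], b[:k]) + hirschberg_lcs(a[mid:], b[k:])
```

The recursion does constant-factor extra work compared to the plain DP but never materializes the full table, which is exactly the trade the comment describes for the 3D version.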
0
Since there are multiple ways to implement this recursion (like computing $$$M(G)$$$ efficiently), I'll instead give an upper bound on the number of recursion calls; let it be $$$T(n)$$$ where $$$n = |G|$$$, and $$$G$$$ is connected. So, if $$$n = 2$$$ then both nodes have degree 1; this case has O(1) calls. Otherwise, there exists a node with degree at least 2, which you will remove, so an upper bound is $$$T(n) = 2 + T(n - 1) + T(n - 3)$$$. Even though you could run a dp to compute this, it is known that $$$T(n) = c^n$$$ (up to a polynomial factor, but let's only look at the exponential part) for some constant $$$c$$$. So $$$c^4 = 2 + c^3 + c$$$, which gives about $$$1.73^n$$$. Of course we didn't consider cases where a component splits and such (and also some degrees will be more than 3). Also, if $$$M(G) = 2$$$ then the graph is either a line or a circle. You can precompute with dp the answer for such graphs, and then the recursion finds a vertex of degree at least 3. With the same process, the result is about $$$1.58^n$$$. 
+58
Sounds awful, I'd rather implement a treap 5 times in a row. 
+9
I think what is meant is that a problem requiring a BBST (unlike a set, possibly augmented with rank and such) is solvable using any of them, indifferent to constants in runtime or functionality — that is, you only need to know one of them, not all. IOI13 Game requires a DS (on the 2nd dimension) that can update a cell and query for a range gcd, when the array is of size 1e9. You can use a sparse segtree, but together with the 1st dimension this adds a log^2 factor to memory. The optimization is to instead use a BBST on the 2nd dimension; then you cut a log off the memory. It doesn't need a treap, you can use any BBST. 
+23
I doubt these can be of use in the IOI, since they try to stay strict with the syllabus. Maybe for some partial points. Anyway, good work. 
+61
How are wavelet, segment tree beats and lichao relevant to IOI? 
+3
Why should F be the number of buildings + 1? For example, the case of a 3x3 with a hole in the middle has 8 buildings, 10 faces and is connected. 
+20
I mean, if you run it for each component independently then the worst case is improved by a factor of 8... there's no excuse for being lazy :) 
On
AnotherRound →
Answering queries of the type "longest interval contained in the given interval", 5 years ago
+22
Map update intervals to points $$$(L, R)$$$ on the plane with weight $$$R - L$$$. Queries ask for the maximum weight of a point in the rectangle with corners $$$(L, L)$$$ and $$$(R, R)$$$. You can do polylog with any of your favorite data structures, like a 2D segtree, a segtree of BBSTs, and whatnot. 
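If the queries can be answered offline, a sketch of this reduction with a single Fenwick tree (sweeping $$$L$$$ downward and keeping prefix maxima over compressed right endpoints) in place of a fully dynamic 2D structure — all names are mine:

```python
import bisect

def max_contained_intervals(intervals, queries):
    """For each query (L, R), the maximum r-l over stored intervals
    (l, r) with L <= l and r <= R, or -1 if none.  Offline sweep:
    process queries by decreasing L, inserting intervals with l >= L
    into a Fenwick tree keyed by r that stores prefix maxima."""
    rs = sorted({r for _, r in intervals})           # compress r's
    idx = {r: i + 1 for i, r in enumerate(rs)}
    tree = [-1] * (len(rs) + 1)

    def update(i, val):                               # point max
        while i <= len(rs):
            tree[i] = max(tree[i], val)
            i += i & (-i)

    def query(i):                                     # prefix max
        best = -1
        while i > 0:
            best = max(best, tree[i])
            i -= i & (-i)
        return best

    pts = sorted(intervals, key=lambda p: -p[0])
    order = sorted(range(len(queries)), key=lambda q: -queries[q][0])
    ans = [-1] * len(queries)
    pi = 0
    for q in order:
        L, R = queries[q]
        while pi < len(pts) and pts[pi][0] >= L:      # now l >= L holds
            l, r = pts[pi]
            update(idx[r], r - l)
            pi += 1
        hi = bisect.bisect_right(rs, R)               # restrict to r <= R
        ans[q] = query(hi)
    return ans
```

A Fenwick tree supports prefix maxima as long as updates only ever increase values, which holds here since we only insert points.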
+71
I would be very surprised if someone has never gone far below their expectations. Ideally, when you fail you need to look for reasons behind the failure, so you can focus on improving those points. On the other hand, taking part in a contest means you need to accept the possibility of failure. I'm not saying you shouldn't care about failing, but rather that if you accept it, you can forget about that fear, possibly even perform better, and not take it hard when it happens. To me it seems that E869120 is devastated about the wrong things. "I spent 91 out of 135 minutes for problem B, and because of this issue, my round result became historic and rare failure. [begin complaint paragraph]" — okay, you can point out the bad time management, but how about you focus on "next time I can prepare myself for such events and manage time better", instead of ranting about this specific performance? You're worrying too much about things that, frankly, are not under your control;
Maybe having a good day to you is affected by how well you slept the night before, but it's useless to consider it during or after a contest, so what good is it to focus on such things? You need to be aware that it can always happen and move on. In my opinion you should resolve this mentality before the IOI. If you're driven insane by accidents on CF div2 rounds, I can't imagine how you could feel if you don't meet your exact expectations on IOI. 
+113
Yes. First, if we extend 1x2 pieces to 2x3 (by adding a row to the bottom and a column to the right), 2x1 pieces to 3x2, and the board to (n+1)x(m+1), then we now only care about actual overlaps. This gives an upper bound of $$$\frac{(n+1)(m+1)}{6}$$$. Let's see why it's always possible to achieve the upper bound (from now on we consider n and m as increased by 1): if we can always leave fewer than 6 free cells, then we achieved the upper bound. Also notice that now $$$n, m \ge 2$$$. First, observe that whenever $$$6 \mid nm$$$, we can cover the whole board. It can be seen easily for small $$$n$$$ (up to 7), and for larger $$$n$$$ we can always break the board into pieces with small $$$n$$$ and areas divisible by 6. This means that from a big board we can always remove pieces with height and width bigger than 1 and area divisible by 6. It only remains to leave fewer than 6 free cells when $$$n, m \le 7$$$ (if one of them is at least 8 we can remove 6 and obtain the subproblem).

+17
On the task Sum of Four Values, I think the judge may be wrong... specifically, the output section asks for any solution, but I think the judge checks whether the provided answer is identical to its own. 
0
For each index $$$j$$$ precompute the last index $$$i$$$ such that $$$A_i = A_j$$$, then we can obtain the last 4 occurrences of a value in $$$O(1)$$$. Deciding to remove 3, 4 or 5 then considers $$$O(1)$$$ subproblems. Edit: Now I see the issue, we don't necessarily group $$$A_j$$$ with the ones immediately before it. My bad. 
0
Edit: wrong. Maybe something like this?: Let's suppose we can only remove 3, 4, or 5 equal consecutive elements; this is still equivalent. $$$dp_{i, j} = 1$$$ iff we can completely remove the subarray $$$[i, j]$$$. To compute the transition, we must remove element $$$A_j$$$. If we intend to remove $$$A_j$$$ together with 2 more elements at indices $$$x, y$$$ $$$(x < y)$$$, then we must first be able to remove $$$[x + 1, y - 1]$$$ and $$$[y + 1, j - 1]$$$, then we remove the 3 elements, and then we should remove $$$[i, x - 1]$$$, so these are the subproblems we should look at. If we want to remove $$$k$$$ elements, there are $$$k$$$ subproblems to look at. Since we only allow removing 3, 4 or 5, the time to compute $$$dp_{i, j}$$$ is $$$O(1)$$$. Onto the original problem: if we mark all non-removed elements as "special", then between adjacent special indices we must remove the whole subarray. If we let $$$D_i$$$ be the minimum number of special indices we can keep in the prefix $$$[0, i]$$$, then the transition is: $$$D_j = \min(D_{j - 1} + 1, D_{i - 1})$$$ over all $$$i$$$ such that $$$dp_{i, j} = 1$$$. Seems too simple, am I missing something? 
+51

+15
Suppose each person is a vertex and every hand is an edge, and we can also assume that every right hand holds someone else's left hand. If the right hand is a directed edge out of the vertex (left hand inwards), we can see that this problem is equivalent to finding the expected number of cycles in a random permutation, which is easier to google. 
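The googled fact is that the expectation equals the harmonic number $$$H_n = 1 + \frac{1}{2} + \dots + \frac{1}{n}$$$; a quick brute-force check for small $$$n$$$ (my own sketch, not from the comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Expected number of cycles in a uniform random permutation of n elements
// is H_n = 1 + 1/2 + ... + 1/n. Verify by enumerating all n! permutations.
double harmonic(int n) {
    double h = 0;
    for (int k = 1; k <= n; k++) h += 1.0 / k;
    return h;
}

double bruteForce(int n) {
    vector<int> p(n); iota(p.begin(), p.end(), 0);
    long long total = 0, perms = 0;
    do {
        vector<bool> seen(n, false);
        for (int i = 0; i < n; i++) if (!seen[i]) {
            total++; // found a new cycle; walk it and mark its vertices
            for (int j = i; !seen[j]; j = p[j]) seen[j] = true;
        }
        perms++;
    } while (next_permutation(p.begin(), p.end()));
    return (double)total / perms;
}
```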
+29
Here's a similar solution, but maybe easier to implement. Denote $$$D_{i, j}$$$ as the minimum cost to split prefix $$$i$$$ into $$$j$$$ subarrays. We fix $$$j$$$ and solve the whole layer in $$$\mathcal{O}(n \log n)$$$. We compute the answer for all $$$i$$$ in a D&C fashion; to compute the answer for all $$$l \le i \le r$$$, first recursively solve for $$$[l, m], [m + 1, r]$$$ where $$$m = \frac{l + r}{2}$$$, and then consider transitions between the two halves. Define $$$A_i = \max(a[m+1...i]), B_i = \max(a[i...m])$$$; the transition is $$$D_{i, j} = \min_{k \le m} (D_{k, j-1} + (i - k) \cdot \max(B_k, A_i))$$$. Notice that due to the monotonicity of $$$A, B$$$, for a fixed $$$i$$$ there is a suffix of the first half where $$$A_i$$$ is larger, and a prefix where $$$B_k$$$ is larger. We can solve both cases independently; $$$D_{i, j} = \min_{k \le m} (D_{k, j-1} + (i - k) \cdot A_i) = \min_{k \le m} (D_{k, j-1} - k \cdot A_i) + i \cdot A_i$$$. We have a linear function inside (for constant $$$k$$$), so we can maintain those in a CHT. When we increase $$$i$$$, the suffix in which $$$A_i$$$ dominates grows, and we repeatedly add functions with smaller slopes, so we can do this in $$$\mathcal{O}(n)$$$. The case when $$$B_k$$$ dominates is almost identical, with the same analysis. 
0
Why did you need a segment tree on A? I think the common solution is finding period and sweepline. 
+11
The ideas were not difficult to figure, and I also think the solutions relied heavily on constant factor... overall, didn't really like the problems. 
+24
I agree, solving a hard problem is satisfying... but this doesn't diminish the satisfaction from rating. As I said, CF rating is like proof that you are skilled and have put effort into improving yourself. If you're stating your opinion, don't be surprised that I disagree. Also, my last sentence was just to say that a lot of people call rating "just a number", even though there's a meaning behind it. 
+56
Rating mostly stands for skill. An increase in your rating should give you satisfaction by proving progress, and a decrease could (should?) give you motivation to work on fixing your mistakes... at least that's how I view it. Money is also "just a number", so that statement is weak. 
0
Auto comment: topic has been updated by Noam527 (previous revision, new revision, compare). 
0
Sure, thanks for the information. 
0
How did you know? 
0
Seems like 203 is a pretty common result. Here we had two people with 203, and I've also heard of another 8 people with 203. 
+15
Interesting. 4 is also optimal for $$$n$$$ large enough, since 3 is impossible when $$$2^{1.5n} < 3^n/6$$$. 
0
Even though the editorial for E is simple, there is another easy solution to E (one with no edge cases): We can treat this as a 1-player game for P1. For each removal of a $$$b$$$, P1 gets 1 point. After all operations, for each sequence of $$$X$$$ $$$a$$$'s, P1 loses $$$\lfloor X/3 \rfloor$$$ points. P1 wins iff the maximal number of points he can get is positive. We can solve this with dp: $$$dp_{i, j}$$$ is the maximal number of points P1 can get if we only consider the prefix of length $$$i$$$, and the last sequence of $$$a$$$'s was of length $$$j$$$ (modulo 3). Transitions are easy in constant time. 
0
"The contestants will be able to receive full feedback on a limited number of submissions per task." Can you give more details about this? Sounds like a new feature. How many submissions are allowed per task, and how many of those can get full feedback? 
+8
Problem C can also be solved with the same idea, but without binary searching. Incrementally add nodes as possible sinks from the bottom up. We iterate over the coins (after pulling them upwards to their subtree root) in decreasing order of depth (this implies the ranges the nodes cover as we keep going cannot be contained by previous ranges, which is used to prove the greedy), and add a node as another necessary sink, while the current coin doesn't have a free sink in its subtree. Once there is a free sink we can remove it (we maintain sinks with a set). $$$\mathcal{O}(n \log n)$$$. Also, you can solve the first two tests of B optimally with dp in $$$\mathcal{O}(n^4)$$$. 
+30
Solutions:

A. We must find for each star the time frame in which it's inside the circle. Then we sort those to find the time at which we could see the most stars. To find the intersections, if they exist, I changed every star to a function and found intersection points with the circle by a formula on paper. Because of imprecision, after I find the intersection points and the times, I just iterate on the close integers (±3) to find the exact time frame.

B. We will never want to change a height below 0 anyway (exactly 0 is at least as good), so we can ignore this restriction. Let's subtract from position $$$1 \le i \le n$$$ the value $$$i * M$$$. Now the sequence of heights we iterate over is non-increasing. When we adjust the height of some pole, it's best to change it to the height of the one before it. We can view it as ignoring all the poles which we picked to change the height of. Since we're looking to minimize the number of ignored poles, we're looking to maximize the number of unchanged poles, which form a non-increasing subsequence. So the task is to find the longest non-increasing subsequence of the array (and make sure it begins with a 0 before the start of the array).

C. Let's define $$$C_i = A_i - B_i$$$, the number of extra fertilizers position $$$i$$$ has. Interestingly, we do not care whether our operations for some solution make some position have a negative amount of fertilizers; we will always be able to reorder the operations to satisfy this restriction, so we can ignore it. Also, we can regard every operation as taking 1 fertilizer from position $$$i$$$ and moving it to position $$$i-1$$$ or $$$i+1$$$, with a cost of 1. Let's transform $$$C$$$ to its prefix sum. Our operations become: change a value in the array from $$$x$$$ to $$$x+1$$$ or $$$x-1$$$ with a cost of 1, i.e. change $$$x$$$ to $$$y$$$ with a cost of $$$|x - y|$$$. 
We can only do this operation on all indices except the last (which is the sum of all of $$$C$$$), and the goal is for the array to be non-decreasing, starting from (at least) 0. So first, for every value that is less than 0 or more than the sum of the array, adjust it to fit. Notice that this already solves subtask 1. We can now ignore the last value and the restriction of beginning with 0. The following part is the complex one (making the array sorted). Our process will be to iterate on values from small to large, and in every iteration we have a set of "changeable" values — values which will be at least the current one in the answer sequence — and we keep incrementing them until it's not beneficial. There will also be a group of values which we already fixed; they also form a prefix, since we fix values from small to large. All values which we have not iterated on are considered unknowns (but we know they are larger than what we have now). Also notice that changeable values will always be equal (they become different only when we fix some of them). Suppose the case in which we have one changeable value, and two unknowns to its left. If we decide to fix this value now, then both unknowns to the left must become this value. Observe that since our value is the smallest among those 3, it is at least as good to increment our changeable value until it's equal to the smaller of the other two (in other words, we decide to keep it changeable). On the contrary, if we had 2 changeables and 1 unknown to their left, then it's better to decrease that unknown towards them instead of increasing them; this also makes us fix those 3 positions. Let's formulate this: imagine the array, in which every fixed value (in the prefix) contributes a 0 in its position, every changeable value a +1, and every unknown a −1. If there exists a prefix whose sum is positive, then we should fix all the values in this prefix. 
We can implement this array with a lazy segment tree — when a value changes between $$$-1, 0, +1$$$, we add some value to a suffix. A query is then the maximum value and its position. Fixing values can be done by brute force, since overall we fix $$$n$$$ values. 
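The reduction in B ends at a standard routine: longest non-increasing subsequence in $$$O(n \log n)$$$. A minimal sketch (function name mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Length of the longest non-increasing subsequence in O(n log n).
// tails[k] = largest possible last element of a non-increasing subsequence
// of length k+1; tails itself stays sorted in non-increasing order.
int longestNonIncreasing(const vector<long long>& a) {
    vector<long long> tails;
    for (long long x : a) {
        // first position whose value is strictly smaller than x
        auto it = upper_bound(tails.begin(), tails.end(), x, greater<long long>());
        if (it == tails.end()) tails.push_back(x); // extends the longest one found so far
        else *it = x;                              // improve (raise) an existing tail
    }
    return tails.size();
}
```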
0
In F, it says "...using path compression in DSU here is meaningless...". Is it meaningless? I think it can actually make the solution slower, since path compression is amortized, and if the structure allows undoing operations, then the amortization can fail (it might give TLE on counter-tests). 
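For context, a DSU that supports undo typically keeps only union by size and drops path compression, so every bound is a worst-case (non-amortized) $$$O(\log n)$$$ and each union is undoable exactly. A sketch of such a structure (my own, not from the editorial):

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU with rollback: union by size only, no path compression, so each
// change is recorded and can be undone; find is worst-case O(log n).
struct RollbackDSU {
    vector<int> parent, sz;
    vector<pair<int,int>> history; // (new root, child attached to it), or (-1,-1) for a no-op
    RollbackDSU(int n) : parent(n), sz(n, 1) { iota(parent.begin(), parent.end(), 0); }
    int find(int v) { while (parent[v] != v) v = parent[v]; return v; }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) { history.push_back({-1, -1}); return false; }
        if (sz[a] < sz[b]) swap(a, b);
        parent[b] = a; sz[a] += sz[b];
        history.push_back({a, b});
        return true;
    }
    void undo() { // revert the most recent unite call
        auto [a, b] = history.back(); history.pop_back();
        if (a == -1) return; // that unite was a no-op
        sz[a] -= sz[b]; parent[b] = b;
    }
};
```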
0
Maybe one would only be able to solve A and B in some specific CF round, but get 75% of the points on C and 40% of the points on D (if those were partial-score tasks). Then there's clearly a difference. 
+1
I think there's also a big impact coming from:
These two points are possible to cover, by going to more onsites when possible. So, if you have an opportunity to participate in some onsite round, you should grab it with both hands. 
+3
When you do dfs, you can consider vertices of 3 types: unvisited, visited but in the stack, or visited and finished (we called dfs on all their neighbors and returned already). If your dfs detects a cycle, then all the cycle vertices are of the type that's in the stack (they must have been visited, but they cannot be finished, because you will find a cycle before finishing any vertex on it). So one easy way to find the cycle is to maintain the stack along with the dfs (a vector or stack, either global or passed by reference). When you find a visited neighbor that is still in the stack, iterate from the back of the stack until you reach that node, and put all the vertices you iterated on in some vector, representing the cycle. 
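The described stack approach, as a sketch (globals and names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find one cycle in a directed graph via DFS with three colors:
// 0 = unvisited, 1 = on the stack, 2 = finished.
vector<vector<int>> adj;
vector<int> color, stk;
vector<int> cycle;               // vertices of the found cycle, in order

bool dfs(int v) {
    color[v] = 1; stk.push_back(v);
    for (int u : adj[v]) {
        if (color[u] == 1) {
            // u is still on the stack: everything from u up to the top is the cycle
            for (int i = stk.size() - 1; ; i--) {
                cycle.push_back(stk[i]);
                if (stk[i] == u) break;
            }
            reverse(cycle.begin(), cycle.end());
            return true;
        }
        if (color[u] == 0 && dfs(u)) return true;
    }
    color[v] = 2; stk.pop_back(); // finished; remove from stack
    return false;
}
```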
+3
You can break down a triangle query into a constant number of "ray" queries: how many points are under a ray. Specifically, let's break it into "prefix" rays — those which extend infinitely to the left. If we sort the points according to the x-axis, then the prefix ray queries become queries to count points under a line inside some prefix. Let's break the array into $$$\frac{N}{K}$$$ blocks of size $$$K$$$. For the tail of the prefix query (the cells which are not in a complete block), count naively in $$$\mathcal{O}(K)$$$. As for the blocks, you can do some precomputation described nicely here. You end up with $$$\mathcal{O}(\sqrt{N} \cdot \text{polylog } N)$$$ per query for an optimal choice of $$$K$$$ (the polylog depends on how you implement whichever approach is described in the linked post). This approach is probably not noticeably faster for acceptable values of $$$N, Q$$$, but at least it's sublinear. Also, this works online, but if you allow yourself offline processing then you can cut the complexities, as partly shown in the linked post, and memory can be cut down a lot here as well (build the structure per block on its own, and move to the next one using the same memory, for instance). 
+62
Done :) (you can view it in the standings.) The first time I solved the problem I wrote a lot of case analysis to do the best thing I found on paper. This time I tried to write something more general, which was able to solve with 1 operation up to 44, with 2 operations up to 101, and with 4 operations up to 257. For some reason it bugs out for 3 operations (the bound should be around 180–190, I think), but I tried submitting without it (solving with 1, 2 or 4). The jury's answers are weak enough to let this pass with 99 points, and then I had to improve just n=112, so I did the stupid thing that solves in 8 + 36T for some limit T. tl;dr: My code is not general, it just solves whatever is good enough for the jury. I couldn't bother fixing it, as it was unnecessary (my general solution improves the jury's answers for tests 12 and 13). 
+45
I don't think it's possible in 5 hours, but overall it's possible... I was given the task about 1–2 years ago, and gave it 2 weeks, after which I got 100. I also lost the source code (this is the only thing I have left), but if you want to challenge me to rewrite it (maybe given a judge?) then sure. I barely remember the idea, but it doesn't give away the solution;
Edit: Well, apparently I have some source code that is incomplete, and some of the output files for some inputs. This is one of them, solving n=255 using up to 4 consecutive S operations at any time. I think this expresses all that's necessary to solve for full score. Specifically, the bounds I kind of remember are: 8, 44, (I think 97 or 101), (I think 137 or 173), 257. 
On
cdkrot →
Codeforces Round #545 (Div.1, Div.2, based on Moscow Open Olympiad in Informatics), 5 years ago
0
The title was #544, and I pointed it out so they would fix it. 
On
cdkrot →
Codeforces Round #545 (Div.1, Div.2, based on Moscow Open Olympiad in Informatics), 5 years ago
+24
You mean round #545? Edit: got fixed. 
0
clearly desperate... anyway, when is the expected time for the results? 
0
Will submissions that weren't judged before the end be judged and considered in the results? 
+8
I guess everybody is doing their part... sir, $$$(10^2)^{10^5}$$$ is not equal to $$$10^{10}$$$. 
+26
I find all points valid. 
+12
Binary search assumes the function you search on is monotonic. Ternary search assumes the derivative of the function you search on is monotonic. The reason what you said isn't true: f is increasing and g is decreasing, therefore f' is always positive and g' is always negative (using $$$'$$$ for the derivative). Let h = f + g; then h' = f' + g'. We are hoping to show that h' is monotonic for ternary search, but the information about f', g' is not enough (we can make h' jump between positive and negative repeatedly, for instance). As for your problem: we have the function and we look only at x_{1} ≤ t ≤ x_{2}, since anything else is suboptimal. (Also we can assume that x_{1} ≤ x_{2}, otherwise we can swap and the answer won't change.) We have more information than just monotonicity for each term. If you want to prove that you can use binary or ternary search, you should probably analyze this function. 
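A concrete counterexample of my own, under the same setup: take $$$f(x) = 2x + \sin x$$$ (increasing, since $$$f' = 2 + \cos x > 0$$$) and $$$g(x) = -2x + \sin x$$$ (decreasing, since $$$g' = -2 + \cos x < 0$$$); then $$$h = f + g = 2 \sin x$$$ oscillates, so it has many local maxima and ternary search on it is not valid:

```cpp
#include <bits/stdc++.h>
using namespace std;

// f is strictly increasing, g is strictly decreasing, yet h = f + g = 2 sin x
// has several local maxima on a wide interval, so h is not unimodal.
double f(double x) { return 2 * x + sin(x); }
double g(double x) { return -2 * x + sin(x); }
double h(double x) { return f(x) + g(x); } // == 2 * sin(x)

// Count discrete local maxima of h on a uniform grid over [lo, hi].
int countLocalMaxima(double lo, double hi, int steps) {
    int cnt = 0;
    double d = (hi - lo) / steps;
    for (int i = 1; i < steps; i++) {
        double x = lo + i * d;
        if (h(x) > h(x - d) && h(x) > h(x + d)) cnt++;
    }
    return cnt;
}
```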
+14
I'd guess the authors had no problem to place as 1C (possibly because the problem that was meant to be there turned out to be a copy of some existing problem), and so they were desperate enough to insert this one instead. 
+8
My solution to E: Transform the expression to a tree; each internal node is an operation with 2 children, and a leaf has a value. Every update changes something in some node, and you need to update along the path up, and finally output the value of the root. My solution consists of a daunting HLD with about 300 lines (although I did not try to conserve lines, I tried to have the least amount of bugs possible). I ended up with only 3 bugs and managed to fix them in time (and then also had to do some constant optimization). I will leave the remaining details for you to figure out. If you manage to find some sqrt decomposition solution, I highly suggest using it instead; I didn't find one. 
+1
In D1, from some state you can always add verticals on any column as much as you'd like, since you can anyway return to the state before with all columns increased by some constant C (which makes no difference). Therefore, you can always stay at a state where the difference between the maximal and minimal column is less than 2, which implies you can take all columns modulo 2. 
0
Another issue is that it relies on the modulo being prime (the inverse is found using Fermat's little theorem). 
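For reference, inversion via Fermat's little theorem ($$$a^{p-2} \bmod p$$$) only works for prime $$$p$$$; with a general modulus one needs the extended Euclidean algorithm, and even then only when the value is coprime to the modulus. A sketch (names mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fast exponentiation mod m.
long long power(long long a, long long e, long long m) {
    long long r = 1; a %= m;
    for (; e > 0; e >>= 1, a = a * a % m)
        if (e & 1) r = r * a % m;
    return r;
}

// Inverse via Fermat's little theorem: valid only when p is prime.
long long invPrime(long long a, long long p) { return power(a, p - 2, p); }

// Inverse for any modulus m with gcd(a, m) == 1, via iterative extended Euclid.
long long invGeneral(long long a, long long m) {
    long long g = m, x = 0, x1 = 1, a1 = a % m;
    while (a1) {
        long long q = g / a1;
        tie(g, a1) = make_pair(a1, g - q * a1);
        tie(x, x1) = make_pair(x1, x - q * x1);
    }
    return (x % m + m) % m; // normalize into [0, m)
}
```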
0
I started using this once I heard '\n' is quicker than endl. I was used to writing endl, but I've written this template enough times that by now I don't mind writing either of them. Still, I'm used to the template, so I keep using endl, because I keep forgetting to remove the #define — and if it's there, I might as well use it. It's all pointless. 
0
I also used a map for intervals for each row, but I cleared it out for each new row, so overall its size is up to K at any time, so I'm not sure where the memory issue is. I don't see what's false in this approach, but I can't tell, since I ended up with WA for every test case... 
+167
Hello, Thank you for the invaluable information. At the moment I am reporting from the depths of South America, as I proceed with my search for Rajat De with my fellow team of highly expert professors on the topic of finding missing CodeForces members (it consists of only about 14 people, and we picked the best 14 out of approximately 34 million candidates, so I doubt you have any chance to actually join). Within the 14 we have some psychologists who might have a good guess as to where he could go after such extremely unusual behaviour (who in their right mind would stop solving problems!). You are probably asking yourself "but why in South America?"... well if you are, then it seems like you are not updated. Searches over the world in specific locations have already been conducted, with no good results. These locations include India, Russia, USA, Canada and a couple more. If we are already on this topic, then I'll also share that recently there was also a search deep in the Pacific Ocean (and I mean underwater... the surface of it was already searched beforehand). I mean, if you think about it, it would make sense — if I wanted to disappear and possibly practice without anybody noticing, the ocean would probably be my go-to place. Anyway, they planned to search down to the depth of 600 meters, and interestingly at the depth of 583 meters, they actually found a puppet of Rajat De! I can say, with no doubt, that Rajat De knew the depths of the Pacific Ocean would be searched, and he is planting traps. Anyway, some parts of South America have not been searched yet (mostly underground), which is why we are here right now... but we currently have no results. Finally, why is your information important, you may ask. Well, it's helpful because the very specific problem he solved is of type "DP". Only experts will even consider the possibility that this is a clue! 
For example, it could be possible that he is actually hiding in some Domino's Pizza, or maybe that he is drinking Diet Pepsi at the moment. This definitely diminishes the possibilities. I will update as soon as new discoveries are found... hopefully we find clues before the planned trip to Mars. 
0
Intended solution for F? I can only bound my solution with , where S is the sum of lengths of all strings (that also may be slightly wrong, to the better or worse), but the constant is incredibly low and I can't come up with a worstcase. It passed with 233ms or so. 
0
You don't need to detect when you should use 2 or more hashes. One could say you should go by intuition, but I suggest always using multiple hashes, depending on how memory- and time-consuming it is to build that many. Say, 2 or 3 is the usual amount I use. 
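Carrying multiple hashes just means every substring is identified by a tuple of values instead of one. A minimal double-hash sketch (the bases and mods here are arbitrary choices of mine, not a recommendation from the comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Two independent polynomial hashes over all prefixes of s; a substring is
// identified by a pair of values, making collisions far less likely than
// with a single hash.
struct DoubleHash {
    static constexpr long long M1 = 1000000007, M2 = 998244353;
    static constexpr long long B1 = 131, B2 = 137;
    vector<long long> h1, h2, p1, p2;
    DoubleHash(const string& s) {
        int n = s.size();
        h1.assign(n + 1, 0); h2 = h1;
        p1.assign(n + 1, 1); p2 = p1;
        for (int i = 0; i < n; i++) {
            h1[i + 1] = (h1[i] * B1 + s[i]) % M1;
            h2[i + 1] = (h2[i] * B2 + s[i]) % M2;
            p1[i + 1] = p1[i] * B1 % M1;
            p2[i + 1] = p2[i] * B2 % M2;
        }
    }
    // hash of the substring s[l..r) as a pair of values
    pair<long long, long long> get(int l, int r) const {
        long long a = ((h1[r] - h1[l] * p1[r - l]) % M1 + M1) % M1;
        long long b = ((h2[r] - h2[l] * p2[r - l]) % M2 + M2) % M2;
        return {a, b};
    }
};
```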
+16
Brief solutions/hints for the problems (except A, because there's no reason to):

B For both rows and columns, you get 2 ends X, Y such that before X and after Y there must be white, X and Y must be black, and between them it's unknown. For each row find these endpoints, and then iterate over all columns and check whether some cell that needs to be black according to a column has to be white according to a row (this can be checked in constant time). After all these checks, swap between the rows and columns and check again.

C Hint: solve separately for each bit (so each node has a value 0 or 1). You can solve it using a dfs run on the tree.

D If K is large enough, then at some point you will jump between 2 adjacent cells. K should be at least (approximately) . If K is smaller than twice this limit, you solve by doing dp in . Otherwise you solve up to the limit, and find the best pair of points you should jump between (it's probably one with the maximal sum, but there's no need to solve further).

E Brief explanation/hint of : We first reverse the order of rectangles (and also do coordinate compression). A rectangle is good now if none of the previous rectangles took any space of the current rectangle. We split the rectangles into blocks of size K; within each block we check all pairs of rectangles for intersections in (total). For each block we also compare it with all previous blocks in (per block) using sweepline and a segtree: past rectangles represent updates, current rectangles represent queries. The (lazy) segtree I made supports setting each value V in a range to max(V, C) for some constant C, and range query for max. K should be . Edit: Looking at the results, I think my solution to D takes a little too much time. Can anybody else explain their solution? Also for E, I'm not sure if my solution can pass in time, I didn't submit it, will check soon. Edit2: This solution for E receives TLE on the last 2 test cases when K is 1000, but it passes with 1500.
Anyway, did anybody have a better solution? 
+3
No need to assume it is undirected; the approach works either way. 
+22
It seems like you want to find the biconnected components of the graph. The answer for some vertex is then the number of such components it is in. 
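Counting, for each vertex, the number of biconnected components containing it can be done with Tarjan's edge-stack algorithm; a sketch for a connected simple graph (all names mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each vertex, count how many biconnected components contain it,
// via DFS with an edge stack (assumes a connected simple graph).
struct BCC {
    int n, timer = 0;
    vector<vector<int>> adj;
    vector<int> disc, low, cnt;      // cnt[v] = number of BCCs containing v
    vector<pair<int,int>> estk;      // stack of tree/back edges
    BCC(int n) : n(n), adj(n), disc(n, -1), low(n), cnt(n, 0) {}
    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }
    void popComponent(pair<int,int> last) {
        set<int> verts;              // distinct vertices of this component
        pair<int,int> e;
        do {
            e = estk.back(); estk.pop_back();
            verts.insert(e.first); verts.insert(e.second);
        } while (e != last);
        for (int v : verts) cnt[v]++;
    }
    void dfs(int v, int parent) {
        disc[v] = low[v] = timer++;
        for (int u : adj[v]) {
            if (u == parent) continue;
            if (disc[u] == -1) {
                estk.push_back({v, u});
                dfs(u, v);
                low[v] = min(low[v], low[u]);
                if (low[u] >= disc[v]) popComponent({v, u}); // v separates this component
            } else if (disc[u] < disc[v]) {
                estk.push_back({v, u});                      // back edge
                low[v] = min(low[v], disc[u]);
            }
        }
    }
    vector<int> run() { dfs(0, -1); return cnt; }
};
```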