On Bazoka13 → Codeforces Round #947 (Div. 1 + Div. 2), 4 days ago +2 Oh right. So it's not too bad.
 On Bazoka13 → Codeforces Round #947 (Div. 1 + Div. 2), 4 days ago +2 I ended up with WA7 so I may be wrong, but I believe most of it is correct (and I probably have a bug): If neither string contains asterisks, it is easy. If both contain asterisks, then check the prefixes and suffixes for equality, up to the first asterisks; if the prefixes and suffixes were equal, the answer should be "Yes". Assume now that $s$ has no asterisks, and $t$ has at least one. Remove the equal prefixes and suffixes, so that $t$ begins and ends with an asterisk. If all of $t$ is asterisks, the answer is "Yes". Now, you need to take every substring of $t$ between a pair of asterisks, and greedily match it to the first substring of $s$ that matches it. It's a classical algorithm to determine whether a string can be a substring of another, when both may contain '-' wildcard symbols, using FFT in $O(n \log n)$ where $n$ is the larger of the two lengths. Now, to find the first position at which a substring of $t$ of length $m$ matches in $s$, starting from a given index $i$, you can do an exponential search: check whether it appears in the substring of $s$ starting at $i$ of length $m$; if it isn't found, increase to length $2m$, and so on until it is found. The total complexity should be $O(n \log^2 n)$.
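For concreteness, here's a rough sketch of the wildcard-matching step (not contest code; it assumes lowercase letters with '-' as the wildcard, which I encode as $0$). A match at shift $j$ is detected by $\sum_i p_i t_{j+i} (p_i - t_{j+i})^2 = 0$, which expands into three convolutions:

```python
import numpy as np

def fft_conv(a, b):
    # integer-valued convolution via real FFT, rounded back to ints
    n = 1
    while n < len(a) + len(b) - 1:
        n <<= 1
    prod = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
    return np.rint(prod).astype(np.int64)

def wildcard_matches(text, pat, wild='-'):
    # encode: wildcard -> 0, letters 'a'..'z' -> 1..26
    enc = lambda s: np.array([0 if c == wild else ord(c) - ord('a') + 1 for c in s],
                             dtype=np.float64)
    t, p = enc(text), enc(pat)
    pr = p[::-1]  # reversed pattern turns convolution into correlation
    # sum_i p_i * t_{j+i} * (p_i - t_{j+i})^2 == 0  <=>  match at shift j
    s = fft_conv(t, pr**3) - 2 * fft_conv(t**2, pr**2) + fft_conv(t**3, pr)
    m = len(pat)
    return [j for j in range(len(text) - m + 1) if s[m - 1 + j] == 0]
```

Each term vanishes exactly when, for every pattern position, either side is a wildcard or the letters agree, so both the pattern and the text may contain wildcards.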
 On Bazoka13 → Codeforces Round #947 (Div. 1 + Div. 2), 4 days ago +16 Suppose $v[0] = \lbrace 0 \rbrace$, so that it is of length $2^n$. Solve recursively, on a given $v$ of length $2^m$: If $m = 0$, then it is solvable only if $0 \in v[0]$. If $m - 1 \in S$, then you can reduce to a new vector $u$ such that $u[i] = v[i] \cap (v[i + 2^{m-1}] - 1)$. Otherwise, reduce to $u[i] = v[i] \cap v[i + 2^{m-1}]$. Then solve recursively for both $u$'s (both of length $2^{m-1}$), while carrying your decision. While this may seem naive, the time complexity is $O(n 2^n)$.
 0 Create a temporary account to view the problems, and always register with both the temporary account and the real one. View the problemset from the temporary account, and if you decide to participate, enter the contest from the real one.
 +49 It doesn't prevent someone from viewing the problems from another user (or even without being logged in). If you try to solve this by only allowing someone to view the problems if they registered, then someone can just register with two accounts and let one of them take the hit (always). I thought of multiple ideas, and the best of them seems to be allowing registration only 1 hour before the round begins, where whoever is registered is counted (and people can unregister). This way: People commit to participating with their account before seeing the problems. People won't register and forget they did, since the registration window is only 1 hour. If a person wants to join the round late, they only need to be online at some moment during the 1-hour window. If a person wants to start the round on time, it is expected that they will be online before the round begins anyway, in order to register.
 0 I'm a bit confused about why Theorems 1.2 and 1.3 imply that solving the incorrect variant in subcubic time leads to solving APSP in subcubic time. I see why it is correct the other way around.
 On tril0713 → A guess about strings: is it correct ?, 7 weeks ago +8 What do you mean by consecutive intervals? Their lengths can only be even. If you mean consecutive when we only look at even values, then this fails if the string is "abcd" repeated. Also, if both $l, l+2$ are good lengths for a fixed $r$, then so is every even length from $2$ to $l+2$. Which means the consecutive intervals are only $2[1, k]$ and other singletons, so idk why it would be logarithmic. Did I get the question correctly?
 On gkO_o → A Problem About DAG, 2 months ago +34 Order the nodes by some topological order; suppose WLOG that the order is $1 \rightarrow 2 \rightarrow \ldots \rightarrow n$ (any edge $i \rightarrow j$ satisfies $i < j$). Compute $r_i$ as the smallest vertex reachable from vertex $i$, which is simply its smallest out-neighbor (if none exists, $r_i = \infty$). Symmetrically compute $l_i$ as the largest vertex that can reach $i$, which is also a direct neighbor (if none exists, $l_i = -\infty$). Lemma: A vertex $v$ is a "super delivery point" iff $r_u \leq v$ for all $u < v$, and $v \leq l_u$ for all $v < u$. Proof: If the condition isn't met, WLOG there exists $u < v$ such that $r_u > v$; then by definition $u$ cannot reach $v$, and because of the topological ordering, $v$ also cannot reach $u$. If the condition is met, then for every $u < v$ you will reach $v$ if you keep walking on the path $u \rightarrow r_u \rightarrow r_{r_u} \rightarrow \ldots \rightarrow v$, and symmetrically if $v < u$ then the reverse of $u \rightarrow l_u \rightarrow l_{l_u} \rightarrow \ldots \rightarrow v$ is a path. $\blacksquare$ Therefore, compute $l_i, r_i$ in $O(n)$, then compute the prefix maximum of $r_i$ and the suffix minimum of $l_i$; then each vertex can be checked in $O(1)$, so overall $O(n)$.
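The whole procedure fits in a few lines; here's a Python sketch (the function name is made up for illustration, and the input is assumed to already be topologically ordered as above):

```python
def super_delivery_points(n, edges):
    # vertices 1..n are assumed topologically ordered: every edge (u, v) has u < v
    INF = float('inf')
    r = [INF] * (n + 1)    # r[i]: smallest direct out-neighbor of i
    l = [-INF] * (n + 1)   # l[i]: largest direct in-neighbor of i
    for u, v in edges:
        r[u] = min(r[u], v)
        l[v] = max(l[v], u)
    # suf[v] = min(l[v+1..n]); pref = max(r[1..v-1]), maintained on the fly
    suf = [INF] * (n + 2)
    for v in range(n - 1, 0, -1):
        suf[v] = min(suf[v + 1], l[v + 1])
    res = []
    pref = -INF
    for v in range(1, n + 1):
        if pref <= v <= suf[v]:   # the lemma's condition, checked in O(1)
            res.append(v)
        pref = max(pref, r[v])
    return res
```

On a chain $1 \rightarrow 2 \rightarrow 3$ every vertex qualifies; with only the edge $1 \rightarrow 3$, none does.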
 On Aspergillus → Need help with a problem, 4 months ago 0 A counterexample: $[3,4,5,8]$, with $K = 10$.But I want to emphasize that this is an NP-hard problem, so an efficient solution cannot be this simple. One can solve the partition problem with a reduction to this, by choosing $K = \frac{sum}{2}$ (as I did above).
 +53 There's been no official statement regarding the round, for 5 days now. Even if Mike does not think it is a serious issue, the community obviously does. Maybe Mike is having a hard time lately and is too busy to deal with us. At the very least we must have a clarification on what type of rounds should become unrated going forward, because there seems to be a huge misunderstanding. I think that if a round has a problem with a semi-googlable solution, another problem whose model solution is incorrect, and another problem (F) that appears to be almost stolen from another source (according to what some people posted), then it should become unrated. For me, this silence means that such rounds are still valid as rated, which worries me very deeply about the quality of Codeforces rounds going forward, and about how the rating is less trustworthy. It is still just my opinion; maybe you think that such rounds are completely valid and of proper quality. You make the rules, after all... Just make the rules clear, please. Maybe going forward I'll begin every round by copy-pasting the complete problemset into ChatGPT, asking it whether any of the problems looks familiar, and having it provide me some sources, maybe even scripting it ahead of time. Then I'll read the sources and implement what I read. Sounds like a shitty experience. I value my enjoyment of competitive programming, yet I value my rating as well; that seems to be an issue. MikeMirzayanov
 On marzipan → Good Bye 2023, 5 months ago 0 I used the fact that 31, 13 and 14 work for $n = 3$ and extended it with zeros, explained by the following example for $n = 7$:$1300, 1030, 1003, 3100, 3010, 3001, 1400$ (all squared)
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 6 months ago 0 Yes, check out Monogon's extension here
 +33 The two observations are: Take some minimal $a_i$ (across array $a$) and minimal $b_j$ (across array $b$). Then there exists an optimal solution that passes through the cell $(i, j)$. This can be proven by taking any other path and altering it to pass through $(i, j)$. If you choose some number $r$, and alter the costs so that $ir$ is added to $a_i$ and $jr$ to $b_j$, then the cost of all paths changes by a value that is invariant to the path (depending on 0-indexing or 1-indexing, but by induction you can show that any path increases by about $rnm$). When you find the first observation, it may seem like it's sufficient to solve the problem: repeatedly find a minimum intersection, and split into subproblems (from start to intersection, and from intersection to end). The only issue is when the minimum intersection is the start or the end, which is where the second observation helps: At first, set $r = \infty$ as above. Now the minimum intersection is $(1, 1)$. Now imagine slowly decreasing $r$ until the minimum intersection isn't unique. This will happen when some $a_i + ir = a_1 + r$ (or vice versa, some $b_j + jr = b_1 + r$), and when this happens, you know that there's an optimal solution that goes from $(1, 1)$ to $(1, i)$ (or vice versa, to some $(j, 1)$). So the idea is that if you keep decreasing $r$, the optimal path remains optimal while you build it edge by edge (either advancing horizontally or vertically, depending on the next event). The final observation is that, if you map the set of points $(i, a_i + ir)$ on the plane, then its lower convex hull consists of the same set of points regardless of $r$. So instead of "decreasing $r$ slowly", you can compute the lower hull for both $a$ and $b$, and from this you can simulate decreasing $r$ (which means advancing with two pointers over the hulls, depending on the slopes of the hull edges).
 On okwedook → [Problemsetting] How to write editorials for a contest, 16 months ago +34 I'd like to use this opportunity to mention another thing that, in my opinion, is important in an editorial: I find it much better when the solution explains the steps the author took to solve the problem, rather than presenting sudden observations whose relation to the problem only becomes clear later. Many authors like to provide the "shortest path" to the solution, resulting in a few lines of a solution that generally require me to read them at least 3 times to understand (some university professors also like to provide such unintuitive proofs, without the thought process behind them). I believe that if a person fails to solve a problem, it is best to read the editorial only after they have spent a sufficient amount of time being stuck with their ideas. In a way, I like to think about problem-solving as a tree of approaches: when someone is stuck with all of their approaches, they require a large amount of work to extend any of the nodes into more ideas. Thus I think the best editorial is one where you can comprehend the approaches the author took, and can compare the text you read to your tree of approaches, until at some point the editorial shows you how to extend one of your ideas into a complete solution, increasing your capability for the next problem. Compare that to "shortest path" solutions: I think they can come up with ideas that don't relate to anything in your tree of approaches, and you (or me, personally) learn less about how to generalize the solution and more about a single trick (that you may forget). It may not be perfectly possible with every problem, but I think this is what authors should aim for.
 On FedeNQ → Teams going to ICPC WF 2021 (Dhaka 2022) — WIP List, 19 months ago 0 Idk if this relates to your question; we qualified but neither participated nor are on the list.There's a multitude of reasons, essentially due to politics (our funding was cancelled, we didn't get visas in time, etc).
 +3 You can conclude and prove this lemma with max-flow min-cut: if you try to find a maximum matching between a chosen multiset of sizes and the frequencies of the elements, then try to find the condition by which all cuts are of size at least $n$. It's pretty lengthy (I can elaborate if you want), but you can arrive at this condition without magically guessing it (the magic here comes from the magic of max-flow min-cut).
 On ICPCNews → ICPC 2022 Online Challenge powered by HUAWEI, 20 months ago 0 Your comment is more appropriate as feedback to the people who prepared the competition, telling them they should have prepared it differently. And the standings are nothing like "nearly random": the relative positioning did not change too much for most people. In fact it's similar to a usual CF round, with the exception of the duration. So in my opinion you don't present a proper complaint, as I mentioned in my other replies. Edit: I didn't see that you were talking specifically about GP1 (I thought you meant P1). In that case you're right, but idk how the authors could have predicted this (unless they really put unrealistic testcases, but I can't compare between them). Still, the MLE should be more predictable than simply "non-correlative testcases".
 On ICPCNews → ICPC 2022 Online Challenge powered by HUAWEI, 20 months ago 0 We should differentiate between what is unfair, and what the contest should have been. I do agree that the contest would have been more valuable to Huawei if there were no secret cases, and furthermore it would have been more realistic to give us the entirety of the testcases and make the problem "output only", without a small time limit (yet this poses the problem of people having different computers). Your case is extremely unfortunate and I'm sorry for that. But in my opinion, once all the contest rules have been set as they are (even if we disagree with them), no rules were changed, and by participating you are aware of all the possible outcomes, so you can't call this unfair.
 On ICPCNews → ICPC 2022 Online Challenge powered by HUAWEI, 20 months ago +32 I'm sorry for your decrease of a single position, but I don't think it's unfair. If anything, I think it was predictable that the final positioning was bound to have some deviation from the standings throughout the contest. In the "Scoring" section you can see it's stated that there are additional secret tests (and furthermore, that the pretests are not counted toward the final score). I don't want you to feel down, but it should be clear that overfitting solutions to the pretests (whether overfitting on time, memory or specific instances that appear in the pretests) is a bad idea. It definitely relied partially on luck; the point was to approximate your average position throughout the contest. 600k vs 650k is unfortunate, but if you put yourself tight against the 5-second time limit on pretests, you can't expect to pass everything for sure. I think it's an exaggeration to call this a "competition for luck" or a "huge mistake to participate", and you should be grateful for your 2nd position and considerable prize.
 On ICPCNews → ICPC 2022 Online Challenge powered by HUAWEI, 20 months ago 0 Well if I recall correctly, it's sufficient for 200k+ on both subproblems. Of course you need to use other approaches to nail more difficult cases (on GP2).
 On ICPCNews → ICPC 2022 Online Challenge powered by HUAWEI, 20 months ago 0 If you fix some upper bound $B$ on the component size you allow, and fix some spanning tree, you can try to split the tree into the minimum number of connected components of size up to $B$. This dp problem is solvable in $O(n)$, so you can try multiple values of $B$ and take the best found. For instance I tried iterating on bounds starting from 1 up to $n$, each time multiplying by 1.7. There isn't really any point in trying small bounds, but it barely hurts (the actual useful attempts are those around $\sqrt{N}$). You can independently try this on multiple spanning trees. This works well on GP1 since the weights don't matter, and on GP2 it is decent to try this on the MST (as a heuristic).
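A possible sketch of the splitting step in Python. Note this is a greedy heuristic, not necessarily the exact dp described above: whenever a subtree's component would exceed $B$, it cuts off the largest child residuals first (the function name and interface are made up for illustration):

```python
from collections import defaultdict

def min_components(n, edges, B):
    # split a tree on nodes 0..n-1 into few connected components of size <= B;
    # greedy heuristic: at each node, cut off the largest child residuals on overflow
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:                      # iterative DFS to get a processing order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    cuts = 0
    residual = [0] * n                # size of the component still attached upward
    for u in reversed(order):         # children before parents
        kids = sorted(residual[w] for w in adj[u] if parent[w] == u)
        size = 1 + sum(kids)
        while size > B:               # overflow: cut the edge to the largest child
            size -= kids.pop()
            cuts += 1
        residual[u] = size
    return cuts + 1
```

On a path of 4 nodes with $B = 2$ this produces 2 components; on a star with 4 leaves and $B = 2$ it produces 4, both of which are optimal.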
 On Fly_37 → Tricky lemma, 2 years ago +20 Right, it might make it asymptotically less than $k^2 \log k$, my bad. Not sure if this invalidates the construction but I feel like it does.
 On Fly_37 → Tricky lemma, 2 years ago +55 Here's a construction that does $\omega(n \log n)$ for infinitely many $n$, disproving the lemma: Choose some $k$ and consider the set of elements $i \cdot j$ for $1 \leq i, j \leq k$, sorted, after removing duplicates and with a $0$ in the beginning. This sequence has cost $\Omega(k^2 \log k)$: any starting element $ij$ has a sequence of length at least $\frac{k}{i}$, which is enough to show this bound. However, after removing duplicates we have $o(k^2)$ elements in our sequence, as found here: https://mathoverflow.net/questions/108912/number-of-elements-in-the-set-1-cdots-n-cdot-1-cdots-n/108939#108939 Choosing $k = \sqrt{n}$ gives the contradiction. We still don't know the exact bound though.
 On YouKn0wWho → (The Ultimate) Code Library, 3 years ago 0 I only know persistent BST from that list, but I guess you're right — there is something I know that's not in the library.I guess we can add on that some faster maxflow algorithms since the fastest there is $O(EV^2)$ dinic.
 On YouKn0wWho → (The Ultimate) Code Library, 3 years ago +64 Incredible library. After a quick scan I still can't find anything I know that could be added. What I think you can add is whether a data structure / algorithm assumes 0-indexing or 1-indexing, and if you work with ranges, whether you represent them with the right end inclusive or exclusive (for example, segment tree). To generalize: either above a struct or above a function, I think it would be nice if you could detail how it takes input and provides output in a short comment. And just to make sure: do you permit other people to use your library? With credit, of course. EDIT: I can't seem to find the Aliens trick. Also, I found even my little contribution to the library (the queue undo trick), so I feel very honored. However, it does require a small fix; in my blogpost I explain that we must maintain the bottom pointer of the stack so that we never pass it, so at the current state of the code, it may run in $O(n \sqrt n)$ operations. Just a few lines of code to add (whenever you feel like it).
 On ipaljak → CEOI 2021 Mirror, 3 years ago +34 I don't think there's any benefit in providing such tough constraints, the crux of the problem is essentially to solve it in polynomial time.If anything, small bounds on an interactive problem probably help the system testing (and, we can upperbound the number of interactions by $nm$, so even very bad solutions can't do too many turns).
 On _Bishop_ → Some complex expectations, 3 years ago +36 For simplicity, I'll denote their (constant) sum as $s$ and then the recurrence you mentioned as: $f(0) = f(s) = 0,\ f(k) = 1 + \frac{k}{s} f(k-1) + \frac{s-k}{s} f(k+1)$. In terms of computation, as you've mentioned, the immediate approach is dp, but since the states form cycles, it doesn't quite work. However, the equalities are linear in the values $f(k)$, so the next idea should be to solve these linear equations, in time $O(s^3)$. Now, I can't quite find a closed form, but I can give an $O(s)$ algorithm to compute $f$: first we can reorder the equality in the following way: $\frac{k}{s} f(k) + \frac{s-k}{s} f(k) = 1 + \frac{k}{s} f(k-1) + \frac{s-k}{s} f(k+1)$, so $f(k+1) - f(k) = \frac{k}{s-k} (f(k) - f(k-1)) - \frac{s}{s-k}$. This is a recurrence on the adjacent differences, with conditions $f(0) = f(s) = 0$. If we denote $f(1) = c$, all the values of $f$ are then a function of $c$, where by the condition $f(s) = 0$ the answer is unique. Specifically, if my calculations are right: $f(k) = \sum\limits_{m=0}^{k-1} \left( \dfrac{c}{\binom{s-1}{m}} - \sum\limits_{j=0}^{m-1} \dfrac{m!(s-1-m)!}{j!(s-1-j)!} \right)$, and then it must hold that $f(s) = 0$, but this looks disgusting. As for the algorithm, we can define the homogeneous version of the equation above: $g(k+1) - g(k) = \frac{k}{s-k} (g(k) - g(k-1))$, and with the single condition $g(0) = 0$, it's easy to find one solution: choose $g(1) = 1$, and then $g(k+1) - g(k) = \dfrac{1}{\binom{s-1}{k}}$, so $g(k) = \sum\limits_{m=0}^{k-1} \dfrac{1}{\binom{s-1}{m}}$. Observe that for any $f$ which is a solution to the non-homogeneous equation with the condition $f(0) = 0$ (but without $f(s) = 0$), and for any constant $c$, the function $f + cg$ is also a solution. So we can find any single $f$, and then shift it using $g$ so that $f(s) + cg(s) = 0$. To find a single $f$, just choose $f(1) = 0$, then compute the adjacent differences in $O(s)$, and then prefix sum.
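The whole $O(s)$ procedure, as a Python sketch (the function name is just for illustration):

```python
from math import comb

def expected_steps(s):
    # O(s) solution of: f(0) = f(s) = 0,
    # f(k) = 1 + (k/s) f(k-1) + ((s-k)/s) f(k+1)
    # homogeneous solution: g(0) = 0, g(k) = sum_{m<k} 1/C(s-1, m)
    g = [0.0] * (s + 1)
    for k in range(1, s + 1):
        g[k] = g[k - 1] + 1.0 / comb(s - 1, k - 1)
    # particular solution with f(0) = f(1) = 0, via the difference recurrence
    f = [0.0] * (s + 1)
    d = 0.0                           # d = f(k) - f(k-1)
    for k in range(1, s):
        d = k / (s - k) * d - s / (s - k)
        f[k + 1] = f[k] + d
    c = -f[s] / g[s]                  # shift by c*g so that f(s) = 0
    return [f[k] + c * g[k] for k in range(s + 1)]
```

One can check that the returned values satisfy the original recurrence and both boundary conditions.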
 On fried-chicken → Say anything you want about staying up late., 3 years ago +117 damn
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago +10 It's worth mentioning that, because all "updates" are offline, and we know from the start exactly how the updates and undo's are ordered, then there's also a D&C solution with equal complexity (but probably shorter code).Still, I'm happy to hear that.
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago +10 Thank you! And your implementation really is nicer.
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago 0 Indeed, because the proposed algorithm is already provided a data structure that can support normal undo's, so its complexity is already taken into account.
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago 0 In the case you've provided, regardless of my blogpost, DSU with just path compression has a bad time complexity; any user of your data structure can first merge $n-1$ times, undo all of them and execute them in reverse order. Then for $n$ more times, the user can merge and undo, thus achieving $\mathcal{O}(n^2)$ time. So for your proposed DS, it takes up to $\mathcal{O}(n^2)$ time for any sequence of $n$ operations, thus the new DS takes up to $\mathcal{O}(n^2 \log n)$ time for any sequence of $n$ operations.
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago 0 What I was trying to express is that, regardless of whether the original data structure worked in amortized time or not, we still apply to it a sequence of $\mathcal{O}(U \log U)$ operations. Basically, we're an algorithm outside the data structure that just calls a bit more updates than what we were given. So for example, if the original data structure worked in amortized time of $\mathcal{O}(q \log q)$ for any sequence of $q$ operations, then we would make it work in $\mathcal{O}(q \log^2 q)$ for any sequence of $q$ operations. As for your example, if there exists a scenario with a heavy update that takes $\mathcal{O}(n)$ each time it is called, and our algorithm just so happens to call it $\mathcal{O}(n)$ times, then even without this algorithm, the original data structure is not very efficient; any user can call that heavy update/undo $\mathcal{O}(n)$ times. If you're wondering about what we should do in case some updates consume more time than others, I put it in the "Open Questions" section, as I still haven't thought about it. Hope this clarifies things.
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago 0 Thank you!
 On Noam527 → [Tutorial] Supporting Queue-like Undoing on DS, 4 years ago +10 It's been pointed out that I should give this trick a name. Does anybody have a name better than "Queue Undo Trick"?
 On R.A.N.K.A. → Help in Permutation P[i]=P[P[i]], 4 years ago +17 It is correct. In group terms, every operation transforms $P \to P^2$, so any number of operations gives you $P^{2^k}$ for some $k$. This can become the identity iff the order of $P$ is a power of $2$, iff every cycle length is a power of $2$. Then the answer is $\log_2(\text{largest cycle length})$.
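A sketch in Python, assuming a 0-indexed permutation and returning $-1$ when no number of operations works (function name is just for illustration):

```python
def min_ops(p):
    # p is a 0-indexed permutation; one operation replaces P by P∘P
    n = len(p)
    seen = [False] * n
    largest = 1
    for i in range(n):
        if seen[i]:
            continue
        length, j = 0, i
        while not seen[j]:            # walk the cycle containing i
            seen[j] = True
            j = p[j]
            length += 1
        if length & (length - 1):     # cycle length is not a power of two
            return -1
        largest = max(largest, length)
    return largest.bit_length() - 1   # log2 of the largest cycle length
```

For instance, a single 4-cycle needs 2 operations ($P \to P^2 \to P^4 = \text{id}$), while a 3-cycle can never reach the identity.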
 On balsa_knez → NP vs NP-complete vs NP-hard, 4 years ago 0 The decision problem of clique is: given a graph $G$ and integer $k$, does there exist a clique of size $k$? The decision problem of longest path is: given a graph $G$ and integer $k$, does there exist a simple path of length $k$? It is easy to solve maximum clique or longest path using the above decision problems a polynomial number of times, so you can think of these as the same problem. Still, by definition, we only talk here about decision problems. Edit: as for your second question, perhaps we can formulate the maximum clique problem you describe as: given a graph and a clique, is this clique maximum? (Note that this is a decision problem now.) I don't know if we already know a polynomial verifier for this, but I feel like it would imply P = NP. Edit2: actually, the above mentioned decision problem has a polynomial nondeterministic solver that just tries all subsets... so it's NP. What is the definition of maximum clique you have in mind?
 On Noam527 → Problem Likeability Feature, 4 years ago +16 I agree with your first two points, as in, this is a possible scenario — if too many people mark such problems as their favorites. I find it unlikely though, if enough experienced users like you think otherwise, then you would like other problems that suit you. Still, it's very possible that new people will be directed to specific areas (not something I want). Hopefully this can be taken care of with, for example, weighing the votes of experienced users more (a comment above suggested to put more weight on the votes of people who solved more problems, which sounds like a good heuristic).I'm not sure I understand what you're saying in the 3rd point... is it that you believe right now people encounter "the average problem", compared to facing only hand-picked problems (if the feature would be implemented)? If this is what you meant, I personally think that practicing on hand-picked problems is a better situation.As for your 4th point, I don't understand where that's coming from... do you mind explaining?
 On aditya_sheth → How to solve this problem?, 4 years ago +35 This problem really asks you to use Pick's theorem, which states that for a polygon with integer coordinates on the plane, its area equals $i + \frac{b}{2} - 1$, where $i$ is the number of integer points inside the polygon and $b$ is the number of integer points on its border. Compute $p = 1 + \gcd(|A_x - B_x|, |A_y - B_y|)$, the number of integer points on the segment from $A$ to $B$, including endpoints. We want the triangle to have $i = 0, b = p + 1$, which means we want it to have area $\frac{p-1}{2}$. But this also goes the other way: if our triangle has this area, then since $b \geq p + 1$, we must have $i = 0, b = p + 1$. So we solve for its area now instead. We want to satisfy $|A_xB_y - A_yB_x + B_xC_y - B_yC_x + C_xA_y - C_yA_x| = p-1$, where the sign of the left term just depends on the orientation of the triangle. So we can try to solve it for both $p-1$ and $-(p-1)$, without the absolute value. Either way, move the first two (constant) terms to the RHS and reorder the LHS; we get $(A_y - B_y) \cdot C_x + (B_x - A_x) \cdot C_y = D$ for some constant $D$. You can find whether this has 0 or infinitely many solutions; this is a standard equation ("extended gcd").
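The last step, solving $ax + by = d$ over the integers, can be sketched with the extended gcd (generic helpers, not tied to the original problem's input format):

```python
def extgcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extgcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear(a, b, d):
    # one integer solution (x, y) of a*x + b*y == d, or None if none exists
    g, x, y = extgcd(a, b)
    if d % g:
        return None
    return x * (d // g), y * (d // g)
```

If one solution exists, the full solution set is $(x + t \cdot b/g,\ y - t \cdot a/g)$ for integer $t$, which is why the equation has either zero or infinitely many solutions.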
 On SuperJ6 → Subarray Sorting Queries DS, 4 years ago +11 Can you be more accurate about what you want or give an example? To me, "sorting subarrays" is pretty vague (assuming you don't want to literally list all subarrays sorted per query)
 On elements → The smallest subset with maximum bitwise OR., 4 years ago +13 This has been asked before here.
 On LovesProgramming → Can this bitwise OR problem be solved ?, 4 years ago +19 The problem is equivalent to set cover, with universe size $\log(\max)$ and number of subsets simply $N$. So unless P = NP, your solution will be either exponential in $N$ (just no...), or exponential in $\log(\max)$, which could be more promising but seems difficult. It's nothing new that GFG spreads around wrong solutions.
 On YouKn0wWho → If all Sub-Array Sums are Distinct or Not, 4 years ago +39 Consider the problem on the prefix sum array: $1 \leq A_i - A_{i-1} \leq M$, while all pair differences are distinct (let's ignore the 0 that the sequence should start with; it should affect the answer by a constant additive factor). Denote by $f(M)$ the maximum $n$ such that there exists an array of length $n$ with $1 \leq A_i - A_{i-1} \leq M$, with all pair differences distinct. Observe that if there exists a pair of equal differences, $A_y - A_x = A_j - A_i$, then there also exists a pair of pairs with equal sums, $A_y + A_i = A_x + A_j$. Similarly, if there exists a pair of equal sums, then there exists a pair of equal differences. So it's equivalent to consider distinct pair sums instead of differences. Apparently it's a studied sequence: link. There seems to be a conjecture that for some $n$, the largest number in the array is about $\frac{n^3}{\log^2{n}}$. If we follow approximations, we want the derivative to be less than $M$; so (about) $\frac{3n^2}{\log^2{n}} \leq M$, so we can approximate (yet again) at $\mathcal{O}(\sqrt{M}\log{M})$. With this in mind, you can run a bruteforce to find $f(M)$ almost exactly, quite quickly; then for any $n \leq f(M)$ naively check all pairs (otherwise you know the answer is negative). Honestly I'm a bit disappointed at the amount of approximations, but that's all I got.
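The bruteforce for $f(M)$ can be sketched as a small backtracking in Python (the `limit` cap is just a safety bound for illustration; new differences among themselves are automatically distinct since the array is strictly increasing):

```python
def f(M, limit=20):
    # max n such that some A with A[0] = 0 and steps in [1, M]
    # has all pairwise differences A[j] - A[i] distinct
    best = 0

    def rec(a, diffs):
        nonlocal best
        best = max(best, len(a))
        if len(a) >= limit:
            return
        for step in range(1, M + 1):
            x = a[-1] + step
            new = {x - v for v in a}   # differences the new element would create
            if not (new & diffs):      # all of them must be fresh
                a.append(x)
                rec(a, diffs | new)
                a.pop()

    rec([0], set())
    return best
```

For example, with $M = 2$ the best is $[0, 1, 3]$: appending anything within step 2 repeats a difference.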
 +10 If you're still interested, I have a solution in $\mathcal{O}(n^3 + m\log m)$ (or $\mathcal{O}(n^3 + m)$, negligible). It passed on oj.uz in 96ms, so it's probably a fine (not too tight) solution. As expected, for each edge we will compute the minimum path from 1 to n and backwards, if this edge were reversed. Let's focus just on the path from 1 to n. First, we run Dijkstra (or Floyd–Warshall) to find the shortest path on the given graph; let this path be $(v_1, \dots , v_k)$, where $k \leq n$. We will then run two Floyd–Warshalls: $D_{i,j}$ denotes the minimum path from $i$ to $j$ on the given graph, and $D'_{i,j}$ denotes the minimum path from $i$ to $j$ on the given graph, but without any of the edges on the minimum path found by Dijkstra. For each edge $(u, v)$ with cost $c$ that is not on the minimum path: if we decide not to use it, then it is best to go along the minimum path we found (as it's just like removing the edge from the graph). If we decide to use it, then we can safely use the value $D_{1, v} + c + D_{u, n}$ (and compare it with the minimum path); even though $D_{1, v}$ might use the edge unflipped, if it did then the resulting value will be larger than the minimum path. Same with $D_{u, n}$. For each edge $(v_i, v_{i+1})$ with cost $c$ on the minimum path: if we decide not to use it, then, viewing the path as one that begins at $v_1$, ends at $v_k$ and sometimes leaves the minimum path (then returns), we can claim it leaves the minimum path exactly once (it can't leave fewer times, since the minimum path is not a path after the flip). First, for all $x < y$, we won't leave the minimum path at $v_y$ and return at $v_x$; depending on whether $y \leq i$, $x \leq i < y$, or $i < x$, we can get rid of one of these vertices (and this detour around the minimum path), and instead use a part of the minimum path (while the path still doesn't use the flipped edge).
So now we know that all those jumps around the minimum path just cover contiguous, disjoint segments of edges in the minimum path. So any jump around a contiguous segment which doesn't contain the flipped edge can be replaced with that segment of the minimum path. This yields the simple solution: for each $1 \leq l \leq i$, $i+1 \leq r \leq k$, we consider the path with cost $D_{1,v_l} + D'_{v_l, v_r} + D_{v_r, n}$. We do this in $\mathcal{O}(k^2)$ for each edge on the minimum path, so $\mathcal{O}(k^3)$ overall. And let's not forget, if we decide to use the flipped edge on the minimum path: well, actually it will never be optimal. Take any such path, and right before you use the flipped edge to go from $v_{i+1}$ to $v_i$, just instead use $D_{v,n}$, which is guaranteed to be shorter. So it's a very light cubic complexity. My submission (which is not meant to be readable, just for reassurance): https://oj.uz/submission/201505
 On neal → Memory challenge: Round 605 Problem F, 4 years ago +42 Probably Hirschberg's algorithm. The idea is originally in 2D: view the problem through a weighted 2D grid (with very specific moves), and find the best path from the top-left corner to the bottom-right. Split the grid in half horizontally, compute the best result from the start to every cell of the middle row, and from every cell of the middle row to the end, without maintaining the actual path. Find the best breakpoint in the middle and solve recursively; it ends up at $\mathcal{O}(nm)$ time and linear memory, since you only need to remember a constant number of rows per dp. Here it is 3D: we can view it as a cuboid, where we also go up and down depending on bracket balance. However, we should be able to break the cuboid in the same way (not breaking by balance) and still obtain $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ memory.
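For reference, a minimal Python sketch of the 2D version of Hirschberg's idea, here for LCS reconstruction in linear memory (a generic sketch, not the problem's actual dp):

```python
def lcs_lengths(a, b):
    # last row of the LCS dp table between a and b, using O(len(b)) memory
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0] * (len(b) + 1)
        for j, y in enumerate(b, 1):
            cur[j] = prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1])
        prev = cur
    return prev

def hirschberg(a, b):
    # reconstruct one LCS of a and b in linear memory
    if not a:
        return ""
    if len(a) == 1:
        return a if a in b else ""
    mid = len(a) // 2
    left = lcs_lengths(a[:mid], b)                # forward dp to the middle row
    right = lcs_lengths(a[mid:][::-1], b[::-1])   # backward dp from the end
    # best place to split b, matching the split of a
    k = max(range(len(b) + 1), key=lambda j: left[j] + right[len(b) - j])
    return hirschberg(a[:mid], b[:k]) + hirschberg(a[mid:], b[k:])
```

The recursion halves one dimension each level, so the total work stays $\mathcal{O}(nm)$ while only two dp rows are ever kept in memory.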
 On eidan → Efficiency on heuristic solution, 5 years ago 0 Since there are multiple ways to implement this recursion (like computing $M(G)$ efficiently), I'll instead give an upper bound on the number of recursion calls; call it $T(n)$, where $n = |G|$ and $G$ is connected. If $n = 2$ then both nodes have degree 1, and this case takes $O(1)$ calls. Otherwise, there exists a node with degree at least 2, which you will remove, so an upper bound is $T(n) = 2 + T(n - 1) + T(n - 3)$. Even though you can run dp to compute this, it is known that $T(n) = c^n$ (up to polynomial factors, but let's only look at the exponential part) for some constant $c$. So $c^4 = 2 + c^3 + c$, which gives about $1.73^n$. Of course, we didn't consider cases where a component splits and such (and also some degrees will be more than 3). Also, if $M(G) = 2$ then the graph is either a line or a cycle. You can precompute with dp the answer for such graphs, and then the recursion always finds a node of degree at least 3. With the same process, the result is about $1.58^n$.
 On Errichto → IOI preparation lectures and contests, 5 years ago +58 Sounds awful, I'd rather implement a treap 5 times in a row.
 On Errichto → IOI preparation lectures and contests, 5 years ago +9 I think what is meant is that a problem requiring a BBST (unlike a set, possibly augmented with rank and such) is solvable using any of them, indifferent to constant factors or extra functionality; that is, you only need to know one of them, not all. IOI13 Game requires a data structure (on the 2nd dimension) that can update a cell and query for range gcd, when the array is of size $10^9$. You can use a sparse segtree, but together with the 1st dimension this adds a $\log^2$ factor to memory. The optimization is to instead use a BBST on the 2nd dimension, cutting a log factor off the memory. It doesn't need a treap; you can use any BBST.
 On Errichto → IOI preparation lectures and contests, 5 years ago +23 I doubt these can be of use in the IOI, since they try to stay strict with the syllabus. Maybe for some partial points. Anyway, good work.
 On Errichto → IOI preparation lectures and contests, 5 years ago +61 How are wavelet, segment tree beats and lichao relevant to IOI?
 On majk → CEOI 2019 Mirror, 5 years ago +3 Why should F be the number of buildings + 1? For example, the case of a 3x3 with a hole in the middle has 8 buildings, 10 faces and is connected.
 On vovuh → Codeforces Round #575 (Div. 3) Editorial, 5 years ago +20 I mean, if you run it for each component independently then the worst case is improved by a factor of 8... there's no excuse for being lazy :)
 +22 Map update intervals to points $(L, R)$ on the plane with weight $R - L$. Queries are the maximum weight of a point in the square with corners $(L, L), (R, R)$. You can do polylog with any of your favorite data structures like 2D segtree, segtree of BBST and whatnot.
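If the queries can be taken offline, a single-log alternative to the 2D structures is a sweep by $R$ with a max-Fenwick over $L$ (indexed in reverse, so a prefix query means "all stored intervals with $L' \ge L$"). A sketch under the assumption that coordinates are compressed into $[0, n)$; all names are mine:

```cpp
#include <vector>
#include <algorithm>
#include <utility>
#include <cassert>

// Fenwick tree for prefix maximums.
struct MaxBIT {
    int n; std::vector<int> t;
    MaxBIT(int n) : n(n), t(n + 1, -1) {}
    void update(int i, int v) { for (i++; i <= n; i += i & -i) t[i] = std::max(t[i], v); }
    int query(int i) { int r = -1; for (i++; i > 0; i -= i & -i) r = std::max(r, t[i]); return r; }
};

// intervals and queries are (L, R) pairs. Returns, per query, the maximum
// of R' - L' over stored intervals [L', R'] contained in [L, R], or -1.
std::vector<int> solve(int n, std::vector<std::pair<int,int>> intervals,
                       std::vector<std::pair<int,int>> queries) {
    std::vector<int> qi(queries.size());
    for (size_t i = 0; i < qi.size(); i++) qi[i] = (int)i;
    std::sort(intervals.begin(), intervals.end(),
              [](auto& a, auto& b) { return a.second < b.second; });
    std::sort(qi.begin(), qi.end(),
              [&](int a, int b) { return queries[a].second < queries[b].second; });
    MaxBIT bit(n);
    std::vector<int> ans(queries.size());
    size_t p = 0;
    for (int q : qi) {
        // Add every interval whose right end fits under this query's R.
        while (p < intervals.size() && intervals[p].second <= queries[q].second) {
            bit.update(n - 1 - intervals[p].first, intervals[p].second - intervals[p].first);
            p++;
        }
        // Reverse-indexed prefix = all stored intervals with L' >= L.
        ans[q] = bit.query(n - 1 - queries[q].first);
    }
    return ans;
}
```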
 On arsijo → Codeforces Round #571 (Div. 2) , 5 years ago +113 Yes. First, if we extend 1x2 pieces to 2x3 (by adding a row to the bottom and a column to the right), 2x1 pieces to 3x2, and the board to (n+1)x(m+1), then we now only care about actual overlaps. This gives an upper bound of $\frac{(n+1)(m+1)}{6}$. Let's see how it's always possible to achieve the upper bound (from now on we consider $n$ and $m$ as increased by 1): if we can always leave fewer than 6 free cells then we achieved the upper bound. Also notice that now $n, m \ge 2$. First, observe that whenever $6 \mid nm$, we can cover the whole board. It can be seen easily for small $n$ (up to 7), and for larger $n$ we can always break the board into pieces with small $n$ and areas divisible by 6. This means that from a big board we can always remove pieces with height and width bigger than 1 and area divisible by 6. It only remains to leave fewer than 6 free cells when $n, m \le 7$ (if one of them is at least 8 we can remove 6 and obtain the subproblem). $n = 2, 3$: trivial. $n = 4$: place 2x3 blocks from the left. If you have fewer than 2 columns remaining then it's okay; otherwise place a 3x2 piece. $n = 5$: when $m < 5$ we can use the above cases. $m = 5$ has a construction with a hole in the middle, $m = 6$ is divisible by 3, and $m = 7$ can use the hole from $m = 5$ plus a 3x2 piece. $n = 6$: divisible by 6. $n = 7$: when $m < 7$ we can use the above cases, so now $m = 7$; there is a construction very similar to $n, m = 5$, which leaves a hole in the middle.
 +17 On the task Sum of Four Values, I think the judge may be wrong... specifically, the output section asks for any solution, but I think the judge checks whether the provided answer is identical to its own.
 On bablu_45 → Interview Question, 5 years ago 0 For each index $j$, precompute the previous index $i$ such that $A_i = A_j$; then we can obtain the last 4 occurrences of a value in $O(1)$. Deciding to remove 3, 4 or 5 then considers $O(1)$ subproblems. Edit: Now I see the issue: we don't necessarily group $A_j$ with the ones immediately before it. My bad.
 On bablu_45 → Interview Question, 5 years ago 0 Edit: wrong. Maybe something like this? Let's suppose we can only remove 3, 4, or 5 equal consecutive elements; this is still equivalent. $dp_{i, j} = 1$ iff we can completely remove the subarray $[i, j]$. To compute the transition, we must remove element $A_j$. If we intend to remove $A_j$ with 2 more elements at indices $x, y$ ($x < y$), then we must first be able to remove $[x + 1, y - 1]$ and $[y + 1, j - 1]$, then we remove the 3 elements, and then we should remove $[i, x - 1]$, so these are the subproblems we should look at. If we want to remove $k$ elements, there are $k$ subproblems to look at. Since we only allow removing 3, 4 or 5, computing $dp_{i, j}$ takes $O(1)$ time. Onto the original problem: if we mark all non-removed elements as "special", then between adjacent special indices we must remove the whole subarray. If we let $D_i$ be the minimum number of special indices we can keep in the prefix $[0, i]$, then the transition is $D_j = \min(D_{j - 1} + 1, D_{i - 1})$ over all $i$ such that $dp_{i, j} = 1$. Seems too simple, am I missing something?
 On kostka → List of the IOI 2019 participants, 5 years ago +51 Team Israel: Noam Gutterman Noam527 Roee Sinai Almog Wald almogwald Itay Yehuda
 On nchn27 → Cycles in a classic team-building activity, 5 years ago +15 Suppose each person is a vertex and every hand is an edge, and we can also assume that every right hand holds someone else's left hand. If the right hand is a directed edge out of the vertex (left hand inwards), we can see that this problem is equivalent to finding the expected number of cycles in a random permutation, which is easier to google.
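The quantity in question has a closed form: the expected number of cycles of a uniformly random permutation of $n$ elements is the harmonic number $H_n = 1 + \frac{1}{2} + \dots + \frac{1}{n}$. A brute-force check of that fact for tiny $n$:

```cpp
#include <vector>
#include <numeric>
#include <algorithm>
#include <cmath>
#include <cassert>

// Counts the cycles of one permutation p of {0, ..., n-1}.
int countCycles(const std::vector<int>& p) {
    int n = p.size(), c = 0;
    std::vector<bool> seen(n, false);
    for (int i = 0; i < n; i++) {
        if (seen[i]) continue;
        c++;
        for (int j = i; !seen[j]; j = p[j]) seen[j] = true;
    }
    return c;
}

// Averages the cycle count over all n! permutations; the result should
// equal H_n. Only feasible for tiny n, of course.
double expectedCycles(int n) {
    std::vector<int> p(n);
    std::iota(p.begin(), p.end(), 0);
    long long total = 0, perms = 0;
    do {
        total += countCycles(p);
        perms++;
    } while (std::next_permutation(p.begin(), p.end()));
    return double(total) / double(perms);
}
```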
 On awoo → Educational Codeforces Round 66 [Rated for Div. 2], 5 years ago +29 Here's a similar solution, but maybe easier to implement. Denote $D_{i, j}$ as the minimum cost to split prefix $i$ into $j$ subarrays. We fix $j$ and solve the whole layer in $\mathcal{O}(n \log n)$. We compute the answer for all $i$ in a D&C fashion; to compute the answer for all $l \le i \le r$, first recursively solve for $[l, m]$ and $[m + 1, r]$, where $m = \frac{l + r}{2}$, and then consider transitions between the two halves. Define $A_i = \max(a[m+1...i])$ and $B_i = \max(a[i...m])$; the transition is $D_{i, j} = \min_{k \le m} (D_{k, j-1} + (i - k) \cdot \max(B_k, A_i))$. Notice that due to the monotonicity of $A, B$, for fixed $i$ there is a suffix of the first half where $A_i$ is larger, and a prefix where $B_k$ is larger. We can solve both cases independently: $D_{i, j} = \min_{k \le m} (D_{k, j-1} + (i - k) \cdot A_i) = \min_{k \le m} (D_{k, j-1} - k \cdot A_i) + i \cdot A_i$. We have a linear function inside (for constant $k$), so we can maintain those in a CHT. When we increase $i$, the suffix in which $A_i$ dominates grows, and we repeatedly add functions with smaller slopes, so we can do this in $\mathcal{O}(n)$. The case when $B_k$ dominates is almost identical, with the same analysis.
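The "add lines with decreasing slopes, query at increasing $x$" situation is the fully monotone convex hull trick, where both ends of the hull move with amortized $O(1)$ work. A generic minimum-CHT sketch (not this problem's exact dp; the struct name is mine, and `add` must be called at least once before `query`):

```cpp
#include <deque>
#include <cassert>

// Monotone CHT for minimums: lines y = m*x + b arrive with non-increasing
// slopes, and query points x are non-decreasing.
struct MonotoneCHT {
    struct Line { long long m, b; };
    std::deque<Line> d;
    // b is useless between a and c; cross-multiplied to stay in integers
    // (__int128 guards against overflow on large coefficients).
    static bool bad(const Line& a, const Line& b, const Line& c) {
        return (__int128)(c.b - a.b) * (a.m - b.m) <= (__int128)(b.b - a.b) * (a.m - c.m);
    }
    void add(long long m, long long b) { // slopes non-increasing over calls
        Line l{m, b};
        while (d.size() >= 2 && bad(d[d.size() - 2], d.back(), l)) d.pop_back();
        d.push_back(l);
    }
    long long query(long long x) { // x non-decreasing over calls
        while (d.size() >= 2 && d[1].m * x + d[1].b <= d[0].m * x + d[0].b) d.pop_front();
        return d[0].m * x + d[0].b;
    }
};
```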
 On Noam527 → APIO 2019 Discussion?, 5 years ago 0 Why did you need a segment tree on A? I think the common solution is finding period and sweepline.
 On Noam527 → APIO 2019 Discussion?, 5 years ago +11 The ideas were not difficult to figure, and I also think the solutions relied heavily on constant factor... overall, didn't really like the problems.
 On kostka → Rating decay, 5 years ago +24 I agree, solving a hard problem is satisfying... this doesn't diminish from rating satisfaction. As I said, CF rating is like proof that you are skilled and you put effort into improving yourself. If you're stating your opinion, don't be surprised that I disagree.Also my last sentence was just to say that a lot of people call rating just a number, even though there's a meaning behind it.
 On kostka → Rating decay, 5 years ago +56 Rating mostly stands for skill. Increase in your rating should give you satisfaction by proving progress, and decrease could (should?) give you motivation to work to fix your mistakes... at least that's how I view it.money is also "just a number", this statement is weak.
 On Noam527 → APIO 2019 Discussion?, 5 years ago 0 Auto comment: topic has been updated by Noam527 (previous revision, new revision, compare).
 On Noam527 → APIO 2019 Discussion?, 5 years ago 0 Sure, thanks for the information.
 On Noam527 → APIO 2019 Discussion?, 5 years ago 0 How did you know?
 On Noam527 → APIO 2019 Discussion?, 5 years ago 0 Seems like 203 is pretty common result. Here we had two people with 203, and I've also heard of another 8 people with 203.
 On Lewin → Forethought Future Cup Final Round Tutorial, 5 years ago +15 Interesting. 4 is also optimal for $n$ large enough, since 3 is impossible when $2^{1.5n} < 3^n/6$.
 On Juve45 → FIICode Final Round [CSAcademy online mirror — rated], 5 years ago 0 Even though the editorial for E is simple, there is another easy solution to E (one with no edge cases). We can treat this as a one-player game for P1. For each removal of a $b$, P1 gets 1 point. After all operations, for each run of $X$ consecutive $a$'s, P1 loses $\lfloor X/3 \rfloor$ points. P1 wins iff the maximum number of points he can get is positive. We can solve this with dp: $dp_{i, j}$ is the maximum number of points P1 can get if we only consider the prefix of length $i$, and the last run of $a$'s has length $j$ (modulo 3). Transitions are easy in constant time.
 On ipaljak → Croatian Olympiad in Informatics (COI) 2019, 5 years ago 0 "The contestants will be able to receive full feedback on a limited number of submissions per task."Can you give more details about this? Sounds like a new feature. How many submissions are allowed per task, and how many of those can get full feedback?
 +8 Problem C can also be solved with the same idea, but without binary searching. Incrementally add nodes as possible sinks from the bottom up. We iterate over the coins (after pulling them upwards to their subtree root) in decreasing order of depth (this implies the ranges the nodes cover as we keep going cannot be contained by previous ranges, which is used to prove the greedy), and add a node as another necessary sink, while the current coin doesn't have a free sink in its subtree. Once there is a free sink we can remove it (we maintain sinks with a set). $\mathcal{O}(n \log n)$.Also, you can solve the first two tests of B optimally with dp in $\mathcal{O}(n^4)$.
 +30 Solutions:

A: We must find for each star the time frame in which it's inside the circle. Then we sort those to find the time at which we could see the most stars. To find the intersections, if they exist, I changed every star to a function and found intersection points with the circle by a formula on paper. Because of imprecision, after I find the intersection points and the times, I just iterate over the close integers (±3) to find the exact time frame.

B: We will never want to change a height below 0 anyway (exactly 0 is at least as good), so we can ignore this restriction. Let's subtract from position $1 \le i \le n$ the value $i \cdot M$. Now the sequence of heights we iterate over is non-increasing. When we adjust the height of some pole, it's best to change it to the height of the one before it. We can view it as ignoring all the poles which we picked to change the height of. Since we're looking to minimize the number of ignored poles, we're looking to maximize the number of unchanged poles, which form a non-increasing subsequence. So the task is to find the longest non-increasing subsequence of the array (making sure it begins with a 0 before the start of the array).

C: Let's define $C_i = A_i - B_i$, the number of extra fertilizers position $i$ has. Interestingly, we do not care whether our operations for some solution make some position have a negative amount of fertilizer; we will always be able to reorder the operations to satisfy this restriction, so we can ignore it. Also, we can regard every operation as taking 1 fertilizer from position $i$ and moving it to position $i-1$ or $i+1$, at a cost of 1. Let's transform $C$ to its prefix sums. Our operations become: change a value in the array from $x$ to $x+1$ or $x-1$ at a cost of 1, i.e., change $x$ to $y$ at a cost of $|x - y|$.
We can only do this operation on all indices except the last (which is the sum of all of $C$), and the goal is for the array to be non-decreasing, starting from at least 0. So first, for every value that is less than 0 or more than the sum of the array, adjust it to fit. Notice that this already solves subtask 1. We can now ignore the last value and the restriction of beginning with 0. The following part is the complex one (making the array sorted). Our process will be to iterate over values from small to large, and in every iteration we have a set of "changeable" values: values which will be at least the current one in the answer sequence, and which we keep incrementing until it's not beneficial. There will also be a group of values which we already fixed; they form a prefix, since we fix values from small to large. All values which we have not iterated on yet are considered unknowns (but we know they are larger than what we have now). Also notice that all changeable values will always be equal to each other (they only become different when we fix some of them). Consider the case in which we have one changeable value and two unknowns to its left. If we decide to fix this value now, then both unknowns to the left must become this value. Observe that since our value is the smallest among those 3, it is at least as good to increment our changeable value until it's equal to the smaller of the other two (in other words, we decide to keep it changeable). On the contrary, if we had 2 changeables and 1 unknown to their left, then it's better to decrease that unknown towards them instead of increasing them, and this makes us fix those 3 positions. Let's formulate this: imagine the array, in which every fixed value (in the prefix) contributes a 0 in its position, every changeable value a +1, and every unknown a -1. If there exists a prefix whose sum is positive, then we should fix all the values in this prefix.
We can implement this array with a lazy segment tree: when we change a value among $-1, 0, 1$, we update a suffix by adding some value to it, and a query asks for the maximum value and its position. Fixing values can be done by brute force, since overall we fix $n$ values.
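The longest-non-increasing-subsequence step from part B is the usual tails-array trick: negate the values and look for the longest non-decreasing subsequence. A sketch (function name is mine):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Longest non-increasing subsequence in O(n log n): negate values and run
// the non-decreasing variant of the classic tails array.
int longestNonIncreasing(const std::vector<long long>& a) {
    std::vector<long long> tails; // smallest tail of each subsequence length, on -a
    for (long long x : a) {
        auto it = std::upper_bound(tails.begin(), tails.end(), -x);
        if (it == tails.end()) tails.push_back(-x); // extends the longest one
        else *it = -x;                              // improves an existing tail
    }
    return tails.size();
}
```

Using `upper_bound` (rather than `lower_bound`) is what allows equal elements, i.e. non-strict monotonicity.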
 On awoo → Educational Codeforces Round 62 Editorial, 5 years ago 0 In F, it says "...using path compression in DSU here is meaningless...". Is it really meaningless? I think it can actually make the solution slower: path compression is amortized, and if the structure allows undoing operations, then the amortized analysis should fail (it might give TLE on countertests).
 On I_Love_Tina → Does IOI rank correlate with CF rating?, 5 years ago 0 Maybe one would only be able to solve A and B in some specific CF round, but would get 75% of the points on C and 40% of the points on D (if those were partial-score tasks). Then there's clearly a difference.
 On I_Love_Tina → Does IOI rank correlate with CF rating?, 5 years ago +1 I think there's also a big impact from: your experience with onsite competitions, how well you perform under pressure, and how long you're able to stay focused. Even your mentality on a specific day matters (maybe you're stressed before the competition for some reason). I participate on CF while I'm home, which is literally the comfort zone. I found that, when I lacked experience with onsite rounds, my results were incredibly worse during onsite rounds than how I usually perform from home (and I'm not talking from a single experience). This is hopefully improving at the moment though :). These points are possible to cover by going to more onsites when possible. So, if you have an opportunity to participate in some onsite round, you should grab it with both hands.
 On aman_naughty → Can someone help in printing cycle in a graph, 5 years ago +3 When you do DFS, you can consider vertices of 3 types: unvisited, visited but still in the stack, or visited and finished (we called dfs on all their neighbors and returned already). If your DFS detects a cycle, then all the cycle vertices are of the in-the-stack type (they must have been visited, but they cannot be finished, because you will find a cycle before finishing any vertex on it). So, one easy way to find the cycle is to maintain the stack along with the DFS (vector or stack, either global or passed by reference). When you find a neighbor that is still in the stack, iterate from the back of the stack until you reach that node, and put all the vertices you iterated over into some vector, representing the cycle.
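A sketch of the procedure described above, with colours 0/1/2 for unvisited / on the stack / finished (the linear `std::find` on the stack is for brevity; stack positions could be stored per vertex instead):

```cpp
#include <vector>
#include <algorithm>
#include <functional>
#include <cassert>

// Returns one cycle of the directed graph g (adjacency lists) in order,
// or an empty vector if the graph is acyclic.
std::vector<int> findCycle(const std::vector<std::vector<int>>& g) {
    int n = g.size();
    std::vector<int> color(n, 0), stk, cycle;
    std::function<bool(int)> dfs = [&](int v) -> bool {
        color[v] = 1;            // on the stack
        stk.push_back(v);
        for (int u : g[v]) {
            if (color[u] == 1) { // back edge: the cycle is the stack suffix from u
                cycle.assign(std::find(stk.begin(), stk.end(), u), stk.end());
                return true;
            }
            if (color[u] == 0 && dfs(u)) return true;
        }
        color[v] = 2;            // finished
        stk.pop_back();
        return false;
    };
    for (int v = 0; v < n; v++)
        if (color[v] == 0 && dfs(v)) return cycle;
    return {};
}
```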
 On smurf2 → Number of points in a triangle, 5 years ago +3 You can break a triangle query down into a constant number of "ray" queries: how many points are under a ray. Specifically, let's break it into "prefix" rays, those which extend infinitely to the left. If we sort the points along the x-axis, then the prefix ray queries become queries counting points under a line inside some prefix. Let's break the array into $\frac{N}{K}$ blocks of size $K$. For the tail of the prefix query (the cells which are not in a complete block), count normally in $\mathcal{O}(K)$. As for the blocks, you can do some precomputation, described nicely here. You end up with $\mathcal{O}(\sqrt{N} \cdot \mathrm{polylog}\ N)$ per query for an optimal choice of $K$ (the polylog depends on how you implement whichever approach is described in the linked post). This approach is probably not noticeably faster for reasonable values of $N, Q$, but at least it's sublinear. Also, this works online; if you allow yourself offline processing, then the complexities can be cut further, some ways shown in the linked post, and memory can be cut down a lot here (build the structure per block on its own, and move to the next one using the same memory, for instance).
 On ko_osaga → ...And another boring OI solving stream, 5 years ago +62 Done :) (you can view it in the standings.) The first time I solved the problem, I wrote a lot of case analysis to do the best thing I found on paper. This time I tried to write something more general, which was able to solve with 1 operation up to 44, with 2 operations up to 101, and with 4 operations up to 257. For some reason it bugs out for 3 operations (the bound should be around 180-190, I think), but I tried submitting without it (solving with either 1, 2 or 4). The jury's answers are weak enough to let this pass with 99 points, and then I had to improve just n=112, so I did the stupid thing that solves in 8 + 36T for some limit T. tl;dr: My code is not general, it just solves whatever is good enough for the jury. I couldn't be bothered to fix it, as it was unnecessary (my general solution improves the jury's answers for tests 12, 13).
 On ko_osaga → ...And another boring OI solving stream, 5 years ago +45 I don't think it's possible in 5 hours, but overall it's possible... I was given the task about 1-2 years ago and gave it 2 weeks, after which I got 100. I also lost the source code (this is the only thing I have left), but if you want to challenge me to rewrite it (maybe given a judge?) then sure. I barely remember the idea, but it doesn't give away the solution: try to maintain values to print, but don't necessarily print right away. You can reorder elements (not necessarily into an increasing sequence) for easier implementation. Edit: Well, apparently I have some source code that is incomplete, and some of the output files for some inputs. This is one of them, solving n=255 using up to 4 consecutive S operations at any time. I think this expresses all that's necessary to solve for full score. Specifically, the bounds I roughly remember are: 8, 44, (I think 97 or 101), (I think 137 or 173), 257.
 0 The title was #544, and I pointed it out so they would fix it.
 +24 You mean round #545?Edit: got fixed.
 On geniucos → Info(1) Cup 2019 (In memory of Mihai Patrascu), 5 years ago 0 Clearly desperate... anyway, when are the results expected?
 On geniucos → Info(1) Cup 2019 (In memory of Mihai Patrascu), 5 years ago 0 Will submissions that weren't judged before the end be judged and considered in the results?
 On KAN → Codeforces Global Round 1, 5 years ago +8 I guess everybody is doing their part... sir, $10^2 \cdot 10^5$ is not equal to $10^{10}$.
 On nitesh_gupta → CodeCraft-19 and Codeforces Round #537 (Div. 2), 5 years ago +26 I find all points valid.
 On parth_15 → How to solve/prove this type of problems?, 5 years ago +12 Binary search assumes the function you search over is monotonic. Ternary search assumes the derivative of the function you search over is monotonic. The reason what you said isn't true: $f$ is increasing and $g$ is decreasing, therefore $f'$ is always positive and $g'$ is always negative (using $'$ for the derivative). Let $h = f + g$; then $h' = f' + g'$. We are hoping to show that $h'$ is monotonic for ternary search, but the information about $f', g'$ is not enough (we can make $h'$ jump between positive and negative repeatedly, for instance). As for your problem: we have the function, and we look only at $x_1 \le t \le x_2$, since anything else is suboptimal. (Also, we can assume that $x_1 \le x_2$; otherwise we can swap and the answer won't change.) We have more information than just monotonicity for each term. If you want to prove that you can use binary or ternary search, you should probably analyze this function.
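For completeness, ternary search itself, under its actual precondition (a unimodal function, i.e. monotonic derivative) rather than the sum-of-monotone-functions assumption discussed above; a generic sketch:

```cpp
#include <functional>
#include <cmath>
#include <cassert>

// Ternary search for the argmin of a unimodal function on [lo, hi].
// Each iteration discards a third of the interval; a fixed iteration
// count is plenty for double precision.
double ternarySearch(std::function<double(double)> f, double lo, double hi) {
    for (int it = 0; it < 200; it++) {
        double m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
        if (f(m1) < f(m2)) hi = m2; // minimum lies left of m2
        else lo = m1;               // minimum lies right of m1
    }
    return lo; // argmin, up to floating-point precision
}
```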
 +14 I'd guess the authors had no problem left to place as 1C (possibly because the problem that was meant to be there turned out to be a copy of some existing problem), so they were desperate enough to insert this one instead.
 On niyaznigmatul → Innopolis Open, Elimination round 2, 2018-2019, 5 years ago +8 My solution to E: Transform the expression into a tree; each internal node is an operation with 2 children, and each leaf holds a value. Every update changes something in some node, and you need to update along the path upwards, finally outputting the value of the root. My solution consists of a daunting HLD with about 300 lines (although I did not try to conserve lines; I tried to have the fewest bugs possible). I ended up with only 3 bugs and managed to fix them in time (and then also had to do some constant optimization). I will leave the remaining details for you to figure out. If you manage to find some sqrt decomposition solution, I highly suggest using it instead. I didn't find one.
 On vovuh → Codeforces Round #527 (Div. 3), 5 years ago +1 In D1, from any state you can always add verticals on any column as much as you'd like, since you can then return to the state before, with all columns increased by some constant C (which makes no difference). Therefore, you can always stay at a state where the difference between the maximal and minimal column is less than 2, which implies you can take all columns modulo 2.
 On madhur4127 → Avoid silly overflow mistakes taking modulo in C++, 5 years ago 0 Another issue is that it relies on the modulus being prime (the inverse is found using Fermat's little theorem).
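The Fermat-based inverse in question, with the primality requirement spelled out (for a composite modulus one would switch to the extended Euclidean algorithm); names are mine, and `__int128` is a GCC/Clang extension used to avoid overflow in the products:

```cpp
#include <cassert>

// Fast exponentiation: a^e mod p.
long long powMod(long long a, long long e, long long p) {
    long long r = 1 % p;
    a %= p;
    while (e > 0) {
        if (e & 1) r = (long long)((__int128)r * a % p);
        a = (long long)((__int128)a * a % p);
        e >>= 1;
    }
    return r;
}

// Modular inverse via Fermat's little theorem: a^(p-2) mod p.
// Requires p prime and a not divisible by p.
long long invMod(long long a, long long p) {
    return powMod(a, p - 2, p);
}
```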
 On leafvillageninja → using #define endl "\n" macro, 5 years ago 0 I started using this once I heard '\n' is quicker than endl, since I was used to writing endl. I've had to write this template enough times that by now I don't mind writing either of them. Still, I'm used to this template, so I keep using endl, because I keep forgetting to remove the #define, and if it's there then I should use it anyway. It's all pointless.
 0 I also used a map of intervals for each row, but I cleared it for each new row, so overall its size is up to K at any time, and I'm not sure where the memory issue is. I don't see what's wrong in this approach, but I can't tell, since I ended up with WA on every test case...