|
+39
I am submitting The discrete derivative, falling factorials and polynomials together. |
|
On CelioPassos → Snapshot of finite calculus and using it to solve a 3000 rated problem, 19 months ago
+27
Alright, there is more to it, if you are familiar with generating functions and polynomial Taylor shifts (see adamant's blog). I just had this idea today, and it concretizes the previous intuition about differential equations. Let's take a look again at our set of equations. Note that we only defined $$$\Delta^2 W_k$$$ for integers $$$k \in [1, m - 2]$$$. Let us forget about this restriction and think of $$$\Delta^2 W_k$$$, $$$\Delta W_k$$$ and $$$W_k$$$ simply as usual polynomials in $$$k$$$ (allow their "continuation"). We know that, for a polynomial $$$P$$$, $$$P(x + 1) = \exp(D)P(x)$$$, where $$$D$$$ is the differentiation operator (this is just Taylor's formula). Thus $$$\Delta P = (\exp(D) - I)P$$$, where $$$I$$$ is the identity operator. Of course, we can iterate this and we get $$$\Delta^2 P = (\exp(D) - I)^2 P$$$. Multiplying by $$$\frac{D}{\exp(D) - I}$$$ twice on both sides yields $$$D^2 P = \left(\frac{D}{\exp(D) - I}\right)^2 \Delta^2 P$$$. Letting $$$P(x) := W_x$$$ and using the expression for $$$\Delta^2 W_k$$$ from the system above, we get ourselves a differential equation! Integrating twice gives us two constants $$$C_1$$$ and $$$C_2$$$. The initial value conditions yield $$$C_1 = \frac{nm}{6}$$$ and $$$C_2 = 0$$$. Solution: 281804074. |
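For reference, the operator $$$\frac{D}{\exp(D) - I}$$$ appearing above is exactly the classical Bernoulli-number generating function (a standard identity, not something from the blog):

```latex
\frac{x}{e^x - 1} \;=\; \sum_{n \ge 0} \frac{B_n}{n!}\, x^n
\;=\; 1 - \frac{x}{2} + \frac{x^2}{12} - \frac{x^4}{720} + \cdots
```

Applied to a polynomial, only finitely many terms of the series act nontrivially, so multiplying by this operator is a well-defined finite computation.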
|
On CelioPassos → Snapshot of finite calculus and using it to solve a 3000 rated problem, 19 months ago
0
Oops, that’s right! Thanks for spotting it. |
|
+88
Hey folks, I am submitting my blog: Snapshot of finite calculus and using it to solve a 3000 rated problem. |
|
+45
Some may be standard for IGMs and LGMs, but I don't think it holds true for most Masters and low reds. Many, not all, Div 2 rounds are definitely suitable for a "Div 1.5" and the coordinator should be able to decide this. |
|
+10
Add a betting system to Codeforces, so we can place bets on who will win each round and also on ourselves against our rivals. |
|
+129
Whenever you feel interested in it. |
|
+11
Alternatively, you can put new nodes in a |
|
On orz → First Ever Users to Reach Some Rating Milestones Revised: ratings above 1500 and greatest falls, 4 years ago
+10
Yeah, those are most likely cheaters. |
|
On orz → First Ever Users to Reach Some Rating Milestones Revised: ratings above 1500 and greatest falls, 4 years ago
+10
Resubmission also causes (previous) submissions to be skipped. |
|
+9
True. |
|
-16
I referred to "common sense" when I was talking about distinguishing a correct solution with a random bug from an outright wrong solution originating from a guess, not about what should FST (on this I was clear: I want the former to fail pretests and the latter to fail system tests). But yeah, I guess I am more OK with some extra subjectivity from the setters than others are. :( |
|
-19
In my proposal there would be no such questions:
Is it a huge issue if we're not perfectly consistent? I have never set a contest, but I was under the impression that creating tests works like this: 1) generate a bunch of random small tests, 2) generate some random big tests, 3) add special tests to catch specific things. What differentiates weak tests from strong tests is basically part 3. I am only suggesting a different approach to that part; it doesn't seem like it would make a big difference for an almost-correct solution compared to the current approach (it should fail something in part 3 if the setters foresee it, or something in part 1 by chance). How is the current approach better in this case? This seems more like an argument against system tests in general.
True, but that's part of my point. Making it riskier.
This shouldn't happen, if I'm not completely wrong in what I just wrote. |
|
-40
Sorry for calling you a robot, sir. |
|
+25
It's more fun. |
|
-32
It only creates problems for people who guess solutions. You haven't stated any other problem until now.
I just said above that I am in favor of max-tests.
Is this participant able to come up with a proven correct solution in contest time? Or an unproven correct solution? If it's the former, my approach does not affect them. If it's the latter, then they are just guessing. Should they be rewarded for guessing correctly, as in a casino?
We're not robots following instructions to launch a rocket. Human beings can work with loosely defined concepts. Although you may be an exception. |
|
+5
Then pretests should be just strong enough to catch silly bugs, and nothing else. |
|
-39
Yes, everything should be either black or white like this. No middle ground... Let's apply such reasoning to other things. What's your opinion about DP problems? Do you like them? How many problems in a contest should be about DP? Do you have an answer or some ill-defined middle ground? Should we choose between all DP or no DP then? You don't need a well defined middle ground as long as you use common sense, ffs. |
|
-38
I never said I was against extreme tests in pretests (ok, maybe I wasn't clear enough). I am in favor of it actually. What I am against are tests specifically designed to catch outright wrong solutions. For example, let's say there is some quantity $$$X$$$ in a problem (that is not part of the input) which some people are likely to assume is $$$O(N)$$$ when actually it is $$$O(N^2)$$$. I don't think setters should create a test case where $$$X \approx N^2$$$, unless the expected time complexity of a solution is $$$\Omega(N^2)$$$. |
|
-42
I'd be in favor of adding pretests to catch the slightly wrong implementation, though it's unlikely problem setters would even come up with all possible slightly wrong implementations.
To make hacking great again and, as I already said, avoid proofs by AC. |
|
-74
I didn't say they were 'less wrong'. Technically they are as wrong as any other bug, they are just uninteresting and annoying in comparison to other stuff like whether or not your solution has the correct complexity.
Bruh, just use common sense. Any decision when it comes to organizing a contest is arbitrary, or do we have exact rules to decide if a problem set is good or bad? |
|
+100
Unpopular opinion, but I think pretests should be made weaker, not stronger, in order to discourage guessing and proof by AC. Pretests should be there just to catch uninteresting and annoying bugs like swapping $$$N$$$ and $$$M$$$ by mistake, corner cases, etc. They shouldn't check if your solution is $$$O(N^2)$$$ instead of $$$O(N)$$$ or if your greedy is actually correct. |
|
+141
"A couple"? Pfft, such a noob. I only needed one contest to reach purple. |
|
+54
Mike, please create Division 1.5. Some (not all, of course) Div 2 rounds have hard enough problems at the end to make an interesting contest for oranges and low reds (packing purples into this division would be good too). I don't think it makes sense to have participants in this rating range waiting until the next Div 1 just because we don't have hard enough Div 1 E/F problems for LGMs. |
|
+39
You should not be using that list as a roadmap; it's just there for reference. Use a book or a problemset like CSES instead. |
|
+23
SOS DP but with multisets. |
|
+71
I suggest putting the voting buttons in the editorial. People should provide feedback only after they have seen the solution. |
|
+13
As I see it, cheaters don't need more than 1 minute to copy-paste solutions and submit. Sure, longer contests mean the source of leaked solutions would solve more problems, but so would honest participants. Why would that affect the difference between the ranks of cheaters and non-cheaters? |
|
0
Please state how much time you think is optimal and why. |
|
+202
Just cancel all future Div 2 rounds. This way, 99% of cheaters will have no time to cheat. |
|
+68
I am not going to write vectors of vectors of vectors of vectors to solve 4-dimensional DPs... |
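Not that the comment asks for one, but the flat-array workaround is short. A sketch (the struct and its names are mine, purely illustrative):

```cpp
#include <cassert>
#include <vector>

// Flat storage for a 4-dimensional DP table: one contiguous vector plus
// index arithmetic, instead of vector<vector<vector<vector<...>>>>.
struct Flat4D {
    int d2, d3, d4;            // sizes of the last three dimensions
    std::vector<long long> v;  // contiguous, zero-initialized storage
    Flat4D(int a, int b, int c, int d)
        : d2(b), d3(c), d4(d), v(1LL * a * b * c * d, 0) {}
    long long& at(int i, int j, int k, int l) {
        return v[((1LL * i * d2 + j) * d3 + k) * d4 + l];
    }
};
```

Besides the nicer declaration, the contiguous layout is typically friendlier to the cache than four levels of pointer chasing.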
|
+106
Just avoid global variables like the plague in problems with multiple test cases. |
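A minimal illustration of the failure mode (hypothetical names; the buggy version keeps its accumulator in a global that survives across test cases):

```cpp
#include <cassert>
#include <vector>

long long sumBuggy_total = 0;  // global: persists across "test cases"!

long long sumBuggy(const std::vector<int>& a) {
    for (int x : a) sumBuggy_total += x;  // forgot to reset -> wrong from test 2 on
    return sumBuggy_total;
}

long long sumOk(const std::vector<int>& a) {
    long long total = 0;  // local: fresh state for every test case
    for (int x : a) total += x;
    return total;
}
```

Calling `sumBuggy` on the same input twice gives two different answers, which is exactly the bug multi-test problems punish.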
|
0
Greedy works. Root the tree somewhere and run DFS. After you visit all children of a particular node $$$u$$$, either:
To see that it works just note that, after this, we'll have the optimal pair (maximum number of marked nodes in the subtree of $$$u$$$, depth of highest marked node). |
|
+3
I also received two emails (but only used the key in the first one). |
|
+88
Normally, I wouldn't worry too much about 30 mins of speedtyping, but the fact that this combined round is just 2 hours long makes me nervous, since it leaves only 1.5 hours for Div 1 problems. That is, if we do not fuck up and waste more time on the easy ones... |
|
+8
Your argument in the comment I replied to was literally "this hasn't been done before, therefore it's not good". Is that not being unreasonably conservative? |
|
+8
I think it was clear I was speaking in loose terms and not literally referring to the subject at hand. But since you feel the need to attack strangers on the internet, maybe you're the one that needs to get a life? Chill down. |
|
+1
Yes? Some people are too conservative and unreasonably against change, no matter how little that change would cost or how much it'd improve our lives. They will really fight against change, even when they can't cite one downside related to it. Just like you here. |
|
+13
Sorry, I don't see an argument being made here, just you rambling... If we could have something at basically no cost, why not have it? Why are you so against the idea? Will you stop participating if editorials get posted right after the rounds? |
|
+17
I think her point is that this is exactly the type of problem that comes up as Div2 A, B and C: problems that look hard at first sight but are trivialized by simple observations. You could open almost any Div2 round and find problems like these (well, except problem 4, which is unlike the others and has nothing to do with the title). What's so interesting about a blog post with random, unspecial problems? |
|
+15
I think you are just listing non-issues. Could you use your imagination a little bit? All the problems that you mention could easily be solved by not scheduling a round before the editorial is ready or almost ready. Just like you won't schedule a round before having problems, you could also not schedule it before having an editorial... |
|
+78
This is very annoying. If they are not going to post the editorial right away, they should at least say when they will do it, so we don't have to keep checking CF blogs to see whether it was posted or not. |
|
+24
I have no idea why the editorial uses the word $$$mask$$$... I found it easier to define $$$dp_{node,l,r} = $$$ minimum cost to prevent $$$s[l, ..., r]$$$ from appearing as a subsequence of the interval corresponding to $$$node$$$ (and assuming that $$$s[l, ..., r - 1]$$$ has already appeared), where $$$s$$$ is the string "abc". Transitions to merge two nodes are straightforward (just three nested loops, yielding $$$20$$$ operations per merge). |
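The merge described above is just a min-plus product over the split point. A sketch (the 4×4 state and its exact semantics follow my description, so treat the details as illustrative):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// dp[l][r] for 0 <= l <= r <= 3, as defined in the comment; entries with
// l > r are unused and stay at INF.
using State = std::array<std::array<long long, 4>, 4>;
const long long INF = (long long)1e18;

State merge(const State& a, const State& b) {
    State c;
    for (auto& row : c) row.fill(INF);
    // three nested loops: the 20 triples (l, m, r) with 0 <= l <= m <= r <= 3
    for (int l = 0; l <= 3; l++)
        for (int m = l; m <= 3; m++)
            for (int r = m; r <= 3; r++)
                c[l][r] = std::min(c[l][r], a[l][m] + b[m][r]);
    return c;
}
```

The 20 `(l, m, r)` triples are where the "20 operations per merge" count comes from.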
|
+18
Interesting. I have also noticed these parallels. I wonder if we should try to apply sports periodization to competitive programming. I guess the analogy would be: a long period where you solve a bunch of standalone hard and slightly hard problems (by standalone I just mean outside a competition environment) and upsolve problems from previous contests you did, followed by a shorter period where you take lots of contests and virtual participations and ignore upsolving/practice of standalone problems. |
|
+50
CF staff just ghosts problemsetters. It is no wonder that we are out of contests. |
|
+42
Law of diminishing marginal returns. |
|
0
Is this supposed to make a problem interesting? Checking small tests until you find a pattern and then YOLO submit it with no understanding as to why it works? |
|
+84
Round coordinated by antontrygubO_o -> guaranteed guessable problems where you can AC without proving. |
|
+30
Alternative solution: For a subset of candies $$$S$$$, let $$$w(S)$$$ be the polynomial whose coefficient of $$$x^k$$$ is the number of ways to select $$$k$$$ candies from $$$S$$$ (in this case, that's simply $$$\binom{|S|}{k}$$$) and $$$s(S)$$$ the polynomial whose coefficient of $$$x^k$$$ is the sum of the number of distinct colors over all subsets of size $$$k$$$ in $$$S$$$. If $$$S$$$ only contains candies of a single color, then $$$s(S)$$$ is trivial to compute. Also, if the colors in $$$S$$$ are all different from the colors in $$$T$$$, then $$$s(S \cup T)$$$ is simply $$$s(S)w(T) + w(S)s(T)$$$. So this gives an algorithm: start with the sets $$$S_c$$$ of candies with color $$$c$$$ and their polynomials and, while we have more than one set, pick two sets $$$S$$$ and $$$T$$$, compute the polynomials of $$$S \cup T$$$ and discard $$$S$$$ and $$$T$$$. If we always merge the two smallest sets, as in Huffman coding, this runs in $$$O(N\log(N)^2)$$$. |
|
0
It wouldn't be perfect, and it doesn't have to be. But if we had like 10 or so high rated testers, I'm sure we'd get something meaningful.
That's fine? If a contest is only interesting to LGMs because of a few problems at the end, then most participants would find it a bad contest, because it actually is a bad contest for them.
My point is not to have less people participating, but having more accurate expectations before a round. |
|
0
Yeah, but Anton and Mike are whining about the current mechanism. People need a voting system so they can vote "1 out of 5 stars" anonymously instead of saying "contest is shit" in the announcement. Also, it doesn't help that currently there will always be testers telling us how wonderful the problemset is, regardless of quality; the testers that do dislike the problems just stay quiet. If we could see the testers' actual average rating of a contest before it takes place, it would tremendously help set everyone's expectations, and we'd see less of this offensive criticism (these folks wouldn't even participate in that case). |
|
+11
That does not guarantee optimality of the construction. |
|
+23
Maintain the suffix array of the reversed string (prefix array?) online with hashes and binary search (in a set with custom comparator). Now just count the total number of different substrings and subtract the number of substrings that appear exactly once. Both of these things are easy to calculate via the LCP of adjacent suffixes in the suffix array, so you can easily calculate the difference that each update causes in the answer. |
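A sketch of the hashing building block (single modulus for brevity; a contest-safe version should double-hash or randomize the base):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Prefix hashes giving O(1) substring hashes; comparing two suffixes is
// then a binary search on their longest common prefix.
struct Hasher {
    static constexpr long long MOD = 1'000'000'007LL, B = 131;
    std::string s;
    std::vector<long long> h, p;  // h[i] = hash of s[0, i), p[i] = B^i
    explicit Hasher(const std::string& str)
        : s(str), h(str.size() + 1, 0), p(str.size() + 1, 1) {
        for (size_t i = 0; i < s.size(); i++) {
            h[i + 1] = (h[i] * B + s[i]) % MOD;
            p[i + 1] = p[i] * B % MOD;
        }
    }
    long long get(int l, int len) const {  // hash of s[l, l + len)
        return ((h[l + len] - h[l] * p[len]) % MOD + MOD) % MOD;
    }
    int lcp(int i, int j) const {  // LCP of suffixes starting at i and j
        int lo = 0, hi = (int)s.size() - std::max(i, j);
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (get(i, mid) == get(j, mid)) lo = mid; else hi = mid - 1;
        }
        return lo;
    }
    bool less(int i, int j) const {  // suffix i < suffix j lexicographically?
        int k = lcp(i, j);
        if (j + k == (int)s.size()) return false;  // j is a prefix of i
        if (i + k == (int)s.size()) return true;   // i is a proper prefix of j
        return s[i + k] < s[j + k];
    }
};
```

Keeping suffix start indices in a `std::set` ordered by `less` is what plays the role of the online "suffix array" in the comment.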
|
0
I think a struct like the one below is really convenient for these types of problems. The usage is basically:
|
|
+27
Antontrybug rejected all the problems, and now the setters went on strike. Probably. |
|
+11
The tax-included prices are strictly increasing, so if the tax-included price of integer $$$A$$$ is $$$X$$$, there are $$$X - A$$$ skipped integers before $$$X$$$. Now you can use this to find some closed formula, or just binary search. |
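A sketch of the binary-search route. `price` below is a made-up example formula (an 8% tax with the fraction dropped), not the problem's actual one; the only property used is that it is strictly increasing:

```cpp
#include <cassert>

// Hypothetical strictly increasing tax-included price function.
long long price(long long a) { return a + a * 8 / 100; }

// Smallest A with price(A) >= x; afterwards check price(A) == x to decide
// whether x is attainable at all.
long long findBase(long long x) {
    long long lo = 0, hi = x;  // price(x) >= x, so the answer lies in [0, x]
    while (lo < hi) {
        long long mid = (lo + hi) / 2;
        if (price(mid) >= x) hi = mid;
        else lo = mid + 1;
    }
    return lo;
}
```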
|
+68
Maybe forcing the queries to be answered online would've been a good option here. |
|
+3
You don't need to change anything inside the segment tree class when passing a struct like that. Full implementation if you want to see it. |
|
0
Another way is basically wrapping up everything that you need to define for the operations inside another struct, like: Then you just need to pass |
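The original code is behind a spoiler, but the shape of the idea is roughly this (illustrative names, not the comment's actual code; range-minimum as the example operation):

```cpp
#include <cassert>
#include <vector>

// Hypothetical policy struct: everything the tree needs (value type,
// identity, combine) lives here, so the tree itself is operation-agnostic.
struct MinOp {
    using T = long long;
    static T identity() { return (T)4e18; }
    static T combine(T a, T b) { return a < b ? a : b; }
};

template <class Op>
struct SegTree {
    using T = typename Op::T;
    int n;
    std::vector<T> t;
    explicit SegTree(const std::vector<T>& a) : n((int)a.size()), t(2 * n) {
        for (int i = 0; i < n; i++) t[n + i] = a[i];
        for (int i = n - 1; i > 0; i--)
            t[i] = Op::combine(t[2 * i], t[2 * i + 1]);
    }
    T query(int l, int r) {  // combine over the half-open range [l, r)
        T res = Op::identity();
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = Op::combine(res, t[l++]);
            if (r & 1) res = Op::combine(res, t[--r]);
        }
        return res;
    }
};
```

Then `SegTree<MinOp>` (or a sum/gcd/max policy) works without touching the tree code, which I take to be the comment's point.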
|
0
What's the difference between committing yourself in the last 10 minutes before a round starts (by confirming your registration) vs. 5 minutes after it starts when you hit submit button on problem A? |
|
+10
If you know about the rank-nullity theorem and that the rank of matrix equals the rank of its transpose, the problem becomes: for each vector with a single non-zero entry $$$u$$$, mark vertex $$$u$$$, otherwise it has another non-zero entry $$$v$$$, so add an edge between $$$u$$$ and $$$v$$$, now count the number of connected components without a marked vertex (this equals the nullity of the $$$n$$$-by-$$$m$$$ matrix with rows equal to the vectors given in the problem). No need to create a virtual vertex. |
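A sketch of the counting step with a DSU. I'm assuming each input vector is described by its non-zero coordinates (one or two of them, as in the comment):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

struct DSU {
    std::vector<int> p;
    explicit DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int a, int b) { p[find(a)] = find(b); }
};

// n = dimension of the vectors; returns the number of connected components
// without a marked vertex (which the comment equates with the nullity).
int nullity(int n, const std::vector<std::vector<int>>& vecs) {
    DSU d(n);
    std::vector<bool> marked(n, false);
    for (const auto& v : vecs) {
        if (v.size() == 1) marked[v[0]] = true;  // single non-zero entry u
        else d.unite(v[0], v[1]);                // edge between u and v
    }
    std::vector<bool> compMarked(n, false);
    for (int i = 0; i < n; i++)
        if (marked[i]) compMarked[d.find(i)] = true;
    int cnt = 0;
    for (int i = 0; i < n; i++)
        if (d.find(i) == i && !compMarked[i]) cnt++;
    return cnt;
}
```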
|
+12
Do they need to be though? I have never noticed this fact when solving problems in gym. Look at most ICPC contests, they have all kinds of problems with all kinds of difficulties. Why do we need ABC to be completely different from EFG? What's the point? Why not have separate contests, one with ABC type of problems and others with DEFG type?
Of course it is, this whole blog post is about people who have not been enjoying the last CF rounds... |
|
+31
Yeah, I only looked at the 3 easiest problems, I was stuck on them for 3/4s of contest time. After that, I was like "meh, I don't care about this". What should I have done? Go to div2 E when I'm stuck in div2 B?
I don't enjoy solving these adhoc/one-liner problems and I don't want a barrier made of them stopping me from getting to the interesting ones. Maybe I should just quit CF then... |
|
+29
Yes, it seems their line of thinking is like this "let's avoid graph, dp, binary search, data structures and the like, these are all uninteresting and too standard for div1 and become mere typeraces for them, instead let's make whole rounds of adhoc/observation-forces and give them to div2, afterall these are more interesting to div1". |
|
0
I did it (2 sparse tables for the heights + 2 segment trees to query the minimum value of elements on the stacks + 2 binary searches to find the range on which I must query), but it turns out this was completely useless, because after every query on a range of elements in the segment trees I would pop out exactly the same elements from the stack. |
|
+11
HL won't always win; it depends on the parity. You can prove this by induction. Suppose we have $$$x$$$ stones in total, for some even $$$x$$$, and that the max pile is not bigger than all the others combined. If $$$x = 0$$$, the proposition holds trivially. Assume, by induction, that the proposition is true for all arrangements with $$$x - 2$$$ stones. Then, no matter which pile T chooses, HL can always choose another one such that the (possibly new) max pile is not bigger than all the others combined. Moreover, after this we'll have $$$x - 2$$$ stones. By the induction hypothesis, HL wins. |
|
+18
You can also use the centroid decomposition to compute distances between any two nodes in $$$O(\log n)$$$, so you don't need LCA to solve Xenia and Tree. Just compute the distances from every node to its ancestors in the centroid tree. Each node has at most $$$O(\log n)$$$ ancestors, so you will need an additional $$$O(n\log n)$$$ memory, but there is basically no runtime overhead: while constructing the centroid decomposition, you can compute distances to the previously found centroid (from its descendants) in the same DFS you use to find subtree sizes. Now, to compute the distance between any two nodes you just need to find their LCA in the centroid tree. See my submission 90467624 for details. |
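A sketch of that construction (weighted edges for generality; structure and names are mine, not the linked submission's). Distances to all centroid-tree ancestors are stored per node; `dist(u, v)` minimizes the sum over common ancestors, which is correct because the centroid-tree LCA of `u` and `v` lies on the `u`-`v` path:

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <utility>
#include <vector>

struct CentroidDist {
    int n;
    std::vector<std::vector<std::pair<int, int>>> g;  // (neighbor, weight)
    std::vector<int> sz;
    std::vector<bool> removed;
    std::vector<std::vector<int>> anc;       // centroid ancestors, top first
    std::vector<std::vector<long long>> up;  // distances to those ancestors

    explicit CentroidDist(int n)
        : n(n), g(n), sz(n), removed(n, false), anc(n), up(n) {}

    void addEdge(int a, int b, int w) {
        g[a].push_back({b, w});
        g[b].push_back({a, w});
    }
    int calcSize(int v, int p) {
        sz[v] = 1;
        for (auto& e : g[v])
            if (e.first != p && !removed[e.first]) sz[v] += calcSize(e.first, v);
        return sz[v];
    }
    int findCentroid(int v, int p, int tot) {
        for (auto& e : g[v])
            if (e.first != p && !removed[e.first] && 2 * sz[e.first] > tot)
                return findCentroid(e.first, v, tot);
        return v;
    }
    // record the distance from every node of the current component to c
    void collect(int v, int p, long long d, int c) {
        anc[v].push_back(c);
        up[v].push_back(d);
        for (auto& e : g[v])
            if (e.first != p && !removed[e.first])
                collect(e.first, v, d + e.second, c);
    }
    void build(int v = 0) {
        int c = findCentroid(v, -1, calcSize(v, -1));
        collect(c, -1, 0, c);
        removed[c] = true;
        for (auto& e : g[c])
            if (!removed[e.first]) build(e.first);
    }
    long long dist(int u, int v) {
        long long best = LLONG_MAX;
        // common centroid ancestors form a shared prefix of both lists
        for (size_t i = 0; i < anc[u].size() && i < anc[v].size()
                           && anc[u][i] == anc[v][i]; i++)
            best = std::min(best, up[u][i] + up[v][i]);
        return best;
    }
};
```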
|
+1
You can't run BFS at all positions, that would be quadratic in the number of vertices (which in this case can be $$$10^6$$$). |
|
0
Did you precompile the headers with the same flags as you are trying to compile your program? |
|
On TooDumbToWin → ICM Technex 2018 and Codeforces Round #463 (Div. 1 + Div. 2, combined) Editorial, 6 years ago
0
How do you deal with the fact that the $$$a_x$$$ won't necessarily be sorted in each subtree? Edit: never mind, I misunderstood it. |