Comments

Alright, there is more to it if you are familiar with generating functions and polynomial Taylor shifts (see adamant's blog). I just had this idea today, and it makes the earlier intuition about differential equations concrete.

Let's take a look again at our set of equations:

$$$ \begin{align*} \Delta^2 W_k &= -\frac{n(k + 1)}{m},\\ \Delta W_1 &= W_1 - \frac{n}{m},\\ W_m &= 0. \end{align*} $$$

Note that we only defined $$$\Delta^2 W_k$$$ for integers $$$k \in [1, m - 2]$$$. Let us forget about this restriction and think of $$$\Delta^2 W_k$$$, $$$\Delta W_k$$$ and $$$W_k$$$ simply as ordinary polynomials in $$$k$$$ (allowing their "continuation").

We know that, for a polynomial $$$P$$$,

$$$ P(x + 1) = (\exp(D)P)(x). $$$

Thus

$$$ \Delta P(x) = (\exp(D) - I)P(x), $$$

where $$$I$$$ is the identity operator. Of course, we can iterate this to get $$$\Delta^2 P = (\exp(D) - I)^2 P$$$. Applying $$$\frac{D}{\exp(D) - I}$$$ twice to both sides yields

$$$\displaystyle \frac{D^2}{(\exp(D) - I)^2} \Delta^2 P = D^2 P. $$$

Letting $$$P(x) := W_x$$$ and using the fact that

$$$\displaystyle \frac{D^2}{(\exp(D) - I)^2} = I - D + O(D^2) $$$

we get

$$$\displaystyle P''(x) = (I - D + O(D^2))\left(-\frac{n(x + 1)}{m}\right) = -\frac{n}{m}x. $$$
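For completeness, the expansion used above can be checked directly: since $$$\exp(D) - I = D + \frac{D^2}{2} + O(D^3)$$$, we have

$$$\displaystyle \frac{D}{\exp(D) - I} = \left(I + \frac{D}{2} + O(D^2)\right)^{-1} = I - \frac{D}{2} + O(D^2), $$$

and squaring gives

$$$\displaystyle \frac{D^2}{(\exp(D) - I)^2} = \left(I - \frac{D}{2} + O(D^2)\right)^2 = I - D + O(D^2). $$$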

We got ourselves a differential equation! Integrating twice gives us two constants:

$$$\displaystyle P(x) = -\frac{n}{6m}x^3 + C_1x + C_2. $$$

The initial value conditions yield $$$C_1 = \frac{nm}{6}$$$ and $$$C_2 = 0$$$.

Solution: 281804074.

Oops, that’s right! Thanks for spotting it.

+45

Some may be standard for IGMs and LGMs, but I don't think that holds for most Masters and low reds. Many (not all) Div 2 rounds are definitely suitable for a "Div 1.5", and the coordinator should be able to decide this.

+10

Add a betting system to Codeforces, so we can place bets on who will win each round and also on ourselves against our rivals.

+129

Whenever you feel interested in it.

Alternatively, you can put new nodes in a std::deque instead of using new and delete.

Yeah, those are most likely cheaters.

Resubmission also causes (previous) submissions to be skipped.

True.

I completely agree with the prior comments by kal013. In particular, I don’t think what you’re describing is “common sense” because this discussion indicates that your sense of what “should” FST/get rejected on pretests is certainly not common/shared between participants. This means that even in your ideal world, the set of pretests will be influenced by the author’s subjective judgment about what should FST.

I referred to "common sense" when I was talking about differentiating a correct solution with a random bug from an outright wrong solution originating from a guess, not about what should FST (on this I was clear: I want the former to fail pretests and the latter to fail system tests).

But yeah, I guess I am more ok with some extra subjectivity from the setters than others. :(

You suggest that your proposal would reduce the element of guessing in programming contests, but in fact, it would force contestants to make more guesses, rather than fewer, about things like “is there a maximal test in pretests?,” “would the pretests fail me if I implemented my optimization wrong?,” “would pretests have caught me if this observation was incorrect?,” and so on.

In my proposal there would be no such questions:

  1. I already stated that I am in favor of having maximal tests in pretests.
  2. I already stated that I am in favor of having pretests catch slightly wrong implementations of a correct optimization.
  3. It's very clear that I am against pretests that catch wrong assumptions, so the contestant should assume that there are no such pretests. Also, that's part of the whole purpose of making guessing riskier.

Outside of having no pretests, there is no way to be perfectly consistent with the principles you have given. In practice, almost any pretests will fail a bad greedy, so under the system you’ve proposed, the most likely situation is that greedy solutions that are very incorrect (i.e., pass only a small proportion of test cases) will fail pretests while greedy solutions that are almost correct (i.e., only fail on a handful of edge cases) will FST. It’s unclear to me why you should be penalized more for writing an almost correct solution than for submitting a solution that is clearly wrong.

Is it a huge issue if we're not perfectly consistent? I never set a contest, but I was under the impression that creating tests goes like this: 1) generate a bunch of random small tests, 2) generate some random big tests, 3) add special tests to catch specific things; and what differentiates weak tests from strong tests is basically part 3. I am only suggesting a different approach to that part; it doesn't seem like it would make a big difference for an almost correct solution compared to the current approach (it should fail something in part 3 if the setters foresee it, or something in part 1 by chance). How is the current approach better in this case? This seems more like an argument against system tests in general.

You suggest that implementing this system would reduce the element of guessing in competitive programming. I can guarantee that people will still attempt proofs by AC under the new system; the change would just make the stakes of this guessing game much higher.

True, but that's part of my point. Making it riskier.

Again, this makes little intuitive sense to me—why is submitting a greedy that fails on a particular edge case worse than submitting a solution that is the wrong time complexity or that fails on a wide range of possible tests?

This shouldn't happen, if I'm not completely wrong in what I just wrote.

Sorry for calling you a robot, sir.

TL;DR: why people are promoting this system is beyond me. Please explain :(

It's more fun.

The example you give does not follow my statement, "Trying to get some sort of ill-defined middle ground would just create more problems than it solves." In your example, having an extreme take would be an obvious detriment to contest quality.

It only creates problems for people who guess solutions. You haven't stated any other problem until now.

Do you expect participants to be able to judge the constant factors and estimate whether the solution fits the time limit without a max-test?

I just said above that I am in favor of max-tests.

The only real 'benefit' of your proposal would be cases where a participant would have usually been able to come up with the correct solution within the contest duration, given they get the information of (TLE/WA etc.) on their unproven submission, would FST instead.

Why do you consider this to be the correct approach given the participant has the ability to come up with the correct solution and gets a penalty for the incorrect attempt?

Is this participant able to come up with a proven correct solution in contest time? Or an unproven correct solution? If it's the former, my approach does not affect them. If it's the latter, then they are just guessing. Should they be rewarded for guessing correctly, as in a casino?

You do seem to use common sense as an argument a lot, especially in a discussion involving subjectivity and a little ambiguity.

We're not robots following instructions to launch a rocket. Human beings can work with loosely defined concepts. Although you may be an exception.

Then pretests should be just strong enough to catch silly bugs, and nothing else.

To promote that, there could be a no pretest system as in TopCoder. Trying to get some sort of ill-defined middle ground would just create more problems than it solves.

Yes, everything should be either black or white like this. No middle ground... Let's apply such reasoning to other things. What's your opinion about DP problems? Do you like them? How many problems in a contest should be about DP? Do you have an answer or some ill-defined middle ground? Should we choose between all DP or no DP then?

You don't need a well defined middle ground as long as you use common sense, ffs.

My point was that such submissions are not differentiable by test cases (i.e. both would either fit the time limit or not). So you have to either include an extreme test in the pretests, which would fail both solutions, or both of them would pass pretests.

I never said I was against extreme tests in pretests (ok, maybe I wasn't clear enough). I am in favor of it actually. What I am against are tests specifically designed to catch outright wrong solutions. For example, let's say there is some quantity $$$X$$$ in a problem (that is not part of the input) which some people are likely to assume is $$$O(N)$$$ when actually it is $$$O(N^2)$$$. I don't think setters should create a test case where $$$X \approx N^2$$$, unless the expected time complexity of a solution is $$$\Omega(N^2)$$$.

Consider a problem requiring a dp optimization and two submissions, one without said optimization and another with a slightly wrong implementation such that the complexity analysis no longer holds. What would you do?

I'd be in favor of adding pretests to catch the slightly wrong implementation, though it's unlikely problem setters would even come up with all possible slightly wrong implementations.

There do exist general guidelines such as not having a spike in difficulty, not easily google-able etc. Anyways, it proves my point -- why add an extra layer of subjectivity to the contests?

To make hacking great again and, as I already said, avoid proofs by AC.

Why should uninteresting bugs be treated as 'less wrong' especially corner cases?

I didn't say they were 'less wrong'. Technically they are as wrong as any other bug, they are just uninteresting and annoying in comparison to other stuff like whether or not your solution has the correct complexity.

Also, who would define a bug to be interesting/uninteresting -- it seems very arbitrary.

Bruh, just use common sense. Any decision when it comes to organizing a contest is arbitrary, or do we have exact rules to decide if a problem set is good or bad?

Unpopular opinion, but I think pretests should be made weaker, not stronger, in order to discourage guessing and proof by AC. Pretests should be there just to catch uninteresting and annoying bugs like swapping $$$N$$$ and $$$M$$$ by mistake, corner cases, etc. They shouldn't check if your solution is $$$O(N^2)$$$ instead of $$$O(N)$$$ or if your greedy is actually correct.

"A couple"? Pfft, such a noob. I only needed one contest to reach purple.

Mike, please create Division 1.5. Some (not all, of course) Div 2 rounds have hard enough problems at the end to make an interesting contest for oranges and low reds (packing purples into this division would be good too). I don't think it makes sense to have participants in this rating range waiting until the next Div 1 just because we don't have hard enough Div 1 E/F problems for LGMs.

You should not be using that list as a roadmap, it's just there for referencing. Use a book or problemset like CSES instead.

SOS DP but with multisets.

+71

I suggest putting the voting buttons in the editorial. People should provide feedback only after they have seen the solution.

As I see it, cheaters don't need more than 1 minute to copy-paste solutions and submit. Sure, longer contests mean the source of leaked solutions would solve more problems, but so would honest participants. Why would that affect the difference between ranks of cheaters and non-cheaters?

Please state how much time you think is optimal and why.

Just cancel all future Div 2 rounds. This way, 99% of cheaters will have no time to cheat.

I am not going to write vectors of vectors of vectors of vectors to solve 4-dimensional DPs...

Just avoid global variables like the plague in problems with multiple test cases.

On pblpbl's Help with tree dp problem, 4 years ago
0

Greedy works. Root the tree somewhere and run DFS. After you visit all children of a particular node $$$u$$$, either:

  • For all marked nodes $$$v$$$ in the subtree of $$$u$$$, we have $$$dist(v, u) \ge D$$$. In this case you can mark node $$$u$$$ too.
  • Otherwise, while there exist two marked nodes closer than $$$D$$$ to each other, discard the higher of the two. We'll discard at most 1 node from each child subtree, so to do this we just need to sort the highest marked nodes from each subtree.

To see that this works, just note that, after processing $$$u$$$, we'll have the optimal pair (maximum number of marked nodes in the subtree of $$$u$$$, depth of the highest marked node).

If CF admin decides to rejudge touxue's submission, I hereby politely request to rejudge mine as well. It also TLEd on pretest 12 during systest, and I was rank 2, right behind touxue, when the contest ended.

Edit: nevermind, it fails test 35 anyways...

I also received two emails (but only used the key in the first one).

Normally, I wouldn't worry too much about 30 mins of speedtyping, but the fact that this combined round is just 2 hours long makes me nervous, since it leaves only 1.5 hours for Div 1 problems. That is, if we do not fuck up and waste more time on the easy ones...

On WitchOfTruth's On Editorials, 4 years ago
+8

Your argument in the comment I replied to was literally "this hasn't been done before, therefore it's not good". Is that not being unreasonably conservative?

On WitchOfTruth's On Editorials, 4 years ago
+8

I think it was clear I was speaking in loose terms and not literally referring to the subject at hand. But since you feel the need to attack strangers on the internet, maybe you're the one that needs to get a life?

Chill out.

On WitchOfTruth's On Editorials, 4 years ago
+1

This is a pretty big assumption. If it really comes with no cost, don't you think people are that dumb to not apply it?

Yes? Some people are too conservative and unreasonably against change, no matter how little that change would cost or how much it'd improve our lives. They will really fight against change, even when they can't cite a single downside. Just like you here.

On WitchOfTruth's On Editorials, 4 years ago
+13

Sorry, I don't see an argument being made here, just you rambling...

If we could have something at basically no cost, why not have it? Why are you so against the idea? Will you stop participating if editorials get posted right after the rounds?

I think her point is that this is exactly the type of problem that comes up as Div 2 A, B and C. Problems that look hard at first sight but are trivialized by simple observations. You could open almost any Div 2 round and find problems like these (well, except problem 4, which is unlike the others and has nothing to do with the title). What's so interesting about a blog post of random, unremarkable problems?

On WitchOfTruth's On Editorials, 4 years ago
+15

I think you are just listing non-issues. Could you use your imagination a little bit? All the problems you mention could easily be solved by not scheduling a round before the editorial is ready or almost ready. Just as you wouldn't schedule a round before having problems, you could also not schedule it before having an editorial...

On WitchOfTruth's On Editorials, 4 years ago
+78

This is very annoying. If they are not going to post the editorial right away, they should at least say when they will do it, so we don't have to keep checking CF blogs to see whether it was posted or not.

I have no idea why the editorial uses the word $$$mask$$$... I found it easier to define $$$dp_{node,l,r} = $$$ minimum cost to prevent $$$s[l, ..., r]$$$ from appearing as a subsequence of the interval corresponding to $$$node$$$ (and assuming that $$$s[l, ..., r - 1]$$$ has already appeared), where $$$s$$$ is the string "abc". Transitions to merge two nodes are straightforward (just three nested loops, yielding $$$20$$$ operations per merge).

Interesting. I have also noticed these parallels.

I wonder if we should try to apply sports periodization to competitive programming. I guess the analogy would be: a long period where you solve a bunch of standalone hard and slightly hard problems (by standalone I just mean outside a competition environment) and upsolve problems from previous contests you did, followed by a shorter period where you take lots of contests and virtual participations and ignore upsolving/practice of standalone problems.

CF staff just ghosts problemsetters. It is no wonder that we are out of contests.

+42

Law of diminishing marginal returns.

0

Moreover, the contestants could have analyzed some random tests on paper and still come up with the guess.

Is this supposed to make a problem interesting? Checking small tests until you find a pattern and then YOLO submit it with no understanding as to why it works?

+84

Round coordinated by antontrygubO_o -> guaranteed guessable problems where you can AC without proving.

Alternative solution:

For a subset of candies $$$S$$$, let $$$w(S)$$$ be the polynomial whose coefficient of $$$x^k$$$ is the number of ways to select $$$k$$$ candies from $$$S$$$ (in this case, that's simply $$$\binom{|S|}{k}$$$) and $$$s(S)$$$ the polynomial whose coefficient of $$$x^k$$$ is the sum of the number of distinct colors over all subsets of size $$$k$$$ in $$$S$$$. If $$$S$$$ only contains candies of a single color, then $$$s(S)$$$ is trivial to compute. Also, if the colors in $$$S$$$ are all different from the colors in $$$T$$$, then $$$s(S \cup T)$$$ is simply $$$s(S)w(T) + w(S)s(T)$$$. So this gives an algorithm: start with the sets $$$S_c$$$ of candies with color $$$c$$$ and their polynomials and, while we have more than one set, pick two sets $$$S$$$ and $$$T$$$, compute the polynomials of $$$S \cup T$$$ and discard $$$S$$$ and $$$T$$$. If we merge in Huffman-coding order (always picking the two smallest sets), this runs in $$$O(N \log^2 N)$$$.

  • The pool of testers for each round is limited thus the rating will fluctuate a lot and has little meaning.

It wouldn't be perfect, and it doesn't have to be. But if we had like 10 or so high rated testers, I'm sure we'd get something meaningful.

  • It's hard to properly rate the quality of a problem above your level.

That's fine? If a contest is only interesting to LGMs because of a few problems at the end, then most participants would find it a bad contest, because it actually is a bad contest for them.

  • Last but not least, Div 1 rounds are so scarce that people will participate anyway.

My point is not to have less people participating, but having more accurate expectations before a round.

Yeah, but Anton and Mike are whining about the current mechanism. People need a voting system so they can vote "1 out of 5 stars" anonymously instead of saying "contest is shit" in the announcement.

Also, it doesn't help that there will always be testers telling everyone how wonderful the problemset is, regardless of quality. The testers that do dislike the problems just stay silent. If we could see the testers' average rating of a contest before it takes place, it would do wonders for everyone's expectations and we'd see less of this offensive criticism (those folks wouldn't even participate in that case).

+11

That does not guarantee optimality of the construction.

Maintain the suffix array of the reversed string (prefix array?) online with hashes and binary search (in a set with custom comparator). Now just count the total number of different substrings and subtract the number of substrings that appear exactly once. Both of these things are easy to calculate via the LCP of adjacent suffixes in the suffix array, so you can easily calculate the difference that each update causes in the answer.

I think a struct like the one below is really convenient for these types of problems.

Code

The usage is basically:

  • If you know that you will want to get back to the current state some time in the future, make a checkpoint.
  • Do whatever you want (maybe make other checkpoints?), but call save on every variable of relevance before you change it (only the first save after each checkpoint is really necessary).
  • Call rollback to revert to the previous state (to the way things were when you last made a checkpoint).
On NM_Mehedy's Is cerr evil?, 5 years ago
0

Very evil. Accepted: 117533891. Idleness limit exceeded: 117532620. It's not even an interactive problem.

Antontrybug rejected all the problems, and now the setters went on strike. Probably.

The tax-included prices are strictly increasing, so if the tax-included price of integer $$$A$$$ is $$$X$$$, there are $$$X - A$$$ skipped integers up to $$$X$$$. Now you can use this to find some closed formula, or just binary search.

Maybe forcing the queries to be answered online would've been a good option here.

You don't need to change anything inside the segment tree class when passing a struct like that. Full implementation if you want to see it.

Another way is basically wrapping up everything that you need to define for the operations inside another struct, like:

template<typename T>
struct M {
    using Type = T;
    inline const static T Id = 0;
    static T op(const T& x, const T& y) { return x + y; }
};

Then you just need to pass M<whatever> as a template parameter to the segment tree, and nothing else.

Making the round rated once you click the "Read Problems" button requires a higher level of commitment from contestants, which sometimes they aren't ready to provide.

What's the difference between committing yourself in the last 10 minutes before a round starts (by confirming your registration) vs. 5 minutes after it starts when you hit the submit button on problem A?

On Anadi's Good Bye 2020 Editorial, 5 years ago
+10

If you know about the rank-nullity theorem and that the rank of a matrix equals the rank of its transpose, the problem becomes: for each vector with a single non-zero entry $$$u$$$, mark vertex $$$u$$$; otherwise it has another non-zero entry $$$v$$$, so add an edge between $$$u$$$ and $$$v$$$. Now count the number of connected components without a marked vertex (this equals the nullity of the $$$n$$$-by-$$$m$$$ matrix whose rows are the vectors given in the problem). No need to create a virtual vertex.

INF is too small. Made the same mistake during contest. :(

Algorithms/DS problems are hard.

Do they need to be though? I have never noticed this fact when solving problems in gym. Look at most ICPC contests, they have all kinds of problems with all kinds of difficulties.

Why do we need ABC to be completely different from EFG? What's the point? Why not have separate contests, one with ABC type of problems and others with DEFG type?

Whether you enjoy CF or not is not related to the argument.

Of course it is, this whole blog post is about people who have not been enjoying the last CF rounds...

You looked at the 3 easiest problems

Yeah, I only looked at the 3 easiest problems. I was stuck on them for three quarters of the contest. After that, I was like "meh, I don't care about this". What should I have done? Go to Div 2 E when I'm stuck on Div 2 B?

If you want to face those problems in contest, stop getting stuck on one liner problems and get good enough to reach those problems.

I don't enjoy solving these adhoc/one-liner problems and I don't want a barrier made of them stopping me from getting to the interesting ones. Maybe I should just quit CF then...

Yes, it seems their line of thinking goes like this: "let's avoid graphs, dp, binary search, data structures and the like; these are all uninteresting and too standard for Div 1 and become mere typeraces for them. Instead, let's make whole rounds of adhoc/observation-forces and give them to Div 2, after all these are more interesting to Div 1".

On i.e's Codeforces Round #669 Editorial, 6 years ago
0

I did it (2 sparse tables for the heights + 2 segment trees to query minimum value of elements on the stacks + 2 binary searches to find the range on which I must query), but turns out this was completely useless, because after every query on a range of elements in the segment trees I would pop out exactly the same elements from the stack.

92278634

On atoiz's Codeforces Round #666, 6 years ago
+11

HL won't always win; it depends on the parity. You can prove it by induction.

Suppose we have $$$x$$$ stones for some even $$$x$$$ and that the max pile is not bigger than all the others combined. If $$$x = 0$$$, the proposition holds trivially. Assume, by induction, the proposition is true for all arrangements with $$$x - 2$$$ stones. Then, no matter which pile T chooses, HL can always choose another one such that the (possibly new) max pile is not bigger than all others combined. Moreover, after this we'll have $$$x - 2$$$ stones. By the induction hypothesis, HL wins.

You can also use the centroid decomposition to compute distances between any two nodes in $$$O(\log n)$$$, so you don't need LCA to solve Xenia and Tree. Just compute the distances from every node to its ancestors in the centroid tree. Each node has at most $$$O(\log n)$$$ ancestors, so you will need additional $$$O(n\log n)$$$ memory, but there is basically no runtime overhead: while constructing the centroid decomposition, you can compute distances to the previously found centroid (from its descendants) in the DFS you use to find subtree sizes. Now, to compute distances between any two nodes you just need to find their LCA in the centroid tree. You can see my submission 90467624 for details.

+1

You can't run BFS at all positions, that would be quadratic in the number of vertices (which in this case can be $$$10^6$$$).

Did you precompile the headers with the same flags as you are trying to compile your program?

How do you deal with the fact that the $$$a_x$$$ won't necessarily be sorted in each subtree?

Edit: never mind, I misunderstood it.