244mhq's blog

By 244mhq, history, 4 months ago, In English

Hi, a few disclaimers first.

  1. I work as a coordinator on CF, but this blog is solely my initiative (I didn't tell anyone from CF about it).

  2. I have been in the author role a few times, so I think I have the right to speak on behalf of both authors and coordinators. Obviously, that doesn't mean all authors/coordinators think the same, but (based on discussions with some people) I think my opinion is not unpopular.

The main point of this blog: I feel like our community is becoming much more aggressive towards anything it doesn't like during a round (some of which I can't even call issues). I don't think that's a good direction, and I think it can decrease the motivation of authors and coordinators to do more rounds.

Now I will explain why I think so.

Obviously, you might guess that this blog was inspired by yesterday's round, but by a few recent ones too (for instance, CF 1069 and CF 1058).

The first thing that any person will see on the home page now is that this round was heavily downvoted. I checked the comments and found the following reasons:

  1. Weak samples in some of the problems. I understand that currently it's not a standard thing on CF (and usually samples are very strong), but I don't think it's a bad thing to have from time to time — it encourages people to try to prove their solutions more, which is a great skill.

  2. Boring problems. While I agree that some problems were standard (and, as a coordinator, I might not accept some of them), I think that the overall quality of the round was not bad.

Obviously, I express my subjective opinion here, which might be different from yours, and also might be biased because I got a positive delta and didn't have any issues with my solutions because of weak pretests.

But in the same way that a positive delta in our performance is important to a lot of us, it's important for authors (and coordinators) to see that their work is appreciated.

And, just to emphasize what part of the work was done well:

  1. There were no issues (at least that I know of) with the preparation of the round — the statements, tests, and validators were good.

  2. The balance of the round was good.

  3. The editorial was released quite fast after the round.

  4. There were no problems that were well known.

So, I want to finish my rant with 2 things:

  1. Please, understand that your opinion as users who solve rounds is very important for us (authors and coordinators). So, even if you disliked the round — please, try to write constructive feedback where you write not only about the things you didn't like, but also the things that you liked there. Also, don't forget to upvote the blog if you think that the round quality was at least not much worse than average. This might be a controversial opinion, but generally I think that it's better to press upvote on a round, even if you're slightly unhappy with it, just to encourage authors for future work.

  2. If you're generally unhappy with the quality of the rounds — please, try to help. There are a lot of ways you can help (based on the amount of free time you have) — testing, authoring, coordinating.

Thanks for coming to my TED talk.

+1436

»
4 months ago | +10

Upvoted!

»
4 months ago, Rev. 3 | +28

I think you're right. Actually, I don't know why there are so many downvotes in recent rounds.

Well, I don't think Codeforces problems are bad at all. I just think the problem descriptions could be more interesting, like AtCoder's (just my own opinion).

I think this blog should be upvoted.

UPD: So I upvoted this blog.

»
4 months ago | 0

I like every CF contest. Keep creating nice contests.

»
4 months ago | +5

Most people in the community don't have their own opinion: they see a higher-ranked person downvoting and join the queue. I've seen this quite a few times, even on good, educational material, so half the time the heavily downvoted things are just people messing around. But yes, it's fair to say that it demotivates the setters.

»
4 months ago | +84

I agree. I only just checked because of this blog; I really didn't expect the announcement to be at -1200.

I think C is too hard/tricky for its position; even a lot of LGM contestants got multiple WAs on it, there are many wrong ideas, and the samples don't rule out any of them. Maybe they could have swapped C and D. Other than that, I liked A, B, D, and E. Honestly, I thought the round was pretty average; I didn't even expect the vote balance to be negative.

»
4 months ago | +5

The balance of the round was good.

I think many found problem C to be too hard for its place. No matter the quality of the harder problems, due to the nature of Codeforces rounds most people will not even read them, so the upvotes depend disproportionately on the easier problems of the round.

»
4 months ago | +5

it encourages people to try to prove their solutions more, which is a great skill.

Exactly. I mean, why are you even searching for the solution in the test cases in the first place? This isn't decryptforces.

»
4 months ago | +28

The downvotes were probably cheaters who were mad that C wasn't AI solvable.

»
4 months ago | +25

There is no need for strong samples; I believe samples are there to show the output format, not to help solve the problem. Moving forward, we could restrict it to 1 or at most 2 samples per task.

»
4 months ago, Rev. 2 | +7
»
4 months ago | +286

This blog is good, but it is not good enough. I find some behaviour of commenters and downvoters just absolutely insane.

  1. Samples are not there to explain to you how to solve the problem. Samples have 2 functions: to showcase the input/output format and to make sure you understand the problem correctly. Understand the problem, not how to solve it. If I see a person say "my stupid greedy passed samples and I got WA2, therefore it is a bad problem", all their opinions are instantly invalidated in my head. Not so long ago, there were debates about whether having weak pretests is OK (yes, it is); how did we get to "if my solution passes samples and doesn't get AC, it is a bad problem"?

  2. People who comment "the worst contest ever" on a perfectly normal contest just because they got -100 delta. Did you mean "the worst contest ever for me in terms of my performance because I was stupid"? Because this is a very different thing and is better formulated as "the worst performance ever for me" (and nobody cares). I remember people being surprised at how I can praise the problems while having a bad performance. What is wrong with you? NEVER write "it was a bad contest" if you mean you had a bad performance. You can bitch to your boyfriend about it, but don't blame the authors.

  • »
    4 months ago | +129

    These comments are a trend on the Internet nowadays, and they exist not just on Codeforces. The samples or rating deltas are not what actually matters; it's just a bad habit of expressing complaints that is spreading in the community.

    It's the same as after a football match, when people post things like "you're playing shit" under some random player's post. Often it's not even because the player actually did badly; it might simply be that the fan lost his bet.

    This trend is also spreading to more serious situations, like official contests. In China, almost all Olympiad contests receive very negative feedback from contestants, while most of the time I don't even get what they are complaining about.

    I'm personally saddened by this side of the Internet, but I guess it's because we don't know each other online, so doing very rude things carries no guilt at all, while people have to mind their behavior offline. Sadly, I don't think this can be resolved easily, unless Codeforces really puts in place some measures to ban the most extreme users.

    • »
      4 months ago | +1

      Most content on social media is negative or just ragebait (simply because it gets more views). And many kids can’t distinguish real life from the internet. That’s why they think it’s okay to give opinions in a rude or provocative way.

  • »
    4 months ago | +18

    Samples are not there to explain to you how to solve the problem.

    Isn't there a general trend that the pretests are good and at least cover some obvious corner cases? If samples were always this weak (or at least if weak ones weren't this rare), I and many others would see no problem with that. I don't remember ever seeing something like 11k wrong attempts on a problem on CLIST. I don't think I need to explain the frustration that getting WA2 brings, especially when it happens this rarely.

    There were a couple of other problems with this round (the order of C, D, E is not clear, and F is boring; I don't yet have a personal opinion on the latter, but it was stated), which accumulate into a not-so-positive experience.

    I also think the main reason for this many downvotes is that the majority of contestants experienced the issues with problem C and were sad because of that.

  • »
    4 months ago | 0

    I could not agree more with your first point. I also think there needs to be an active push by higher-rated people like you and coordinators to instruct problem setters to make samples that don't give away the tricky parts/cases of the solution. Codechef is much better than Codeforces in this aspect.

    In fact, for a lot of Div 2 A/B/C problems, unless you are too strong for them, a significant amount of the time it is better to guess from samples if you fail to solve the problem just by reading it, or at least guess a simple solution/observation that would be appropriate for the problem and then prove it. I hate solving this way. One easy solution is to not care about rating at all, but I am not totally fine with that yet. I don't understand how one can genuinely enjoy problem solving but still want strong samples.

»
4 months ago | 0

TBH I don't understand the hate; just because the samples were weak doesn't mean you should downvote the entire thing :O

»
4 months ago | +63

Upvoted, I'm on 244mhq's side.

Always try to perform better in contests with an enterprising heart; that's the spirit of CP. For example, follow the great LGMs and IGMs and reflect on why they can do the problems that are "impossible" for you. By adjusting and learning, one can push oneself beyond one's limits.

Blame yourself instead of others , because you can truly change yourself.

Besides, one of my personal tutorials once got -36. As I had put much time into writing it and ensuring its quality, I felt depressed and confused. I felt down, and maybe I won't write anything like it any longer. Yes, authors and coordinators may feel just like me.

Be kind to the world, PLEASE. (0.0)

»
4 months ago | +41

I was also surprised to see the last global round so disliked when it wasn't broken: "just" misjudged difficulty and useless samples.

Weak samples in some of the problems.

Well, there's a difference between weak and "might as well not exist because it adds nothing to the problem". Samples shouldn't be particularly strong but should at least justify why said problem isn't div4A. They should demonstrate the problem statement, basically.

I don't think weak samples encourage people to prove their solutions more either. That would only work if even people at a decent level with decent experience wouldn't bother properly proving anything unless terrified into it by uncertainty and the risk of surely losing even a small amount of points, which isn't how CF contests work. I try not to overprove my solutions, because it consumes valuable time; an intuitive semi-proof is often good enough. Samples don't affect that. Even if I relied on samples more, the stakes are too low (for-fun online contests) and the number of points lost when I check validity by submitting is also low, so I might as well do that. Pretests are too strong, so it's best to take the risk.

Finally, as I pointed out in the global round's comments, it's (to varying degrees) easy to go from no samples to many by writing a bruteforce and a generator, at least much easier than trying to prove a solution on paper. When you can't write a bruteforce, you've got other concerns... for example misunderstanding the problem, and that for sure won't make proving your wrong solution a good use of your time. Preventing such misunderstandings should be the main purpose of samples.

»
4 months ago | -22

This is not how it works. The core problem here is that you have two systems baked into one: a quality assessment and a popularity contest, which are both reflected in the like/dislike balance and contribution. If you want a good quality-assessment system, then introduce a separate one which explicitly states that and has an easy way to score the problems. For example, I've seen some authors add a voting system to their editorials with awful/bad/neutral/good/amazing options for each problem.

Otherwise people will just continue voting based on how they feel, because the voting system is designed for that. If you hover over upvote/downvote buttons they literally say "I like it/I do not like it". They don't say "This content is high quality/This content is low quality".

  • »
    4 months ago | +55

    This is not how it works.

    This is exactly how it worked not that long ago, I want to say 5-6 years or so.

    It does not matter how many voting buttons you add and it certainly does not matter what some tooltip says. If people are angry with the contest because they lost rating or because the samples were weak or whatever the "issue" with the round was, they are going to express that anger in every way they can. If you add some parallel voting system with more downvote buttons, these angry people are simply going to downvote everything. They are not going to consider the finer distinction between quality assessment and some subjective dislike. The only reason why the polls in editorials might be shielded somewhat is that they are slightly out of view.

    The problem is entirely a cultural one. These days people have a lot of unjustified expectations from the author, and downvoting rounds has become an acceptable way of expressing that. This is nothing inherent to the system, and we know that because it hasn't always been this way.

    • »
      4 months ago | +2

      Most of your statements boil down to "it was better in the good old days" without addressing the core issue. And the issue is that the voting/comments system is designed in a way that doesn't nudge people in the "stop to think and then write" direction. Culture and all the subsequent behavior emerge from that, and the fact that the problem of heavy downvoting didn't manifest 5-6 years ago (which I doubt, btw) doesn't mean that the system itself was good. It just took more years and a bigger participant sample to discover the flaws.

      If you add some parallel voting system with more downvote buttons, these angry people are simply going to downvote everything.

      No. The issue here is timing. People are angry in the moment after the contest, but if you delay feedback collection by, say, 3 days, most people will calm down and probably take a more levelheaded approach. Also, a lot of people who don't really care will just forget about it and won't dilute the quality score with their votes. And some of those people will upsolve the problems and get a better understanding of why the round was good or bad, which will also make the measurement better.

      In any case, neither you nor OP is suggesting anything actionable. You can't talk people into behaving the way you want by asking nicely or bringing up examples from the past. You need to design a feedback process where people are constrained and forced to stop and think.

»
4 months ago | +16

I can imagine it's due to a lot of people (myself included) being tilted after the round, lol. Not an excuse, but a good reason for the problemsetters not to pay too much attention to mindless criticism.

»
4 months ago | +26

As an author who has prepared a contest before, I know that preparing a round is extremely time-consuming. Your idea might not be fresh enough; there may be an ancient problem that was extremely similar to yours and you didn't even notice; your tests may be missing one that lets an unexpected solution pass; and now the appearance of AI makes problem-setting even more difficult. So I really appreciate testers who help prepare a successful contest.

For me, I never downvote a contest when my rating drops a lot, because most of the time it is my problem. The only contests I have downvoted are: round 745 (extremely tricky problems), Goodbye 2023 (low problem quality), and round 1033 (the scoreboard looked so unusual that time that I thought the problems must have been leaked intentionally, so I got very angry and downvoted the post; I tried to cancel my downvote a few days later but failed).

»
4 months ago | 0

If one cannot even get test case 1, he will likely admit he doesn't get the problem; on the other hand, passing test case 1 gives some people the illusion that they have solved the problem.

In this specific case, if one tries the wrong greedy approach, one can easily come to x = (1 << msb) + (1 << ssb) - 1 and y = n xor x, think "Look, I have discovered the trap behind the weak sample!", and then ironically get wrong answer on test 2. In summary, the small trap in the wrong solution contributes to a more realistic illusion.
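As a small illustration, the wrong greedy described above can be sketched as follows. This is purely hypothetical: the actual contest problem is not reproduced in this thread, so the function below only implements the quoted formula, assuming msb and ssb denote the positions of the two highest set bits of n.

```python
def wrong_greedy(n: int) -> tuple[int, int]:
    # Hypothetical sketch of the wrong greedy quoted above; only the formula
    # x = (1 << msb) + (1 << ssb) - 1, y = n xor x is implemented here.
    assert bin(n).count("1") >= 2, "needs at least two set bits"
    msb = n.bit_length() - 1                 # position of the highest set bit
    ssb = (n ^ (1 << msb)).bit_length() - 1  # position of the second-highest set bit
    x = (1 << msb) + (1 << ssb) - 1
    y = n ^ x
    return x, y
```

For example, `wrong_greedy(6)` returns `(5, 3)`: x takes the two set bits of 6 plus everything below them, and y gets the rest.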

That is why many participants are enraged after the contest.

Note that I am trying to analyze why the announcement was downvoted, not whether it deserves downvotes.

»
4 months ago | 0

I don't think it's just the Codeforces community. It seems like a lot of communities, on all platforms, grow more aggressive and toxic over time, and it gets stupid sometimes, too.

»
4 months ago | -42

Samples are very important in every Codeforces problem, because they help A LOT to imagine how one might solve the problem and what unusual cases there might be.

»
4 months ago | +44

Thank you for bringing this up. Agree!

»
4 months ago, Rev. 2 | +27

Thank you very much for writing this blog! As someone who has been an author and also has friends who authored rounds, I can say it is super demotivating when people downvote or complain about your round. So please realize that authors and coordinators will make mistakes, or the round will not be perfect. And that is fine. They will be more upset about it than you. In a month you probably won't even remember what round number it was, but the authors definitely will. So if you're going to criticize a round, please take a walk outside, think about something else, and then in a few hours or the next day write something constructive.

»
4 months ago, Rev. 2 | +7

I agree.

»
4 months ago | -34

Why this happens:

  1. Mentally sane people point out the imperfections of the contest with no criticism implied (or light criticism with no aggression).

  2. Monks see comments from type 1 people, feel "empathy" and "justification" for their rating loss, rant like 69 IQ gamers with aggressive language, and escalate the situation.

  3. Monks who release hatred whenever their delta < 0.

By default, ch**ters are not counted.

So if you are not emotionally charged, watching monks is a nice activity. You will find these three types of people in almost every contest announcement. You can also find some ch**ters as a byproduct. Enjoy!

»
4 months ago | +3

Very well spoken (upvoted). It's a big concern for me as a participant as well, since creating new problems with fresh ideas has become really difficult for authors, especially after AI has gotten so strong. Further discouraging the authors and coordinators for petty reasons (just because people got a negative delta) would further decrease the frequency of rounds on CF. I will make sure to appreciate the rounds I participate in in the future.

»
4 months ago, Rev. 2 | -13

.

»
4 months ago | +8

People just don't want to take responsibility for their skill issue and put the blame on the authors instead. I think I used to be like that too.

»
4 months ago | +8

I really liked problem C. I felt stupid after not seeing that the exact thing I was trying to do with the last 2 numbers could be generalized to all of them. I shared the problem with some of my old CP friends who are now retired. Everyone gave the same wrong greedy in 2 minutes, but we enjoyed the discussion.

5-6 years back, pretests were not equal to systests and there used to be a lot of hacks. I have seen many people submitting multiple solutions even after passing pretests. But now people are demanding samples = systests?

»
4 months ago | 0

I guess the majority of downvotes were due to C. Well, it's a skill issue if you couldn't come up with the test case where the obvious greedy solution fails. I almost got it during the contest, but was only doing it for 2 numbers instead of all the numbers in the array, since I was not able to come up with that type of test case. That was my skill issue, so it doesn't matter; it was a great problem, tbh.

»
4 months ago | +8

Nowadays the CF community is a bit toxic.