I usually try to avoid things that are "political" in nature, but I will say something here because I think people being mean to problemsetters/coordinators is very very bad, and this problem is an especially bad instance of it!
I agree that the preparation of the problem was a bit sloppy due to not having an exponential bruteforce for the final version of the problem, but here are a few reasons why this does not warrant verbal punishment:
- In general, there's probably some small probability that any problemsetting team unintentionally produces an incorrect problem. Different teams have different probabilities, and actually producing an incorrect problem is of course correlated with that probability, but it's unfair to place immense blame on those who do. I think the correct attitude is to consider it an unlucky accident when it happens. As much as we like to pretend that we can prove things infallibly, this is simply not the case, because we're human.
- Preparing a problem lacking an exponential bruteforce arguably shouldn't be done when creating one is possible, but it's very understandable why this could be skipped — writing such things is tedious, especially if you have to do it for every single problem. And I get the sense that competitive programmers really dislike tedious things. In this case it was actually more nuanced than that, because there was an exponential bruteforce for the original, correct version of the problem, and it makes sense that one could be too lazy to change it, or simply forget. The incentive to not write a bruteforce is made even stronger by the fact that 99% of the time it was probably unnecessary.
- It was unintentional. I believe that problemsetters/coordinators do their job out of passion, and we should only be thankful to them for giving us fun things to do. I don't like the idea that there is fear of getting downvoted/hated due to something you didn't intend. In fact, in this case, it doesn't even make sense to downvote the announcement because they didn't contribute to the mistake.
Additionally, saying something mean to them because you would've had a good performance is quite silly because rating changes are pretty much zero-sum, so unrating any round would probably not affect the sum of rating change-induced happiness (I know that happiness might not be directly proportional to the size of the change, but whatever). In fact, this would've been my best perf in a while :(
Another point that has been brought up is "how did some large % of div1 participants proof-by-AC the problem? they are all sheeple!" I, unfortunately, am one of the sheeple. At least in my case, I did not intentionally proof-by-AC. The mistake in my fake-proof was that I thought the following statement was true: "f != g => the first non-consecutive swap reduces the number of inversions by > 1". Since we're trained so much to use the strategy of "guess a necessary/sufficient condition, and then prove that it's sufficient/necessary", it's easy to be sloppy with the proof part because we also want to AC fast. In fact, it seems to me that most of the sheeple-accusers didn't even rule out the fake solution by finding a hole in the proof, but by directly finding a countertest.
In conclusion, to the people preparing problems, please keep doing it so that I can one day become the grandmaster!

I'll silently drop my downvote on the blogs as a protest, to support the idea that things should be checked more thoroughly, especially for rare div1 rounds.
I see, I think that's reasonable. Although this has the downside that if everybody did this, and nobody refuted hostile comments, it could be misinterpreted as the downvoters sharing the same sentiment.
Yeah, div1 rounds are rare, so they should definitely get checked thoroughly. But at the end of the day everyone is human and anyone can make a mistake; the best we can do is try to reduce the chances of those mistakes happening.
I'm grateful that you are willing to speak on our behalf.
Even after this incident, I still believe maomao90 is a good coordinator. All of our problem-setting members, except for StarSilk (who was almost only responsible for D2F/D1D), were first-time problem setters for CF rounds. maomao90 helped us resolve many doubts, corrected our mistakes, and offered useful suggestions for modifying the problems (except for mistakenly removing a sentence in D2D/D1B's statement).
My advice for future problemsetters: in addition to writing exponential brute-force solutions, please be cautious with every "conclusion" you make, clarify its assumptions, and carefully write and check the proof process.
It's not a bad idea just to make ChatGPT do it. Just copy the raw statement and ask it to write an exponential brute force. It's pretty good at it. If a statement is unclear or missing something, GPT might write a wrong brute force or ask for a clarification. This has happened to me.
Maybe you are right. In fact, we discovered that ChatGPT could successfully solve D2ABCE1E2 but couldn't solve D2D several minutes after the competition started. However, we all believed that D2D had successfully defended against AI, and no one considered that there might be a mistake in the statement of D2D. With the increasing performance of AI, I suggest that future problemsetters pay more attention to the problem-solving process of AI.
The advantage of AI testers over human testers may lie in the fact that the entire thought process and reasoning chain of an AI can be fully transparent.
Note that GPT does not need to solve the problem to write an exponential brute force, so you can do this with any problem of any difficulty and ensure its correctness.
I suggest that after the AI writes the code, a human double-checks it.
AI is sometimes unable to write a hard bruteforce (like a DFS over a game tree) correctly, and makes very stupid mistakes. I have seen it 998244353 times.
Well yes, you should run your solution against the AI brute force. If it doesn’t match, you should investigate both.
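As a toy illustration of that workflow (this is not the contest problem; the claim checked here, that the minimum number of adjacent swaps needed to sort a permutation equals its inversion count, is a classic known-correct fact), a stress test could compare a fast "guessed" formula against an exhaustive brute force that searches all reachable permutations:

```python
import itertools
from collections import deque

def inversions(p):
    # Fast "model solution": count pairs (i, j) with i < j and p[i] > p[j].
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def brute_min_adjacent_swaps(p):
    # Exhaustive brute force: BFS over all permutations reachable by
    # adjacent swaps, returning the distance to the sorted permutation.
    start, goal = tuple(p), tuple(sorted(p))
    dist = {start: 0}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            return dist[cur]
        for i in range(len(cur) - 1):
            nxt = list(cur)
            nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
            nxt = tuple(nxt)
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                q.append(nxt)

# Stress test: compare the formula against the brute force on all
# small inputs; on a mismatch, the failing case is printed.
for p in itertools.permutations(range(5)):
    assert brute_min_adjacent_swaps(p) == inversions(p), p
```

The point is that the brute force never needs to "understand" the intended solution: it only restates the problem definition exhaustively, which is exactly why it catches wrong conclusions (and why an AI can often write one from the raw statement alone).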
I would suggest otherwise, due to the possibility of extracting training data from AI. Probably no one is going to do this for a CF round, but we still shouldn't feed the data to an AI before the round, to make sure it can't happen. Perhaps having a local AI write the bruteforce solution would be a better idea, but that's not accessible to everyone. I don't know if something accessible to everyone without the risk of leaking data exists, tbh.
(Yes, training data and prompt data can be extracted from AI; it's not easy, but possible. And OpenAI only promises not to train on the expensive API, not on the regular interface.)
edit: here is a paper about extracting training data from production AI models for those interested
edit2: note that I haven't actually read the paper so my concerns might be invalid
edit3: there is extracted plain-text training data at the end of the paper, so I'd assume my concerns are valid. It still might be too hard to pull off for a CF round (you are just getting the problem statement in advance, and for that substantial effort I wouldn't call it exactly worth it; convenience seems to be one of the driving reasons behind cheating), but we shouldn't leave the possibility open in the first place. We shouldn't rely on "surely no one would extract data from AI for the next Codeforces contest".