You are targeting the wrong people.
The authors are the ones who most hope that their contest runs smoothly. If the scoreboard is polluted by cheaters, you should blame those cheaters, not the authors. Rather, we should support the authors so that they don't get discouraged by the cheaters' unethical behavior.
Also, asking the authors to write only LLM-proof problems is nonsense. Problemsetting is already a tough task, and many, many problems are rejected for quality, duplication, balance issues, and more. At this point, filtering out problems that are solvable by an LLM adds a heavy extra burden just to come up with a problem idea that won't be rejected. Moreover, it would leave only specific types of problems, and the contests would lose diversity.
Even if it is still theoretically possible to write a contest with only LLM-proof problems, that can't last forever. As the models get better and better, there will remain almost no problem that AIs can't solve. Even when that time comes, we should continue the contests as if these cheaters just don't exist. They should all eventually be silently banned, and all of us honest people should enjoy the contests that are made only for us.
We need a better solution, like a report feature.
Surely Mike is working on such a feature right now.
There's no solution. Even if we somehow are able to detect if a certain code has been written by some LLM, there is no way of detecting people who write the code themselves, but get the solution idea from LLMs.
That is also much harder and slower. Not being able to catch 100% of cheaters should not be an excuse for not even trying to find the obvious ones.
Cheating being inaccessible, hard, and less rewarding will obviously lead to fewer cheaters, which is ultimately a good thing.
People also sometimes cheat in relatively high profile chess matches. Does it mean all anticheating measures should be dropped?
Cheating might decrease once companies stop using rating as a résumé evaluation metric.
Well, but if so, competing on CF may become even more pointless, just a hobby.
Isn't this the point of competitive programming?
bingo
Nope
Chess is not used as an interview test, yet we see that chess.com and other platforms are full of cheaters.
Cheaters are just sick people.
Even if none of the problems were solvable by LLMs, cheaters would still cheat using YouTube/Telegram groups. The problem is that when cheaters get banned, they just make a new account. I think banning their IP from official contest participation would help with this: cheat once, a 1-week IP ban; twice, 2 weeks; and so on.
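The escalating-ban idea above could be sketched like this (purely a hypothetical policy sketch, not an actual Codeforces feature; the function name and the linear one-week-per-offense schedule are illustrative assumptions):

```javascript
// Hypothetical sketch of the escalating IP-ban policy described above:
// each repeated offense adds another week to the ban.
function banDurationDays(offenseCount) {
  if (offenseCount < 1) return 0; // no offenses, no ban
  return offenseCount * 7;        // 1st offense: 7 days, 2nd: 14 days, ...
}

console.log(banDurationDays(1)); // 7
console.log(banDurationDays(3)); // 21
```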
I guess for ISPs that provide dynamic IPs this will be hard, since restarting the router might give you a new IP, which is the case for most people, or they could just use a VPN.
Instead, there is a solution called account/device fingerprinting, which I guess would help; you can find an example at FingerprintJS.
Completely agree. Although I enjoy rating more than most people, I don't want my fun of solving problems to be sacrificed for it.
Yes, this is true. We should all support the authors when their lives are made difficult, through no fault of their own, by cheaters.
Completely agree!!
Authors normally don't have enough context and experience, but coordinators do. For online contests it is better to avoid problems that can be solved with ~10 lines of code, at least for Div2C+ problems. Even without AI, these problems open up tons of opportunities for cheaters. And for some reason Codeforces coordinators simply don't care: at almost every contest you can see Div2D/Div2E-level problems of this type. The code is extremely short and doesn't allow plagiarism detection, and then we see all these "I guessed the formula, finally I'm ..." and "Not enough evidence to say they cheat" posts and comments to explain cheaters' "fantastic overperformance". Yes, online competitive programming is slowly dying, but it could have died much more slowly if the online judges had taken several steps in time (e.g. account validation, permanent bans, proper problem selection).
You don't get to say that when you guys literally set a one-liner in MHC.
I am not responsible for all problems at MHC, as you may guess.
Yes, I mean, you don't have a say in this matter, right? You should have either spoken out against setting that problem or refrained from arguing about it, really... I don't entirely refute your claim; it's just that you are not the best person to call it out.
You don't know anything about MHC problemsetting to state that. All you can check is that I am not the one who authored that problem. Moreover, if I have witnessed any downsides of decisions like that, I am in a better position to speak against using such problems. What you're saying is basically "If you have ever murdered anyone, you can't speak up against murderers", which is an absolute joke.
Fair, I get your point. Have a nice day.
But just to make it crystal clear, I am not advocating for double standards. I would be glad if there are no problems like that in any online programming competition, including MHC.
As a coordinator, I indeed don’t use the number of lines required in the solution to judge problem quality. And?
It's not about problem quality; it's about how well such a problem fits the modern reality of online programming contests. So yes, if you don't pay attention to the number of lines in a mid/hard problem's solution, I think you could have done way better.
It doesn't matter how many lines of code a problem's solution has: a (theoretically) intelligent cheater can always rewrite the AI's code in their own style, and you would never know. At some point the meta was to write problems that are not easily solvable by AI (so if a problem was good but solvable by AI, it would be rejected or less preferred), but problemsetters have since stopped doing that, because there's no point making the activity less fun for legitimate people just because of AI cheaters. This is the same thing: after a point, you really have to ask yourself whether it is worth doing this much (which arguably sometimes ruins the experience for honest people) just to prevent some dishonest people from ruining the platform. And I think the answer is NO.
It is one of the existing points of view, and I understand it, but I prefer the opposite one. In my understanding, authors also lose something when they don't really know how many contestants actually found a solution, so they can consider saving such problems for onsite competitions, where the issue is mitigated to some degree. Recently there were some nice problems with a cool idea but ~5 lines of code to solve. I liked the problems themselves, but not their presence in an online competition full of cheaters.
I think we can agree to disagree.
LLMs can type far faster than any human ever could. If anything, cheating would get worse if every problem suddenly required hundreds of lines of code.
What the models still sometimes struggle with is sufficiently hard ad-hoc problems, not writing five data structures in a row.
More code means more plagiarism detected. A few lines of code can never be used to detect plagiarism. I don't think there is much we can do to protect against AI models in general, but at least if someone succeeds in solving a problem with AI, they won't be able to easily share their solution with hundreds of people.
I never mentioned any LLMs in my statement, by the way. Read carefully.
The second point is fair, although since the original blog was about LLMs, I just assumed your comment was also related to it.
I'm not so certain about the "more code = more plagiarism detected" part, however. It would require the whole code to be "custom" and not contain any data structures or common algorithms, since those are usually copied from templates, and it's perfectly legal to do that. Besides, the tradeoff of not presenting certain problems (to be fair, I'm likely biased, since I enjoy it when a series of observations leads to a short and elegant solution) might not be worth the potential increase in detection.
The target function of a good problem is how interesting it is, not how it can help catch potential cheaters.
Wouldn't a problem with a larger solution give cheaters using LLMs an advantage? Because, of course, AI writes faster than most people.
And cheaters can still manage to hide it using their macros and not get detected. Most cheaters who are just copying get detected most of the time; if not because of a short problem, they'll get caught for their solution to a longer one.
Based on most cheaters' comments I have read: when they say they guessed the formula for a specific problem, they have usually cheated on multiple problems, but they only mention that one because it's the one they have something to say about.
Well said. Thanks!