I guess you were right about the part "a year from now LLMs will solve problems on your level". You were also saying something along the lines of "you'll change your mind when it starts affecting you". On that part, you were wrong.
Cheaters have always existed, and they always will. LLMs make it easier to cheat, true. But there is no way to completely prevent LLM cheating (or any other kind of cheating) in online competitions.
I just coordinated the round everyone is buzzing about, so I can say with confidence that the CF rules for coordinators and authors contain nothing about making problems LLM-proof. Not for easy problems, not for hard problems, nothing. And I doubt any such rules will appear. LLMs are forbidden by the rules, and that's that.
So we, as honest participants (yes, I will assume that you are an honest participant; otherwise this blog does not apply to you and I don't care about you), will have to accept that some problems in future rounds will be LLM-able, and there will be people cheating with an LLM and getting Accepted verdicts on some set of problems that depends not on their skill level or the difficulty of the problems, but on the problems' LLM-ability (and the cheater's ability to LLM, ha).
In my opinion, the worst effect of the cheaters on Global Round 29 was that problem G, scored 4500, got more solves than problem F, scored 3000, and that made a lot of participants skip F (and sometimes even E) in favor of the "free points" in G. I have always said that you shouldn't open the standings during the contest if you can see a solve count for every problem. Heck, in a CF round you don't really need even the solve counts: the problems are sorted by difficulty and there is a known scoring distribution. Well, maybe the times are changing. Maybe you need to be aware that some participants are not playing fair. And maybe a high solve count says less about a problem's difficulty for a human than about its difficulty for an LLM.
Of course, authors and coordinators can mess up the difficulty order. It has happened before, and it will happen again. Plus, difficulty is subjective and multifaceted. So keeping an eye on the solve counts and looking for discrepancies is still a good tactic. But maybe, just maybe, when everyone seems to solve G in 5 minutes, and then you open it and it is a scary number theory problem where you need 10 minutes just to understand the statement, and for the next 10 minutes you can't make any headway, maybe take a peek at the standings and check who exactly that "everyone" solving G in 5 minutes is. You know, hypothetically. Any similarity to any recent contest is a coincidence.
Trust yourself more. If the supposedly "easy 5-minute adventure that grays are solving" seems difficult, well, maybe it is difficult. Or maybe it is difficult for you. It doesn't even have to be cheating or LLMs. Maybe you don't know some technique that makes the problem simple. Or maybe those who got AC have simply seen this problem before and copied their old solutions.
All of this was happening 10 years ago (and, I assume, 20 years ago, but I'm not that old), before LLMs. You are not the first generation to encounter cheaters; you are not the first generation to encounter the "standings effect". I'm not saying LLMs are not affecting your contests; they certainly are. But they are not going anywhere, and you will have to adapt.