Codeforces used to be the one place I felt truly challenged. Now, it feels rigged. Does anyone else remember Codeforces during last year's summer? I do. I remember waking up early for the frequent, high-quality contests; I remember getting on calls with friends, discussing solutions post-contest, and laughing at our WA attempts; I also remember times when I couldn't think of a solution and would bang my head on the wall and draw trillions of random dots, parentheses, letters, and numbers on paper, all in the hope that I would stumble upon an observation. I grew, the community grew, and we all thrived.

We've all been there: we fell in love with competitive programming and problem-solving. That feeling of getting AC after an hour of debugging, only to find out the bug was a stray global variable, is... indescribable. Codeforces used to be a haven for us problem-solvers. It used to be a place where we could *be ourselves*. Ratings weren't everything for us: we cared more about growth and the fun of the game than about our results.

However, we're all familiar with the current competitive programming scene: racism, cheaters, problem leaking, Telegram groups for cheating, and so on. It's safe to say that if these issues continue, this community, one that has helped many people discover themselves and find joy in problem-solving, will be the one that loses.

These issues can all be tied to rating. Why do people cheat with AI? Rating. Why do people leak solutions? Rating. Why are people on this platform racist? Rating. Rating is a construct this community created to rank people's skill and contest performance. The core issue is that people treat rating, and comparison with others, as the way to track their own progress. I think this system is broken. We cannot, as a community, keep using rating to track our improvement, because cheaters will infiltrate our contest leaderboards and render the metric entirely useless.

It is undeniable that publicly available LLMs such as GPT-4o, and the reasoning models that followed, have greatly increased the number of cheaters on this website, and while there are means to detect them, these detection systems are extremely easy to bypass by simply rewriting the generated code yourself. Sure, the people at the top can rest safely for now, while the average competitive programmer loses interest in growth and self-improvement because of the large number of cheaters infesting the lower ranks. It's insulting to see people take shortcuts using AI and then claim mastery over problems they couldn't dream of solving alone. However, just as chess engines became better than humans at chess, the top of the Div 1 leaderboards will eventually be filled with non-human entities. At that point, a public rating system would be utterly pointless.

Sure, we still have in-person competitions such as ICPC. But we're some of the smartest people on the entire damn internet; how are we going to back down and lose our passion for problem-solving to these stupid cheaters who can't muster up a single original thought? I strongly believe that we as a community are capable of overcoming this hurdle and creating a better platform.

I have some suggestions for systems we could adopt to solve the issues with rating:

1. Online contests would work like this: the authors create the problems, then a group of trusted testers solves them and assigns each one a "problem rating". Instead of basing contest performance on how fast other people solve the problems, we use a deterministic calculation that involves only the problems you solved and how long each one took you, given its assigned rating. This should not only prevent rating inflation over time, but also let rating once again become a metric people can use to track their improvement.

2. Leaderboards could still exist, where people strive to be top 10 or ranked number 1, but for those you would need to be verified. (This is very controversial, but I believe verification would give us a real way to stop people from bypassing bans.)

3. We should have a way to report cheaters instead of having to make blog posts calling them out.

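To make suggestion 1 concrete, a deterministic performance score could be sketched like this. This is purely illustrative: the linear time discount, the half-credit floor, and the averaging are assumptions I made up for the example, not a worked-out spec.

```python
# Illustrative sketch: a contest performance score computed only from your
# own solves and the testers' pre-assigned problem ratings. The formula and
# constants are example assumptions, not a concrete proposal.

def performance(solved, contest_minutes=120):
    """solved: list of (problem_rating, minutes_to_solve) pairs."""
    if not solved:
        return 0
    scores = []
    for rating, minutes in solved:
        # Fast solves keep nearly full credit; slow ones never drop below half.
        time_factor = max(0.5, 1.0 - minutes / (2 * contest_minutes))
        scores.append(rating * time_factor)
    return round(sum(scores) / len(scores))

print(performance([(1800, 30), (2100, 75)]))
```

The key property is that the score depends only on your own solves and the fixed problem ratings, so cheaters in the same contest can neither inflate nor deflate your performance.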
Obviously, I'm not that good at competitive programming. No matter how much I enjoy solving problems, I don't have the decades of experience that some of the LGMs on this platform have. I want to hear what others think should happen, so we can make Codeforces, and improvement, feel real again.