Recently, we’ve run into an unpleasant situation: some large companies have been using a huge number of accounts to submit AI-generated solutions purely for model training, which ended up causing serious “in queue” congestion for everyone else.
Maybe it’s time to consider restricting submissions from unrated users.
If those AI-driven accounts were required to participate in at least one contest before being allowed to submit normally, it would drastically cut down the number of accounts they could abuse. On top of that, we could improve AI-detection during contests and apply stricter monitoring to accounts that submit massive amounts of AI-generated code.
For regular users, the requirement is trivial — just join a single Codeforces round, and you’re good to go. Compared to the awful experience of having the queues jammed for hours, this small inconvenience might be a reasonable trade-off.
That’s the best idea I can think of at the moment. What do you think? How can we better protect the platform from such exploitative behavior?
Update: I think RainRecall's idea is much better.
Perhaps assigning separate judging machines specifically for unrated users would be a better approach.
If we group users by rating and let each group's judges prioritize submissions from their own group (only taking work from other groups when idle), it should significantly reduce the load on everyone else and help alleviate the issue.
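To make the idea concrete, here is a minimal sketch of that scheduling policy. Everything below is hypothetical — the group names, the `Scheduler` class, and the work-stealing rule are illustrative assumptions, not how Codeforces' actual judging infrastructure works:

```python
from collections import deque

# Hypothetical sketch of rating-grouped judging: each judge machine
# prefers submissions from its own rating group and only takes work
# from other groups' queues when its own queue is empty.

GROUPS = ["unrated", "newbie-pupil", "specialist-expert", "cm-and-above"]

class Scheduler:
    def __init__(self):
        # One FIFO queue of pending submissions per rating group.
        self.queues = {g: deque() for g in GROUPS}

    def submit(self, group, submission):
        self.queues[group].append(submission)

    def next_for_judge(self, judge_group):
        # Serve the judge's own group first.
        if self.queues[judge_group]:
            return self.queues[judge_group].popleft()
        # Otherwise steal from the longest other queue, so a flooded
        # group (e.g. unrated bots) still makes progress, but only
        # when judges would otherwise sit idle.
        others = [g for g in GROUPS if g != judge_group and self.queues[g]]
        if not others:
            return None
        busiest = max(others, key=lambda g: len(self.queues[g]))
        return self.queues[busiest].popleft()

sched = Scheduler()
sched.submit("unrated", "bot-run-1")
sched.submit("specialist-expert", "human-run-1")

# A judge for the specialist group serves its own queue first...
print(sched.next_for_judge("specialist-expert"))  # human-run-1
# ...and only picks up unrated work when otherwise idle.
print(sched.next_for_judge("specialist-expert"))  # bot-run-1
```

The key property is that a flood of unrated submissions can never delay rated users' submissions: it only consumes judge time that would otherwise go unused.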