Enough Is Enough: A Concrete Plan to Tackle Cheating on Codeforces

Revision ru2, by vn4k, 2026-03-23 11:19:39

Hello, Codeforces.

Having participated in several rounds, I’ve seen the same frustration shared by many: cheaters often go unnoticed for too long, and the current reporting system relies heavily on manual effort from the community. We need a smarter way to flag suspicious activity so that moderators and trusted members can review it efficiently.

I’m proposing Codeforces Anti‑Cheat (CFAC) – an automated flagging system that works after each contest. It does not ban anyone automatically; instead, it highlights submissions that warrant a closer look, helping reviewers focus their time where it’s most needed.


How It Works

After a contest ends, every submission is analysed by three independent modules. If a submission triggers enough flags, it gets added to a review queue for human moderators or a panel of trusted community members.

1. Submission‑Time Analysis

  • For each account, we record the time taken to solve each problem.
  • Flags are raised when:
      • A low‑rated account (e.g., gray) solves a high‑difficulty problem (e.g., Div. 2 E) in an unusually short time (e.g., under 3 minutes).
      • The same account shows an implausible jump in solving speed across problems (e.g., solving A in 20 minutes, then B in 1 minute).
  • These flags are not standalone convictions – they only indicate behaviour that differs sharply from that of typical contestants.
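As a rough illustration, the timing rules could look like the sketch below. All thresholds (the rating cut-off, the difficulty bound, the minute limits) are placeholder assumptions that would have to be tuned on real contest data:

```python
# Hypothetical sketch of the submission-time heuristics (module 1).
# Every numeric threshold here is an illustrative assumption, not a tuned value.

def time_flags(account_rating, solves):
    """solves: list of (problem_index, difficulty, minutes_spent), in solve order."""
    flags = []

    # Rule 1: a low-rated account solving a hard problem implausibly fast.
    for idx, difficulty, minutes in solves:
        if account_rating < 1200 and difficulty >= 2100 and minutes < 3:
            flags.append(f"{idx}: hard problem solved in {minutes} min by low-rated account")

    # Rule 2: an implausible speed-up between consecutive problems of
    # non-decreasing difficulty (e.g., 20 minutes on A, then 1 minute on B).
    for (i1, d1, m1), (i2, d2, m2) in zip(solves, solves[1:]):
        if d2 >= d1 and m1 >= 15 and m2 <= 2:
            flags.append(f"{i2}: solved in {m2} min right after spending {m1} min on {i1}")

    return flags
```

A real version would normalise these rules by problem difficulty and by the account's own solving history rather than use fixed cut-offs.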

2. Edit‑Track Analysis

  • Codeforces already stores the edit history of submissions. We can analyse:
      • The number of compilations before the final version.
      • Large, sudden code changes that mimic copying from an external source (e.g., a complete rewrite after several WA/TLE attempts).
  • Flags are raised when a submission shows characteristics of copy‑pasting or outsourcing.
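One cheap way to detect the "complete rewrite" pattern is to compare consecutive attempts with a sequence-similarity measure. The sketch below uses Python's standard `difflib`; the 0.2 similarity threshold is an assumption, not a calibrated value:

```python
import difflib

# Illustrative sketch of one edit-track signal (module 2): flag a final
# submission that shares almost nothing with the previous attempt, which can
# indicate wholesale copy-pasting after a string of WA/TLE verdicts.

def looks_like_rewrite(prev_attempt: str, final: str, threshold: float = 0.2) -> bool:
    """Return True when the final code is nearly unrelated to the previous attempt.

    SequenceMatcher.ratio() is 1.0 for identical texts and approaches 0.0
    as the two texts share less content; the threshold is a placeholder.
    """
    ratio = difflib.SequenceMatcher(None, prev_attempt, final).ratio()
    return ratio < threshold
```

Incremental fixes between attempts keep the ratio high, so ordinary debugging would not trigger this flag.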

3. NLP‑Based Code Classification

  • A lightweight binary classifier (e.g., a fine‑tuned transformer or a simpler model) is trained on past submissions labelled as “original” vs. “cheating” (plagiarised, AI‑generated, etc.).
  • The model processes the source code and outputs a suspiciousness score.
  • Flags are raised when the score exceeds a conservative threshold.
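To make the idea concrete, here is a toy stand-in for that classifier: a naive-Bayes bag-of-tokens scorer in pure Python. A real deployment would use a far stronger model (e.g., a fine-tuned transformer), and the labels and training snippets below are invented for illustration:

```python
import math
import re
from collections import Counter

def tokenize(code):
    """Split source code into lowercase identifier-like tokens."""
    return re.findall(r"[A-Za-z_]+", code.lower())

class SuspicionScorer:
    """Toy naive-Bayes scorer over token counts; labels are 'original'/'cheating'."""

    def __init__(self):
        self.counts = {"original": Counter(), "cheating": Counter()}
        self.totals = {"original": 0, "cheating": 0}

    def train(self, code, label):
        toks = tokenize(code)
        self.counts[label].update(toks)
        self.totals[label] += len(toks)

    def score(self, code):
        """Return P(cheating | code) under the naive-Bayes model."""
        vocab = len(set(self.counts["original"]) | set(self.counts["cheating"]))
        logp = {}
        for label in self.counts:
            lp = 0.0
            for tok in tokenize(code):
                # Laplace smoothing so unseen tokens don't zero out the product.
                lp += math.log((self.counts[label][tok] + 1) / (self.totals[label] + vocab))
            logp[label] = lp
        m = max(logp.values())
        exp = {k: math.exp(v - m) for k, v in logp.items()}
        return exp["cheating"] / (exp["original"] + exp["cheating"])
```

The point is only the shape of the interface: train on labelled code, get back a score in [0, 1] that feeds the combined risk score.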

All three signals are combined into a risk score. Only submissions that cross a high threshold are presented to human reviewers. This drastically reduces the number of false positives and keeps the workload manageable.
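The combination step could be as simple as a weighted sum; the weights and the review threshold below are placeholders that would be tuned against labelled data:

```python
# Hypothetical combination of the three signals into one risk score.
# The weights and the 0.8 review threshold are assumptions, not tuned values.

WEIGHTS = {"time": 0.3, "edits": 0.3, "nlp": 0.4}
REVIEW_THRESHOLD = 0.8

def risk_score(time_signal, edit_signal, nlp_signal):
    """Each input is a normalised score in [0, 1]; returns the weighted sum."""
    return (WEIGHTS["time"] * time_signal
            + WEIGHTS["edits"] * edit_signal
            + WEIGHTS["nlp"] * nlp_signal)

def needs_review(time_signal, edit_signal, nlp_signal):
    """A submission enters the human review queue only above the threshold."""
    return risk_score(time_signal, edit_signal, nlp_signal) >= REVIEW_THRESHOLD
```

A high threshold means one noisy module alone cannot push a submission into the queue, which is exactly the false-positive control the proposal relies on.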


Why Flagging, Not Automatic Banning?

Automated bans are risky – false positives could ruin innocent contestants’ experience. By using a flag‑and‑review model we:

  • Keep humans in the loop, ensuring fairness.
  • Allow trusted community members (e.g., high‑rated volunteers) to verify cases before any action is taken.
  • Collect labelled data to continuously improve the detection models.

The system would complement, not replace, the existing manual reporting and the work of Codeforces moderators.


Call for Collaboration

I’m looking for contributors to help build this system. Areas where help is needed:

  • Data collection – gathering labelled examples of cheated submissions (with care to avoid privacy issues).
  • Model development – training the NLP classifier and tuning the rule‑based heuristics.
  • System integration – possibly creating a dashboard for reviewers (could be a separate tool, not necessarily modifying the main Codeforces codebase).
  • Testing and feedback – running experiments on past contests to measure accuracy and false‑positive rates.

If you’re interested, please comment below or send me a direct message. Even ideas and constructive criticism are greatly appreciated.


Conclusion

Cheating undermines the integrity of contests and demotivates honest participants. An automated flagging system won’t solve everything overnight, but it can significantly reduce the burden on the community and help catch offenders faster. Let’s work together to make Codeforces a fairer platform for everyone.

Thank you for your time.
