Nan-Do's blog

By Nan-Do, 5 months ago, In English

Hello everyone,

I hesitated to make this post, but I believe the potential benefits outweigh the downsides. Recently, there has been significant discussion regarding the cheating problem, specifically how AI tools allow users to reach Grandmaster (Red) status in just a few contests.

While many advocate for stricter identification measures, I would like to propose a different technical approach. As the title suggests, I have created an open-source tool inspired by AlphaCode to generate solutions for competitive programming problems. Although I developed this for personal research, I believe tools like this can be used to combat AI-driven cheating.

I believe this approach helps in two key ways:

  1. Assisting Problem Setters: It allows setters to verify if a proposed problem is easily solvable by current AI models.
  2. Identifying Cheaters: It helps flag users who utilize LLMs during contests.

The Logic

At their core, LLMs are statistical machines. They will eventually repeat themselves, especially once they converge on a specific solution path.

The Proposed Workflow

I propose using my tool (which serves as a proof of concept) to implement the following workflow:

  1. Use the tool to generate a large volume of valid solutions for a contest problem.
  2. Add these solutions to the current anti-cheat/plagiarism detection systems.
  3. Flag users who submit answers similar to the AI-generated code.
    • Exact matches could be grounds for an instant ban.
    • Similar matches (high correlation) should be flagged. If a user is flagged consistently across multiple contests, this provides strong statistical evidence of AI usage.
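To make steps 2 and 3 concrete, here is a minimal sketch (not how any real anti-cheat system works, and not part of my tool) of comparing one submission against a corpus of AI-generated solutions. It normalizes code by stripping comments and collapsing identifier names, so trivially renamed variables still produce an exact hash match, and falls back to a similarity ratio for the "high correlation" case. The function names and the 0.9 threshold are illustrative assumptions.

```python
# Hypothetical sketch of steps 2-3: flag a submission that matches or closely
# resembles any AI-generated solution. All names and thresholds are illustrative.
import difflib
import hashlib
import io
import keyword
import tokenize


def normalize(source: str) -> str:
    """Strip comments/whitespace and collapse identifiers, so that renaming
    variables or reindenting does not hide a copied solution."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                        tokenize.INDENT, tokenize.DEDENT):
            continue
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            tokens.append("ID")  # every identifier becomes the same placeholder
        else:
            tokens.append(tok.string)  # keywords, operators, literals kept as-is
    return " ".join(tokens)


def flag_submission(submission: str, ai_solutions: list[str],
                    threshold: float = 0.9) -> str:
    """Return 'exact', 'similar', or 'clean' for one submission."""
    norm_sub = normalize(submission)
    sub_hash = hashlib.sha256(norm_sub.encode()).hexdigest()
    for solution in ai_solutions:
        norm_sol = normalize(solution)
        if hashlib.sha256(norm_sol.encode()).hexdigest() == sub_hash:
            return "exact"      # identical up to renaming: grounds for a ban
        ratio = difflib.SequenceMatcher(None, norm_sub, norm_sol).ratio()
        if ratio >= threshold:
            return "similar"    # flag; escalate if repeated across contests
    return "clean"
```

A real deployment would use a proper fingerprinting scheme (e.g. MOSS-style winnowing) rather than pairwise `SequenceMatcher` calls, but the pipeline shape — normalize, exact-match, then fuzzy-match — is the same.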

Resources

The tool I created can be found here: https://github.com/Nan-Do/phi-code

For reference, there is a similar open-source tool called AlphaCodium (https://github.com/Codium-ai/AlphaCodium), which could also be adapted to achieve what I am proposing.
Another option could be OlympicCoder (https://huggingface.co/blog/olympic-coder-lmstudio) from Hugging Face, although it is a less comprehensive solution.

Note: Please keep in mind that this tool is a proof of concept and is not intended for production use. By design, it currently only generates Python code and is not optimized for competitive programming platforms like Codeforces or AtCoder.

P.S. I searched for similar discussions on the site but couldn't find any threads proposing this specific idea. My apologies if this has been discussed before.

Edit: Added other options that could be used to implement this workflow, and a note about the tool's status.


»
5 months ago

Why did you post this on a blog for everyone to see and use, while you "claim" it helps stop cheating?

»
5 months ago

It's irresponsible to advertise that it can conveniently perform well in real contests. This is an invitation for cheaters.

»
5 months ago

What happens when some poor guy comes up with the same idea as the LLM on his own?