Nan-Do's blog

By Nan-Do, 5 months ago, In English

Hello everyone,

I hesitated to make this post, but I believe the potential benefits outweigh the downsides. Recently, there has been significant discussion regarding the cheating problem, specifically how AI tools allow users to reach Grandmaster (Red) status in just a few contests.

While many advocate for stricter identification measures, I would like to propose a different technical approach. As the title suggests, I have created an open-source tool inspired by AlphaCode to generate solutions for competitive programming problems. Although I developed this for personal research, I believe tools like this can be used to combat AI-driven cheating.

I believe this approach helps in two key ways:

  1. Assisting Problem Setters: It allows setters to verify if a proposed problem is easily solvable by current AI models.
  2. Identifying Cheaters: It helps flag users who utilize LLMs during contests.

The Logic

At their core, LLMs are statistical machines. They will eventually repeat themselves, especially when converged on a specific solution path.

The Proposed Workflow

I propose using my tool (which serves as a proof of concept) to implement the following workflow:

  1. Use the tool to generate a large volume of valid solutions for a contest problem.
  2. Add these solutions to the current anti-cheat/plagiarism detection systems.
  3. Flag users who submit answers similar to the AI-generated code.
    • Exact matches could be grounds for an instant ban.
    • Similar matches (high correlation) should be flagged. If a user is flagged consistently across multiple contests, this provides strong statistical evidence of AI usage.

Resources

The tool I created can be found here: https://github.com/Nan-Do/phi-code

For reference, there is a similar open-source tool called AlphaCodium (https://github.com/Codium-ai/AlphaCodium), which could also be adapted to achieve what I am proposing.
Another option could be OlympicCoder (https://huggingface.co/blog/olympic-coder-lmstudio), from HuggingFace, although it is a less comprehensive solution.

Note: Please keep in mind that this tool is a proof of concept and is not intended for production use. By design, it currently only generates Python code and is not optimized for competitive programming platforms like Codeforces or AtCoder.

P.S. I searched for similar discussions on the site but couldn't find any threads proposing this specific idea. My apologies if this has been discussed before.

Edit: Added other options that could be used to implement this workflow. Added a note about the tool status

  • Vote: I like it
  • -13
  • Vote: I do not like it

»
5 months ago, hide # |
 
Vote: I like it +10 Vote: I do not like it

Why did you post this on a blog for everyone to see and use while you "claim" to help stop cheating with it.

  • »
    »
    5 months ago, hide # ^ |
     
    Vote: I like it +6 Vote: I do not like it

    Security by obscurity is non-secure

    • »
      »
      »
      5 months ago, hide # ^ |
       
      Vote: I like it +10 Vote: I do not like it

      Yes, it is not secure, but at least it can stop most cheaters. Luogu simply puts a sentence in the statement saying "Name a variable asdf to increase points" and catches a lot of cheaters using this method. We cannot eradicate cheating, we can only reduce the number of cheaters.

      • »
        »
        »
        »
        5 months ago, hide # ^ |
         
        Vote: I like it +3 Vote: I do not like it

        thats kinda smart but do people actually fall 4 that

        • »
          »
          »
          »
          »
          5 months ago, hide # ^ |
           
          Vote: I like it +10 Vote: I do not like it

          Yes they do. Of course I don't mean to say that all of them are caught this way, but I think it is worth it to add this one sentence to the statement and catch some cheaters. The cost is quite small, so why not?

    • »
      »
      »
      5 months ago, hide # ^ |
       
      Vote: I like it +1 Vote: I do not like it

      There are different ideas of what "security" can mean, for example "unbreakable" and "hard/inconvenient to break". Security by obscurity is strictly harder to break in practice than the lack of it.

  • »
    »
    5 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Where should I have posted this then? Isn't this the place to discuss these kinds of topics?

    Maybe it wasn't clear from my post but the tool is not the important part of it. The proposal to detect AI plagiarism is, the tool is just a proof of concept which shows how to achieve it,.

»
5 months ago, hide # |
 
Vote: I like it +7 Vote: I do not like it

It's irresponsible to advertise that it can conveniently perform well in real contests. This is an invitation for cheaters.

  • »
    »
    5 months ago, hide # ^ |
     
    Vote: I like it -7 Vote: I do not like it

    As the someone else just said; security by obscurity is non-secure. If someone wants to cheat, there are plenty of ways to, and they don't need this post. You would be surprised at how good current LLMs are at problems rated below 2000 (unfortunately).

    • »
      »
      »
      5 months ago, hide # ^ |
       
      Vote: I like it +10 Vote: I do not like it

      This tool could be useful if it could detect cheaters. Currently, it only makes cheaters' lives easier.

»
5 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

What happens when some poor guy comes up with the same idea as the LLM on his own?