Блог пользователя di_z

Автор di_z, история, 6 часов назад, По-английски

Inspired by the comment of hxu10 at https://mirror.codeforces.com/blog/entry/133874?#comment-1197073

I think this is an important topic as it impacts the very existence of the whole online competitive programming communities (like Codeforces). So, I open a new thread.

Back in my university days around 2019, when I was actively competing in Codeforces contests, I never imagined that AI would advance so quickly that it could solve difficult competitive programming problems.

Open AI's new model claims to achieve 1800+ rating. I would assume in the near future, AI could achieve 4000+ rating and beat tourist. Although I'll mark this day as the day when AGI comes, it will pose an existential threat to Codeforces!

Also using Go as example. After AI performed better than every human, online Go competition effectively collapsed. Everyone can use AI to cheat. An unknown contestant who suddenly performs really well will be challenged on whether they are cheating using AI.

But the situation of competitive programming will be more dire after AI keeps improving it competitive programming capability. Reasons:

  1. Cheating in a two-player game like Go only affects one opponent while cheating in a Codeforces contest, however, undermines the entire leaderboard and harms every participant.

  2. In-person Go contests are still alive. However, due to the nature of competitive programming, with its smaller and dispersed community, there are almost no in-person Codeforces equivalents. OI and ICPC are only for students.

Actually I have no ideas that can solve this issue. Here're some ideas with profound limitations:

  1. Signing Term of Agreement when registering contests which commits not to use AI. Limitation: will not be effective.

  2. Mandate screen-sharing (and even camera-on) during contests to prevent cheating. Drawback: privacy concerns and high costs (to both Codeforces itself and users).

  • Проголосовать: нравится
  • +54
  • Проголосовать: не нравится

»
6 часов назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Auto comment: topic has been updated by di_z (previous revision, new revision, compare).

»
4 часа назад, # |
Rev. 4   Проголосовать: нравится 0 Проголосовать: не нравится

.

»
4 часа назад, # |
  Проголосовать: нравится +12 Проголосовать: не нравится

Why is this blog being downvoted? Obviously the proposed solutions are not acceptable but it doesn't hide the fact that this is a real problem, especially considering a 1650 rated model is publicly available.

I, for one, would like there to be more in person competitive programming competitions. That would be a lot more fun and a global rating system can still be maintained in that case. Codeforces then would be a place for discussion and for problems from more irl contests than just informatic olympiads and ICPC stuff.

  • »
    »
    4 часа назад, # ^ |
      Проголосовать: нравится -17 Проголосовать: не нравится

    Downvoted the blog because restricting technology just because you are afraid of it is retarded. It should be possible to use models in competitions.

    However, the main problem of using chatgpt\claude specifically is the zero-effort copy-paste approach and how easy the models are to access. But if someone has a model running locally on his own hardware then it's fine, because this person actually put in some effort before the contest, it's like preparing your own algos library basically.

    What can be done to avoid zero-effort usage:

    1. prevent easy copy-pasting of the problem statement using html\css\js\image overlay\whatever
    2. prepare better problems that are gpt-resistant
    3. randomly inject an invisible prompt inside of the problem statement that looks like a paragraph break, so when the participant just copy-pastes it without thinking it's reflected in the generated code and can be used to trace and ban participants for cheating
    • »
      »
      »
      3 часа назад, # ^ |
        Проголосовать: нравится +4 Проголосовать: не нравится

      You use Codeforces to proof and improve your own competitive programming ability, not your computers' and not your ability to make an AI.

      Invisible prompt is good btw

    • »
      »
      »
      3 часа назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      If you have good reasons to be afraid of something, restricting it makes sense.

      There is a high chance AI will bring about the end of the world. Especially if we're not very very careful.

»
4 часа назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

This might not be a feasible solution, but there can be an application like safe exam browser, and on that we can restrict the websites we can visit.

»
3 часа назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Maybe ask some suspicious users how they came up with the idea in the code, or ask them to participate a round with camera opened and screen shared.

»
3 часа назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

How about doing nothing? I think it's vanity of acquiring colors that is being damaged by any kind of cheaters. Inflation of rating makes you feel like something is taken from you, as if it wasn't just numbers to begin with (no one can ever steal your intelligence though)

CF is a platform with the greatest collection of problems to get better -- a lot of people won't ever use it properly and instead cheat for ego -- isn't it just how life is anyway?

  • »
    »
    3 часа назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    no

  • »
    »
    3 часа назад, # ^ |
      Проголосовать: нравится +5 Проголосовать: не нравится

    Then ranking in a contest will be useless. Codeforces calculates your rating by your ranking, so if, rating will be useless. That will be a big problem.

  • »
    »
    109 минут назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    I agree. Cheating using telegram groups/AI would simply just be you passing on the task of thinking for someone else to do, which won't benefit you or your problem-solving capability at all. You're just wasting your precious time on this website being dishonest and irritating to others. There is no secret route to mastery, you just have to work really hard.

»
3 часа назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I feel just disabling copy & paste during the contest will help us a lot.

»
3 часа назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For Go or chess online contests they did something called a accuracy check, basically means they assume no one will ever be able to play as well as AI, and if you play as nearly well as AI (a.k.a. your accuracy is too high) you get banned for that. IMO this is barely a real solution.

  • »
    »
    3 часа назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Is that the end for CP?

  • »
    »
    2 часа назад, # ^ |
      Проголосовать: нравится +1 Проголосовать: не нравится

    This solution for chess is viable because engines have surpassed humans there, so ofcourse playing at such level is suspicious but in codeforces it can solve till 1800-2000, there are ppl above that rating so implementing accuracy check might hamper ppl with higher rating since they are better than AI.

»
3 часа назад, # |
  Проголосовать: нравится +32 Проголосовать: не нравится

Competitive programming (especially online contests) is not even a competition between two people like Go. It is just a competition between the problem setter and the participant. Maybe "competition" here is also inaccurate, because the goal of the problem setter is not to prevent the participant from solving the problem, but to accurately measure the ability of the participant by carefully constructing the problems.

If AI can beat tourist, I would image online competitive programming platforms to be in another form.

For example, if AI can achieve a rating of 4000+, then it is very likely that AI can propose unique problems. At that time we can have infinite contests to solve. Have a free afternoon and want to do a Div. 1 contest? No problem. AI can generate one contest specially for you and update your rating according to your performance. Want to cheat for a higher rating? I would assume AI at that level can recognize who is cheating according to their history performance and such. There is no one to "compete with". It is just between you and AI.

"Online" "competitive" programming may die, but onsite contests and competitive programming as a hobby will thrive.

»
106 минут назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I have an idea, though I have no idea how feasible it is in practice. Maybe there's a chance it's possible to cross reference the solutions submitted by a person to an AI-generated solution, and see if there's a resemblance of their implementation styles/techniques? I guess this would only work if the AI generates similar solutions to the same problem.

  • »
    »
    88 минут назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    I never thought cheating below div2 could cause any big harm except requiring codeforces team spending more time on those tedious plagiarism check. The real problem I believe is when AI can solve div1/2 problems, reading AI generated code can give experienced contestants direct hints on how to approach them. I can easily turn this into my own solution without any cheating evidence left. It's equivalent to implementing a solution after reading editorial.