How to save Codeforces from AI-assisted cheating as AI models evolve so rapidly?

Revision en3, by di_z, 2024-09-14 12:55:44

Inspired by the comment of hxu10 at https://mirror.codeforces.com/blog/entry/133874?#comment-1197073

I think this is an important topic, as it affects the very existence of the online competitive programming community (like Codeforces). So, I am opening a new thread.

Back in my university days around 2019, when I was actively competing in Codeforces contests, I never imagined that AI would advance so quickly that it could solve difficult competitive programming problems.

OpenAI's new model claims to achieve a 1800+ rating. I would assume that in the near future, AI could reach a 4000+ rating and beat tourist. Although I'll mark that day as the day AGI arrives, it will pose an existential threat to Codeforces!

Take Go as an example. After AI started performing better than every human player, online Go competition effectively collapsed: anyone can use AI to cheat, and an unknown contestant who suddenly performs very well will be suspected of cheating with AI.

But the situation in competitive programming will be even more dire as AI keeps improving its competitive programming capability, for two reasons:

  1. Cheating in a two-player game like Go affects only one opponent, whereas cheating in a Codeforces contest undermines the entire leaderboard and harms every participant.

  2. In-person Go contests are still alive. Competitive programming, however, with its smaller and more dispersed community, has almost no in-person Codeforces equivalents; OI and ICPC are only for students.

Honestly, I have no idea that fully solves this issue. Here are some bad ideas with profound limitations:

  1. Have contestants sign a terms-of-agreement when registering for contests, committing not to use AI. Limitation: it will not be effective.

  2. Mandate screen sharing (and even cameras on) during contests to prevent cheating. Drawbacks: privacy concerns and high costs (for both Codeforces itself and users).


UPD: Thanks, everyone, for your replies! After reading them, I finally arrived at a useful idea.

We all think that even if AI gets smarter than us, we can still have fun doing Codeforces. It's a great way to get better at solving programming problems, or just to feel good about tackling tough challenges.

But the Codeforces rating system could break. So, I propose two separate rating systems:

  1. Virtual rating: applies to all users.
  2. Verified-human rating: only users who participate in onsite contests and perform in Codeforces online contests at a level similar to their onsite performance get the verified rating on Codeforces.

For example, if a user performs at rating 2000 in an onsite contest (which forbids digital devices and Internet access), and one week later performs at rating 2200 in a rated Codeforces round, then the 2200 performance can be considered valid and both ratings (virtual and verified-human) are updated.

However, if the user performs at rating 3000 just one week after the onsite contest, then only the virtual rating is updated, while the verified-human rating stays frozen (similar to the current out-of-competition mechanism that prevents rating abuse via double accounts). Only after the user performs at 2800 or above in the next onsite contest can the 3000-point Codeforces performance be trusted and the verified-human rating updated.
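The rule above can be sketched in code. This is a minimal illustration, not an official Codeforces mechanism: the 200-point tolerance and the simple averaging update are my own assumptions standing in for a real Elo-style formula.

```python
# Sketch of the proposed verified-human rating rule.
# ASSUMPTIONS: the 200-point tolerance and the averaging update are
# illustrative placeholders, not an actual Codeforces algorithm.

TOLERANCE = 200  # max allowed gap between online and last onsite performance


def update_ratings(virtual, verified, online_perf, last_onsite_perf):
    """Return updated (virtual, verified) ratings after an online round.

    virtual:          current virtual rating (applies to everyone)
    verified:         current verified-human rating, or None if never verified
    online_perf:      performance rating in this online round
    last_onsite_perf: performance in the most recent onsite contest,
                      or None if the user has never competed onsite
    """
    # The virtual rating always moves (averaging stands in for Elo here).
    new_virtual = (virtual + online_perf) // 2

    # The verified rating moves only if the online performance is
    # consistent with what the user demonstrated onsite.
    if last_onsite_perf is not None and online_perf <= last_onsite_perf + TOLERANCE:
        base = verified if verified is not None else online_perf
        new_verified = (base + online_perf) // 2
    else:
        # Frozen until the next onsite contest confirms the performance.
        new_verified = verified

    return new_virtual, new_verified
```

With these numbers, a 2200 online performance one week after a 2000 onsite performance updates both ratings, while a 3000 online performance updates only the virtual one.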

That means a user who never takes part in onsite contests gets only the virtual rating. That is enough if the user cares only about personal growth and not public recognition. However, many of us still want to climb the verified-human leaderboard, which requires more onsite contests so that more users can get verified.

OI and ICPC are age-limited, and Google Code Jam is dead. Even if it were still alive, its onsite round covered only 25 participants a year. We need a much larger scale than that.

So, Codeforces might need to partner with OpenAI and have OpenAI sponsor onsite contests.

Given that OpenAI has already used the Codeforces platform, its CP problem sets, and its submissions to train models, there is an ethical argument that the company has a responsibility to support the continued growth and vitality of the CP community. Competitive programmers, including problem setters and participants, built up this community and its high-quality data through unwavering passion and years of tireless effort.

If OpenAI trains its models on data provided by Codeforces and then its super-intelligent AI kills Codeforces, that would be unacceptable, right?

It would also be a win-win for OpenAI: the onsite contests could showcase its new models' problem-solving skills by competing against humans.

As for the ethics, there is a similar case in journalism. Many journalists feared that traffic to their outlets' websites would be hurt by ChatGPT. Because OpenAI uses high-quality text written by professional journalists to train GPT models, it is under pressure to give back to the journalism industry, and it has indeed already partnered with some media companies.

Tags codeforces, artificial intelligence
