robinyqc's blog

By robinyqc, 3 hours ago, In English

GPT-o3 recently achieved a 2700 rating in OpenAI's Codeforces test, which means fewer than 200 active users worldwide have a higher rating (less than a single page on the CF leaderboard).

I used to feel unbothered by this because there were many people with ratings higher than GPT. Back then, my rating was also higher. I knew it would surpass me eventually, but I didn't expect it to happen so quickly—just about a year. Now, I feel slightly panicked: the only advantage I have over GPT is that I'm cheaper (I heard o3 costs a lot to solve a problem, while I can solve them for free, lol). I genuinely enjoy solving problems, but that doesn't stop me from feeling anxious. While I know my passion will keep me engaged in competitive programming, I still want to seek support.

I recalled that Um_nik once wrote a blog titled "On AI ruining 'solving math problems with computer'", which expressed his views on AI. I wonder if he anticipated AI advancing this quickly. Back then, I thought the article made sense, but now, replacing the "1600" in it with "2700" and "Div2A/B" with "Div1C/D" feels really strange and unsettling.

As described in that article, the CF community once again has people "running around yelling 'We will all die, somebody do something about AI.'" I used to think this was just unnecessary panic, but now I feel it's somewhat justified. I might soon become one of them!

These are my current thoughts: mild panic and anxiety. Many users with ratings similar to mine have expressed their own feelings. Kozliklekarsky's comment perfectly captures my sentiments: "Is this the real life? Is this just fantasy?"

I also have some thoughts about the future. Currently, o3 is prohibitively expensive. If its price drops in the future, people will undoubtedly start using it for competitive programming. (Don't tell me "o3's price will never drop": it already achieved a 2700 rating, so I can't bring myself to believe a price drop is impossible.)

This has both good and bad implications. On the bright side, GPT's level surpasses mine, which means I can use it as a resource to learn and improve. On the downside, it also means I could use GPT to cheat on CF and achieve a rating higher than my actual skill level. Sure, such a rating would be fake, but we're talking about a top 200 global ranking! Ordinary people have vanity. If I could achieve this ranking, I'd be laughing in my dreams. For an 1800 rating, I believe 85% of people could resist the temptation to cheat. But for a 2700 rating—my dream rating—how many could truly resist?

The influx of high-level accounts would also greatly impact the mindset of top competitors. Moreover, it feels like there's no way to prevent this from happening. For instance, if I asked GPT, “How do I solve this problem?” and then learned its approach and wrote the code myself, how could anyone prove it was AI-assisted? I admit this leans towards conspiracy thinking, but I can't help pondering it.

What are your thoughts on AI's current rating in competitive programming?

+12

»
3 hours ago, # | -10

nobody is reading all this, just put this blog back in the drafts

  • »
    »
    39 minutes ago, # ^ | Rev. 2, +3

    I'm sorry if my blog post cluttered your homepage and affected your experience. However, this is a blog—I wrote it to express my thoughts and emotions, and to explore others' opinions on the topic. Isn't that one of the main purposes of blogging? Does Codeforces only allow academic content on its blogs? Clearly not. If you're not interested in my post, you're free to skip it. I won't take it down just because of a comment, even if the writing might be a bit rough.

»
3 hours ago, # | +7

I never enjoyed contests nearly as much as I enjoy solving hard problems offline at my own pace, so I wouldn’t really care if contests were to end entirely.

Now, I feel slightly panicked: the only advantage I have over GPT is that I'm cheaper (I heard o3 costs a lot to solve a problem, while I can solve them for free, lol).

Soon, we are going to lose this advantage too, and not just in sport programming but in all areas of life.

»
2 hours ago, # | Rev. 2, 0

I really wished for the 'AI winter' to come back when o1 first came out, but it's over now. No one can stop this anymore. An LGM-equivalent ChatGPT model will arrive within 2 years, but it'll take some time for humanity to fully appreciate ChatGPT's power. After that, within the next 10 years, everyone will lose their jobs except for those in manual labor.

  • »
    »
    2 hours ago, # ^ | 0

    it's evolution, why do you want to stop it?

»
2 hours ago, # | 0

How can you say that you "recall" Um_nik once wrote a blog when it was just three months ago? I recall it like I recall yesterday.

»
2 hours ago, # | 0

I think tourist was the one who was solving on this AI account

https://x.com/que_tourist/status/1866705352710033467

»
2 hours ago, # | Rev. 3, +9

If anyone can cheat, the incentive for cheating is greatly reduced. A clear example is chess, where top engines are publicly available and far stronger than any human.

A key difference, however, is that chess has frequent in-person contests that ground ratings. For competitive programming to have a future, in-person contests would need to become more frequent. One upside of intelligent AI is that it could make writing problems for in-person contests much easier.

I think another reason many people feel sad about this news is tied to the value they get from being able to solve difficult problems. Right now, if you have an algorithm problem of moderate difficulty, say, rated under 2000, the best course of action is often to ask someone skilled in competitive programming. But if GPT becomes equivalent to a 2700-rated competitor, that role essentially disappears. The "problem-solving skills" you worked hard to develop become little more than trivia, as anyone could achieve similar results by prompting an AI model.

»
2 hours ago, # | Rev. 4, +4

For an 1800 rating, I believe 85% of people could resist the temptation to cheat. But for a 2700 rating—my dream rating—how many could truly resist?

Here's exactly what we should do: create a general perception that using AI in contests is absolutely bad, and hate those who use it in contests. In everyday life, we prevent ourselves from doing bad things because we know we shouldn't do bad things, not because it's hard to do bad things.

Most of the temptation to use AI comes from the lack of this perception: even though using AI is prohibited now, many people still think "but... it's not really that bad, is it?" This is also the case for alts: we lack the general perception that alts are absolutely bad and should never be made. Almost every day I see people advertising that they have alts in public channels without feeling guilty at all. It would be crazy to see the same thing happen with AI.

I don't like all those "It's over" comments, because that's exactly what builds the opposite perception. It makes us feel that the rules in CP no longer mean anything, which only increases the temptation to use AI. What we should do is stop ruining the atmosphere ourselves and focus on keeping the community going as people who love CP. Do not give credit to those who break the rules.

As the broken windows theory suggests, we can only hope that a serious effort is made to catch AI users, so that this temptation does not spread to more and more people.

  • »
    »
    23 minutes ago, # ^ | 0

    In everyday life, we prevent ourselves from doing bad things because we know we shouldn't do bad things, not because it's hard to do bad things.

    Probably that's actually because doing bad things sets off a chain reaction that brings unexpected consequences.