Qingyu's blog

By Qingyu, 8 hours ago, In English

I've checked today is not April 1st.

(source: 12 Days of OpenAI: Day 12 https://www.youtube.com/watch?v=SKBG1sqdyIU)

  • +254

»
8 hours ago, # |
  +65

Merry Christmas!

»
8 hours ago, # |
  +53

thanks for guiding me to become red

»
8 hours ago, # |
  +44

Does anyone know why o1 is rated 1891 here? According to https://openai.com/index/learning-to-reason-with-llms/, o1-preview and o1 are rated 1258 and 1673, respectively.

»
8 hours ago, # |
  -8

in 5 years, there will be no way to pretend that the average human is worth more than a rock

»
8 hours ago, # |
  +34

I'll wait until it starts participating in live contests and showing Red performance.

»
8 hours ago, # |
  +9

damn im cooked

»
8 hours ago, # |
  0

Not possible...

»
8 hours ago, # |
Rev. 2   +5

I doubt that AI will be doing better math research than humans 5 years from now.

  • »
    »
    7 hours ago, # ^ |
      -18

    That's the only thing you're gonna be able to do 5 years later — doubt.

  • »
    »
    5 hours ago, # ^ |
      0

    Is this a prediction about humans now vs AIs in 5 years or AI + human in 5 years vs AIs in 5 years?

»
7 hours ago, # |
  +42

From the presentation we know that o3 is significantly more expensive. o1-pro currently takes ~3 minutes to answer one query; judging by the price difference, o3 is expected to be something like 40-100 (or more?) times slower. A CF contest lasts at most 3 hours, so how can o3 reach 2700 if it would spend all that time just solving problem A? It would be very interesting to read a paper about o3, and specifically about how they measure its performance.
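The time budget implied by those numbers can be sanity-checked with a quick sketch (every constant below is the commenter's guess, not an official OpenAI figure):

```python
# Back-of-envelope estimate; all figures are assumptions from the
# comment above, not official benchmarks.
O1_PRO_MINUTES_PER_QUERY = 3           # observed o1-pro latency
SLOWDOWN_LOW, SLOWDOWN_HIGH = 40, 100  # guessed o3 slowdown factors
CONTEST_MINUTES = 3 * 60               # a CF round lasts at most ~3 hours

low = O1_PRO_MINUTES_PER_QUERY * SLOWDOWN_LOW
high = O1_PRO_MINUTES_PER_QUERY * SLOWDOWN_HIGH
print(f"Estimated o3 time per query: {low}-{high} min "
      f"(contest budget: {CONTEST_MINUTES} min)")
# -> 120-300 minutes per query against a 180-minute contest
```

Under these assumptions a single o3 query could exceed the whole contest, which is why the measurement methodology matters.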

»
7 hours ago, # |
  +21

I will personally volunteer myself as the first human coder to participate in the inevitable human vs AI competitive programming match.

»
7 hours ago, # |
  +57

I'll only believe it if it's tested in a live contest.

  • »
    »
    7 hours ago, # ^ |
      +24

    Maybe Codeforces should allow some accounts from OpenAI to participate unrated in competitions? MikeMirzayanov, what do you think?

  • »
    »
    7 hours ago, # ^ |
      +17

    o1-pro was tested live in this contest: https://mirror.codeforces.com/contest/2040. It solved E and F (the blog about it has since been deleted).

    • »
      »
      »
      5 hours ago, # ^ |
        0
      Comment deleted for violating the Codeforces rules
      • »
        »
        »
        »
        4 hours ago, # ^ |
          0

        It also couldn't solve B after multiple attempts, so keep that in mind as well (still, it's really impressive)

        • »
          »
          »
          »
          »
          4 hours ago, # ^ |
            0

          It felt comforting until your last line.

          • »
            »
            »
            »
            »
            »
            3 hours ago, # ^ |
              +1

            I mean, I can't deny it: these new AI models are really impressive for what is, in essence, a "which word is likely to come next" model. That being said (and I'm paraphrasing what I've heard others say, since I'm nowhere near the level needed to solve those problems), F was a knowledge problem about Burnside's lemma with a bit of a twist.

            I can't say for certain how these models will evolve; o3 got a super high score on ARC-AGI (a general reasoning task set), which could help its performance on problems like B. On the other hand, we have no idea if these results are embellished or how exactly they're calculating this, so only time will tell.
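For context on the Burnside's lemma mention above: the lemma counts configurations up to symmetry by averaging, over all group elements, the number of configurations each element fixes. A minimal illustration (a standard textbook necklace count, not problem F itself):

```python
from math import gcd

def necklaces(n: int, k: int) -> int:
    """Count k-colorings of an n-bead necklace up to rotation.

    Burnside's lemma: the answer is the average, over the n rotations,
    of the number of colorings each rotation fixes; rotation by i
    fixes exactly k**gcd(i, n) colorings.
    """
    return sum(k ** gcd(i, n) for i in range(n)) // n

print(necklaces(3, 2))  # 4 distinct 2-colorings of a 3-bead necklace
```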

»
7 hours ago, # |
  0

Dude, I feel big threat

»
7 hours ago, # |
  +7

If o3 really has a deep understanding of competitive programming's core principles, I think it also means it can become a great problemsetting assistant. Of course it won't be able to make AGC-level problems, but imagine having solid Div. 2 contests more frequently; that would be great.

»
7 hours ago, # |
  +25

Is this a real life?

»
7 hours ago, # |
  0

How do these things perform on marathon tasks? Psyho

»
6 hours ago, # |
  -19

I don't see why people are paranoid about those insane ratings claimed by OpenAI. I guess they're worried about cheaters, but why? Competitive programming isn't only about Codeforces; it's a whole community. In every school and country we know each other personally, we see each other solve problems live, and we compete against each other in onsite contests, so we know each other's level. When someone who we know isn't a strong competitive programmer suddenly ranks in the top 5 of a Codeforces contest, it doesn't mean much; we just feel sorry that they've started cheating. It will be even funnier when we see a red coder who can't qualify for ICPC nationals from their university.

  • »
    »
    6 hours ago, # ^ |
      +20

    I think you're not seeing the bigger picture; the implications for competitive programming are huge.

    1) We might lose sponsors and sponsored contests, because contest performance would no longer be a signal of hireability, or even of skill.

    2) Let's not kid ourselves: a lot of people are here just to grind CP for a job / CV, and that's totally fine. Now AI users will be skewing the ratings for literally everyone.

    3) From 2) it may follow that the Codeforces Elo system completely breaks and we'll have no rating. The incentive to compete would be completely gone, which would further shrink the active community.

    There are many more; I bet you could even prompt ChatGPT for them :D
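For reference, the rating mechanism that comment worries about looks like this in its classic Elo form (Codeforces uses its own rating variant, so this is an illustration only):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Classic Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float, k: float = 32) -> float:
    """A's new rating after a game; score_a is 1 (win), 0.5 (draw) or 0."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# Equal opponents split the expected score 50/50...
print(expected_score(1500, 1500))  # 0.5
# ...but a pool full of far stronger AI accounts drags every human's
# expected score, and therefore rating, downward.
print(expected_score(1500, 3000))
```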

    • »
      »
      »
      5 hours ago, # ^ |
        -12

      "we'll have no rating"

      And then we will have no cheaters. Happy ending.

  • »
    »
    6 hours ago, # ^ |
      +46

    "It will be more funny when we see a red coder who can't qualify for ICPC nationals from their university."

    It's not funny, it happens quite often, for example, at our university(

    • »
      »
      »
      5 hours ago, # ^ |
        -8

      Red was just an example; a more accurate one would be a team of newbies qualifying while a team of reds fails to. Don't tell me that's still not funny.

  • »
    »
    6 hours ago, # ^ |
      +5

    I think it has major implications for the whole world, not only competitive programming. For example, the pace of mathematical research could easily double almost overnight (realistically, over about a year).

»
6 hours ago, # |
  +3

According to this article, it does not seem practical for the average user to run?

Quoting, "Granted, the high compute setting was exceedingly expensive — in the order of thousands of dollars per task, according to ARC-AGI co-creator Francois Chollet."

However, this is indeed a large step forward for AI.

»
6 hours ago, # |
Rev. 2   0
O1: I'm faster than humans
O3: I'm better pal

;(

»
5 hours ago, # |
  -6

Do I still have a chance to reach LGM before AI?

»
5 hours ago, # |
  +44

OpenAI is lying. I bought one month of o1 and it is nowhere near 1900 rating. It is as bad as me. I think they lie on purpose because they are burning a lot of money and want people to buy their model.

»
5 hours ago, # |
  0

Day by day I am getting mindfucked with these latest AI updates so much that I might lose my sanity.

»
5 hours ago, # |
  +11

I'm a bit skeptical. o1 is claimed to have a rating around 1800 and I've seen it fail on many div2Bs.

»
4 hours ago, # |
  +3

If I already have lower rating than o1-preview, why should I be concerned?

»
4 hours ago, # |
  0

After we got the rank Tourist for a 4000 rating, maybe we can have a rank GPT for 4500 or so in the near future.

»
3 hours ago, # |
  0

WYSI

»
3 hours ago, # |
  +23

What does the light blue part on o3 mean here? Doesn't seem like the video explained it.

»
2 hours ago, # |
  0

Amazing and unbelievable!

»
95 minutes ago, # |
  +17

I recently subscribed to o1 (not the pro version) in the hope of clearing out some undesirable problems in BOJ mashups, and I became skeptical that this AI is even close to 1600. It can solve some known problems, but some Googling would probably do that too. In general, though, it still gets stuck on incorrect solutions quite often and has trouble understanding why its solution is incorrect at all.

So how did it get a gold medal at the IOI? Probably because it was allowed to submit many times. If I give it 10,000 counterexamples, it will eventually solve my problem; maybe I could also get it to 1600-level results if I fed it counterexamples all the time.

In other words, it generates solutions decently well, but it is bad at fact-checking. Yet fact-checking should be the easiest part of this game: you only need to write a stress test. So why isn't that built into the GPT models? I assume they are just not able to meet the computational requirements.
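The stress-testing workflow mentioned above is standard in competitive programming: generate small random inputs and compare a trusted brute-force solution against the candidate. A generic sketch (the maximum-subarray pair below is just a stand-in example):

```python
import random

def brute(a):
    """Reference solution: trivially correct but slow.
    Here: maximum subarray sum by checking every subarray."""
    n = len(a)
    return max(sum(a[i:j]) for i in range(n) for j in range(i + 1, n + 1))

def fast(a):
    """Candidate solution to verify (Kadane's algorithm)."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def stress(trials=1000, seed=0):
    """Compare the two on small random inputs; report the first mismatch."""
    rng = random.Random(seed)
    for _ in range(trials):
        a = [rng.randint(-10, 10) for _ in range(rng.randint(1, 8))]
        if brute(a) != fast(a):
            return a  # counterexample found
    return None

print(stress())  # None means no counterexample in 1000 trials
```

Small inputs are the key design choice: most bugs show up on tiny cases, and the brute force stays fast enough to run thousands of trials.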

I don't think the results are fabricated at all (unlike Google, which I believe does fabricate its results), and I believe that even the o1 model can find a good spot, especially with the recent CF meta emphasizing "ad-hoc" problems, which are easy to verify and find a pattern in. But this is an empty promise if it's impossible to replicate at the consumer level. I wonder if o3 is any different.

  • »
    »
    11 minutes ago, # ^ |
      0

    You can write code yourself to prompt it to stress-test. I don't think that should be part of the default model served to users; it would add too much computation, while 99% of the time in dev use cases users just feed it untestable snippets.

    People have already submitted o1-mini solutions in contests and achieved 2200-level performance multiple times.