MDSPro's blog

By MDSPro, history, 11 months ago, In English

Hi Codeforces, a member of the APIO 2025 organising committee here.

Our team is thrilled to announce that the participation window (17 May 02:00:00 UTC — 18 May 14:00:00 UTC) has closed and all submissions have been successfully judged. You are now welcome to discuss the tasks and your results!

Here is the ranking: apio2025.github.io/apio2025_ranking

ATTENTION: This table lists all participants, including unofficial ones. The official results will be published in a few days.

We also congratulate strapple on achieving the highest score in APIO 2025!

You can also find the GitHub repository with task details and test data here: https://github.com/apio2025/apio2025_tasks

UPD 1: Ranking archived.

UPD 2: You can watch the awards ceremony on YouTube.

Vote: +166

» 11 months ago | +12

As a participant, I enjoyed the contest.

» 11 months ago | 0

Hello! Loved the contest. Could we also get an editorial for it?

» 11 months ago | +37

ojuz 👀

» 11 months ago | Rev. 3 | +21

Section C of the rules says, "For each submission, the score for each test case is calculated according to your program or output, rounded to the nearest 2 decimal places."

However, my P1 score on the scoreboard is 51, even though I got 51.04. My name is Mina Ragy Fouad.

Spoiler
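A quick sanity check of the arithmetic above: rounding 51.04 to 2 decimal places leaves it unchanged, so a displayed 51 is consistent with the scoreboard rounding (or truncating) the *displayed* value to an integer, rather than the stored score itself being changed. A minimal check in Python:

```python
score = 51.04
assert round(score, 2) == 51.04   # 2-decimal rounding keeps the fraction
assert round(score) == 51         # integer rounding matches the scoreboard
assert int(score) == 51           # so does truncation
```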
» 11 months ago | +64

Thank you for the contest!

» 11 months ago | +52

I manually brute-forced every country's top 6; I'm that desperate for cutoffs, LOL (too lazy to write a program for it, and honestly I think this took less time).

image of my hell

There are 244 total people across the top 6 contestants from the 41 participating countries. My estimates of the cutoffs follow, based on the rules on the website. These are estimates, so they might be wrong due to incorrect data on the rankings website or me mistyping a score.

  • Gold: 202.3 (top 20 out of the 244)
  • Silver: 147 (top 61 out of the 244)
  • Bronze: 88 (top 121 out of 244; 87 points would cover the top 131, which is past the 50th percentile)

No guarantees that these are the exact cutoffs, but they should be close. Congratulations to everyone else who joined! I got 77 points, so no medal for me :(
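The manual count above can be automated. A hypothetical sketch, assuming (as the counts above suggest) that gold, silver and bronze go to at most the top 1/12, 1/4 and 1/2 of official contestants, and that a tie is never allowed to push a medal class over its quota; the function name and the quota rule are my own illustration, not the official procedure:

```python
def medal_cutoffs(scores):
    """Estimate gold/silver/bronze cutoffs from a list of scores,
    assuming medals go to at most the top 1/12, 1/4 and 1/2 of
    contestants, and ties never exceed a quota."""
    s = sorted(scores, reverse=True)
    n = len(s)
    cutoffs = []
    for limit in (n // 12, n // 4, n // 2):
        cut = None
        # largest score whose "at or above" count stays within the quota
        for x in sorted(set(s), reverse=True):
            if sum(1 for y in s if y >= x) <= limit:
                cut = x
            else:
                break
        cutoffs.append(cut)
    return cutoffs  # [gold, silver, bronze]
```

On the 244-participant data this would reproduce the "top 20 / top 61 / top 121" counts quoted above, since 244 // 12 = 20, 244 // 4 = 61, and including the 87-point tie would push bronze past 244 // 2 = 122.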

» 11 months ago | Rev. 4 | +22

I, with the help of Zanite, made our own version of the standings in this spreadsheet.

Calculated cut-offs:

  • Gold: 224.1
  • Silver: 163.1
  • Bronze: 108

Notes:

  • I excluded BRA, LBY, MEX, DZA, TUN as they do not appear on the homepage of https://apio2025.uz/.
  • I also included THA and TWN despite them not appearing on the homepage.
  • TUR is excluded from official participation.
  • All 0 scores are included.
  • Medal distribution is according to https://apio2025.uz/rules.
  • Their ranking API returns more precise scores, but I rounded them to 1 d.p. per a comment above (contrary to what the rules say).

UPD. Excluded TUR from official participation, and updated spreadsheet and cut-offs.

UPD. Excluded several more unofficial participants according to their closing ceremony. I tried to match the cut-offs, but do note that the spreadsheet is not official.

» 11 months ago | +9

Why are the time and memory limits for problem A (hack) configured differently in readme.md and problem.json?

In readme.md they are 3 s / 1 GB, but in problem.json they are 2 s / 2 GB.

» 11 months ago | Rev. 5 | -44

Meanwhile in Nanjing: 270, 270, 258, 248, 247, 247... 6 of the top 10 unofficial scores in CHN came from one city (which is infamous for excelling at off-meta nonsense). Is this really better than just having a ton of 300s like last year?

eh

  • » » 11 months ago | +21

    Number 4 with 258 is Kevin, right?

  • » » 11 months ago | +15

    As a mere problemsetter, my opinion is not necessarily representative of the SC's decision, but isn't it better, considering that:

    • the problemset succeeded in determining rank;
    • regarding "off-meta": isn't it a good thing for a contest not to base its decisions on some "meta"?
    • » » » 11 months ago | Rev. 2 | +5
      • My take is that most issues in current OI-style problem setting come from trying to determine rank (in your words) among CHN contestants (or those with CHN-style training) without knowing what their training system is like.

      • The issue people don't seem to realize is that it's also possible to train for generic off-meta / anti-meta problem setting. That training / selection is far more brutal than learning standard topics/techniques (I'm from Nanjing and have experienced this myself).

      • » » » » 11 months ago | +9

        most issues ... come from trying to determine rank (in your words) among CHN contestants

        But I believe this was not their primary aim? If it were, the official contest standings would be completely partitioned into CHN and non-CHN. In reality, the SC did not even know there was a Chinese onsite mirror, so that especially could not be the case.

        ... that it's also possible to train for generic off-meta / anti-meta problem setting ...

        I acknowledge that. There is no way we could prevent it, so problemsetting must be exempt from this consideration. However, that cannot rationalize relying on a "meta" to select problems. Even if the aliens trick was novel in 2016, it would be truly saddening to see a contest series decide winners based on the aliens trick for 5-6 consecutive years.

        • » » » » » 11 months ago | -13

          I was the main person who pushed for the Lagrange multiplier technique during the IOI '16 winter meeting. At that point it was quite well known in my circles of problem setters.

          The point of that problem is that there is enough such nonsense in the CTSC literature that something like it (solvable only by those who remember the particular CTSC report) can be dug out and put on almost any contest. The hope was that seeing such things would cause people to stop escalating contests further. I'm quite sad that we ended up with the current trend of a 2-3 solver on the IOI every year (and now APIO).

          In my experience with problem setting, aiming for a unique winner, instead of for 5-10 perfect scores among official contestants, has been roughly the same as "distinguish those with CHN-style training" since about 2007.
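For readers unfamiliar with the Lagrange multiplier ("aliens") technique discussed above, here is a sketch on a deliberately trivial toy problem of my own choosing: pick exactly k elements of an array to maximize their sum. The function name and the toy problem are illustrative only; real uses are constrained DPs whose optimum is concave in k. The hard constraint "exactly k" is replaced by a per-item penalty lam, and lam is binary-searched until the unconstrained optimum picks about k items:

```python
def choose_k_max_sum(a, k):
    """Aliens-trick demo: maximum sum of exactly k elements of a."""
    lo, hi = float(min(a)) - 1, float(max(a)) + 1
    for _ in range(100):
        lam = (lo + hi) / 2
        # unconstrained optimum under penalty lam: take everything worth more
        cnt = sum(1 for x in a if x > lam)
        if cnt > k:
            lo = lam   # penalty too small: picking too many items
        else:
            hi = lam   # penalty large enough
    lam = (lo + hi) / 2
    best = sum(x - lam for x in a if x > lam)   # penalized optimum g(lam)
    return round(best + lam * k)                # undo the penalty for k picks
```

This recovers the constrained optimum only because the best sum is concave as a function of k; without that concavity, the binary search on lam is not valid.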

          • » » » » » » 11 months ago | +37

            So, what really is your claim? That we shouldn't rely on rarely-known techniques as a way to "determine rank among top contestants"? Sure, I get that (we don't want the IOI to become a contest of who has read more papers), but why should that count against APIO 2025? In case you haven't, I recommend reading the problems; I don't think any of them is based on rarely-known techniques.

      • » » » » 11 months ago | +8

        In my opinion, the training style for these sorts of problems is just 'do AtCoder', which doesn't seem like much of a brutal training style.

    • » » » 11 months ago | Rev. 2 | 0

      I benefitted a lot from "off-meta" contests, because I sat around doing fuck all this year. Yay!

      (In other words, I don't really do IOI batch or IOI interactive problems. I just do problems that I like, which is probably the best way not to die in off-meta contests. Jack of all trades, master of none, but still better than master of one.)

  • » » 11 months ago | 0

    Having gotten 99 on the first problem, I have nothing to say about myself.

  • » » 11 months ago | +30

    It is understandable for a contest to have tons of 300s from top contestants if it succeeds at differentiating lower-ranked contestants, but last year's contest failed at both.

    Also, it seems that not that many people liked last year's problems either.

  • » » 11 months ago | 0

    Speaking for myself, I would say I usually do worse in those "off-meta" contests.

    • » » » 11 months ago | 0

      You still need to train for NOI / CTSC, right? Then it's normal, because these things (more generally, combinatorics-based or combinatorics-only problems) tend to measure how much you are NOT doing, so learning anything else tends to be a handicap against such problems. If you ever make the national team, it's fairly easy to get this ability back by doing a mental dump of things like SA / LPs / NTTs...

      I've been joking since 2023 (beechtree) that the IOI is now doing a great job distinguishing those in CHN who started in grade 4 vs. those who started in grade 8... I'm actually a bit of a laughing stock among my math teachers back in Nanjing for scoring 0 on an IMO combinatorics problem that they felt was well within my abilities before I left.

  • » » 11 months ago | +6

    What exactly is off-meta in this contest? (Don't just tell me "constructive".)

» 11 months ago | Rev. 3 | +66

De3b0o got the highest score in the Syrian delegation, very close to the bronze cutoff. Yet they didn't choose him for the IOI because of many problems with the Syrian selection tests and bad decisions. We raised this with the committee, but it was always ignored.

Stand_with_De3b0o

Give_De3b0o_justice

De3b0o_deserves_to_represent_Syria

Also I want to thank APIO SC for the great experience this year. Love from Syria!

» 11 months ago | +31

Upsolving is available on Eolymp.

» 11 months ago | +3

Oh, in problem A I found a solution with BSGS and binary search; its time and operation complexity is $$$O(\sqrt{n}\log n)$$$. But I only got $$$25$$$ points in the contest, the same as brute force.

I can't believe the intended solution relies on this method plus just some simple optimizations.

This is really frustrating: I found the main part of the intended solution, but I couldn't get any extra points, not even $$$1$$$.

The authors should have set a subtask with $$$n\le 10^7$$$.
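For readers unfamiliar with BSGS, here is a minimal generic baby-step giant-step sketch for discrete logarithms. This is an illustration of the technique the comment names, not the APIO problem itself (whose statement is not given here), and the function name is my own:

```python
def bsgs(g, h, p):
    """Least x >= 0 with g^x == h (mod p), p prime; None if no solution.
    Runs in O(sqrt(p)) time and memory."""
    m = int(p ** 0.5) + 1
    baby = {}
    e = 1
    for j in range(m):            # baby steps: store g^j -> j
        baby.setdefault(e, j)     # keep the smallest j on collisions
        e = e * g % p
    inv_gm = pow(g, p - 1 - m, p)  # g^(-m) via Fermat's little theorem
    gamma = h % p
    for i in range(m):            # giant steps: check h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_gm % p
    return None
```

The square-root tradeoff is the point of the technique: a table of ~sqrt(p) baby steps lets each giant step rule out a whole block of sqrt(p) candidate exponents at once.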

» 11 months ago | 0

Auto comment: topic has been updated by MDSPro (previous revision, new revision, compare).

» 11 months ago | Rev. 3 | +15

According to the awards ceremony and the ranking, here are the medal cutoffs:

  • Gold: 224.1
  • Silver: 163.1
  • Bronze: 108

» 11 months ago | +1

I got cut by the cutoff!

» 11 months ago | +1

Here is the official ranking:

https://apio2025.uz/ranking

» 11 months ago | 0

This APIO got me thinking. My scores were $$$68.8$$$, $$$46$$$, $$$16$$$. After the contest, I looked at the scores and saw that many people got $$$100$$$ points on P3. Then I upsolved the problem and realized it is very easy; during the contest I just couldn't find the idea. I only realized the idea was simple after looking at the scores.

I thought that was a real beginner mistake, since I don't have much experience with IOI-style contests, but then I looked at the standings again and realized that a lot of people fell for the same bait.

So, I have a question: how do you avoid falling for this kind of bait? I'm trying to build a good contest strategy for myself; this is not the first time I've fallen for such a bait, and I don't know how to fix it.