MDSPro's blog

By MDSPro, history, 11 months ago, In English

Hi Codeforces, a member of the APIO 2025 organising committee here.

Our team is thrilled to announce that the participation window (17 May 02:00:00 UTC — 18 May 14:00:00 UTC) has closed and all submissions have been successfully judged. You are now welcome to discuss the tasks and your results!

Here is the ranking: apio2025.github.io/apio2025_ranking

ATTENTION: This table lists all participants, including unofficial ones. The official results will be published in a few days.

We also congratulate strapple on achieving the highest score in APIO 2025!

Also here you can find the GitHub repository with tasks details and test data: https://github.com/apio2025/apio2025_tasks

UPD 1: Ranking archived.

UPD 2: You can watch awards ceremony on youtube

  • Vote: I like it
  • +166
  • Vote: I do not like it

»
11 months ago, hide # |
 
Vote: I like it +12 Vote: I do not like it

As a participant, I enjoyed the contest.

»
11 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Hello! Loved the contest! Could we also get an editorial for it?

»
11 months ago, hide # |
 
Vote: I like it +37 Vote: I do not like it

ojuz 👀

»
11 months ago, hide # |
Rev. 3  
Vote: I like it +21 Vote: I do not like it

Section C of the rules says, "For each submission, the score for each test case is calculated according to your program or output, rounded to the nearest 2 decimal places."

However, my score in P1 in the scoreboard is 51, even though I got 51.04. My name is Mina Ragy Fouad.

Spoiler
  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it -51 Vote: I do not like it

    This is intentional. We decided to round to 1 decimal place to reduce variance due to randomness. Unfortunately, this was not included in the task statement. We didn't know that the subtask details page shows more decimal places. The score shown on the submissions page is the actual score you got.

»
11 months ago, hide # |
 
Vote: I like it +64 Vote: I do not like it

Thank you for the contest!

»
11 months ago, hide # |
 
Vote: I like it +52 Vote: I do not like it

I manually brute-forced every country's top 6. I'm that desperate for cutoffs LOL (too lazy to write a program for it, plus I think this takes less time TBH).

image of my hell

There are 244 people in total across the top 6 contestants from the 41 participating countries. My estimates of the cutoffs follow, based on the rules on the website. These are estimates, so they might be wrong due to incorrect data on the rankings website or me mistyping a score.

  • Gold: 202.3 (top 20 out of the 244)
  • Silver: 147 (top 61 out of the 244)
  • Bronze: 88 (top 121 out of the 244; 87 points would cover the top 131, which is above the 50th percentile)

No guarantees that these are the exact cutoffs, but they might be close. Congratulations to everyone else who joined! I got 77 points, so no medal for me :(
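The estimate above can be reproduced mechanically. Here is a minimal sketch, assuming the cumulative quotas implied by the counts above (gold ≈ top 1/12, gold+silver ≈ top 1/4, all medals ≈ top 1/2 of the pooled top-6 scores) and that ties may never push the medal count above the quota; the function name and these ratios are my inference from the comment, not from the official rules:

```python
import math
from itertools import groupby

def medal_cutoffs(scores_by_country, top_k=6, fracs=(1 / 12, 1 / 4, 1 / 2)):
    """Estimate (gold, silver, bronze) score cutoffs.

    Assumptions (inferred from the thread, not official):
    - only the top `top_k` scores per delegation count;
    - cumulative quotas of ~1/12, ~1/4 and ~1/2 of that pool;
    - a cutoff is the lowest score whose holders still fit the quota,
      so ties never push the medal count above the quota.
    """
    # Pool the top-k scores from every delegation, highest first.
    pool = sorted(
        (s for scores in scores_by_country.values()
         for s in sorted(scores, reverse=True)[:top_k]),
        reverse=True)
    n = len(pool)
    cutoffs = []
    for frac in fracs:
        quota = math.floor(n * frac)
        cut = pool[0]  # fallback if even the first tie group exceeds the quota
        count = 0
        for score, grp in groupby(pool):
            size = len(list(grp))
            if count + size > quota:
                break
            count += size
            cut = score
        cutoffs.append(cut)
    return tuple(cutoffs)
```

For instance, with two delegations pooled into 12 scores, the quotas become 1, 3 and 6, and the cutoffs are simply the 1st, 3rd and 6th distinct-or-tied scores that fit.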

»
11 months ago, hide # |
Rev. 4  
Vote: I like it +22 Vote: I do not like it

I, with the help of Zanite, made our own version of the standings in this spreadsheet.

Calculated cut-offs:

  • Gold: 224.1
  • Silver: 163.1
  • Bronze: 108

Notes:

  • I excluded BRA, LBY, MEX, DZA, TUN as they do not appear on the homepage of https://apio2025.uz/.
  • I also included THA and TWN despite them not appearing on the homepage.
  • TUR is excluded from official participation.
  • All 0 scores are included.
  • Medal distribution is according to https://apio2025.uz/rules.
  • Their ranking API returns more precise scores, but I round them to 1 d.p. according to a comment above (contrary to what the rules say).

UPD. Excluded TUR from official participation, and updated spreadsheet and cut-offs.

UPD. Excluded several more unofficial participants according to their closing ceremony. I tried to match the cut-offs, but do note that the spreadsheet is not official.

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Why are there $$$108$$$ bronze medals instead of $$$238/2 = 119$$$?

    • »
      »
      »
      11 months ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      From https://apio2025.uz/rules, the cut-offs are calculated only from the top 6 contestants of each delegation, totalling 216 contestants.

      Contestants with the same score as the sixth contestant will also be included in the final standings. For example, you can find 8 contestants from SAU, because their 7th and 8th contestants are tied with the 6th.

      • »
        »
        »
        »
        11 months ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it

        I see. Thank you for elaborating.

      • »
        »
        »
        »
        11 months ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it

        Thanks for the epic spreadsheet. I have a question about the ties and the inclusion of people ranked below 6th in their country (technically tied for 6th, but you know what I mean). Last year the gold cutoff was 240 while a lot of Chinese contestants (I heard 57) got 300; by that logic, shouldn't it have been 300 instead?

        • »
          »
          »
          »
          »
          11 months ago, hide # ^ |
           
          Vote: I like it +5 Vote: I do not like it

          No. To calculate the cutoffs, you take the top 6 scores from each country, so in this case it's six 300s from China. Then, in every country, people who tied with the 6th place also get on the ranking.
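          The tie rule described above can be sketched as follows (the function name and signature are mine, for illustration only): the contestants listed for one delegation are its top six plus anyone tied with the sixth score.

```python
def delegation_entrants(scores, k=6):
    # Top k scores of one delegation, plus anyone tied with the k-th
    # score (tie rule inferred from the comment above, not official).
    s = sorted(scores, reverse=True)
    if len(s) <= k:
        return s
    kth = s[k - 1]
    return [x for x in s if x >= kth]
```

          With eight scores whose 6th, 7th and 8th are tied, all eight appear, matching the SAU example above.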

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +3 Vote: I do not like it

    Turkey is also an unofficial participant this year.

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +5 Vote: I do not like it

    Hi, I'm from team Thailand and I think our participation was very much official (despite not having our country appearing on the homepage). We talked to our team leaders about this, and they also did think it was strange, but they had official communication with the organizers, so our participation will almost definitely be included in the calculation of medals.

  • »
    »
    11 months ago, hide # ^ |
    Rev. 2  
    Vote: I like it 0 Vote: I do not like it

    This was quite accurate; you just didn't know about the unofficial participants who do affect the cut-offs.

»
11 months ago, hide # |
 
Vote: I like it +9 Vote: I do not like it

Why are the time and memory limits in problem A hack configured differently in readme.md and problem.json?

In readme.md it is 3s,1G, but in problem.json it is 2s,2G.

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +10 Vote: I do not like it

    Oh yes, thanks for the catch! Initially, it was 2 seconds, but we decided to change it to 3 seconds one day before the contest. I forgot to update the problem.json.

»
11 months ago, hide # |
Rev. 5  
Vote: I like it -44 Vote: I do not like it

Meanwhile in Nanjing: 270, 270, 258, 248, 247, 247... 6 of the top 10 unofficial scores in CHN came from one city (one that's infamous for excelling at off-meta nonsense). Is this really better than just having a ton of 300s like last year?

eh

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +21 Vote: I do not like it

    number 4 with 258 is Kevin, right?

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +15 Vote: I do not like it

    As a mere problemsetter, my opinion is not absolutely representative of the SC's decision, but isn't it better, considering:

    • the problemset succeeded in determining rank;
    • regarding "off-meta": isn't it a good thing for a contest not to base its decisions on some "meta"?
    • »
      »
      »
      11 months ago, hide # ^ |
      Rev. 2  
      Vote: I like it +5 Vote: I do not like it
      • My take is that most issues in current OI-style problem setting come from trying to determine rank (in your words) among CHN contestants (or those with CHN-style training), without knowing what their training system is like.

      • The issue people don't seem to realize is that it's also possible to train for generic off-meta / anti-meta problem setting. That training / selection is far more brutal than learning standard topics/techniques (I'm from Nanjing and have experienced this myself).

      • »
        »
        »
        »
        11 months ago, hide # ^ |
         
        Vote: I like it +9 Vote: I do not like it

        most issues ... come from trying to determine rank (in your words) among CHN contestants

        But I believe this was not their primary aim? If that were the case, the official contest standings would be completely partitioned into CHN and non-CHN. In reality, the SC did not even know that there is a Chinese onsite mirror, so that especially could not be the case.

        ... that it's also possible to train for generic off-meta / anti-meta problem setting ...

        I acknowledge that. There is no way we could prevent that, so problemsetting must be exempt from this consideration. However, that cannot rationalize relying on a "meta" to select problems. Even if aliens' trick was novel in 2016, it would be truly saddening to see a contest series decide winners based on aliens' trick for 5~6 consecutive years.

        • »
          »
          »
          »
          »
          11 months ago, hide # ^ |
           
          Vote: I like it -13 Vote: I do not like it

          I was the main person who pushed for the Lagrange multiplier technique during the IOI '16 winter meeting. At that point it was quite well known in my circles of problem setters.

          The point of that problem is that there is enough such nonsense in the CTSC literature that something like it (solvable only by those who remember the particular CTSC report) can be dug out and put on almost every contest. The hope was that seeing such things would cause people to stop escalating contests further. I'm quite sad that we ended up with the current trend of a 2~3 solver on IOI every year (and now APIO).

          In my experience with problem setting, aiming for a unique winner, instead of for 5~10 perfect scores among official contestants, has roughly meant "distinguish those with CHN-style training" since about 2007.

          • »
            »
            »
            »
            »
            »
            11 months ago, hide # ^ |
             
            Vote: I like it +37 Vote: I do not like it

            So, what really is your claim? We shouldn't rely on rarely known techniques as a way to "determine rank among top contestants"? Sure, I get that (we don't want IOI to become a contest of who read more papers), but why should that be an opinion against APIO 2025? In case you didn't, I recommend that you read the problems; I don't think any of these problems is based on "rarely known techniques".

      • »
        »
        »
        »
        11 months ago, hide # ^ |
         
        Vote: I like it +8 Vote: I do not like it

        In my opinion, the training style for these sorts of problems is just 'do AtCoder', which doesn't seem like much of a brutal training style.

    • »
      »
      »
      11 months ago, hide # ^ |
      Rev. 2  
      Vote: I like it 0 Vote: I do not like it

      I benefited a lot from this "off-meta" contest, because I sat around doing fuck all this year. Yay!

      (In other words, I don't really do IOI batch or IOI interactive. I just do problems that I like, and that's probably the best way not to die in an off-meta contest. Jack of all trades, master of none, but still better than master of one.)

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Having gotten 99 on the 1st problem, I have nothing to say about myself.

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +30 Vote: I do not like it

    It is understandable for a contest to have tons of 300s from top contestants if it succeeds at differentiating lower-ranked contestants, but last year's contest failed at both.

    Also, it seems that not that many people liked last year's problems either.

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Speaking for myself, I'd say I usually do worse in those "off-meta" contests.

    • »
      »
      »
      11 months ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      You still need to train for NOI / CTSC, right? Then it's normal, because these things (more generally, combinatorics-based or combinatorics-only problems) tend to measure how much you are NOT doing, so learning anything else tends to be a handicap against such problems. If you ever make the national team, it's fairly easy to get this ability back by doing a mental dump of things like SA / LPs / NTTs...

      I've been joking since 2023 (beechtree) that IOI is now doing a great job of distinguishing those in CHN who started in grade 4 from those who started in grade 8... I'm actually a bit of a laughing stock among my math teachers back in Nanjing for 0ing an IMO combi problem that they felt was well within my abilities before I left.

  • »
    »
    11 months ago, hide # ^ |
     
    Vote: I like it +6 Vote: I do not like it

    What exactly is off-meta in this contest? (don't just tell me "constructive")

»
11 months ago, hide # |
Rev. 3  
Vote: I like it +66 Vote: I do not like it

De3b0o got the highest score in the Syrian delegation and was even very close to the bronze cutoff. Yet they didn't choose him for the IOI because of many problems with the Syrian selection tests and bad decisions. We raised this with the committee, but it was always ignored.

Stand_with_De3b0o

Give_De3b0o_justice

De3b0o_deserves_to_represent_Syria

Also I want to thank APIO SC for the great experience this year. Love from Syria!

»
11 months ago, hide # |
 
Vote: I like it +31 Vote: I do not like it

Upsolving is available on Eolymp.

»
11 months ago, hide # |
 
Vote: I like it +3 Vote: I do not like it

Oh, in problem A, I found a solution with BSGS and binary search; the time and operation complexity is $$$O(\sqrt{n}\log n)$$$. But I only got $$$25$$$ points in the contest, the same as brute force.

I can't believe that the intended solution relies on this method plus just some simple optimizations.

It's really frustrating: I found the main part of the intended solution, but I couldn't get any extra points, not even $$$1$$$ point.

The author should have set a subtask with $$$n\le 10^7$$$.
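For readers unfamiliar with the abbreviation: BSGS presumably refers to baby-step giant-step. The actual task is in the linked repository; below is only a generic sketch of the technique applied to discrete logarithms ($$$a^x \equiv b \pmod p$$$), with all names my own and an invertible base assumed:

```python
import math

def bsgs(a, b, p):
    # Smallest x >= 0 with a^x == b (mod p), or None if none exists.
    # O(sqrt(p)) time and memory; assumes gcd(a, p) == 1.
    a %= p
    b %= p
    m = math.isqrt(p) + 1
    # Baby steps: remember the first exponent producing each value a^j.
    baby = {}
    cur = 1
    for j in range(m):
        baby.setdefault(cur, j)
        cur = cur * a % p
    # Giant steps: search for b * (a^{-m})^i among the baby steps.
    step = pow(a, -m, p)  # modular inverse power (Python 3.8+)
    gamma = b
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None
```

Any answer x < p is covered, since x can be written as i·m + j with 0 ≤ i, j < m, which is exactly what the two loops enumerate.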

»
11 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by MDSPro (previous revision, new revision, compare).


»
11 months ago, hide # |
Rev. 3  
Vote: I like it +15 Vote: I do not like it

According to the awards ceremony and the ranking, here are the medal cutoffs:

  • Gold: 224.1
  • Silver: 163.1
  • Bronze: 108
»
11 months ago, hide # |
 
Vote: I like it +1 Vote: I do not like it

I got cut by the cutoff!

»
11 months ago, hide # |
 
Vote: I like it +1 Vote: I do not like it

Here is the official ranking:

https://apio2025.uz/ranking

»
11 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

This APIO got me thinking. My scores were $$$68.8$$$ / $$$46$$$ / $$$16$$$. After the contest, I looked at the scores and saw that many people got $$$100$$$ points on P3. Then I upsolved the problem and realized that it is very easy, but during the contest I just couldn't find the idea. I only realized the idea was simple after looking at the scores.

I thought that was a real beginner mistake, since I don't have much experience with IOI-style contests, but then I looked at the standings again and realized that a lot of people fell for the same bait.

So, I have a question: how do you avoid falling for this kind of bait? I'm trying to construct a good contest strategy for myself; it's not the first time I've fallen for this kind of bait, and I don't know how to correct it.