Hi Codeforces! A member of the APIO 2025 organising committee here.
Our team is thrilled to announce that the participation window (17 May 02:00:00 UTC — 18 May 14:00:00 UTC) has closed and all submissions have been successfully judged. You are now welcome to discuss the tasks and your results!
Here is the ranking: apio2025.github.io/apio2025_ranking
ATTENTION: This table lists all participants, including unofficial ones. The official results will be published in a few days.
We also congratulate strapple on achieving the highest score in APIO 2025!
You can also find the GitHub repository with task details and test data here: https://github.com/apio2025/apio2025_tasks
UPD 1: Ranking archived.
UPD 2: You can watch the awards ceremony on YouTube.

As a participant, I enjoyed the contest.
Hello! Loved the contest! Could we also get an editorial for it?
ojuz 👀
This isn't oj.uz, but I noticed QOJ added APIO 2025: https://qoj.ac/contest/2031
https://oj.uz/problems/source/apio2025
kudos ojuz
Section C of the rules says, "For each submission, the score for each test case is calculated according to your program or output, rounded to the nearest 2 decimal places."
However, my score for P1 on the scoreboard is 51, even though I got 51.04. My name is Mina Ragy Fouad.
This is intentional. We decided to round to 1 d.p. to reduce variance due to randomness. This was unfortunately not included in the task statement. We didn't know that the subtask details page shows more decimal places. The score shown on the submissions page is the actual score you got.
Variance due to randomness is much higher than 0.05 though
Thank you for the contest!
I manually brute-forced every country's top 6; I'm that desperate for cutoffs LOL (too lazy to write a program for it, plus I think this takes less time TBH).
There are 244 people in total across the top 6 contestants from the 41 participating countries. My estimates of the cutoffs follow, based on the rules on the website. These are estimates, so they might be wrong due to incorrect data on the rankings website or me mistyping a score.
No guarantees that these are the exact cutoffs, but they might be close. Congratulations to everyone else who joined! I got 77 points so no medal for me :(
I think you included unofficial countries (like Algeria, Brazil, etc.).
According to my calculations:
Gold: 224.1 (top 17 out of 204)
Silver: 147.3 (top 51 out of 204)
Bronze: 92.5 (top 102 out of 204)
I will be a fraction away from the cutoff again :'(
oops, yeah my bad
me when brazil is an asia/pacific country:
I understand your pain, I missed Bronze by 1.5 points as well. :(
silver 147.3 qwq.
Wait did I not get gold
I, with the help of Zanite, made our own version of the standings in this spreadsheet.
Calculated cut-offs:
Notes:
UPD. Excluded TUR from official participation, and updated spreadsheet and cut-offs.
UPD. Excluded several more unofficial participants according to their closing ceremony. I tried to match the cut-offs, but do note that the spreadsheet is not official.
Why are there $$$108$$$ bronze medals instead of $$$238/2 = 119$$$?
From https://apio2025.uz/rules, the cut-offs are calculated only from the top 6 contestants of each delegation, totalling 216 contestants.
Contestants with the same score as the sixth contestant will also be included in the final standings. For example, you can find 8 contestants from SAU, because their 7th and 8th contestants are tied with the 6th.
I see. Thank you for elaborating.
Thanks for the epic spreadsheet! I have a question about the ties and the inclusion of people who are ranked > 6 in their country (technically 6th, but you know what I mean). Last year the cutoff for gold was 240 while a lot of (I heard 57) Chinese contestants got 300; by that logic, wouldn't it have been 300 instead?
No. To calculate the cutoffs, you take the top 6 scores from each country, so in this case it's 6 300s from China. Then in all countries people who tied with the 6th get on the ranking too.
Turkey is also an unofficial participant this year.
Hi, I'm from team Thailand and I think our participation was very much official (despite not having our country appearing on the homepage). We talked to our team leaders about this, and they also did think it was strange, but they had official communication with the organizers, so our participation will almost definitely be included in the calculation of medals.
This was quite accurate; you just didn't know about the unofficial participants, who do affect the cut-offs.
Why are the time and memory limits in problem A "hack" configured differently in readme.md and problem.json? In readme.md it is 3s, 1G, but in problem.json it is 2s, 2G.
Oh yes, thanks for the catch! Initially it was 2 seconds, but we decided to change it to 3 seconds one day before the contest. I forgot to update the problem.json.
Meanwhile in Nanjing: 270, 270, 258, 248, 247, 247... 6 out of the top 10 unofficial scores in CHN came from one city (that's infamous for excelling at off-meta nonsense). Is this really better than just having a ton of 300s like last year?
number 4 with 258 is Kevin, right?
As a mere problemsetter, my opinion is not absolutely representative of the SC's decision, but isn't it better considering:
My take is that most issues in current OI-style problem setting come from trying to determine rank (in your words) among CHN contestants (or those with CHN-style training) without knowing what their training system is like.
The issue people don't seem to realize is that it's also possible to train for generic off-meta / anti-meta problem setting. That training / selection is far more brutal than learning standard topics/techniques (I'm from Nanjing, and have experienced this myself)
But I believe this was not their primary aim? If that were the case, the official contest standings would be completely partitioned into CHN and non-CHN. In reality, the SC did not even know that there is a Chinese onsite mirror, so that especially could not be the case.
I acknowledge that. There is no way we could prevent that, so problemsetting must be exempt from this consideration. However, that cannot rationalize relying on a "meta" to select problems. Even if aliens' trick was novel in 2016, it would be truly saddening to see a contest series decide winners based on aliens' trick for 5~6 consecutive years.
I was the main person who pushed for the Lagrange multiplier technique during the IOI '16 winter meeting. At that point it was quite well known in my circles of problem setters.
The point of that problem is that there is enough such nonsense in the CTSC literature that something like it (solvable only by those who remember the particular CTSC report) can be dug out and put on almost every contest. The hope was that seeing such things would cause people to stop escalating contests further. I'm quite sad that we ended up with the current trend of a 2~3 solver on IOI every year (and now APIO).
In my experience with problem setting, aiming for a unique winner, instead of for 5~10 perfect scores among official contestants, has been roughly the same as "distinguish those with CHN-style training" since about 2007.
So, what really is your claim? We shouldn't rely on rarely known techniques as a way to "determine rank among top contestants"? Sure, I get that (we don't want IOI to become a contest of who read more papers), but why should that be an opinion against APIO 2025? In case you didn't, I recommend that you read the problems; I don't think any of these problems is based on "rarely known techniques".
In my opinion, the training style for these sorts of problems is just 'do AtCoder', which doesn't seem like much of a brutal training style.
I benefited a lot from "off-meta" contests, because I've sat around doing fuck all this year. Yay!
(In other words, I don't really do IOI batch or IOI interactive problems. I just do problems that I like, so that's probably the best way to not die in an off-meta contest. Jack of all trades, master of none, but still better than master of one.)
What is an off-meta contest? Could someone kindly clarify this please?
ok i get it thanks :)
Having gotten 99 on the 1st problem, I have nothing to say about myself.
It is understandable for a contest to have tons of 300s from top contestants if it succeeds at differentiating lower ranked contestants, but last year's contest failed at both.
Also, it seems that not many people liked last year's problems either.
Speaking for myself, I'd say I usually do worse in those "off-meta" contests.
You still need to train for NOI / CTSC, right? Then it's normal, because these things (more generally, combinatorics-based or combinatorics-only problems) tend to measure how much you are NOT doing, so learning anything else tends to be a handicap against such problems. If you ever make the national team, it's fairly easy to get this ability back by doing a mental dump of things like SA / LPs / NTTs...
I've been joking since 2023 (beechtree) that IOI is now doing a great job distinguishing those in CHN who started in grade 4 vs. those who started in grade 8... I'm actually a bit of a laughing stock among my math teachers back in Nanjing for 0ing an IMO combi that they felt was well within my abilities before I left.
What exactly is off-meta in the contest? (Don't just tell me "constructive".)
De3b0o got the highest score in the Syrian delegation, coming very close to the bronze cutoff. Yet they didn't choose him for IOI because of many problems at the Syrian selection tests and bad decisions. We raised this with the committee, but it was always ignored.
Stand_with_De3b0o
Give_De3b0o_justice
De3b0o_deserves_to_represent_Syria
Also I want to thank APIO SC for the great experience this year. Love from Syria!
Upsolving is available on Eolymp.
Oh, in problem A, I found a solution with BSGS and binary search; the time and operation complexity is $$$O(\sqrt{n}\log n)$$$. But I only got $$$25$$$ points in the contest, same as brute force.
I can't believe that the correct solution depends on this method. Just do some simple optimizations.
But this is really terrible: I found the main part of the correct solution, but I couldn't get any extra points, not even $$$1$$$ point.
The author should have set a subtask with $$$n\le 10^7$$$.
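For readers unfamiliar with the name: BSGS is the baby-step giant-step meet-in-the-middle technique. As a generic illustration only, here is its textbook use for discrete logarithms; the contest task applies the square-root-splitting idea differently, and this sketch is not the intended solution:

```python
from math import isqrt

def bsgs(g, h, p):
    """Smallest x with g^x = h (mod p), or None. Assumes gcd(g, p) = 1.

    Baby steps store g^j for j < m; giant steps scan h * g^(-im),
    so a hit at (i, j) gives x = i*m + j. Both phases take O(sqrt(p)).
    """
    m = isqrt(p) + 1
    baby = {}
    cur = 1
    for j in range(m):                # baby steps: record g^j -> j
        baby.setdefault(cur, j)
        cur = cur * g % p
    factor = pow(g, -m, p)            # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                # giant steps: check h * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None
```

For instance, `bsgs(3, pow(3, 7, 1009), 1009)` recovers the exponent 7.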
Doesn't $$$O(\sqrt n\log n)$$$ give $$$78$$$ points?
EDIT: see comment below
Actually, if the complexity of each checking operation is $$$O(\sqrt{mid-l+1})$$$, the final complexity will be $$$O(\sqrt n)$$$ (refer to the Master Theorem).
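As a quick sanity check of that claim, a tiny numeric sketch (assuming only that the search interval halves each iteration, as in a standard binary search):

```python
from math import sqrt

def total_check_cost(n):
    # Sum sqrt(length) over the iterations of a binary search whose
    # interval length halves each step: sqrt(n) + sqrt(n/2) + sqrt(n/4) + ...
    total, length = 0.0, n
    while length >= 1:
        total += sqrt(length)
        length //= 2
    return total
```

The series is geometric with ratio $$$1/\sqrt 2$$$, so it converges to $$$\frac{\sqrt 2}{\sqrt 2 - 1}\sqrt n \approx 3.42\sqrt n$$$; the $$$\log$$$ factor disappears.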
Auto comment: topic has been updated by MDSPro (previous revision, new revision, compare).
According to the awards ceremony and the ranking, here are the medal cutoffs:
I got cut by the cutoff!
Here is the official ranking:
https://apio2025.uz/ranking
This APIO got me thinking. My scores were $$$68.8$$$ / $$$46$$$ / $$$16$$$. After the contest, I looked at the scores and saw that many people got $$$100$$$ points on P3. Then I upsolved the problem and realized that it is very easy, but during the contest I just couldn't get the idea. I only realized the idea was simple after looking at the scores.
I thought that was a really beginner mistake, because I don't have much experience with IOI-style contests, but then I looked at the standings again and realized that a lot of people fell for the same bait.
So I have a question: how do you avoid falling for this kind of bait? I'm trying to construct a good contest strategy for myself; it's not the first time I've fallen for this kind of bait, and I don't know how to correct it.