Now that the results have been released, let's congratulate all the teams that made it to the World Finals, and especially those who won medals and the trophy.
Disclaimer 1: I respect all the judges' efforts; this blog is purely based on my observations and is not intended to be personal. Making a problemset is not easy, and your work is greatly appreciated.
Disclaimer 2: I haven't read the problemset, which means I know nothing about the problems so far. Therefore, feel free to consider this as nonsense.
Disclaimer 3: I don't know anything about the problem-setting pipeline for the World Finals.
OK, I only want to emphasize one phenomenon from this contest: Rank 4 (Gold) has the same number of solved problems as Rank 17 (no medal). What's more, all of these teams solved exactly the same subset of problems.
The last time I saw this was in last year's NAC (North America Championship): https://nac.icpc.global/scoreboard/2024/. There, every team from Rank 7 through Rank 21 had the same number of solves, with almost the same subset of problems.
What's wrong with this, then? In NAC, teams compete for approximately 15 slots to advance to the World Finals, so penalty decided who got a slot. In this year's World Finals, the situation was even worse: penalty decided whether you got a medal, and with a small enough penalty you could land in the top 4 for a gold medal!
To put it bluntly: contestants' efforts deserve a better problemset.
I understand that penalties are part of the game, and in fact they are really important; there have been many times in history when penalty decided the trophy. But I am afraid this is the first time (or one of very few) that such a poor distribution of solves has occurred in World Finals history.
What does this mean? It means this problemset failed to separate teams at the important cut-offs. Perhaps the judges did try to predict the solve distribution and balance the contest, but for one of the following two reasons they did not succeed:
- Not enough problems in the pool to select from.
- Being unaware of current participants' ability levels.
Either reason is worrying. As far as I know, most judges have not been active in the community for a long time, which makes it hard for them to keep up with the current "fashion" of problems and to know the competitive teams in each season. This was not a big issue in most past years. However, if we consider the contest a serious selection of the world's 12 best teams, then the judges should do better and learn from this year's experience. Maybe a more serious investigation is needed. Maybe more judges are needed to bring in fresh blood.
Or maybe the problem-setting pipeline should involve serious testing. When setting Chinese ICPC regionals, we invite reliable teams to test the contest in advance to check whether the predicted distribution matches reality. I once suggested this to the NAC judges; they told me it is unlikely to happen there because of the risk of leaks.
Whatever really happened, I do feel this is a lesson for the ICPC World Finals. More effort should be put into separating teams properly, and any such effort would be greatly appreciated. If any judge sees this, feel free to share your thoughts.
