Salam Codeforces!
We are glad to invite you to follow the 2025 ICPC World Finals in Baku, Azerbaijan!

The event will mark the conclusion of the 2024–2025 ICPC season. Nearly 140 university teams from around the world will join the finals. After many regional contests, they will now meet in Baku to fight for the title of ICPC World Champion at the 49th ICPC World Finals.
The 2025 ICPC World Finals Baku will start on September 4, 2025 at 11:00 (UTC+4). We invite everyone to join the live broadcast of the biggest programming event of the year!
Teams qualified for ICPC WF Baku:
We'll be streaming the event all week, so tune in to the broadcasts for some exciting content!
There will be an online mirror available. Please register in advance at this link:
Also, you can watch teams' monitors and web cameras on a separate broadcast before the scoreboard freeze. Send a team's hashtag to the chat to see your favorite!
Helpful links to follow the World Finals:
- Official ICPC World Finals website
- Schedule of Events
- Live broadcast
- Photo Gallery
- Teams' submit reactions
- Brochure
Good luck to all competing teams — may you have a great experience and show your best!

What happened to Yerevan State University? Based on the regional standings and a press release by the university itself, it seems like they qualified for the finals, but they are not on the list.
https://en.wikipedia.org/wiki/Nagorno-Karabakh_conflict
Azerbaijan and Armenia (where Yerevan is located) have a really bad relationship (they have fought a couple of wars in the last five years). And Azerbaijan is holding several Armenian citizens from Artsakh captive right now. So it is quite dangerous for Armenian citizens to visit Azerbaijan, or maybe they simply couldn't get government permission to visit Azerbaijan.
can you guys please use a frontier model from one of your platinum sponsors to perform a complete overhaul of your website? registering for an online mirror shouldn’t be such a painful experience
I've recently added the list of teams (with universities and participants profiles clickable) to the Competitive Programming Hall Of Fame: https://cphof.org/advanced/icpc/2025
Thanks a lot! Added this link to the post.
Why is mirror registration so bad? Why is the "Institution" field required? I'm wondering what "Institution" the team of Petr, ksun and Radewoosh is representing (if they will be using the same online mirror for their stream).
We will actually use a different mirror; there is a separate one for onsite attendees.
MIT
How do I register for the online mirror? What should I put for "institution" if I am not competing as any institution?
I leave this comment to return to it. Inshallah, I will become a contestant in the ICPC.
why does the team status keep showing "pending"? it's been 2 hours
where are the current standings??
here it is https://worldfinals.icpc.global/scoreboard/2025/index.html
If anyone has the problem set PDF, could they share it here?
here it is https://worldfinals.icpc.global/problems/2025/problemset.pdf
Thanks!
How do I register for the mirror contest?
All the best to all the teams
May the best team win!!!!
Hi, all,
as one of the judges, I'd be grateful for any feedback on the problemset for the finals, both from participants and from others.
I only know problems B, D, H, I, J, K, L.
I think replacing H with something else could have made the problemset much better.
Thanks for your comments!
Solution to problem H:
There were a few annoying details: none of them was hard to handle on its own, but overall they made the problem very tricky.
So I think the implementation was much heavier than the ideas. Also, the sample tests were weak.
If you use a lower bound of $$$10^7$$$, you can actually ignore the case of discarding lower values. In fact, proving the better bound makes the implementation harder, and if you used the naive bound of $$$10^7$$$, you could have passed H without even realising the above case of discarding values, which was the biggest one imo.
Without further detailed feedback (at this moment), I do want to comment that I found this set of problems much better than last year's, and less annoying too. So I do want to thank the judges for that.
Thank you!
If you have anything more to say, I'll be happy to read it.
Hi, I have finally gotten around to solving the problems (well, all the doable ones except K, which has not been added to Kattis yet). Here is my feedback.
I think the problems were quite cool this year. I especially liked A, B, E, and I think they are very good problems. J and K were also good (but please do check for coincidences more carefully next time: K appeared on AtCoder before, and WF 2024 also had a problem with many, many prior occurrences).
F and L are ok, and D is meh, but at least it is not annoying. H was somewhat annoying, and probably the only problem I don't like in the set. Some people tell me that one such problem is needed (but I disagree).
Next year, I will be participating again, and I hope to see a similar-quality set :)
I appreciate your willingness to take the feedback!
E was very cool, thank you so much! F was easy, but I enjoyed it too.
I did not like B at all: some standard knowledge, implementing a brute force, and then guessing from the pattern. This is especially hard in an ICPC environment where computers are a limited resource. So B was not only bad by itself, but it also had a very negative effect on the scoreboard.
I don't like that the hard problems (C, G) are both kind of 3D geometry tasks, but I don't know them well, so maybe I'm wrong.
For H, the contestants found it manageable, and I consider it the fourth easiest of all. I feel like the judges overestimated it, and the issues with B, C, G, H seem to be the culprit for the balance problems. The problem itself is ok.
For K, given how convoluted the setting was, I guessed that it should be something dumb, which was true. It's still ok, I guess.
D and I were kinda cool.
A, J, L were ok.
You basically have to guess or know that operations on that kind of heap have good amortized complexity. Also, knowing that a skew heap is a special type of leftist heap helps a ton.
When I solved the problem (as part of the selection process), it was kinda obvious to me that since it's a named data structure, it'll have logarithmic insertion; but you're right, that's something you have to guess. We might as well have said it outright in the problem statement (since I agree that expecting people to prove this in a contest environment seems unreasonable).
I don't know any such thing and didn't guess it either. It just seemed like a very obvious optimization (while constructing) to pass the values to the heavy child while extracting the ones for the light child.
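For readers who haven't met the structure, here is a minimal skew-heap sketch in C++ (my own illustration, not taken from the problem or its reference solution): merge always recurses along the right spine and then unconditionally swaps the children, which is exactly what yields the amortized $$$O(\log n)$$$ bounds discussed above.

```cpp
#include <cstdio>
#include <utility>

// Min-heap node of a skew heap; no rank/size fields are needed.
struct Node {
    long long val;
    Node *l = nullptr, *r = nullptr;
    Node(long long v) : val(v) {}
};

Node* merge(Node* a, Node* b) {
    if (!a) return b;
    if (!b) return a;
    if (b->val < a->val) std::swap(a, b);  // keep the smaller root on top
    a->r = merge(a->r, b);                 // always merge along the right spine
    std::swap(a->l, a->r);                 // unconditional swap: the whole trick
    return a;
}

Node* push(Node* h, long long v) { return merge(h, new Node(v)); }

Node* pop(Node* h) {                       // caller reads h->val before popping
    Node* rest = merge(h->l, h->r);
    delete h;
    return rest;
}

int main() {
    Node* h = nullptr;
    for (long long v : {5LL, 1LL, 3LL}) h = push(h, v);
    while (h) { printf("%lld ", h->val); h = pop(h); }  // prints: 1 3 5
    puts("");
}
```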
The only overall issue I see is that the problems were not able to differentiate the scoreboard well. A, H, E, K are all of similar difficulty, with a ton of implementation.
My solution is different. It seems most people solved it per subtree, merging the answers; I solved it by removing vertices one by one, constructing the answer from back to front.
Oh I see, yeah my solution involves merging subtrees.
I did the same thing, and then was wondering why the other people on the selection committee had so many conversations about making it non-quadratic :)
Thanks!
I would be very interested in knowing, out of the teams that solved B, how many actually found a strategy (see, e.g., the comment above by TheScrasse), and how many guessed from the pattern. Looking at submissions during the contest, I believe most solutions actually involved knowing what a strategy looks like, because when you look at patterns, the easiest pattern to see is that 8p, for p prime and as large as possible, works (while most teams submitted 2p with n/4 < p < n/3, which I think is easier to justify, but harder to spot by looking at bunches of candidates).
I don't think G can be really described as 3D geometry. While C and G both have geometric components, I'd say they are very different.
I'd like to know what you mean by "balance issues" and "negative effect on the scoreboard". FWIW, the judges didn't consider H to be super-hard; the fourth-hardest problem was, I think in everybody's opinion, E.
I think I overheard that H was expected to be one of the harder problems, in the same tier as E. Maybe I'm mistaken.
By balance issue, I'm referring to the nine-solve teams in ranks 4 to 17.
I submitted 2p for prime p > n/4, and it was purely deduced by pattern matching and asserts.
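A small sketch of that workflow (my own illustration; the brute force itself is problem-specific and not shown): pipe "n answer" pairs produced by your brute force into a checker that flags every n whose answer is not of the form 2p with p prime and p > n/4, and let the printed warnings tell you as soon as the guess breaks.

```cpp
#include <cstdio>

// Deterministic trial-division primality test, fine for small pattern checks.
bool is_prime(long long x) {
    if (x < 2) return false;
    for (long long d = 2; d * d <= x; d++)
        if (x % d == 0) return false;
    return true;
}

// Reads "n answer" pairs (answers coming from a problem-specific brute force)
// and reports every n where the answer is NOT of the form 2p, p prime, 4p > n.
int main() {
    long long n, ans;
    while (scanf("%lld %lld", &n, &ans) == 2) {
        long long p = ans / 2;
        bool matches = (ans % 2 == 0) && is_prime(p) && 4 * p > n;
        if (!matches)
            printf("pattern breaks at n = %lld (answer %lld)\n", n, ans);
    }
}
```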
Thanks!
So, yea, the fact that 14 teams in the medal zone had the same number of problems solved is sub-optimal. On the other hand:
For B — again, I'd be really interested in seeing more data on how many folks did pattern-matching vs how many folks did the solving. I probably disagree with you that solving via pattern-matching makes for a bad problem by itself (data-digging is a valid skill in a contest environment; and it was totally possible to just solve this the "standard way"), but I would be happier if most solutions were of the "standard" rather than "data-digging" type.
(also, I'm pretty convinced that if you had to, for some reason, justify why 2p for p>n/4 works, you would be able to do this by just playing the game out)
For the balance — yea, it's hard, and there are conflicting goals. On the top end of the contest, I'd say there are three things I'd like to see:
I'd like the winner to have a unique score, the gold medals to have a clear separation, and the medal zone to have a clear separation. In this case, the first worked out, the second was pretty close, and the third didn't work out.
I'd like not to have a "clear ordering" across the problems. That is, I find it bad when there's a clear ordering like "everyone who solved X will also have solved Y". The reason I think it's bad is because that means that a team trying to enter any particular zone (say, win, or get a gold medal, or get a medal) has basically only one problem they can go through, so the results are very much about whether that particular threshold problem was "well-suited" to them. This year it wasn't perfect, but it was also not awful (I'd have loved it if, for example, one of the 9-problem teams additionally got C). Note that while a bunch of teams got 9 problems, they weren't always the same 9 problems (although, again, I'd have loved even more variety).
One personal pet peeve of mine is that frequently at the finals nobody even attempts the hardest problem. We tried to avoid it this year, and succeeded — I think two teams got G in the end, and there were some pretty reasonable attempts at C as well.
So, while this year's results weren't perfect, I actually liked them more than a bunch of previous years.
Obviously, people will want different things out of the overall balance of the set; but I think it's worthwhile to explain why I, for one, am not unhappy with the final scoreboard.
"H is supposed to be in the same tier as E" was an opinion the analysts had (you might've overheard it, since it was the analysts speaking on ICPC Live, but that's not an opinion we, judges, had, and in this case it seems we were more correct on this). In the selection committee, we considered H to be on a similar tier to A; and B on a similar tier to E.
I think "2p for p > n/4" (which I think I saw one team submit during the contest) sounds more likely to have been guessed than "2p for p in (n/4, n/3])", which is what the majority of teams submitted. FWIW, I've asked two teams that solved B (SPbSU and BYU), and both of them got the strategy, rather than brute-forcing. But I agree it can be brute-forced.
I think we found a strategy because we were using the PC to implement something else. If we got AC on other problems earlier, we would have likely tried to use the PC to find the pattern.
I'll summarize what I heard from the feedback so far (and I'll try to drop in here a few times more, so if you've got feedback, I'd be super-happy to hear it).
Generally, I think people consider the problems to be OK to good. The problems which people mention as good are pretty much spread around, but A, B, E and I appear more than once. The problem that people most dislike is H (described as "annoying", which I guess means that there's a bunch of special cases you have to deal with correctly). There were also concerns about B (guessable from the pattern, although from what I've seen, the teams that got it rather solved it on paper than guessed), A (there was knowledge that could help you), K (a similar problem appeared elsewhere) and G (which has been described as an edge-case grind).
On the composition, I think the main concern is about the medal zone: the fact that 14 teams (in places 4–17) ended up with 9 problems (and most of them with the same set; there were 2 exceptions).
Thank you all for the feedback; and if anyone else wants to add their input, I'll be happy :)
Hey, when will the test data / solution rundowns (in the form of a video/PDF like in previous years) be out? I sent you a DM, but I'm replying here in case you don't check them.
I'm not the right person to ask; I think this is more a question for the analytics team (e.g., misof or PavelKunyavskiy)?
My understanding is ICPCNews posts the videos that we on analytics recorded (and they've been uploaded for a few days).
I thought the judges provide the PDF summary. The data is already on the ICPC website.
Hi, all problem solution videos have already been uploaded to our YouTube channel.
A few thoughts I had:
I don't think the tie between 4th-17th is as bad as some people are making it out to be. This blog makes some valid procedural suggestions, but it also makes the tie out to be some kind of inexcusable error, which feels unnecessarily harsh (as one comment notes, many rounds which do follow all of the advice in that blog end up with similar discrepancies between the intended and actual score distribution). Even if the judges can perfectly calibrate the difficulty of the set and design it such that, on average, twelve teams solve at least nine problems and four teams get to ten, there's enough variance that an outcome with three teams on ten solves and seventeen with at least nine doesn't seem that improbable (see the toy simulation after this comment).
I liked the first nine problems (all but B, C, G); I thought they were well above average for the easy-medium end of an ICPC set (and a significant improvement over 2024). I do think the easy end was a bit too long: all but ten teams solved five problems, significantly more than an average year. I liked all of the five easy problems (D was okay, L was gimmicky but not bad, F, I, and J were all great), but I would have saved one of them for a different year to give teams in the top half more time to work on the problems challenging to them.
I disagree with the criticism of H: I don't think it's that annoying to code / there aren't that many edge cases, and I like having one problem in an ICPC set where implementation is the main challenge to create interesting strategic decisions and give weaker teams something to work on.
I haven't solved C or G, but if the solution to C relies on geometry, I would be a little sad that both of the potential first place-decider problems were geometry (though maybe part of that is my anti-geometry bias...). One additional hard problem, maybe to replace an easy/medium problem, might have helped the balance.
Thanks for the contest!
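To make the variance point above concrete, here is a toy Monte Carlo with entirely made-up numbers (20 contending teams, each solving each of 12 problems independently with probability 0.75, so about 9 expected solves per team). Even with the difficulty fixed, the count of teams reaching 9+ solves swings over a wide range from run to run.

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(12345);
    std::bernoulli_distribution solve(0.75);  // made-up per-problem solve odds
    const int TRIALS = 100000;
    int hist[21] = {};
    for (int t = 0; t < TRIALS; t++) {
        int big = 0;  // teams reaching 9+ solves in this simulated contest
        for (int team = 0; team < 20; team++) {
            int solved = 0;
            for (int p = 0; p < 12; p++) solved += solve(rng);
            if (solved >= 9) big++;
        }
        hist[big]++;
    }
    // Distribution of "how many teams got 9+ solves" across simulated contests.
    for (int k = 0; k <= 20; k++)
        if (hist[k])
            printf("%2d teams with 9+ solves: %5.2f%%\n",
                   k, 100.0 * hist[k] / TRIALS);
}
```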
Thank you for the comments (and I'm glad you appreciated the set). I agree that, looking back, we could've saved, say, L for a future finals; TBH I think that at least I underestimated the "weaker" teams in the finals (and was worried that if we ran without L, we might end up with 5+ teams with a zero score; a worry that was obviously unfounded).
I'm not sure I'd want to add another medium/hard problem, though. One thing I disliked about a lot of the finals sets is that we had a "boss" problem (like Kindergarten / G in 2024, Recurrence / E in 2023, Archeo / Z in 2022, etc.) and nobody even read it (and the title got decided by speed on the medium-hard problems, rather than by the ability to solve something actually hard). Instead, this year, we had fewer time-sinks in the set, and two "boss" problems (C and G); and the winners actually did solve one of them (and there were serious attempts at C as well). And I really liked that about this set.
(obviously, people's opinions might vary, I just want to point out that this tradeoff between "a bunch of medium-hard problems" and "having the boss problem actually be attacked" is a real one)
I agree that this is a real tradeoff. I will note that not having solved C, if C is meaningfully harder than G or its implementation involves computational geometry, that would suggest the hard end of the set substantially favors problems strong at geometry relative to teams better at other topics (which I think is bad for several reasons, including that it leaves teammates who don't specialize in geometry without much to do while the one who does solves C/G). I'd be happier if the hard end of the set included multiple problems that test substantially different skillsets. This is something I thought the Astana set did well; teams could reasonably choose any of EHK as their ninth problem to attempt.
I agree (and we discussed that) that it's somewhat unfortunate that both the hardest problems were at least geometry-adjacent. They were very very different (in G, the geometric aspects were primarily about planar geometry, while in C the main point was things like finding the normal to a plane in 3D), but I agree this is a weakness of the set (which resulted from the set of problems we were choosing from).
Predicted ratings for all problems?
Really curious, as I want to know how the top teams would compare the hardest problems of the ICPC set to a 3300-3500 rated problem.
Who won?
Could St. Petersburg get 11 pts? UPD: They did!
Based on their response to David, they did solve 11 problems.
The standings freeze should be over, since the contest has ended. So where can we watch the final standings, or the resolution of the frozen standings?
2025 ICPC World Finals Awards Ceremony on YouTube or Facebook
does anybody know THU's rank and TUK's rank?
Tsinghua placed 4th
Why can't I register for the online mirror?
Saratov State University got medals!!! Their first medals since 2011. Congrats!
As usual, the award ceremony stream was far from good. The worst moments were when they stopped showing the standings while resolving the medal-deciding submissions, and when they displayed wrong info for teams, including the World Champions. Yes, Saint Petersburg State University is different from ITMO: it's their 5th title, while ITMO already has 7. But I am 100% sure they deserve the honor of having their info mentioned correctly.
IIT Indore has done so well. #61
that's actually above the 50th percentile
It's #72. Look at the updated standings.
The overall style of this year's problems leans more toward thinking compared to previous years, which is a positive change. I won’t comment on the overall problem set (as it might seem like I’m making excuses for my own failure), but I must point out that the judges and technical team were thoroughly unprepared—their efforts did not match the hard work contestants put in.
Dress Rehearsal:
There were numerous issues during the warm-up. Problems F and L were misconfigured initially, making it impossible for any team to pass them. With the exception of interactive problems, the time limits for all other problems were incorrectly set—seemingly uniformly to 30 seconds. While issues during Dress Rehearsal are understandable (especially since this was the first time in recent years that PC2 was used at WF), and in fact helped identify problems before the main contest, the post-rehearsal Q&A was disappointing. The speaker avoided addressing these critical issues and instead focused on minor complaints like the air conditioning being too warm or the tables being too sharp, simply ending with ‘sorry, we can’t.’ Contestants were left completely in the dark—was it a problem configuration error, or was the judging system really 30 times faster than local testing?
Main Contest:
This year, the testing tool for the interactive problem was once again incorrect. To my knowledge, at WF 2024 in Astana, Problem A had incorrect data, and the interactive problem’s testing tool was also flawed. It’s truly disappointing to see such issues occur two years in a row. Even if the error in this year’s interactor was easy to spot and fix, it still caused confusion and affected our team’s morale. We couldn’t tell whether the issue was in our code or in the judge’s interactor. If the jury’s approach is to attach lengthy disclaimers to testing tools and then simply issue a “We are sorry” clarification when things go wrong during the contest, it might be better not to provide testing tools at all.
Contestants have trained hard all year, participating in dozens or even hundreds of practice contests. The problem setters had a full year to prepare this competition, yet such basic mistakes still occurred. It’s deeply disappointing. I hope future World Finals can learn from this and do better.
I'm collecting feedback on the problemset above; so I'd definitely be grateful if you did decide to comment on the problemset.
I don't want to comment on the technical team's work, nor on their responses during the Q&A.
I definitely want to apologize in the name of judges for the issue with the interactive testing tool. One of the takeaways from this contest for me is that we need better testing capabilities for testing tools in our package verification setup — making mistakes is a part of programming; but testing is the way to prevent mistakes from having a significant impact. That said, you seem dissatisfied with the in-contest response to this issue, once discovered. AFAIU, there was a public announcement (with an apology), an instruction on how to fix the tool, and a fixed version available to download. Would you have expected something more / different?
I also hope that future World Finals will learn from this and do better (and collecting feedback is a part of this).
Thank you for your response. The handling during the competition was generally acceptable, aside from the somewhat slow reply speed—which is understandable, as ensuring the corrected version was completely accurate took time. What I hope for more is that the Judges can improve the problem-setting process. As you mentioned, everyone makes mistakes, and that’s understandable. However, the problem development process in competitive programming is already quite mature. I find it hard to believe that the WF, being the world’s most prestigious contest, has repeatedly encountered such issues. Perhaps it’s time for the WF Judges to consider updating their methods to keep up with the times.
Regarding the problemset, I think the overall style is acceptable. However, it seems the Judges misjudged the difficulty of Problems B and E. As a result, whether teams won medals—and what level of medal—depended almost entirely on their performance on E. I believe having more parallel problems of varying types for contestants to choose from would make the competition more balanced and interesting. The perceived difficulty of B largely depends on the approach taken. I suspect that with more test solvers, more comprehensive feedback could have been gathered. This type of problem isn’t inherently bad, but for most contestants, B proved too unintuitive. This forced the medal contenders to focus largely on solving E as quickly and accurately as possible, which made the fight for medals somewhat monotonous.
What was the problem with the interactor?
The interactor's initial instructions state that the input uses 1-indexing, but after applying modulo-n operations, it effectively becomes 0-indexed.
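Concretely, a reconstruction of that bug class (not the actual interactor code): with 1-indexed positions, a plain (i + k) % n maps position n to 0, so the wrap-around has to shift down to 0-indexing and back up.

```cpp
#include <cassert>

// Advancing a 1-indexed position i (1..n) by k steps around a cycle:
// shift to 0-indexing before the modulo, then shift back.
int advance_1indexed(int i, int k, int n) {
    return (i - 1 + k) % n + 1;  // correct: result stays in 1..n
}

int advance_buggy(int i, int k, int n) {
    return (i + k) % n;          // wrong: position n maps to 0 (0-indexed)
}

int main() {
    int n = 5;
    assert(advance_1indexed(5, 0, n) == 5);  // position n stays at n
    assert(advance_buggy(5, 0, n) == 0);     // the off-by-one in action
}
```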
Will there be a CF gym mirror now since the contest is over?
I wouldn't bet on it, since the 2024 version hasn't been added to the gym yet. I assume it will be added to Kattis or QOJ soon.
Problems have been posted to Kattis: https://open.kattis.com/problem-sources/ICPC%20World%20Finals%202025
What is the order of problems in terms of difficulty?
Hello! I don't know where the right place to ask this question is, so I'll just ask here. Will the judge for the ICPC Challenge be reopened at some point? I have another idea and can't wait to see how many points it would get. Uploading the home directories would also be enough, because they contain the test case generator.
easy problems