ICPCNews's blog

By ICPCNews, 8 months ago, In English

Salam, Codeforces!

We are glad to invite you to follow the 2025 ICPC World Finals in Baku, Azerbaijan!

ICPC WF Baku

The event will mark the conclusion of the 2024–2025 ICPC season. Nearly 140 university teams from different countries will join the finals. After many regional contests, they will now meet in Baku to fight for the title of ICPC World Champion at the 49th ICPC World Finals.

The 2025 ICPC World Finals Baku will start on September 4, 2025 at 11:00 (UTC+4). We invite everyone to join the live broadcast of the biggest programming event of the year!

Teams qualified for ICPC WF Baku:

We'll be streaming the event all week, so tune in to the broadcasts for some exciting content!

Scoreboard Problems Live broadcasts playlist
MAIN RU AR CN1 CN2 ES PT JP

There will be an online mirror available. Please register in advance at this link:

Register for the Online Mirror

Also, you can watch teams' monitors and web cameras on a separate broadcast until the scoreboard freeze. Send a team's hashtag to the chat to see your favorite!

Split screen Hashtags

Helpful links to follow the World Finals:

Good luck to all competing teams — may you have a great experience and show your best!

  • Vote: I like it
  • +373
  • Vote: I do not like it

»
8 months ago, hide # |
Rev. 2  
Vote: I like it +44 Vote: I do not like it

What happened to Yerevan State University? Based on the regional standings and a press release by the university itself, it seems they qualified for the finals, but they are not on the list.

»
8 months ago, hide # |
 
Vote: I like it +39 Vote: I do not like it

Can you guys please use a frontier model from one of your platinum sponsors to perform a complete overhaul of your website? Registering for an online mirror shouldn't be such a painful experience.

»
8 months ago, hide # |
 
Vote: I like it +5 Vote: I do not like it

I've recently added the list of teams (with universities and participants profiles clickable) to the Competitive Programming Hall Of Fame: https://cphof.org/advanced/icpc/2025

»
8 months ago, hide # |
 
Vote: I like it +83 Vote: I do not like it

Why is mirror registration so bad? Why is the "Institution" field required? I'm wondering what "Institution" the team of Petr, ksun, and Radewoosh is representing (if they will be using the same online mirror for their stream).

  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it +13 Vote: I do not like it

    We will actually use a different mirror, there is a separate one for onsite attendees.

  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it +13 Vote: I do not like it

    I'm wondering what "Institution" team of Petr, ksun and Radewoosh is representing

    MIT

»
8 months ago, hide # |
 
Vote: I like it +8 Vote: I do not like it

How do I register for the online mirror? What should I put for "institution" if I am not competing for any institution?

»
8 months ago, hide # |
 
Vote: I like it -7 Vote: I do not like it

I'm leaving this comment so I can return to it. Inshallah, I will become a contestant in the ICPC.

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Why does the team status keep showing "pending"? It's been 2 hours.

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Where are the current standings?

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

If anyone has the problem set PDF, could they share it here?

»
8 months ago, hide # |
 
Vote: I like it +3 Vote: I do not like it

How do I register for the mirror contest?

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

All the best to all the teams.

May the best team win!

»
8 months ago, hide # |
Rev. 2  
Vote: I like it +43 Vote: I do not like it

Hi all,

As one of the judges, I'd be grateful for any feedback on the problemset for the finals, both from participants and from others.

  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it +18 Vote: I do not like it
    Spoiler
    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it +28 Vote: I do not like it

      Thanks for your comments!

      Spoiler
      • »
        »
        »
        »
        8 months ago, hide # ^ |
        Rev. 3  
        Vote: I like it +18 Vote: I do not like it
        Spoiler
        • »
          »
          »
          »
          »
          8 months ago, hide # ^ |
           
          Vote: I like it 0 Vote: I do not like it
          Spoiler
  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it +13 Vote: I do not like it

    Without further detailed feedback (at this moment), I do want to say that I found this set of problems much better than last year's, and less annoying too. So I do want to thank the judges for that.

    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      Thank you!

      If you have anything more to say, I'll be happy to read it.

      • »
        »
        »
        »
        8 months ago, hide # ^ |
         
        Vote: I like it +21 Vote: I do not like it

        Hi, I have finally gotten around to solving the problems (well, all the doable ones except K, which has not been added to Kattis yet). Here is my feedback.

        I think the problems were quite cool this year. I especially liked A, B, E, and I; I think they are very good problems. J and K were also good (but please do check for coincidences more carefully next time: K appeared on AtCoder before, and WF 2024 also had a problem with many prior occurrences).

        F and L are ok, and D is meh, but at least it is not annoying. H was somewhat annoying, and probably the only problem I don't like in the set. Some people tell me that one such problem is needed (but I disagree).

        Next year, I will be participating again, and I hope to see a similar-quality set :)

  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it +20 Vote: I do not like it

    I appreciate your willingness to take the feedback!

    Spoiler
    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it +10 Vote: I do not like it
      I'm not a big fan of A due to
      • »
        »
        »
        »
        8 months ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it
        Spoiler
      • »
        »
        »
        »
        8 months ago, hide # ^ |
         
        Vote: I like it +10 Vote: I do not like it

        I don't know any such thing and didn't guess it either. It just seemed like a very obvious optimization (while constructing) to pass the values to the heavy child while extracting the ones for the light child.

        The only overall issue I see is that the problems were not able to differentiate the scoreboard well. A, H, E, and K are all of similar difficulty, with a ton of implementation.

        • »
          »
          »
          »
          »
          8 months ago, hide # ^ |
           
          Vote: I like it 0 Vote: I do not like it

          My solution is different. It seems most people solved it per subtree, merging the answers; I solved it by removing vertices one by one, constructing the answer from back to front.

          • »
            »
            »
            »
            »
            »
            8 months ago, hide # ^ |
             
            Vote: I like it 0 Vote: I do not like it

            Oh I see, yeah my solution involves merging subtrees.

          • »
            »
            »
            »
            »
            »
            8 months ago, hide # ^ |
             
            Vote: I like it 0 Vote: I do not like it

            I did the same thing, and then was wondering why the other people on the selection committee had so many conversations about making it non-quadratic :)

    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it +10 Vote: I do not like it

      Thanks!

      Spoiler
      • »
        »
        »
        »
        8 months ago, hide # ^ |
        Rev. 2  
        Vote: I like it 0 Vote: I do not like it
        Spoiler
        • »
          »
          »
          »
          »
          8 months ago, hide # ^ |
           
          Vote: I like it +20 Vote: I do not like it

          Thanks!

          Spoiler

          As for the balance: yeah, it's hard, and there are conflicting goals. On the top end of the contest, I'd say there are three things I'd like to see:

          • I'd like the winner to have a unique score, the gold medals to have a clear separation, and the medal zone to have a clear separation. In this case, the first worked out, the second was pretty close, and the third didn't work out.

          • I'd like not to have a "clear ordering" across the problems. That is, I find it bad when there's a clear ordering like "everyone who solved X will also have solved Y". The reason I think it's bad is because that means that a team trying to enter any particular zone (say, win, or get a gold medal, or get a medal) has basically only one problem they can go through, so the results are very much about whether that particular threshold problem was "well-suited" to them. This year it wasn't perfect, but it was also not awful (I'd have loved it if, for example, one of the 9-problem teams additionally got C). Note that while a bunch of teams got 9 problems, they weren't always the same 9 problems (although, again, I'd have loved even more variety).

          • One personal pet peeve of mine is that frequently at the finals nobody even attempts the hardest problem. We tried to avoid it this year, and succeeded — I think two teams got G in the end, and there were some pretty reasonable attempts at C as well.

          So, while this year's results weren't perfect, I actually liked them more than a bunch of previous years.

          Obviously, people will want different things out of the overall balance of the set; but I think it's worthwhile to explain why I, for one, am not unhappy with the final scoreboard.

        • »
          »
          »
          »
          »
          8 months ago, hide # ^ |
           
          Vote: I like it +10 Vote: I do not like it

          "H is supposed to be in the same tier as E" was an opinion the analysts had (you might've overheard it, since it was the analysts speaking on ICPC Live), but that's not an opinion we, the judges, had, and in this case it seems we were more correct on this. In the selection committee, we considered H to be on a tier similar to A, and B on a tier similar to E.

          Spoiler
      • »
        »
        »
        »
        8 months ago, hide # ^ |
        Rev. 2  
        Vote: I like it 0 Vote: I do not like it
        Spoiler
  • »
    »
    8 months ago, hide # ^ |
    Rev. 3  
    Vote: I like it +10 Vote: I do not like it

    I'll summarize what I heard from the feedback so far (and I'll try to drop in here a few times more, so if you've got feedback, I'd be super-happy to hear it).

    Generally, I think people consider the problems to be OK to good. The problems that people mention as good are pretty much spread around, but A, B, E, and I appear more than once. The problem people most dislike is H (described as "annoying", which I guess means there's a bunch of special cases you have to deal with correctly). There were also concerns about B (guessable from the pattern, although from what I've seen, teams that got it rather solved it on paper than guessed it), A (there was knowledge that could help you), K (a similar problem appeared elsewhere), and G (which has been described as an edge-case grind).

    On the composition, I think the main concern is about the medal-zone — the fact that 14 teams (in places 4-17) ended up with 9 problems (and most of them with the same set, there were 2 exceptions).

    Thank you all for the feedback; and if anyone else wants to add their input, I'll be happy :)

    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      Hey, when will the test data / solution rundowns (in video/PDF form, like previous years) be out? I sent you a DM, but I'm replying here in case you don't check them.

    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it +10 Vote: I do not like it

      A few thoughts I had:

      I don't think the tie between 4th-17th is as bad as some people are making it out to be. This blog makes some valid procedural suggestions, but it also makes the tie out to be some kind of inexcusable error, which feels unnecessarily harsh (as one comment notes, many rounds which do follow all of the advice in that blog end up with similar discrepancies between the intended and actual score distribution). Even if the judges can perfectly calibrate the difficulty of the set and design it such that on average, twelve teams solve at least nine problems and four teams get to ten, there's enough variance that an outcome with three teams on ten solves and seventeen with at least nine doesn't seem that improbable.

      I liked the first nine problems (all but B, C, G); I thought they were well above average for the easy-medium end of an ICPC set (and a significant improvement over 2024). I do think the easy end was a bit too long: all but ten teams solved five problems, significantly more than an average year. I liked all of the five easy problems (D was okay, L was gimmicky but not bad, F, I, and J were all great), but I would have saved one of them for a different year to give teams in the top half more time to work on the problems challenging to them.

      I disagree with the criticism of H: I don't think it's that annoying to code / there aren't that many edge cases, and I like having one problem in an ICPC set where implementation is the main challenge to create interesting strategic decisions and give weaker teams something to work on.

      I haven't solved C or G, but if the solution to C relies on geometry, I would be a little sad that both of the potential first place-decider problems were geometry (though maybe part of that is my anti-geometry bias...). One additional hard problem, maybe to replace an easy/medium problem, might have helped the balance.

      Thanks for the contest!

      • »
        »
        »
        »
        7 months ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it

        Thank you for the comments (and I'm glad you appreciated the set). I agree that, looking back, we could've saved, say, L for a future finals; TBH, I think that at least I underestimated the "weaker" teams in the finals (and was worried that if we ran without L, we might end up with 5+ teams on a zero score; a worry that was obviously unfounded).

        I'm not sure I'd want to add another medium/hard problem, though. One thing I disliked about a lot of past finals sets is that we had a "boss" problem (like Kindergarten / G in 2024, Recurrence / E in 2023, Archeo / Z in 2022, etc.) and nobody even read it (so the title got decided by speed on the medium-hard problems, rather than by the ability to solve something actually hard). Instead, this year we had fewer time-sinks in the set and two "boss" problems (C and G); and the winners actually did solve one of them (and there were serious attempts at C as well). And I really liked that about this set.

        (obviously, people's opinions might vary, I just want to point out that this tradeoff between "a bunch of medium-hard problems" and "having the boss problem actually be attacked" is a real one)

        • »
          »
          »
          »
          »
          7 months ago, hide # ^ |
           
          Vote: I like it 0 Vote: I do not like it

          I agree that this is a real tradeoff. I will note that not having solved C, if C is meaningfully harder than G or its implementation involves computational geometry, that would suggest the hard end of the set substantially favors problems strong at geometry relative to teams better at other topics (which I think is bad for several reasons, including that it leaves teammates who don't specialize in geometry without much to do while the one who does solves C/G). I'd be happier if the hard end of the set included multiple problems that test substantially different skillsets. This is something I thought the Astana set did well; teams could reasonably choose any of EHK as their ninth problem to attempt.

          • »
            »
            »
            »
            »
            »
            7 months ago, hide # ^ |
             
            Vote: I like it 0 Vote: I do not like it

            I agree (and we discussed that) that it's somewhat unfortunate that both the hardest problems were at least geometry-adjacent. They were very very different (in G, the geometric aspects were primarily about planar geometry, while in C the main point was things like finding the normal to a plane in 3D), but I agree this is a weakness of the set (which resulted from the set of problems we were choosing from).

»
8 months ago, hide # |
 
Vote: I like it +10 Vote: I do not like it

Predicted ratings for all problems?

  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it -10 Vote: I do not like it

    Really curious, as I want to know how the top teams' perception of the hardest problems of the ICPC set compares to 3300–3500-rated problems.

»
8 months ago, hide # |
 
Vote: I like it +3 Vote: I do not like it

Who won?

»
8 months ago, hide # |
Rev. 2  
Vote: I like it +9 Vote: I do not like it

Could St. Petersburg get 11 points? UPD: They did!

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

The scoreboard freeze should be over, since the contest has ended. So where can we watch the final standings or the scoreboard unfreezing?

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Does anybody know THU's rank and TUK's rank?

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Why can't I register for the online mirror?

»
8 months ago, hide # |
Rev. 2  
Vote: I like it +61 Vote: I do not like it

Saratov State University got medals!!! Their first medals since 2011. Congrats!

»
8 months ago, hide # |
 
Vote: I like it +8 Vote: I do not like it

As usual, the award ceremony stream was far from good. The worst moments were when they stopped showing the standings while resolving the medal-deciding submissions, and when they displayed wrong info for teams, including the World Champions. Yes, Saint Petersburg State University is different from ITMO: it's their 5th title, while ITMO already has 7. But I am 100% sure they deserve the honor of having their info mentioned correctly.

»
8 months ago, hide # |
 
Vote: I like it +4 Vote: I do not like it

IIT Indore has done so well! #61

»
8 months ago, hide # |
Rev. 2  
Vote: I like it +96 Vote: I do not like it

The overall style of this year's problems leans more toward thinking compared to previous years, which is a positive change. I won’t comment on the overall problem set (as it might seem like I’m making excuses for my own failure), but I must point out that the judges and technical team were thoroughly unprepared—their efforts did not match the hard work contestants put in.

Dress Rehearsal:
There were numerous issues during the warm-up. Problems F and L were misconfigured initially, making it impossible for any team to pass them. With the exception of interactive problems, the time limits for all other problems were incorrectly set—seemingly uniformly to 30 seconds. While issues during Dress Rehearsal are understandable (especially since this was the first time in recent years that PC2 was used at WF), and in fact helped identify problems before the main contest, the post-rehearsal Q&A was disappointing. The speaker avoided addressing these critical issues and instead focused on minor complaints like the air conditioning being too warm or the tables being too sharp, simply ending with ‘sorry, we can’t.’ Contestants were left completely in the dark—was it a problem configuration error, or was the judging system really 30 times faster than local testing?

Main Contest:
This year, the testing tool for the interactive problem was once again incorrect. To my knowledge, at WF 2024 in Astana, Problem A had incorrect data, and the interactive problem’s testing tool was also flawed. It’s truly disappointing to see such issues occur two years in a row. Even if the error in this year’s interactor was easy to spot and fix, it still caused confusion and affected our team’s morale. We couldn’t tell whether the issue was in our code or in the judge’s interactor. If the jury’s approach is to attach lengthy disclaimers to testing tools and then simply issue a “We are sorry” clarification when things go wrong during the contest, it might be better not to provide testing tools at all.

Contestants have trained hard all year, participating in dozens or even hundreds of practice contests. The problem setters had a full year to prepare this competition, yet such basic mistakes still occurred. It’s deeply disappointing. I hope future World Finals can learn from this and do better.

  • »
    »
    8 months ago, hide # ^ |
    Rev. 2  
    Vote: I like it +40 Vote: I do not like it

    I'm collecting feedback on the problemset above; so I'd definitely be grateful if you did decide to comment on the problemset.

    I don't want to comment on the technical team's work, nor on their responses during the Q&A.

    I definitely want to apologize in the name of judges for the issue with the interactive testing tool. One of the takeaways from this contest for me is that we need better testing capabilities for testing tools in our package verification setup — making mistakes is a part of programming; but testing is the way to prevent mistakes from having a significant impact. That said, you seem dissatisfied with the in-contest response to this issue, once discovered. AFAIU, there was a public announcement (with an apology), an instruction on how to fix the tool, and a fixed version available to download. Would you have expected something more / different?

    I also hope that future World Finals will learn from this and do better (and collecting feedback is a part of this).

    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it +8 Vote: I do not like it

      Thank you for your response. The handling during the competition was generally acceptable, aside from the somewhat slow reply speed—which is understandable, as ensuring the corrected version was completely accurate took time. What I hope for more is that the Judges can improve the problem-setting process. As you mentioned, everyone makes mistakes, and that’s understandable. However, the problem development process in competitive programming is already quite mature. I find it hard to believe that the WF, being the world’s most prestigious contest, has repeatedly encountered such issues. Perhaps it’s time for the WF Judges to consider updating their methods to keep up with the times.

      Regarding the problemset, I think the overall style is acceptable. However, it seems the Judges misjudged the difficulty of Problems B and E. As a result, whether teams won medals—and what level of medal—depended almost entirely on their performance on E. I believe having more parallel problems of varying types for contestants to choose from would make the competition more balanced and interesting. The perceived difficulty of B largely depends on the approach taken. I suspect that with more test solvers, more comprehensive feedback could have been gathered. This type of problem isn’t inherently bad, but for most contestants, B proved too unintuitive. This forced the medal contenders to focus largely on solving E as quickly and accurately as possible, which made the fight for medals somewhat monotonous.

  • »
    »
    8 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    What was the problem with the interactor?

    • »
      »
      »
      8 months ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      The interactor's initial instructions state that the input uses 1-indexing, but after applying modulo-n operations, it effectively becomes 0-indexed.
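
      That kind of off-by-one can be sketched in a few lines. This is a hypothetical illustration (the actual tool's code and the real problem size are not shown here, so `n` and the function names are made up), showing how a bare `% n` silently turns a documented 1-based index into a 0-based one:

```python
# Hypothetical sketch of the reported bug: indices are documented as 1-based,
# but a bare "% n" makes them 0-based once they wrap around.
n = 5  # example size, not from the actual problem

def wrap_buggy(i: int) -> int:
    # What the tool reportedly did: i = n wraps to 0,
    # which is no longer a valid 1-based index.
    return i % n

def wrap_fixed(i: int) -> int:
    # Keeps the index 1-based: shift to 0-based, wrap, shift back.
    return (i - 1) % n + 1

print(wrap_buggy(5))  # 0
print(wrap_fixed(5))  # 5
print(wrap_fixed(6))  # 1
```

      The fix is the usual shift-wrap-shift pattern; the buggy version only misbehaves on exact multiples of `n`, which is presumably why it was easy to miss.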

»
8 months ago, hide # |
 
Vote: I like it +2 Vote: I do not like it

Will there be a CF gym mirror now since the contest is over?

»
8 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

What is the order of the problems in terms of difficulty?

»
8 months ago, hide # |
 
Vote: I like it +10 Vote: I do not like it

Hello! I don't know where the right place to ask this question is, so I'll just ask here. Will the judge for the ICPC Challenge be reopened at some point? I have another idea and can't wait to see how many points it would get. Uploading the home directories would also be enough, because they contain the test case generator.

»
7 months ago, hide # |
 
Vote: I like it -6 Vote: I do not like it

easy problems