Disclaimer: This is a rant about Meta Hacker Cup and may not contain any useful information.
Meta Hacker Cup is one of the biggest annual programming competitions, but it has the strangest submission format, unlike any other online judge. Why does it have to be so different? Let’s take a look at how Meta Hacker Cup 2024 Round 1 went for maomao90.
The time now is 1:00 AM in Singapore. The contest starts, and maomao90 begins solving the problems.
The time now is 1:41 AM. maomao90 has solved problems A, B, and C without much trouble and starts working on problem D. After quickly coming up with a theoretical solution, maomao90 begins coding.
The time now is 2:05 AM. The code is ready and passes the sample tests. maomao90 proceeds to validate the solution.
The time now is 2:06 AM. Validation passes, and the input zip file is downloaded.
The time now is 2:07 AM. maomao90 runs the code on the final test.
Oh no! How did the code pass validation tests but fail the final test with an assertion error? Panicking, maomao90 scrambles to debug the code.
The time now is 2:11 AM. Five minutes have passed since downloading the zip file. maomao90 fails to debug his code and is no longer allowed to submit problem D. maomao90 wasted 30 minutes of his time and is left frustrated and in tears.
Problem 1: Why is the validation test so weak?
Is the validation test intentionally weak, or is it a mistake by the problem setter?
If it’s intentional, what's the goal? To make participants suffer? Brute-force algorithms often pass validation easily but take far longer than five minutes for the final test. Why is that?
Problem 2: Why are participants allowed only a single 5-minute attempt?
Almost every other online judge allows multiple submissions when your solution is incorrect. Why does Meta Hacker Cup limit participants to just one try?
One possible reason is that if someone's code takes more than 5 minutes to run, they could simply wait for it to finish before making a second attempt, and AC the problem even though their solution took much longer than 5 minutes to run. However, there's an easy solution to this:
- Instead of one input file, create three strong input files, each worth $$$\frac{1}{3}$$$ of the total points.
- Allow participants to download each input file individually, with a 5-minute submission window for each file.
- This way, if a participant fails to submit for the first input file, they can still debug and submit for the second and third, potentially earning $$$\frac{2}{3}$$$ of the total points.
- This approach would also strengthen the final test with three times more input data.
The time now is 2:12 AM. After a brief crying session, maomao90 starts on problem E.
The time now is 3:41 AM. maomao90 validates the solution for problem E but lacks confidence after the disaster with problem D.
The time now is 3:44 AM. After a final check, maomao90 downloads the zip file and runs the code for the final test.
The time now is 3:45 AM. maomao90 submits the output for problem E. There’s nothing else to do now, as problem D can’t be submitted. maomao90 is tired and wants to sleep, but at the same time, maomao90 wants to know whether his final output is correct. Unfortunately, the final verdict will only be released after the contest...
Problem 3: Why is the final verdict delayed until after the contest?
Is it to reduce server load by judging only after the contest ends? The server doesn’t even need to compile or run code; it only has to compare two text files. Is that really too much for the server during the contest?
If the final verdict were provided immediately, along with the solution proposed in Problem 2, the contest experience would be far more pleasant. Yet, after 14 years, there’s still no improvement in the grading system. Why is that? Even Codeforces is experimenting with pretests = system tests to prevent "Fail System Test" issues.
The time now is 4:00 AM. The contest finally ends, and maomao90 can check if he solved problem E correctly. Thankfully, it was accepted and he celebrates.
The time now is 4:01 AM. Looking at the leaderboard, maomao90 sees the number of WAs on problem D.
So many red crosses! maomao90 laughs, realizing many others faced the same weak validation issues on problem D.
Problem 4: Why doesn’t Meta Hacker Cup follow other online judges and run the code for us?
The ultimate solution to all these problems is simple: adopt the standard system used by most online judges, where participants submit their code, and the platform compiles and runs it. Why hasn’t Meta Hacker Cup implemented this?
Codeforces held its first round in 2010, using the current code submission system, and Meta Hacker Cup started in 2011. Why did Meta Hacker Cup opt for this convoluted system of downloading password-protected zip files instead of following the code submission system that Codeforces uses?
Please upvote this blog if you faced similar issues or agree with the solutions mentioned. Hopefully, Meta will consider these suggestions and improve the system in the future. :(
I have to do nothing but upvote. I solved B (at least I thought so) but missed n = 3 and n = 4. They weren't even in the validation cases. I went to sleep 30 minutes before the end of the contest due to sleepiness, and in the morning, BOOM! I didn't even qualify for Round 2 :(
I mean, life does give surprises and I'm OK with it. It taught me to stress test more, but a little bit of the fault goes to the system as well... I hope that they improve their system so no other contestant faces what I faced
I too missed n = 3 and n = 4. But I think it's our fault.
They called me a hater. It's 2024 and a CP contest is still using such a silly submission format.
I don't agree with the point about weak validation test, since it's supposed to be just a sanity check for the format I guess, but everything else is 100% true.
The idea about several input files is especially great, since it opens up the possibility of making 3 inputs with different complexity (easy/medium/hard) and encouraging participants to write solutions for hard problems even if they are not optimal, so that they can at least get points for the easy input.
Also, another minor issue is the requirement to have a Facebook account, which makes no sense. Meta already has standalone sites, like metacareers for example, which have a separate and simpler account system; not sure why it can't be implemented here.
That's stupid. You might as well use the samples as a "sanity check".
Making the validation input almost equal to the tests, like it's done on CF, makes life easier for those who can't test their solutions properly. Instead of complaining about it, maybe just git gud.
Hi, I agree with the "Why doesn’t Meta Hacker Cup follow other online judges and run the code for us?" part. I set up my entire system for the competition, including code to increase the stack size, etc. I coded the solution for problem A, and upon downloading and opening the final input file (which was very large), my system crashed. By the time I could figure out a solution or switch to another device, the timer had already ended.
Are you familiar with how Google Code Jam used to operate?
If MHC were to change from the current format, I would prefer MHC to go the way of old Google Code Jam (with small and large input cases, and you still run your code locally on your own machine) instead of new Google Code Jam (where you submitted code for evaluation.)
Hmm why do you prefer running locally rather than submitting code?
It's fun and unusual. It's also double fun when you get an assertion failure while running on the final test, and have to fix it within a few minutes.
Additionally, I believe there's an important educational aspect.
There's a reason your comment there got downvoted.
And there is a reason your comment here got downvoted.
In addition to what KAN said, there's the rather unique fact that you have access to the test cases.
This enables you, for example, to verify for yourself exactly how strong D's validation tests are (it should be obvious they are weak if you read them), and then test yourself any edge cases that you think might be missing in that coverage.
Also, you get to look at the full input, which helps a lot during those 5-minute debug scrambles (an "RTE" verdict is not all you have to go on). And you get to check whether the input contains any edge cases you ignored: this is not often helpful, but there has been at least one occasion on which I noticed a case with n=0 in the input, realized I handled it incorrectly, and edited a "2" to a "1" in my output file before submitting for the AC.
I very much enjoy the 6-minute timer and the occasional scramble to fix the solution. It's my favorite part of Hacker Cup.
It's comments like yours that make them retain this submission format. Just because you're highly rated, probably have a better PC, and can likely solve more problems even if your submission timer expires doesn't mean everyone feels the same.
Why must every competition be enjoyable for everyone?
Hacker Cup is the only contest with this format. Let people who like it have it, you have virtually all other contests in the world if you want to compete in the more standard format.
What about us newbie guys?? We can't type that fast; my highest WPM is only 79 and my average is around 60.
If you fail to locate an error in your solution in 6 minutes, it's less because of your typing speed and more because you think it's about typing.
In 6 minutes I have to do everything, all the way to submitting... you know, for a beginner this is not enjoyable...
If you look at the Topcoder format, it is not that different: only sample tests to test your code on, and you only know the verdict at the end of the contest. You can submit multiple times, but the only reason to resubmit is if you have found a hacking test on your own. Still, people enjoyed Topcoder for a long time. It's different from CF, true, but not bad. What makes it bad is approaching it as if it were CF and submitting immediately once you pass samples/validation. In this format, testing your code pays off, or you can gamble and submit for a faster solve time but with less of a guarantee. There's strategy to this.
In problem D it was obvious the validation was very weak: you could see there were only two more cases added besides the samples, no big cases, and D could have lots of casework and edge cases. I was also gambling on D and lost, but I was not mad about not solving it.
Hmm I guess it was my bad for not taking a look at the validation test and assuming that it would be strong enough.
Seriously, what is with all the loser mentalities around here? It’s a fair format, which has advantages and disadvantages (like any other formats). Why are you blaming the contest for your sub par performance?
It is far from a fair format, but go on, we are losers.
Once you stop blaming external factors and start improving yourself, only then you'll have a chance of being good at anything. Pretests were weak? You should have implemented a more careful solution. Assertion failed with 5 minutes left on the clock? You better fix it.
Just like OP showed in his blog post, it's not only him that had this issue (nor would it have been a problem if he was the only one with it). The tests were correct, his solution was incorrect, and he should be frustrated at himself for not solving the problem correctly instead of the contest for not babying him into solving the problem.
I didn't read any of that because I was referring specifically to the submission format.
There are a lot of factors that make it different from a regular submission: decompressing huge files, the timer, huge input files, being very computer-spec dependent, etc. Even though I no longer participate, I don't see the advantages of this submission format that you claimed in the comment I replied to.
What about me having a worse computer? Do I spend thousands of dollars to fix that too?
Although I'd be more than happy to take this debate, my comment and this blog is not about that.
The format is even good for realizing the pretests are weak. I looked at D's pretests and it was instantly obvious they were super weak. You don't get that at contests where they're hidden.
And you didn't even waste a submission; you can go ahead and make your own tests now that you've seen they're weak.
I don't think we should depend on the test cases to tell whether our solution is correct or not.
From my perspective, in programming contests, an online judge cannot look at the code and prove its correctness, so the less bad option we have is to run it against thousands of test cases. That does not mean it is the ideal way of checking the correctness.
(I missed my B only for $$$n=4$$$, I'm sad)
One thing I don't like about this format is that many people with a powerful system can use parallel programming/multithreading, and not every place has access to fast internet, so downloading the files might be very slow for some people.
Your comment about downloading files is not valid because you can take as much time as you want to download the test archive and the timer starts ticking only when you request a password to unzip the archive.
Some other issues with this format:
1) you cannot tell whether your code will actually run in time or not... by this I mean you LITERALLY can't. It depends completely on whether they try to fuck you by giving 100 tests of the same type intended to TLE your solution, or they are benevolent and only give 2 such tests.
2) there is a big difference on the basis of how powerful your computer is, especially when multithreading which depends on the number of cores you have. Take a look at comments about last round E
3) the contest format is very knowledge-heavy if you want to avoid any issues. You have to know how to safely increase your stack size, how to multithread different test cases, how to run big files locally, etc. Some of you might think it's not an issue and that participants should be capable of dealing with such things. I vehemently disagree. It's like setting a prefix sum problem and then putting updates on it so you now have to copy-paste a segment tree. It adds nothing except preventing the people without that useless piece of knowledge from solving the problem.
This contest format is very outdated... Last year I refused to take part in R3 because the problem quality was so bad and I did not want to bother myself with a bad judging format on top of bad problems. MHC really isn't worth it unless you are a top-25 participant.
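For readers unfamiliar with the local-run setup being debated here, the two tricks mentioned above (enlarging the stack for deep recursion, and running independent test cases in parallel) can be sketched roughly like this in Python. `solve`, the built-in sample, and all constants are placeholders for illustration, not anything from the actual contest tooling:

```python
import sys
import threading
from multiprocessing import Pool

def solve(n):
    # hypothetical per-case solver (here: the sum 1..n); substitute the real logic
    return n * (n + 1) // 2

def run_parallel(cases, workers=4):
    # Hacker Cup test cases are independent, so they can be
    # dispatched across CPU cores with a process pool
    with Pool(workers) as pool:
        return pool.map(solve, cases)

def run(text):
    tokens = text.split()
    t = int(tokens[0])
    cases = [int(x) for x in tokens[1:1 + t]]
    return "\n".join(f"Case #{i}: {a}"
                     for i, a in enumerate(run_parallel(cases), 1))

def main():
    # read the downloaded input file if a path is given, else a tiny sample
    text = open(sys.argv[1]).read() if len(sys.argv) > 1 else "2\n3\n4\n"
    print(run(text))

if __name__ == "__main__":
    # deep recursion is common in these problems, so run main() in a
    # thread with an enlarged stack instead of the default main-thread stack
    sys.setrecursionlimit(1 << 25)
    threading.stack_size(1 << 27)
    worker = threading.Thread(target=main)
    worker.start()
    worker.join()
```

Whether this knowledge is a reasonable prerequisite or pointless friction is exactly what the comments above and below disagree about.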
1) Just assume the worst. Your solution should be able to handle the case with "100 tests of the same type intended to TLE your solution". If you opened the test case and you are not confident about that, you're taking a gamble. You might win or you might not. Same when submitting a squeezed $$$O(n \sqrt n)$$$ solution for a problem with $$$n = 500000$$$.
2) If you could have multithreaded your code and you didn't (and you realize that you're in the situation I described in 1), then you should reflect on how to mitigate that for future contests. I somehow doubt that E couldn't be solved on a mid-level computer with multithreading. And if you think multithreading code is not a skill you should invest in, then I'm sorry for you. It is one of the most important aspects of writing performant code in real life. As a side note, I was really sad to see Distributed Code Jam go away; I think it was so unique in its format and style that it was one of the most exciting events to invest my time and learning into.
3) I understand your point, but you have to realize that that's your opinion. I am on the other side of this. I do think "participants should be capable of dealing with such issues". I don't think running big inputs, setting stack sizes, making code parallelizable, and even using data structures to solve problems are "useless pieces of knowledge". You do. This is okay. What is not okay is advocating that all programming contests should reflect your opinion. There should always be room for variety.
Your last paragraph sums it up pretty nicely. If you don't like the format, and you don't feel like it's worth it to improve yourself on it, you can always not participate. But do understand that it has its place in the community, and a lot of people would be sad if next year's contest would be just another CF round.
About 2): you completely misunderstood my point? I do not claim a mid computer cannot be used to solve E, rather that a better computer has a non-trivial advantage. Geothermal himself claims his code would probably fail on the median computer. I gave the multithreading example to point at something whose performance differs especially between computers, and not just by a small constant. I do not argue on the basis that multithreading is bad as a tool (I mean, I do believe it; real-world benefits are not why I do CP, algorithmic improvements are; but that is not my argument).
About 1): the difference is that I can reasonably predict TLE or not by constructing manual tests. This is what I mean by being sure it passes or not. Having sub-max tests for a solution happens rarely on CF, so I can be reasonably sure my worst case is there. This is not true for MHC, as they have to cover various types of tests in the same 100 or so cases. I think I should take a look at some codes and run them on the given test vs 100 * max tests... I predict a non-trivial number of failures.
About 3) fair
1) But it's actually the opposite: it's much harder to estimate how your solution will behave on the judge machine with a tight TL (the time limit differs a lot between your local machine and the judge) than to profile the max test beforehand and multiply the run time by the number of tests.
2) I agree that this round's problem E was more machine-dependent than ideal. However, that is not a reason to cancel the whole format. And I don't think many people were affected on problem E despite doing everything in their power to solve it (i.e., people who did try parallelism but failed); rather, some people might have gotten an unfair advantage without the effort of parallelizing their solution. I think the impact wasn't that big in this case, given that probably nobody who attempted E was close to 5000th place.
Overall, I think if one disassociates from the idea that parallelism must for some reason be forbidden in programming contests, and uses this experience as a learning opportunity instead of spreading hateful remarks, it would all be a more pleasant experience.
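The "profile the max test beforehand and multiply by the number of tests" estimate mentioned above can be sketched in a few lines. `brute_solve`, `max_n`, and `num_cases` are hypothetical stand-ins for the real solver and the real constraints:

```python
import time

def brute_solve(n):
    # hypothetical worst-case solver to benchmark; substitute the real one
    return sum(i * i for i in range(n))

def estimate_total_seconds(max_n, num_cases):
    # time one worst-case test locally, then scale by the case count;
    # this upper-bounds the full run if every case were worst-case
    start = time.perf_counter()
    brute_solve(max_n)
    elapsed = time.perf_counter() - start
    return elapsed * num_cases
```

Since MHC runs on your own machine, the number you measure locally is the number that matters, which is the asymmetry this comment is pointing at.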
custom invocation is a thing
Why is this not accurate for normal formats?
Why are they the only people entitled to complain? Personally, I didn't even bother after guaranteeing my qualification by doing ABC in 15mins at the 2 hour mark. I do not think that makes me ineligible to see unfairness.
My original comment had disassociated from that idea; I only mention it in passing as an example of something where hardware matters a lot.
It's like setting a prefix sum problem and then putting updates on it so you now have to copy-paste a segment tree. It adds nothing except preventing the people without that useless piece of knowledge from solving the problem.
Sounds like every other problem on Codechef where you are an admin.
Want better validation tests? Just write your own, what's the problem? Guess you got too used to Codeforces's ultra-strong pretests.
Personally, I like the format a lot. Your second point makes a lot of sense though, it would enhance the contest experience by a large margin.
I think from Hacker Cup's point of view they just want to do the least-effort thing and not drain resources chasing and creating new problems, which makes sense.
What I suggest is just making the pretests somewhat representative of the systests. That would ensure that people who pass pretests are quite likely to pass systests, which reduces the feel-bad moments of mostly solving a problem but not getting any credit, which is why people are complaining. I know there are some gatekeepers who will say "skill issue, too bad", but I think it is a legitimate complaint, and I do think a system where pretests are mostly the same as systests is better; it's also the direction Codeforces has already gone successfully.
maomao90, no one from the MHC team will officially answer your questions, for various reasons. But I believe I can help you find answers to them. Bear in mind, I rely only on basic logic plus maybe some experience, and I don't reflect the official position of the MHC organizers or Meta in general.
Q1: Why is the validation test so weak? First things first. Let's make an obvious statement (for those who know anything besides competitive programming), which we will reuse for the next questions as well, and consider this paradigm as "this is how this world works".
Statement 1: Each tech company, when it creates its own competition, pursues certain goals. Not your goals, not mine: this company's goals. If the format involves skills that the typical ICPC format doesn't, that basically means those skills are of interest to this company. Let's list some of them: promptly finding edge cases, creating good test cases to check your solution, implementation skills that minimize bug count, quick debugging under strict time bounds, etc.
Based on Statement 1, the validation test also serves the needs of the organizing company, and the strength of these tests is based on those needs. If a weak validation set, which is provided to make your life easier, makes you suffer, you should probably avoid such activity in general for the sake of your mental health.
Q2: Why are participants allowed only a single 5-minute attempt? This one is easy. If you're asking, you haven't thought about it properly. When you download the full input, you get access to multiple edge cases you hadn't considered before, and that gives you an advantage. You can also benchmark your solution's performance to fit it into the required time bounds.
Q3: Why is the final verdict delayed until after the contest? Here we are again. Let's go back to Statement 1; nothing else is needed here. Also, as it seems to me personally, it's harder to cheat by sharing your solutions when no one knows whether a solution is 100% correct.
Q4: Why doesn’t Meta Hacker Cup follow other online judges and run the code for us? This question requires a bit more understanding of how this world works. Of course, we still keep Statement 1 in mind, but let's also ask a few questions out loud:
What is the difference in expenses between the current format and the classic online judge format for an annual competition?
Is it really of high importance for the organizer to make your experience super convenient, so that they want to pay this delta?
Why have numerous tech companies cancelled their annual competitions regardless of formats and prizes?
Do tech companies nowadays really need competitive programmers specifically?
And a few more statements to help you answer all the questions above:
Statement 2: A company certainly knows how to count money and can definitely decide whether it is worth spending an extra penny on a particular change in the contest format.
Statement 3: The market nowadays is very different from what it was 10+ years ago; tech companies look for different skills. And they've certainly realized that:
Skills are not only about software engineering.
Software engineering is not competitive programming.
Competitive programming is not just solving mathematical or even algorithmic challenges.
Statement 4: Nobody, except for you and maybe a few of your family members and friends, really cares about what is convenient for you.
You can make many more statements and group them together with Statements 1-4 to create as many answers as possible. I strongly believe I've given you and everyone else enough food for thought.
More like food for downvote
I don't care. Feel free to downvote. Food for thought is for those who can actually think.
In case of Google it's because their management is completely retarded. Corpo trash like Sundar cultivates a swamp that gets rid of GCJ instead of firing useless whores from HR when cutting costs. Objectively there's no reason to have less contests. Thank Zuck that Meta is based and contestpilled and we can still enjoy MHC. Really appreciate all the effort that went into the organization of this competition.
It's not that simple. I am also mad at Google management, because GCJ used to be my favorite competition. But if you look at all these competitions, and the questionable profit they brought, from their point of view, you can understand it slightly better.
For instance, when you lay someone off (even in HR), you incur a lot of extra expenses and limitations. But when you cancel all these competitions, you also reduce operational costs, which means you have to lay off fewer people to save the same amount of money. The effect on the programming community? An acceptable fee, it seems. I believe no one is afraid of angry nerds :( Imagine how many people knew what Google's competitions meant to the community and how cancelling them would reflect on the company's reputation. Despite that, the company decided to stop the entire competition program. I believe they had strong reasons for that and thought twice. Also, this decision was not made by Sundar or any other person on their own.
I agree about the part that the fee is negligible for business. And I would've agreed with you about the costs if GCJ had a proper onsite, but with finals going online the total operational costs plummeted and are not really comparable even to a yearly payroll of a bunch of typical completely useless hr (who can be easily laid off at least in the US).
Anyway, this is a sign of company priorities and their culture (which is heavily influenced by the C-suite, including Sundar): cutting costs on peanuts just for show while keeping extremely bloated teams in lots of mediocre products that do fuck all while ads/sre carry the entire company on their backs.
I don't like where this discussion is going. There are different contest formats, with their different goals and different advantages. Coming from one format and asking all other formats to be like that, well, it's just sad.
First, we have no hacks on Codeforces, just because people complain very loudly when pretests don't cover everything. Then, what, we will have no open tests format, just because people have different PCs? Aw, come on.
Back in the day, there were more people in competitive programming who cared about developing skills other than processing the essence of the problem in one's mind, be it implementation or testing or otherwise. I'm just glad some of these people are still here, and ready to make their point by conducting contests that are a bit different. These don't always go flawlessly, but it's not the reason to cancel them outright.
"Back in the day"
It's 2024.
Edit: looking at the comment section, I now understand that this format mostly pleases boomers who don't want to let it go. Understandable.
Well, it's not like we are suggesting programming in PL/1 or using punch cards (could still be fun for some, but not exactly contemporary). Testing and implementation skills still look very much useful in 2024.
Hi Ivan. I pretty much agree with what you said, but I think CF hacks are about to be removed because of the rapid development of character recognition, which makes hacks a convenient tool for cheaters to steal other contestants' solutions and spread them online. But if it's a well-known fact that pretests became super strong for a different reason, is there any post or something for me to read about it?
Here, Um_nik mentions that "pretests = systests" is likely a requirement today.
That was already when pretests were extremely strong for a while. Even after this thread and comment. I wonder if there was a precedent that caused pretests to be as strong as they are nowadays.
You might wanna try the Chinese Olympiad, you have no guarantee on the strength of samples and you can make only one submission at the end of the contest. So chill mate!
Also the Romanian Olympiad was like that until 2020, and while I hated the format at the time (and I am happy that we changed to CMS which is the mainstream format for IOI styled contests), I have to say that the old format forced me to learn techniques and debugging styles which I might not have learned had I never seen the old format with just one submission.
Arguably I think while the mainstream contest format has many advantages, these "old styled" contests should still be around for various reasons including those mentioned in other comments.
Problem D was very unique in the sheer number of edge cases it had. I think they wanted to test exactly this -> who can think of all of the edge cases and confidently submit?
BTW this format seems like it's more AI-cheater resistant (given everybody has only one shot), so that's really great too.
It was my first time participating and I thought it was gonna be like any other CP contest, so I went in with no prep. I spent so much time trying to figure out how to submit. For problem A, I solved it easily, but I scrambled to copy the encrypted cases into my editor, since I didn't have a "setup", and it lagged my computer. So after finally being able to run the code and copy the input, the 6 mins had already ended. I also WAed Problem 4, which put me at 5700ish place. Very unfortunate :(
Yes, there are issues and problems with the current system/structure, but let's not forget to be grateful that Meta is still continuing to hold the last worldwide contest at this scale. Huge kudos to SecondThread
I quite like this format. The one thing I dislike is that I don't know if I will be getting $$$t=100$$$ worst case inputs, or if there will just be tons of small test cases for checking edge cases.
Maybe it could be a good idea to, as a rule, make the validation data be 20% (sampled uniformly) of the actual test data. Then it would be much easier to estimate things like run time.
As a competitor I understand your feelings, and I also got a WA verdict on problem D. I spent almost 2 hours solving this problem, so I also feel very upset about it.
But after I calmed down, I think it is OK that they only include all the samples and perhaps two more useless small tests in contest mode. You know, accuracy and self-debugging are also very important in a coding contest, and even in my work life. For example, I once made a mistake in my code that almost led my group to a wrong decision. Thanks to my observation and debugging, I stopped them before it was too late.
On problem D I just made an extremely false assumption, namely that the number of ways must be a Fibonacci number, and even this false assumption could pass the pretest; you can imagine how weak it is. As a result, I just got punished, even though my string calculation was correct.
The real problem in the Hacker Cup is the slow platform. It takes me tens of seconds to see the scoreboard, and when I tried to submit the solution, it just showed me an unknown error verdict, and I needed to submit again and again. I guess the reason is that the problem D answer is very large. If they cannot improve the platform, they should take the big number modulo 998244353 to avoid this.
I have never made it to Round 3 before. In the last two years my rank in Round 2 was around 800; hope this year can be better, since my ranking in Round 1 and the practice round is much higher than in previous years.
Lol I didn't learn my lesson from round 1. I guess I don't get a t-shirt this year because of 2 FSTs in round 2
Top 2k get a shirt, you are 628.
You probably mean qualification to round 3, top 500 qualify.
Oh yah right
Feels bad for you. I did the test and I was like nah I'm not wasting time on this shit. The entire process feels very weird and unprofessional.
Note: Commenting in my personal capacity. All opinions are mine and do not reflect Meta at all.
If you remember that Meta/Facebook Hacker Cup started in 2011, then you'll realize this format is not as unique as you think.
Regarding the lack of validation during the contest: even the IOI has had full feedback only since 2013. Topcoder also does the same thing, and Codeforces pretests were much weaker in the earlier days (maybe serving a similar purpose to what MHC validation currently does). This format forces you to ensure your submissions are correct, both algorithmically and programmatically, which possibly eliminates some controversial "prove-by-submission" strategies.
Regarding submitting the output: Google Code Jam used to have the same format until 2018. The obvious nice thing about this is that contestants can use whatever "programming language" they want, including spreadsheets. Also, the legendary IPSC had this format. Despite the lack of prizes, IPSC was probably the contest I awaited the most each year, since it gave me the most enjoyment to participate in.
I don't know what year you started programming contests, but I do hope you had the chance to participate some of the older contests that I mentioned. The difference in the format means these contests "tested" slightly different things, and that contestants need to adjust their strategy. It's what makes it interesting -- if all contests have the same format, and everyone can use one strategy that fits all contests, it's boring.
To be perfectly honest, I feel like some newer programmers are too "spoiled" (for lack of a better word) by the current modern format. I encourage newer programmers to also learn what programming contest formats used to be, and to understand that while only a subset of formats may suit you, all formats have their own pros and cons.