Hi Codeforces!
For you, perhaps it was just another Codeforces round. But not for me. Codeforces Round 515 (Div. 3) was the first round judged on the new servers at ITMO University. And this is not just a change of location. Ta-dam! Your solutions are now judged on new Intel i3-8100 processors. And that is not all the news: the number of judging servers has increased, which means shorter queues during rounds!
I am pleased to announce that I now live in St. Petersburg, I work at ITMO, and Codeforces is gradually moving from the walls of my dear Saratov University to ITMO University. The decision to move was not easy for me. My plan is that, based at ITMO, I can focus more on Codeforces development and work on the platform. The number of world champions per square meter here is simply overwhelming, and working with such a large team of competitive programming enthusiasts (and professionals!) is extremely inspiring for someone like me. I have always liked St. Petersburg and the atmosphere of ITMO, and my intuition did not let me down. I feel surrounded by people close to me in spirit (and I am not only talking about work). I am sure there are many interesting joint projects ahead!
I do not say goodbye to Saratov. It is my hometown, full of people dear to me. I came to my first programming training session at SSU exactly 20 years ago. Antonina Fedorova, thank you very much. Natalya Andreeve, I would like to say a personal thank you now. You opened up the interesting world of programming competitions for me. We were happy together when we first advanced to the ICPC Finals, and later when we became champions of Russia and of the World. We organized countless competitions and helped many Saratov students find themselves in programming. I wholeheartedly support the future of the Programming Competitions Training Center at SSU and future generations of Saratov contestants. And right now I am in Saratov, still the head of the jury of the ICPC Subregional Contest, and even an SSU employee. I hope that we will make a good and interesting contest.
I will try to carry out the complete relocation of the Codeforces infrastructure to ITMO without downtime. The good Internet connection between SSU and ITMO is encouraging. All the planned work will be adapted to the round schedule, which pleases me now more than ever (I send my greetings to the coordinators!).
Currently, all Codeforces and Polygon solutions are judged on the new servers based on Intel i3-8100 processors. Fortunately, single-core performance does not differ much from that of the old generation of judging servers, so the time limits in all problems remain the same.
That is the news for now. I am waiting for you at Codeforces Round #516 (by Moscow Team Olympiad).
Thanks MikeMirzayanov.
I am a simple man: I see MikeMirzayanov, I upvote.
Good luck in a new place!
Since the number of servers has increased, please make the pretests stronger. They should be able to handle it now :)
Learn to test your solutions.
I am not blaming the Codeforces hacking system for any of my failures. But some problems' pretests were weak because there were not enough of them. Codeforces can handle more tests now, at least for the problems that need them.
The strength of pretests usually comes from their type, not from their number. A large number of pretests might be useful mainly for YES/NO problems, but I doubt that computing power was ever the reason for having few pretests there.
More pretests means more types, at least for some problems.
Of course, you are right that more pretests means stronger pretests. I'm just saying that computing power was rarely the limitation that caused a small number of pretests. A setter usually doesn't include many types of tests in the pretests, that's it.
A different topic is whether pretests should be stronger. I perfectly understand people who think so (but I disagree). Still, we can't do anything about the fact that it's impossible to estimate the strength perfectly, and sometimes pretests will be weaker than usual.
I don't think it's that hard to estimate the strength of pretests. Discard samples and mintests (unless they deal with special cases). For each remaining pretest, discard it if the answer to it is somehow obvious or it deals with some trivial case of the original problem (example: the problem is to sort an array, and the test input is an already-sorted array). Discard tests designed to break the same class of solutions as another test. How many pretests are left?
Next, you need at least one maxtest which fails a bruteforce solution, and not just the simplest, most straightforward bruteforce one could think of.
Then you should think about cases which need to be handled separately (or, if you want to handle them normally, you end up with Bizarro Implementation 3000). For each of them: is it in the pretests? If not, do you explicitly want people who don't handle it to fail after passing pretests? If yes, why? If not, add it.
tl;dr: When writing pretests, don't just throw in anything for the sake of reaching a given number of pretests. Think about what you're doing and why (a toy generator sketch after this comment illustrates the idea).
This requires the problemsetter to understand his problem and its solution(s). Surely that isn't such a crazy thing to expect?
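To make the point about test types concrete: below is a minimal sketch of a generator for the toy "sort an array" problem mentioned above. Everything specific here is invented for illustration (the constraint n ≤ 2·10^5, the value range, the command-line interface); the idea is only that each invocation produces a deliberately different kind of test instead of yet another interchangeable random one.

```cpp
#include <cstdio>
#include <cstdlib>
#include <random>
#include <string>

// Hypothetical generator for a toy "sort an array" problem.
// Usage: ./gen <type> [n], where <type> selects a distinct kind of test:
//   random  -- uniformly random values (the "default" test)
//   sorted  -- already sorted input: the answer is obvious, weak as a pretest
//   equal   -- all elements equal: a common special case
//   max     -- maximal n, meant to fail bruteforce solutions
int main(int argc, char* argv[]) {
    std::mt19937 rng(12345);  // fixed seed so the test is reproducible
    std::string type = argc > 1 ? argv[1] : "random";
    int n = argc > 2 ? atoi(argv[2]) : 1000;
    if (type == "max") n = 200000;  // assumed constraint n <= 2*10^5

    printf("%d\n", n);
    for (int i = 0; i < n; i++) {
        long long x;
        if (type == "sorted")      x = i;
        else if (type == "equal")  x = 42;
        else                       x = rng() % 1000000000;
        printf("%lld%c", x, i + 1 == n ? '\n' : ' ');
    }
    return 0;
}
```

Going down the checklist above, a setter would likely keep the max and equal tests in pretests and drop most of the random ones, since those all break the same class of solutions.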
"discard it if the answer to it is somehow obvious" — this is hard to explain to a new setter.
"How many pretests are left?" — this isn't enough. In some problems one test is enough to check everything, 10 completely different tests would be needed.
"This requires the problemsetter to understand his problem and its solution(s). Surely that isn't such a crazy thing to expect?" — Understanding it well enough to get some particular strength of pretests is hard. I'm not able to get that every time. I think that an average setter is less experienced than me.
Coordinators can help with that, but they don't understand generators and tests that well.
Nothing is ever enough. So? A step in the right direction is better than nothing and at least I'm offering something to build on instead of just complaining.
I never claimed what I suggest is perfect. It's possible your contest will be messed up because you missed something even while trying to follow a set of guidelines, just like it's possible it'll be messed up because you got hit by a car. Does that make carelessness a good thing?
Sure. It's also hard to explain high-level algorithms to someone who's new to competitive programming. Skill comes with experience; I'm not expecting everyone to be perfect right off the bat — rather, suggesting a framework of what to strive towards.
The same argument as above applies. Solving programming problems is hard, yet the attitude about that is completely different.
Also, you're twisting what I'm saying as if I expect the strength of pretests to be something extremely specific, which I'm obviously not. "3 of my 12 pretests are samples and 5 more are mintests" is informative without going into specifics. Understanding what your solution tends to do on just random pretests, for example, is usually simple enough. Wanting to know whether some simple solution would pass weak tests isn't even a problemsetting skill; it's something you need when you compete, too. (I don't mind so much when my solution fails because I didn't pay attention to the fact that it is suboptimal but hard to fail; it's when I can seriously misunderstand part of the problem and still pass pretests that it pisses me off.)
And when an inexperienced problemsetter makes completely useless pretests out of inexperience? That's fine, provided it serves as an example to avoid for that problemsetter and others, not as an opportunity to rationalise it away with "hah, it's your mistake that you actually thought pretests aren't useless!".
Pretests shouldn't be expected to check everything, and I'm not saying they are. (It's strange that this is your argument after disagreeing with "pretests should be stronger".) If you need 10 tests to check everything, pick a reasonable number of good ones in such a way that they check a lot, but not everything.
Note that I mentioned special cases separately. If a problem has massive casework, it's the point where contestants should recognise it and make sure they handled all cases, but the pretests shouldn't cover the same case twice.
It's only 17% faster
But I think that weak pretests can help us improve our skills. Also, I think it makes things more fun in some ways.
Thanks for such an awesome platform! I am simply in love with Codeforces!!!
Because of the CODE_CUP timing in Iran, I missed Codeforces Round 515 (Div. 3), and because of my school schedule, I'll miss Codeforces Round 516 (Div. 2, by Moscow Team Olympiad) :(. I hope that at the time of CF517 my region doesn't have an earthquake (just kidding, because of my bad luck).
OK, but the real question is: is it 64-bit now, so that it supports __int128_t?
OK, that's great news; maybe that's not "the real question" :D. But it's still a valid one :D
I imagine the answer is no; the old server was also 64-bit, it just used a 32-bit OS for sandboxing reasons: http://mirror.codeforces.com/blog/entry/57646?#comment-413157.
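For anyone who wants to check a toolchain themselves, here is a minimal sketch (assuming g++ or clang++; the numbers are arbitrary). __int128 is a compiler extension that exists only when the compiler targets a 64-bit platform, so the same file fails to compile with -m32, which is why the bitness of the judging environment, not of the hardware, is what matters:

```cpp
#include <cstdio>

int main() {
    // 9 * 10^18 fits in unsigned 64-bit, but 4 times that does not.
    unsigned long long a = 9000000000000000000ULL;

    // __int128 only exists on 64-bit targets; with -m32 this line fails to compile.
    __int128 product = (__int128)a * 4;

    // printf has no conversion for __int128, so split it into two 64-bit halves.
    // (Simplified printing: correct here because product is known to exceed 10^18.)
    unsigned long long high = (unsigned long long)(product / 1000000000000000000ULL);
    unsigned long long low  = (unsigned long long)(product % 1000000000000000000ULL);
    printf("%llu%018llu\n", high, low);  // expected: 36000000000000000000
    return 0;
}
```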
Please fix the judging. There is a large queue in the status! Many submissions are getting "Judgement failed". My code got a compilation error even though it compiles fine on my PC!
Is it only me who is experiencing longer queue waiting times, and even longer times to get a verdict, after the server upgrade?
Please help, MikeMirzayanov.
I think it's just temporary, until the servers are fully moved to ITMO.
A lot of submissions in the queue. >_< Update: it's cleared now!