Hello, Codeforces!
For you, this was perhaps just another round on Codeforces. But not for me. Codeforces Round 515 (Div. 3) was the first round judged on the new testing servers at ITMO University. And this is not just a hardware refresh. Ta-da! From now on, your solutions will be judged on new i3-8100 machines. And that's not all the news: the number of judging servers has grown, which means shorter queues during rounds!
I am happy to announce that I now live in Saint Petersburg and work at ITMO University, and that Codeforces is gradually moving from the walls of my dear Saratov State University to ITMO. The decision to relocate was not an easy one. My plan is that, based at ITMO, I will be able to focus more on developing Codeforces and working on the platform. The number of world champions per square meter here is simply off the charts, and working alongside a large team of fellow competitive programming enthusiasts (no, professionals!) like myself is incredibly inspiring. I have always liked Saint Petersburg and the atmosphere at ITMO. My intuition did not fail me: I feel surrounded by kindred spirits (and not only at work). I am sure there are many interesting joint projects ahead!
I am not saying goodbye to Saratov. It is my hometown, where many people dear to me live. I came to my first training session at SSU exactly 20 years ago. Antonina Gavrilovna, thank you so much. Natalia Lvovna, how I wish I could say my words of gratitude to you in person now. You opened up the fascinating world of programming competitions to me. Together we celebrated reaching the ICPC finals for the first time, and later becoming champions of Russia and of the world. We organized countless olympiads and helped many SSU students find themselves in programming. I wholeheartedly root for the future of the Olympiad Training Center and for future generations of Saratov olympiad contestants. Even now I am in Saratov, still chairing the jury of the ICPC Quarterfinal and still an SSU employee. I hope we manage to put together a good and interesting contest.
I will try to carry out the full migration of the Codeforces infrastructure to ITMO without any interruptions to the systems. The good network link between SSU and ITMO is encouraging. All scheduled maintenance will be planned around the round schedule, which these days is more pleasing than ever (taking this opportunity, greetings to the coordinators!).
At the moment, all solutions on Codeforces and in Polygon are judged on the new servers based on Intel i3-8100 processors. Conveniently, single-core performance does not differ much from that of the previous generation of judging servers, so the time limits in all problems remain unchanged.
That's the news. See you at Codeforces Round #516 (based on the problems of the Moscow Team Olympiad).
Thanks MikeMirzayanov.
I am a simple man: I see MikeMirzayanov, I upvote.
Sorry, but what did the servers have before? i3 doesn't sound very impressive.
But the fact that the number of servers has increased is good news.
If the servers haven't been upgraded since Vladimir Yakunin donated them, the CPUs were i5-3470. Considering that the old CPUs were overclocked, performance shouldn't change significantly.
Good luck in a new place!
Since the number of servers has increased, please make the pretests stronger. The system should be able to handle it now :)
Learn to test your solutions.
I am not blaming the Codeforces hacking system for any of my failures. But some problems' pretests were bad because there weren't enough of them. Codeforces can handle more tests now, at least on the problems that need them.
The strength of pretests usually comes from their type, not from their number. A large number of pretests might be useful mainly for YES/NO problems, but I doubt that in the past this was the reason for having only a few pretests there.
More pretests means more types, at least for some problems.
Of course, you are right that more pretests means stronger pretests. I'm just saying that computing power was rarely the limitation that caused a small number of pretests. A setter usually doesn't include many types of tests in the pretests, that's all.
A different topic is whether pretests should be stronger. I perfectly understand people who think so (but I disagree). But we can't do anything about the fact that it's impossible to estimate the strength perfectly, and sometimes pretests will be weaker than usual.
I don't think it's that hard to estimate the strength of pretests. Discard the samples and min-tests (unless they deal with special cases). For each remaining pretest, discard it if the answer to it is somehow obvious or it deals with some trivial case of the original problem (example: the problem is sorting an array, the test input is an already sorted array). Discard tests designed to break the same classes of solutions. How many pretests are left?
Next, you need at least one maxtest which fails a bruteforce solution, but not the simplest, most straightforward bruteforce one could think of.
Then you should think about cases which need to be handled separately (or, if you try to handle them normally, you end up with bizarro implementation 3000). For each of them: is it in the pretests? If not, do you explicitly want people who don't handle it to fail after passing pretests? If yes, why? If not, add it.
tl;dr: When writing pretests, don't just use anything for the sake of reaching a given number of pretests. Think about what you're doing and why.
This requires the problemsetter to understand his problem and its solution(s). Surely that isn't such a crazy thing to expect?
"discard it if the answer to it is somehow obvious" — this is hard to explain to a new setter.
"How many pretests are left?" — this isn't enough. In some problems one test is enough to check everything, while in others 10 completely different tests would be needed.
"This requires the problemsetter to understand his problem and its solution(s). Surely that isn't such a crazy thing to expect?" — Understanding it well enough to achieve some particular strength of pretests is hard. I'm not able to do that every time. I think that an average setter is less experienced than me.
Coordinators can help with that, but they don't understand generators and tests that well.
Nothing is ever enough. So? A step in the right direction is better than nothing and at least I'm offering something to build on instead of just complaining.
I never claimed what I suggest is perfect. It's possible your contest will be messed up because you missed something even while trying to follow a set of guidelines, just like it's possible it'll be messed up because you got hit by a car. Does that make carelessness a good thing?
Sure. It's also hard to explain high-level algorithms to someone who's new to competitive programming. Skill comes with experience; I'm not expecting everyone to be perfect right off the bat — rather, suggesting a framework of what to strive towards.
The same argument as above applies. Solving programming problems is hard, yet the attitude about that is completely different.
Also, you're twisting what I'm saying as if I expect the strength of pretests to be something extremely specific, which I'm obviously not. "3 of my 12 pretests are samples and 5 more are min-tests" is informative without going into specifics. Understanding what your solution tends to do on just random pretests, for example, is usually simple enough. Knowing whether some simple solution would pass with weak tests isn't even a problemsetting skill; it's something you need when you compete, too. (I don't mind so much when my solution fails because I didn't pay attention to the fact that it's suboptimal but hard to fail; it's when I can seriously misunderstand part of the problem and still pass pretests that it pisses me off.)
And when an inexperienced problemsetter makes completely useless pretests out of inexperience? That's fine, provided it serves as an example to avoid for that problemsetter and others, not as an opportunity to rationalise it away with "hah, it's your mistake that you actually thought pretests aren't useless!".
Pretests shouldn't be supposed to check everything and I'm not saying they are. (It's strange that this is your argument after disagreeing with "pretests should be stronger".) If you need 10 tests to check everything, pick a reasonable number of good ones in such a way that they'd check a lot, but not everything.
Note that I mentioned special cases separately. If a problem has massive casework, it's the point where contestants should recognise it and make sure they handled all cases, but the pretests shouldn't cover the same case twice.
It's only 17% faster
But I think that weak pretests can help us improve our skills. Also, I think they make things more fun in some ways.
Thanks for such an awesome platform! I am simply in love with Codeforces!!!
Hey there, Mikhail Rasikhovich!
Because of the CODE_CUP time in Iran, I missed Codeforces Round 515 (Div. 3), and because of my school schedule, I'll miss Codeforces Round 516 (Div. 2, by Moscow Team Olympiad) :(. I hope that at CF517's time my region doesn't have an earthquake (just kidding, because of my bad luck).
I hope that after this, CF will stop going down during contests ^-^
Ok, but the real question is: is it 64-bit now, so that it supports __int128_t?
Ok, that's great news, maybe that's not "the real question" :D. But still a valid one :D
I imagine the answer is no -- the old server was also 64-bit, it just uses a 32-bit OS for sandboxing reasons: http://mirror.codeforces.com/blog/entry/57646?#comment-413157.
Please fix the judging. There is a large queue in the status! Many submissions are getting "Judgement Failed". My code got a compilation error even though it compiles fine on my PC!
Is it only me who is experiencing longer waiting times in the queue, and even longer times to get a verdict, after the server upgrade?
Please help MikeMirzayanov
I think it's just temporary, until the servers are fully moved to ITMO.
A lot of submissions in the queue. >_< Update: it's cleared now!