Hi all,
This year our programming competition in Samara (link) also had an experimental contest with a marathon problem. You may have faced such problems at Marathon 24, Deadline 24, Google Hash Code, and recently at the ICPC Challenge. It turns out Codeforces supports these problems, and this post is a tutorial on how to prepare them. Since MikeMirzayanov hasn't mentioned it, I will.
Contest link: 2020, XIII Samara Regional Intercollegiate Programming Contest (marathon problem)
The duration of the onsite contest was 4 hours, so you can participate virtually and compare your results with the onsite standings.
So, how do you prepare such problems and contests? Actually, it is very easy. These are the differences from standard problems:
- the problem should have only one test, which should be uploaded somewhere in advance (for example, to gist.github.com); then give participants the download link.
- use the function void quitp(double points, const std::string &message = "") or template<typename F> void quitp(F points, const char *format, ...) in the checker.
- don't output too much to stderr in the checker: for some unknown reason this breaks the checker, and it starts to always return 0 points.
- change the contest type to IOI in the contest settings (by default it's ICPC).
- it is impossible to set the Text language in the contest settings, but luckily there is a magic language, PHP, which works exactly like Text for such problems and just redirects the program code (your output) to stdout. You have to make sure that any possible output for the test fits into the source size limit.
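To make the checker point concrete, here is a minimal sketch. The quitp below is a self-contained stand-in with the same signature as the one quoted above (a real checker would include testlib.h and use testlib's quitp instead), and the scoring rule is a made-up example, not from the actual problem:

```cpp
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Stand-in for testlib's quitp so this sketch compiles on its own;
// a real checker would #include "testlib.h" and use the real quitp.
void quitp(double points, const std::string &message = "") {
    std::printf("points %.2f %s\n", points, message.c_str());
    std::exit(0);
}

// Hypothetical scoring rule, purely for illustration: the score is the
// sum of the values in the contestant's output, capped at 1e6 to stay
// well below quitp's apparent limit.
double computeScore(const std::vector<double> &values) {
    double score = 0;
    for (double v : values) score += v;
    return std::min(score, 1e6);
}

// In a real checker, main would parse the input/output/answer streams
// with testlib and finish with: quitp(computeScore(values), "ok");
```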
I think there is an upper limit on the score that we can pass to quitp. When we tried to organize such a contest some time ago, our checker kept crashing because the score was exceeding 10^7 or 10^8 (I don't remember which). We couldn't think of a nice way to fix it, so in the end we just scaled the score down by a constant factor. We also had multiple test files, and the checker was summing all their scores up, so that is another possible contributor to the total score blowing up.
Multiple test files force participants to submit source code, while in such problems, I think, output files should be submitted.
That kinda gives an edge to people with better hardware. We wanted to maintain a more consistent execution platform. But anyway, I understand your point.