Why don't the IOI committee publish shortlisted problems and solutions like IMO does? It would be a great source of practice and countries could use the problems in their selection tests.
There is no shortlist. If a problem isn't selected, the committee just lets the authors use it somewhere else. (For example, APIO 2014 Beads.)
Makes sense now. Still, maybe they should consider compiling and publishing a shortlist.
Well, I don't agree with that. The situation in the OI and MO communities is quite different. We only need pen and paper to solve a math problem, while we need a configured judging environment to solve an OI problem. So rather than just releasing a problem, it's better to wait for another chance where a prepared judge is available.
Because preparing a problem for IOI is a lot more work than preparing one for IMO. In both cases the first stage is just the idea for a good problem. I guess that coming up with a really interesting IMO problem is a bit harder than for IOI, but let's leave that aside. The thing is, for math problems the process ends here, while for algorithmic ones you still have a long way to go. You need to write model solutions, brute forces, heuristics, robust tests that won't let any bad solution pass, input verifiers, etc. Only after all that is done is the problem ready to be posed at a competition. All that effort is spent only on the actual tasks that are already selected and on backup tasks (which can become next year's actual tasks). That is reason 1. They could, of course, publish just the statements without the whole packages, but that would be a little awkward.
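To illustrate the "model solution vs. brute force" part of the preparation work described above, here is a minimal stress-testing sketch. It assumes two hypothetical compiled solutions (the commands passed as `model_cmd` and `brute_cmd`) and a toy input format (n, then n integers); real IOI packages are of course far more elaborate.

```python
import random
import subprocess

def gen_case(seed):
    # Hypothetical generator for a toy problem: n integers on one line.
    rng = random.Random(seed)  # fixed seed => reproducible case
    n = rng.randint(1, 10)
    nums = [rng.randint(-100, 100) for _ in range(n)]
    return f"{n}\n{' '.join(map(str, nums))}\n"

def run(cmd, stdin_text):
    # Feed the input to a solution binary and capture its stdout.
    res = subprocess.run(cmd, input=stdin_text, text=True,
                         capture_output=True)
    return res.stdout.strip()

def stress(model_cmd, brute_cmd, rounds=1000):
    # Compare the fast model solution against the slow brute force
    # on many small random cases; report the first mismatch.
    for seed in range(rounds):
        case = gen_case(seed)
        got, want = run(model_cmd, case), run(brute_cmd, case)
        if got != want:
            print(f"MISMATCH on:\n{case}model: {got}\nbrute: {want}")
            return case
    print("all tests passed")
    return None
```

Small random cases catch most correctness bugs; large adversarial cases (which this sketch doesn't generate) are what separate intended complexities from slow solutions.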
And reason 2 is that IOI apparently has a smaller pool of interesting tasks than IMO. Every year on the algorithmic mailing list of my uni we get an e-mail like "IOI NEEDS PROBLEMS IT HAS NO GUD ONEZ SEND US SUM PLZPLZPLZ" (well, not exactly like that, but you get the idea) :P. Somehow they still manage to put together a really interesting problemset every year :).
I agree that preparing an IOI problem requires technical work that IMO problems don't (good tests, fairness across languages). But isn't the model solution and brute force part required for IMO problems as well? They need to make sure there isn't an easier solution they're missing [probably some geo problem where inverting wrt a special circle kills it, or a combi problem where a somewhat simple algorithm kills it (p5 this year)].
They could just post statements and model solutions (like in 'Looking for a Challenge', but briefer). Countries can generate test data on their own.
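As a rough sketch of what generating test data on one's own might look like, assuming a toy input format (n, then n integers) and one fixed seed per test file so every country regenerates identical data:

```python
import pathlib
import random

def write_tests(out_dir, seeds, max_n=1000):
    # Write one reproducible test file per seed: "01.in", "02.in", ...
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, seed in enumerate(seeds, 1):
        rng = random.Random(seed)  # fixed seed => identical data everywhere
        n = rng.randint(1, max_n)
        nums = [rng.randint(1, 10**9) for _ in range(n)]
        text = f"{n}\n{' '.join(map(str, nums))}\n"
        (out / f"{i:02d}.in").write_text(text)
```

Publishing only the statement, the model solution, and a seeded generator like this would keep the package small while letting each country rebuild a usable judge locally.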
I see your point about lack of good problems in major OIs.
"or a combi problem where a somewhat simple algorithm kills it (p5 this year)]." — and here I am, still having no clue how to solve it xD
Lol, I have no idea of a 'simple algorithm' killing p5 this year. Could you PM me with some kind of hint, please?
I don't see why quick "Aha!" solutions would be discouraged at IMO. The nature of math contests is that they want clean, quick, and creative solutions. So if there's a non-obvious but simpler solution, it will be welcomed.