dario2994's blog

By dario2994, 4 years ago, In English

Since there is very little information available about how to prepare a competitive programming contest for AtCoder/Codeforces, I decided to collect my experience here. I will try both to describe my experiences and to give some general advice to wannabe problemsetters.

I hope that this will be useful to future problemsetters who are "out of the loop". Moreover, participants might be curious to know what happens behind the scenes (and the platforms may consider this a form of constructive feedback).

Acronyms:

  • AGC = AtCoder Grand Contest
  • GR = Codeforces Global Round

Why do I know something about problemsetting?

I have been in the competitive programming world for ~8 years: I have participated in IOI/ICPC/GCJ and in a number of contests on AtCoder/Codeforces (and lately CodeChef). I am not a top participant but, having been in this world for so long, I know, more or less, all the standard tricks.

Recently, I was the author of the flagship contests of AtCoder and Codeforces: AtCoder Grand Contest 44 and Codeforces Global Round 11. Both rounds were well-received (or, at least, this is my feeling). In the past I was also the author of a number of tasks for the Italian Olympiads in Informatics. Even though there are problemsetters much more experienced than me, I think I have enough experience and material to talk about being the author of a contest.

The timeline of the AtCoder Grand Contest

  • 2018-April 2020: I kept a file with a list of problems I created in my spare time. Some were pretty bad, some were very good (and one of the best appeared in a contest).
  • Beginning of April: I decided that I wanted to host an "important" online round. I was undecided between an AGC and a CF Global Round.
  • 20-21 April: I decided to go for an AGC (knowing that I might receive a rejection, since the previous authors were mostly either Japanese or Belarusian...). I sent a message to maroonrk. He answered that I should write to rng_58. I wrote to rng_58 and he agreed to consider my problems.
  • 21 April-4 May: Inventing problems and discussing them with rng_58. On the 4th of May the problemset was finalized (more or less).
  • 4 May-20 May: Preparation of the contest and testing. During this period I prepared the problems (and the editorial) and at the same time the testers solved them and gave me feedback.
  • 22 May-23 May: rng_58 and I, and partially also the testers, thoroughly checked everything (statements, checker, validator, solutions).
  • 23 May: Contest day! Everything went smoothly. Feedback was positive.

The timeline of the Codeforces Global Round

  • 24 May: I enjoyed organizing the AGC, so why not do it again? Some time before, I had received a message from MikeMirzayanov asking me to propose rounds (it was sent to a number of high-rated users). Thus, I decided that my next contest would be a Global Round.
  • 25 May-30 June: Inventing problems.
  • 1 July: I sent a message to antontrygubO_o (whom I had chosen as coordinator because I really liked Goodbye 2019) asking whether I could be the author of a Global Round and asking for some clarifications about the process.
  • 3 July: He answered saying that I could be an author and that, if I wanted, he could be the coordinator. He also reassured me about the duration of the process (I was scared by the idea of a long queue). I said that he would be perfect as coordinator and that I would send him some problems after my holidays.
  • 18 July-30 July: Discussing problems with antontrygubO_o.
  • 30 July: The problemset was (more or less) finalized and I started the preparation of the problems.
  • 1 August-27 August: Preparing the problems.
  • 27 August-9 October: Testing and writing editorials.
  • 10 October: Contest day! Everything went smoothly. Feedback was positive.

Why did I decide to be the author of some contests?

There are many possible motivations for problemsetting. For me, the motivation is a mix of two factors:

  1. I like to create problems.
  2. I am proud and happy if the strongest contestants in the world try to solve my problems.

There are many other good reasons to become the author of a contest, but there is also a very bad reason: compensation. It is a huge waste of time to be the author of a contest if you do it (mainly) for the compensation. Maybe in some countries it might actually be convenient "money-wise" to organize a contest, but that is not the case in "rich" countries (Western Europe, Japan, USA). I am not saying that problemsetters should be paid more, just that the current compensation does not justify the amount of dedication/time/effort/ability necessary to prepare a contest. On the other hand, it is always nice to be paid to do something you like!

The various phases

Creating problems

The creation of the problems is, by far, the hardest part of the preparation of a contest. It requires time, creativity and experience. Nonetheless, it is the best part of the whole process.

In my case, when I proposed the contests I had already created some problems (for the AGC, only one of these ended up in the contest; for the GR, I think three). The remaining problems were created after my first interaction with the coordinator.

My strategy to create as many problems as possible is to devote as much time as possible. I spent the vast majority of that time discarding bad ideas, thinking about the cosmic void, feeling like I was wasting my time, drawing on a blank piece of paper, reading others' problems to find inspiration... until a decent idea popped up. It's like waiting for the bus: if you wait long enough at the bus stop, the bus will come eventually.

The criterion I follow when accepting/rejecting my own problems is: would I like to solve this during the contest?

I am not a huge fan of some of the problems that appeared in AGC44 or GR11 (for example, AGC44-D and GR11-D), but I still consider them decent. On the other hand, I really love some of my problems. The perfect contest is yet to be held and you (and I) should not aim for it. Putting imperfect problems in a contest is fine, and what you don't like might be very cool for someone else. On the other hand, I strongly suggest avoiding putting bad problems in a problemset just to finish it (I guarantee you would regret it).

The ability to create nice and original problems is quite correlated with rating and experience. If you are new or relatively weak, I would suggest staying away from problemsetting for div1. On the other hand, if you are experienced and somewhat strong (let's say GM), I would suggest trying problemsetting... it's immensely fulfilling.

Let me conclude with a few pieces of pragmatic advice:

  • Write down all the problems you come up with, because otherwise you will forget them.
  • Google your own problems. If they are well-known or easily googleable, discard them. It can happen that you give a known problem in a contest (see AGC44-D); this is not a huge issue, but try to avoid it.
  • Be honest with yourself about the beauty of your own problems (it is easy both to underestimate and overestimate their beauty).

Proposing to a coordinator

Both on AtCoder and Codeforces I was lucky enough to skip the "waiting in queue" part of proposing a contest, and I started interacting with the coordinator from day 1. I want to sincerely thank the coordinators for giving me this opportunity. The main reason I could skip that pain is that I was proposing "important contests" and/or I was an "experienced" problemsetter (and/or I was just lucky). Do not expect this treatment.

Both for the AGC and the GR, I talked directly with the coordinator (either via Google Chat or Telegram) and I kept sending problems until there were enough to create a problemset. The process was like this: I sent a problem (with statement, constraints, and a sketch of the solution) and in a couple of days (or a bit more) the coordinator would tell me whether he accepted or rejected it.

I think that both rng_58 and antontrygubO_o (who were my coordinators) are very good at selecting problems. Even though my taste is not exactly the same as theirs, I can understand all the rejections I received from them. In this phase they were both very responsive.

The rejection rate was very similar on the two platforms: I proposed 11 problems for the AGC (which contains 6 problems) and 15 problems for the GR (which contains 8 problems).

The main role of the coordinator is to discard bad problems and decide when there are enough to create a contest. If a coordinator rejects a problem, do not insist: the decision belongs to him, not to you. Rejections are never pleasant, but you should always remember that the coordinator's goal and yours are the same: creating a nice contest.

In general, I strongly advise anyone to propose only decent problems to the coordinator (see what I said in the "Creating problems" section). It's an unnecessary waste of time (both yours and the coordinator's) to propose bad problems.

Contest preparation

Preparing a problem consists of: writing the statement, writing the solution, writing the generator for the tests, writing the validator for the tests, and, when necessary, some additional helper programs (checker, interactor...).
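To make these components concrete, here is a minimal validator sketch written with testlib (the checking library used, for example, on Polygon). The input format, the variable names, and the bounds below are hypothetical, chosen only for illustration:

```cpp
// Hypothetical input format: a line with n, then a line with n space-separated
// integers a_1..a_n. The bounds (1 <= n <= 200000, 1 <= a_i <= 10^9) are made up.
#include "testlib.h"

int main(int argc, char* argv[]) {
    registerValidation(argc, argv);

    int n = inf.readInt(1, 200000, "n"); // rejects the test if n is out of range
    inf.readEoln();
    for (int i = 0; i < n; i++) {
        inf.readInt(1, 1000000000, "a_i");
        if (i + 1 < n) inf.readSpace();
    }
    inf.readEoln();
    inf.readEof(); // the file must end here: no trailing whitespace or garbage
    return 0;
}
```

A validator in this style is strict about whitespace, which is exactly what you want: it guarantees that every test, including hand-made ones, respects the constraints stated in the problem.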

The preparation phase was rather different on the two platforms.

The AtCoder contest was prepared in a Dropbox folder. To upload the problems into the contest interface (which is exactly the one you see when competing on AtCoder) I had to run a PHP script. Preparing the interactive problem (AGC44-D) was an immense pain due to the complete lack of any documentation (even rng_58 had no idea how to do it). I consider this whole preparation environment pretty bad.

The Codeforces contest was prepared on Polygon. Polygon does not have a very modern UI and is not perfect, but I consider it a very good environment for preparing problems. It is vastly superior to the AtCoder way of doing it. I like that Polygon makes it very hard to make stupid mistakes (for example, validators are automatically run on all inputs, it checks that tests cover extreme values of the variables, etc...).
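Checkers are also written with testlib. As a hedged illustration (not a checker from my contests), here is a minimal one for a hypothetical problem whose answer is a single integer that must match the jury's answer:

```cpp
// Minimal testlib checker sketch for a hypothetical problem whose answer is a
// single integer; the participant's output must equal the jury's answer.
#include "testlib.h"

int main(int argc, char* argv[]) {
    registerTestlibCmd(argc, argv); // gives access to inf (input), ouf (output), ans (jury answer)

    long long jury = ans.readLong();
    long long participant = ouf.readLong();

    if (participant != jury)
        quitf(_wa, "expected %lld, found %lld", jury, participant);
    quitf(_ok, "answer is %lld", participant);
}
```

For a case this simple you would normally rely on one of Polygon's standard checkers; a custom checker really pays off when the problem admits many valid answers.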

For both contests, the problemset was perturbed slightly during the preparation phase (a problem was replaced in the AGC and a problem was modified in the GR) and I kept interacting with the coordinators. rng_58 was always helpful and responsive: he helped me make decisions and guided me through the process (anticipating what would come next and answering my doubts). antontrygubO_o had a very long response time (something like 10 days to answer yes/no to a question that was important to proceed with the preparation) and I was quite frustrated by his behavior. At the time, I expressed my concerns to him, but almost nothing changed. Since I think this kind of ghosting is rather common in this phase, I suggest that all problemsetters be as independent as possible during it and interact with the coordinator only about major changes to the problems.

In this phase it is very common to "change opinion" about a problem. Something that seemed cool before the preparation might seem more standard after it, and something that looked easy to implement might reveal itself to be a torture. Do not pay too much attention to these feelings; your judgement is too biased to be useful.

Preparing a problem is a dull job; preparing a whole contest is even worse. But keep in mind that the sum of "suffering during preparation" + "regret after the contest" is a universal constant. If you get something wrong during this phase, or you neglect something, it will affect the contest.

I made a number of preparation mistakes but, in hindsight, I consider all of them minor. After AGC44 I was very hard on myself for having given a known problem (problem D) in the contest. Now, I am confident enough to say that it can happen and that it was an acceptable mistake. I suggest that you, future authors, be hard on yourselves before the contest but somewhat lenient after. It is normal to make some minor mistakes; it is part of being human.

Let me conclude this section with the order I follow when preparing a problem:

  1. Carefully write the statement with all the constraints. Changing a constraint later is annoying and requires a lot of additional work, hence it is better to get it right the first time.
  2. Write a checker (if necessary), a validator, and a stupid generator (see the generator sketch after this list).
  3. Write the optimal solution and (if appropriate) a naive solution.
  4. Check that everything works together, that the time-limit and the constraints are reasonable, that the solution is correct.
  5. Write a generator for hard testcases (this is the worst part of the whole creation of a contest).
  6. Choose the parameters of the generator for pretests and tests.
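For completeness, here is the kind of "stupid generator" mentioned in step 2, again written with testlib; the command-line parameters and the output format are hypothetical. Generators for hard testcases (step 5) are built the same way, but the plain randomness is replaced by problem-specific adversarial structure:

```cpp
// Minimal testlib generator sketch: prints a hypothetical test with n followed
// by n random integers in [1, maxValue]. Parameters come from the command line.
#include "testlib.h"
#include <cstdlib>
#include <iostream>
using namespace std;

int main(int argc, char* argv[]) {
    registerGen(argc, argv, 1); // seeds testlib's rnd from the command-line arguments

    int n = atoi(argv[1]);        // hypothetical: test size
    int maxValue = atoi(argv[2]); // hypothetical: upper bound for the values

    cout << n << "\n";
    for (int i = 0; i < n; i++)
        cout << rnd.next(1, maxValue) << " \n"[i + 1 == n];
    return 0;
}
```

Calling it as, say, ./gen 200000 1000000000 produces a maximal random test; since the seed is derived from the command line, the same invocation always reproduces the same test.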

Testing phase

Until this moment only you (and your coauthors) and the coordinator have had access to the problems. Now the coordinator (and you, if you like) will invite some testers to test the problems. To interact with the testers, we used a Slack workspace for the AGC and a Discord server for the GR (in both cases the environment was created by the coordinator).

Ideally a tester will:

  1. Virtually simulate the contest as if he were really competing, taking notes of what he is doing and what his impressions are during the contest.
  2. Upsolve the problems he did not manage to solve during the simulation.
  3. Share detailed feedback about the problems with you.

I have said above that the perfect contest is yet to come... on the other hand the perfect tester exists and is dacin21. Let me thank him here as, for both my contests, the quality of his testing was astonishing.

The testing, both with AtCoder and with Codeforces, was below my expectations (for different reasons).

For the AGC there were two official testers (which would actually be enough), but neither of them virtually simulated the contest (which makes it hard to estimate the problems' difficulty appropriately). To make up for this issue I asked two strong contestants to virtually simulate it (Rafbill and dacin21). Their feedback played a big role in the decision of the problems' scores (which, a posteriori, were quite appropriate). On the other hand, the official testers were very scrupulous in testing the problems and I am pretty sure that any mistake in the preparation would have been noticed.

For the Global Round there was a large number of testers. The issue here was that some of the testers were not mature enough to provide useful feedback. Someone who has been into competitive programming for three months will never provide useful feedback, someone who is not able to articulate his thoughts beyond "cool" or "orz" or "shit" will never provide useful feedback, and someone who considers obvious the problems he is able to solve and too hard those he cannot solve will never provide useful feedback. Moreover, even if they were an army, there were exactly 0 solutions from Codeforces-provided testers for problems G and H (here dacin21 and Giada saved the day). On the other hand, the large number of virtual participations made it easy to gauge the difficulty of the problems.

I recommend that any author take testers' opinions and comments with a grain of salt. Remember that the responsibility is on you (the author) and you have the right to reject the testers' suggestions.

On the other hand, testers' feedback is of utmost importance to gauge the difficulty of the round and to polish the statements. Judging the difficulty of your own problems is very hard (at least for me) and it is much better to just guess the difficulties based on testers' results. For the GR, I think that the testers' feedback improved a lot the quality of problems A, B, and C (which had some serious flaws before the testing phase).

Scheduling the round

At a certain point, the coordinator will schedule your round. The process is simple: choose a day that is OK for the author and check whether there is an overlap with some other contest.

For AGC44, the round was scheduled "as soon as possible", while GR11 could have been scheduled much earlier (and was delayed mostly to hold the Global Round in October). It was not a problem at all for me to wait for GR11 (actually, it gave me more time to polish the statements/editorials).

Writing editorials

Once the problems are prepared and everything seems to be working (= you expect no more changes to the statements), you shall start writing the editorials.

For me, writing the editorials carefully is the only way to be sure about the correctness of the solutions of hard problems.

In my (not so) humble opinion, the quality of the editorials for AGC44 and GR11 is above average, and thus I feel entitled to give some more precise advice on how to write editorials.

The editorial should be a pleasure to read, should give insight into the solution, should provide a detailed proof of the solution and a careful description of the algorithm. Even better if it contains some additional comments on the problem. Just writing a sketch of a solution is much easier than writing a proper editorial, but it gives way less to the reader.

If you are not used to academic writing, it might be nontrivial to produce a decent editorial. There are two very common mistakes to avoid: skipping hard steps and writing in terrible English.

  • Skipping hard steps is the most natural thing when writing proofs. We perceive as boring and involved (and so we skip) what we do not fully understand. You should be able to detect this feeling and do exactly the opposite. The errors (even in math papers) very often lie in an "obvious", "easy", or "straightforward" argument that is skipped.
  • A large part of the competitive-programming community (including myself) does not speak English as a first language and nobody expects you to write perfectly. Nonetheless, you should strive for correctness and clarity. Use simple and short sentences and double-check for typos. If possible, ask someone to proofread the editorials.

In my case (in particular for GR11) I had plenty of time to write the editorials and I decided to write them carefully. I also decided to write hints, which were much appreciated. Writing hints requires a negligible amount of time, hence I would suggest that any author provide them.

Final checks before the contest

On both platforms, before the contest the coordinator focuses on your contest in order to check stuff and/or translate the statements.

For AGC44, the final check lasted an immense number of hours (I cannot check since my Slack account expired, but something like 12 hours). During this time, rng_58 and I double-checked every single detail of the contest. It was painful, but in the end I felt safe about everything.

For GR11, the final check was more shallow but still accurate (which makes sense, since Polygon automatically checks a lot of stuff). We both checked the statements (and antontrygubO_o translated them into Russian) and all the validators and checkers. antontrygubO_o made some minor changes to the statements to improve readability. In this phase there was some friction between me and antontrygubO_o, as some of his changes did not respect my right as the author to choose the style of the statements (he changed a title and inserted a joke; both changes were reverted).

As my experiences show, any problemsetter should expect to be very busy the few (or not so few) hours just before the contest.

One last thing: it is absolutely normal to be a bit stressed just before the round. Don't worry, it will go smoothly!

During the contest

On both platforms, during the contest I had to answer questions (together with the testers and/or the coordinator). In both cases, I was expecting a large number of questions but the real amount was rather small (18 for AGC44, 69 for GR11). Hence, during the round I chilled watching the standings.

I enjoyed watching GR11 much more, for three reasons.

  1. The number of participants was much larger, which is cool.
  2. I was not stressed by the fact that one of the problems was known (AGC44 D).
  3. I had fun interacting with antontrygubO_o during the contest. Indeed, the chatting environment was friendlier with him than with rng_58 (this is absolutely not a criticism of rng_58 and was not an issue).

After the contest

Finally the contest is finished and you can relax... not yet! It is now time to post the editorial, answer the post-contest comments, and list the winners in the announcement. Then you can chill and watch Errichto's stream on the contest!

Let me spend a couple of words on the feedback one receives after the contest. It is very likely that someone will not like your contest. It is also very likely that someone will hate your contest. It is also very likely that someone will offend you/your problems. Haters gonna hate, ignore them.

On the other hand, take seriously all concerns that are properly expressed and raised by experienced participants. It is very hard to get honest feedback on the problems from contestants (even harder if the feedback is negative), hence don't waste the feedback provided by negative comments.

In order to receive some more feedback, I personally asked a couple of LGMs after my contests.

Thanks for reading

Thank you for reading, I hope you enjoyed it.

It goes without saying that if you have any questions about problemsetting, you shall ask them in the comments. I will answer all the interesting questions. If you have some problemsetting experience, feel free to share in the comments whether or not your experience is similar to what I have described.

Since I really care about this blog (mostly because I would have loved to read something like this in the past), I will try to keep it in the recent blogs for some days even if it does not generate many comments (which, sadly, is the only way to keep blogs visible on Codeforces).


» 4 years ago

Proposing to a coordinator

Do you know why this sounds really funny out of context?

» 4 years ago

Thank you Sir for this blog, really enjoyed reading it.

» 4 years ago

I have just seen the GR11 editorial (sadly I couldn't do the contest). I love the way you used spoilers and the hints section; every editorial should use spoilers. I really hate it when I check the editorial for some problem and by mistake see parts of another problem's editorial.

One thing you might have missed in this blog: who chooses the scoring distribution on Codeforces? Is it the author or the coordinator? And why is it usually not announced until the last minute?

  • » » 4 years ago

    In my case (but I guess it is always like this), the author and the coordinator discuss the scoring distribution until they agree.

    I have no idea why it is usually published just before the contest (I published the scoring distribution 4 days before the CF contest).

» 4 years ago

Thank you very much for the prepared round and for the detailed feedback. I read it with great pleasure. I am sure that all Codeforces coordinators will read it (or already have). I will be happy to see more of your problems on Codeforces!

» 4 years ago

Since I wrote a solution to problem H of GR11 during the testing (and I'm bringing the blog back into the recent posts), give me contribution!

» 4 years ago

Please remember to test with other languages in your testing phase!

I had pretty painful experiences with your problems (to the point that I remember your name; usually I don't care who authored which problem).

For example, your agc044_b and 1427C - The Hard Work of Paparazzi require O(500^3) and O(500 * 100000) operations with a 2 second TL each. You can probably increase that to 3s or 4s without changing the intended solution.

Those problems are probably only around expert level, but there are multiple Python masters who TLE'd on them. It's really painful to miss a lower-rated problem only because of language-specific constant factor optimizations!

  • » » 4 years ago

    I have a language-specific factor optimization for you. Use a faster language.

    • » » » 4 years ago

      I expected that quip, but I didn't think it would come from you. I thought you also agreed that CP should be about algorithmic puzzle solving.

      For puzzles, you want to set the TL to differentiate between different algorithms. The bruteforce solution is often at least 10x-1000x slower than the intended one, which is much larger than the constant factor between languages. Usually Java/PyPy/etc. can get within 2x of C++. So it's almost always possible to set a TL that will work for more languages without changing the puzzle.

      For contrast, the other extreme would be to set the TL so that only the most optimized C++ with inlined assembly and SIMD instructions can pass. To some, that's probably still "fair", just testing a different skillset. But if you do that, you would alienate everyone who is here for the puzzles and doesn't want to write assembly.

      Basically, if you can allow for more inclusiveness without ruining the problem, why not do it? Just because everyone at the top only uses C++, that doesn't mean you should kill language diversity at the lower end of the competition too. If you're planning to do that, just restrict the grader to GCC only and call it a C++ contest.

      FWIW AtCoder usually does a pretty good job at this and still has a substantial non-C++ userbase. That one problem I highlighted from dario2994 was the only instance where I ran into constant factor issues (at least around my rating level). I otherwise really liked that contest. I hope this can be viewed as constructive criticism.

      • » » » » 4 years ago

        You are absolutely right when you say "Basically if you can allow for more inclusiveness, without ruining the problem, why not do it?". Though, you have to understand that sometimes you cannot (or it would require additional work) and you should not take it for granted.

        By the way, even if you were downvoted to hell and even if I do not agree with your opinions, yours is constructive criticism, so I thank you.

      • » » » » 4 years ago

        I agree on a general level, but there are a lot of subtle problems.

        Not all problems are "just puzzles". Yeah, most problems have bruteforces in $$$O(2^{n})$$$ or worse, but it's not only bruteforces we are trying to cut off. Even if the bruteforce is very slow, we still might want to distinguish between different smart solutions. $$$O(n \log^{2} n)$$$ is usually better than $$$O(n^{2})$$$ and we are usually able to cut $$$O(n^{2})$$$. But the difference between them will certainly not be 1000x and sometimes not even 10x.

        When I started problemsetting I was told that every problem should have C++ and Java solutions running in TL/2, probably because ICPC guarantees that. And I tried to adhere to it, but for several years now I haven't written model solutions in Java. The reason is that I don't compete in Java. Let me explain...

        I can write code in C++, copy it to Java and then fix compile errors until there are none left. It usually works; Java is not that different in syntax. But that's not how people who compete in Java write their code, is it? As you use the language of your choice, you start to learn its quirks, the things you don't want to use because they are slow. Does one of the sorts in Java still work in quadratic time? Like, I have heard that there is such a problem, but I don't even know which sort is good and which is not. What I want to say is that I don't write code in other languages as it is intended to be written, which can make it slower.

        It is even worse for Python: since many built-in functions are written in C and heavily optimized, it is OBLIGATORY to use them to get a reasonable runtime. If I just port code from C++ to Python verbatim (just changing the syntax), it will probably run 10-100x slower than properly written Python code. Did you know that multiplication for BigInt in Python is not quadratic? I didn't, until I learned it by receiving half of the submissions to my FFT problem just using Python multiplication in a smart way. Basically, there is an optimized prewritten FFT in Python's standard library. How would anyone who doesn't use Python in everyday life know that?

        So, setting the TL using an implementation in an inherently slower language which I don't actually use and don't know well enough can be a terrible mistake. Solution: use the language you are actually using and know well enough. We don't want to allow people to set problems only if they can write idiomatic code in all languages supported by the platform, do we?

        Different languages exist not only for diversity, but also because they have different use cases. CP is (among other things) about writing fast code. Therefore, you should use fast languages for CP.

        • » » » » » 3 years ago

          I know it's a pretty old comment, but care to share more about that problem of yours that had Python FFT solutions? Naively I'd assume that even if Python multiplies big integers using FFT, its runtime is still too slow to benefit from it in a contest environment.

          That aside, while I agree with most of your reasoning, I do wish people guaranteed performant reference solutions in Java. And the reason is not so much speed as memory consumption for problems with large inputs. I did this Belarus and Baltics 2021 virtual contest yesterday and spent about an hour optimizing solutions for 2 different problems, because reading the input and stuffing it into basic data structures exceeded the 256MB memory limit :|

          • » » » » » » 3 years ago

            The problem reduced to multiplying a lot of small numbers in long arithmetic; the intended solution was D&C with FFT, and D&C with Python int multiplication passed.

          • » » » » » » 3 years ago

            I looked into this a little bit. It turns out CPython and PyPy (1, 2) both multiply large integers with Karatsuba multiplication, which needs about $$$n^{\log_2{3}} \approx n^{1.6}$$$ operations to multiply $$$n$$$-digit integers. But this is done in the machine-word base, so 64-bit PyPy multiplies two million-decimal-digit numbers in about half a second on Codeforces custom invocation, which I guess is fast enough to be somewhat usable. Python is also polite enough to provide fast and convenient to_bytes and from_bytes operations on bigints. But from what I understand, many other operations on Python bigints are still usually quadratic. From my very quick testing, conversion to and from strings is quadratic, and division is quadratic in CPython, but not PyPy.

            GHC-Haskell and FPC-Pascal provide GMP-backed bigints by default, which use a variety of multiplication algorithms and are definitely fast enough to be usable as a FFT-convolution substitute. (Multiplying two 10-million-digit integers takes about 0.1 seconds on custom invocation.) See, for example, this submission. (This one demonstrates its existence for FPC, but only uses it for normal bigint stuff. I'm not sure if this is the problem Um_nik mentioned. It has library bigint solutions, but none in Python that I can see.)

  • » » 4 years ago

    Dear python_feedback, posting from an anonymous account removes value from what you want to say. In any case, since you do not seem to be a troll, I will answer your criticism.

    I don't consider the timelimits of AGC44-B and GR11-C to be strict.

    • In both problems there is a relatively small factor between the trivial solution and the desired one.
    • I firmly believe that a large timelimit is a strong hint towards the solution.
    • The C++ solutions (without any kind of optimization or trick) need less than 0.5s on both problems, the timelimits being 2s.
    • Both problems are solvable in Python (I implemented the solutions before the contests).

    For these reasons, I do not regret the choice of the two timelimits (while I regret the choice of the timelimit for GR11-G).

    Even if it is not the case for these two problems, let me add that whoever chooses to compete in Python is choosing a very slow language and should be aware that some problems might require optimizations to get accepted (or it might even be impossible to get accepted).

» 2 years ago

Thanks sir for giving insights into problemsetting :)