hxu10's blog

By hxu10, history, 3 years ago, In English

This contest starts in less than 20 minutes, and I couldn't find a blog about it, so I decided to write one. Here is the link:
Code Jam 2022 Round 2 link

Last year I ranked 1672 and failed to qualify for Round 3. This year I feel I have improved a lot, and I hope I can get a T-shirt. Good luck, everybody, and any discussion is welcome.

(Update: in the end I ranked 484 and won a T-shirt, so excited, see you in Round 3!)

+123

»
3 years ago, # | +17

Imagine qualifying to round 2 :(

»
3 years ago, # | Rev. 2, +49

I think I should try every problem's subtasks before trying to solve a full problem next time...

The bitmask DP and the $$$n^2$$$ DP brute forces for Jelly and I, O Bot respectively are practically a free 21 points.

I opened last problem with 10 mins left and somehow coded the subtask (and fixed a WA) in the last 8 mins...

  • »
    »
    3 years ago, # ^ | 0

    How does the DP brute force for I, O Bot work? I only thought about it for the last 10 minutes.

    • »
      »
      »
      3 years ago, # ^ | +9

      Realize that the positive and negative positions are independent problems, so assume positive positions. Sort the list of zeroes and the list of ones. Any operation you do is best done on the closest remaining numbers. So the DP is a[i][j] = the minimal cost to deliver the i closest zeroes and the j closest ones. From every state there are at most 5 transitions.
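
A minimal sketch of that DP for one side of the origin, in the spirit of the comment above; the names are illustrative, and it assumes the trip model from the statement (one ball per compartment, reshaping a ball costs $$$C$$$, and a trip whose farthest pickup is at distance $$$p$$$ costs $$$2p$$$):

#include <bits/stdc++.h>
using namespace std;

// zs = distances of 0-shaped balls on this side, os = distances of 1-shaped balls,
// C  = assumed cost of reshaping one ball so that two same-shaped balls can share a trip.
long long solveSide(vector<long long> zs, vector<long long> os, long long C) {
    sort(zs.begin(), zs.end());
    sort(os.begin(), os.end());
    int n = zs.size(), m = os.size();
    const long long INF = LLONG_MAX / 4;
    // dp[i][j] = minimal cost to have delivered the i closest 0s and the j closest 1s
    vector<vector<long long>> dp(n + 1, vector<long long>(m + 1, INF));
    dp[0][0] = 0;
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            if (dp[i][j] == INF) continue;
            long long cur = dp[i][j];
            if (i < n)                       // carry the next 0 alone
                dp[i + 1][j] = min(dp[i + 1][j], cur + 2 * zs[i]);
            if (j < m)                       // carry the next 1 alone
                dp[i][j + 1] = min(dp[i][j + 1], cur + 2 * os[j]);
            if (i < n && j < m)              // one 0 and one 1 in the same trip
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], cur + 2 * max(zs[i], os[j]));
            if (i + 1 < n)                   // two 0s in one trip, reshape one of them
                dp[i + 2][j] = min(dp[i + 2][j], cur + 2 * zs[i + 1] + C);
            if (j + 1 < m)                   // two 1s in one trip, reshape one of them
                dp[i][j + 2] = min(dp[i][j + 2], cur + 2 * os[j + 1] + C);
        }
    return dp[n][m];   // run this for the positive and the negative side and add the results
}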

»
3 years ago, # | +181

Basically my performance during this round:

Spoiler
»
3 years ago, # | +14

Oh, the $$$O(n^2)$$$ solution failed A.

I'm disqualified and my day is ruined.

sobs

  • »
    »
    3 years ago, # ^ | +21

    I think a lot of people got AC with O(n^2) on the 3rd test set in A, because the test set did not represent the actual worst case.

»
3 years ago, # | +29

The analysis for test set 1 in "Saving the Jelly" is incredibly terse.

"With N≤10, recursively enumerating all N! orders that Mr. Jolly could call out the children's names is sufficient. The total complexity is roughly O(N!×N)."

How do we handle sweets that are at an equal distance from a given child? Backtracking? I guess the worst case is probably all children having 2 sweets at an equal distance, which is just a factor of 2^(N/2) ~ 32. I wonder if a Python solution would pass.

  • »
    »
    3 years ago, # ^ | +3

    You can fix any order of equally distant sweets (and try only the first one in that order). It follows from the proof of the correct solution for the large test set. It would make sense to mention that in the analysis, though.

    • »
      »
      »
      3 years ago, # ^ | 0

      I thought it is provable only if one knows how to solve the large test set. That's weird...

      • »
        »
        »
        »
        3 years ago, # ^ | 0

        I agree, I don't understand how to provably solve the small test set without also solving the large one :)

        • »
          »
          »
          »
          »
          3 years ago, # ^ | +30

          I did bitmask DP for C small in the last half hour after bricking very hard. The complexity is $$$O(4^n n \sqrt{n})$$$. The dp keeps track of which subsets of kids and sweets are left.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ | Rev. 4, +10

            The direct bitmask DP brute ($$$dp[childmask][candymask] = \text{is reachable}$$$) that might appear to be $$$4^n \cdot n^2$$$ actually has a much smaller constant factor and works within the time limit.

            Since $$$popcount(childmask) = popcount(candymask)$$$ for any valid state we can reach, the actual complexity is something like

            $$$2^{2n + 1} + \sum_{i=0}^{n} {n \choose i} \cdot {n + 1 \choose i} \cdot i \cdot n$$$

            which is around $$$2 \cdot 10^7$$$ operations for $$$n = 10$$$. So even with an increased constant factor this runs comfortably in 10s for $$$T=100$$$.
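
A minimal sketch of this reachability DP (names are illustrative; it assumes sweets[0] is the blue sweet that has to survive, and compares squared distances to stay in integers). The actual problem also asks for the order itself, which can be reconstructed by additionally storing the transition used for each reachable state:

#include <bits/stdc++.h>
using namespace std;

using pt = pair<long long, long long>;

// kids: n children, sweets: n + 1 points with sweets[0] assumed to be the blue one.
// Returns whether some calling order (with favourable tie-breaking) keeps the blue sweet.
bool feasible(const vector<pt>& kids, const vector<pt>& sweets) {
    int n = kids.size();
    auto d2 = [](const pt& a, const pt& b) {
        long long dx = a.first - b.first, dy = a.second - b.second;
        return dx * dx + dy * dy;
    };
    // reach[km][sm] = this set of kids can have been called, taking exactly this set of non-blue sweets
    vector<vector<char>> reach(1 << n, vector<char>(1 << n, 0));
    reach[0][0] = 1;
    for (int km = 0; km < (1 << n); km++)
        for (int sm = 0; sm < (1 << n); sm++) {
            if (__builtin_popcount(km) != __builtin_popcount(sm)) continue;   // the pruning discussed above
            if (!reach[km][sm]) continue;
            for (int c = 0; c < n; c++) {
                if (km >> c & 1) continue;               // child already called
                long long blue = d2(kids[c], sweets[0]);
                long long best = LLONG_MAX;
                for (int s = 0; s < n; s++)              // closest remaining non-blue sweet
                    if (!(sm >> s & 1)) best = min(best, d2(kids[c], sweets[s + 1]));
                if (best > blue) continue;               // this child would grab the blue sweet
                for (int s = 0; s < n; s++)              // ties may be broken toward any closest non-blue sweet
                    if (!(sm >> s & 1) && d2(kids[c], sweets[s + 1]) == best)
                        reach[km | 1 << c][sm | 1 << s] = 1;
            }
        }
    return reach[(1 << n) - 1][(1 << n) - 1];
}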

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ | +10

            I did the same thing, though I thought it was $$$O(4^{n}n^2)$$$. How did you get the square root in the complexity?

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ | +26

              I did the same thing, but I already roughly analyzed the runtime benefit of only reaching states that have the same number of kids as sweets. I calculated:

              $$$n^2 \sum_{i=0}^n{ {{n}\choose{i}}^2} = n^2 {{2n}\choose{n}} = O(4^n n \sqrt{n})$$$
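
For reference, the square root comes from the standard (Stirling) estimate of the central binomial coefficient:

$$${{2n}\choose{n}} = \Theta\left(\frac{4^n}{\sqrt{n}}\right), \quad \text{hence} \quad n^2 {{2n}\choose{n}} = O(4^n n \sqrt{n}).$$$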
    • »
      »
      »
      3 years ago, # ^ | +5

      Thanks for the pointer, that makes sense. Also, I wonder how to do it in O(N!×N). My implementation pre-computes the ordered list of sweets by distance for each child. Then, for each child in each permutation, it iterates over the sweets until it finds the first one that has not been "taken" yet. This is O(N!×N^2) and TLEs with PyPy.

  • »
    »
    3 years ago, # ^ | Rev. 2, 0

    Even I was confused after reading this, but one of my friends gave an explanation which sounds intuitively correct to me.

    Suppose there is a participant $$$P_1$$$ for whom two chocolates $$$a$$$ and $$$b$$$ are equally close (both are equivalent here), but suppose there is some remaining $$$P_2$$$ for whom only $$$a$$$ is closer than the blue candy.

    In this case, it's fine if we wrongly say it's impossible, since there will be some other permutation where $$$P_2$$$ comes before $$$P_1$$$ and this isn't a problem. More formally, the permutation where $$$P_1$$$ is moved to just after $$$P_2$$$ in the initial permutation should suffice.

»
3 years ago, # | +108

The first problem was the worst among all Code Jam problems this year!

»
3 years ago, # | +11

Adhoc Jam

»
3 years ago, # | Rev. 2, +34

KEKL, got AC on B thanks to OEIS (https://oeis.org/A022846)

How?
»
3 years ago, # | Rev. 2, +197

Ugly A and B. And problems were too difficult to properly choose the top 1000.

What's the brute force for $$$N \leq 10$$$ in C? Trying every order of children is not enough because of ties. Do we just say that there can't be many ties? What's the proof? I implemented $$$O(2^{2n})$$$ dp. EDIT: solved in comments above https://mirror.codeforces.com/blog/entry/102849?#comment-912506

  • »
    »
    3 years ago, # ^ | +28

    I think I have a different bash than the one described in the comments? I iterated over all possible pairings of children to sweets. Then I made a graph based on whether each child had to be before another child based on their distances and ran topological sort. Overall this takes $$$\mathcal O(N^2 \cdot N!)$$$.
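
A minimal sketch of that brute force; the names are illustrative, it assumes sweets[0] is the blue sweet, compares squared distances, and only reports feasibility rather than reconstructing the order:

#include <bits/stdc++.h>
using namespace std;

using pt = pair<long long, long long>;

// kids: N children, sweets: N + 1 points, sweets[0] assumed to be the blue one.
bool bruteForce(const vector<pt>& kids, const vector<pt>& sweets) {
    int n = kids.size();
    auto d2 = [](const pt& a, const pt& b) {
        long long dx = a.first - b.first, dy = a.second - b.second;
        return dx * dx + dy * dy;
    };
    vector<int> perm(n);
    iota(perm.begin(), perm.end(), 0);   // perm[c] = index of the non-blue sweet given to child c
    do {
        vector<vector<int>> g(n);
        vector<int> indeg(n, 0);
        bool ok = true;
        for (int b = 0; b < n && ok; b++) {
            long long own = d2(kids[b], sweets[perm[b] + 1]);
            if (d2(kids[b], sweets[0]) < own) { ok = false; break; }   // blue strictly closer: bad pairing
            for (int a = 0; a < n; a++)
                if (a != b && d2(kids[b], sweets[perm[a] + 1]) < own) {
                    g[a].push_back(b);   // a's sweet must be taken before b is called
                    indeg[b]++;
                }
        }
        if (!ok) continue;
        // Kahn's toposort: if every child can be scheduled, this pairing gives a valid order.
        queue<int> q;
        for (int v = 0; v < n; v++) if (indeg[v] == 0) q.push(v);
        int done = 0;
        while (!q.empty()) {
            int v = q.front(); q.pop(); done++;
            for (int u : g[v]) if (--indeg[u] == 0) q.push(u);
        }
        if (done == n) return true;
    } while (next_permutation(perm.begin(), perm.end()));
    return false;
}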

  • »
    »
    3 years ago, # ^ | Rev. 2, 0

    You can try every possible assignment of children to sweets and then reduce the problem to a toposort problem, where you add an edge from a sweet to a child if that sweet is closer to the child than the child's assigned sweet.

»
3 years ago, # | 0

Ugh, I failed C because my construction method was bad.

But if I keep the bad construction method (basically just look through all people and assign the minimum edge), and when I detect a bad state I just shuffle everything and redo the matching, it passes.

(The bad construction worked for the small test set :( so I thought it might work, since I was sorting edges by distance and I thought the algorithm would do something to ensure the construction works, but that was clearly wrong lol)

  • »
    »
    3 years ago, # ^ | Rev. 2, 0

    Ah, even just redoing the matching seems to work, no need to shuffle. I think it's because the matching algorithm finds the lexicographically minimal matching, and since I sorted by distances there's going to be at least one edge that's minimal in the cycle that includes 1 (since it's lexicographically smaller).

»
3 years ago, # | +41

Did anybody else solve B (Pixelated Circle) by precomputing prefix sums over draw_circle_perimeter in $$$O(R^2)$$$ and hardcoding them?

  • »
    »
    3 years ago, # ^ | +3

    I didn't hardcode solutions into the source, but I did run a quadratic-time pre-computation at runtime. It was a bit more complex than the official solution, checking whether every possible (x, y) pair (up to some symmetries) would be left as a hole by draw_circle_filled_wrong.

»
3 years ago, # | Rev. 4, +46

Let me explain how I experienced this contest.

I finished the whole of problem 1 and the small test set of problem 2 very quickly. Then I proceeded to problem 3. Luckily, I could come up with the small test set (just a bitmask DP), but I had no idea about the large test set. Then I decided to go for problem 2, large test set. I had 75 minutes, and I thought that was enough. The bad thing is that the problem 2 large test set is so hard, and I could not finish it in the end.

Then it took more than 6 minutes to unveil the hidden test sets; if more than 1000 people had solved the large test set of problem 2, I was doomed, so it was the longest 6 minutes I have ever experienced.

Luckily, only 200+ people solved the problem B large test set, and my final ranking is 484. I am so excited, it is the first time I proceed to Round 3 and win a Code Jam T-shirt! The only regret I have is that I should have tried Q4's small test set instead of struggling with Q2 large; it is much easier.

»
3 years ago, # | +34

So, my solution for the 3rd problem:

For each child, sort the allowed edges from closest to farthest (ties in any order) and run Kuhn's algorithm (without the heuristic of pre-allocating an arbitrary matching). The idea is the claim that the matched edges can then be called in some valid order.

Can somebody prove or disprove it?
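
A minimal sketch of that matching step (illustrative names; adj[c] is assumed to already hold the sweets child c is allowed to take, i.e. those not farther than the blue one, sorted from closest to farthest):

#include <bits/stdc++.h>
using namespace std;

struct Kuhn {
    int n, m;                        // n children, m candidate sweets
    vector<vector<int>> adj;         // adj[c]: allowed sweets for child c, sorted from closest to farthest
    vector<int> matchSweet, matchChild;
    vector<char> used;

    Kuhn(int n, int m) : n(n), m(m), adj(n), matchSweet(m, -1), matchChild(n, -1) {}

    bool tryAugment(int c) {
        for (int s : adj[c]) {
            if (used[s]) continue;
            used[s] = 1;
            // take a free sweet, or evict its current owner if that child can re-augment
            if (matchSweet[s] == -1 || tryAugment(matchSweet[s])) {
                matchSweet[s] = c;
                matchChild[c] = s;
                return true;
            }
        }
        return false;
    }

    int solve() {                    // returns the matching size; == n means a full matching exists
        int res = 0;
        for (int c = 0; c < n; c++) {
            used.assign(m, 0);
            if (tryAugment(c)) res++;
        }
        return res;
    }
};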

  • »
    »
    3 years ago, # ^ | 0

    Did it pass? I did the same and it only got me the first subtask.

  • »
    »
    3 years ago, # ^ | +3

    I also had the same solution. The proof is based on the claim that in a min-cost perfect matching, there will be some student who is matched to their closest jelly. Then we can use induction.

    Let best[x] denote the closest jelly to student x, and match[x] the assigned jelly for student x. Also, for a jelly y let match[y] be the student who gets y.

    Then, if you keep going from x to best[x] for each student x, and from y to match[y] for each jelly y, you'll eventually get a cycle, and augmenting along it results in a better cost. (For each student x in the cycle you're replacing match[x] by best[x].)

    • »
      »
      »
      3 years ago, # ^ | 0

      I think I understand from your explanation why "there exists a matching" is equivalent to "there exists a valid order of calling the children".

      But what's not clear to me is why the unmodified Kuhn's algorithm will find it (and whether it always will). For example, if we had a black box which just finds any matching if one exists, that wouldn't be enough. E.g. for the case

      0 0
      1 1
      100 100
      0 0
      1 1
      

      we are allowed to print

      1 2
      2 3
      

      or

      2 3
      1 2
      

      but not

      1 3
      2 2
      

      which is also a valid matching

      • »
        »
        »
        »
          3 years ago, # ^ | 0

        Ah! I confused Kuhn's with Kuhn–Munkres. In my solution, I had costs equal to the rank in priority order, instead of just sorting the adjacent edges and running Kuhn's.

        • »
          »
          »
          »
          »
            3 years ago, # ^ | 0

            I thought about that, but I thought O(n^3) would be too slow. I know Kuhn's is technically cubic as well, but it's hard to have a big number of edges and hit the full NM complexity at the same time.

      • »
        »
        »
        »
          3 years ago, # ^ | +13

        A matching is invalid if and only if there is an augmenting cycle that gives every student in it a closer jelly.

        Kuhn's with your edge ordering will never produce such augmenting cycles: if an augmenting path were to produce such a cycle, it could instead have gone the other way around the cycle, which would use a lower-index edge at the vertex where it first enters the cycle. In particular, Kuhn's with your edge ordering would find this augmenting path instead of the original one. (In other words, we could xor the augmenting path with the cycle to get a different augmenting path that Kuhn's would find first.)

»
3 years ago, # | +273

Undoubtedly one of the CP contests of all time

»
3 years ago, # | +198

Not a well-balanced set at all. A huge portion of the people who qualified did so just or mostly through partials (including myself). That wouldn't be an issue if the subtasks weren't so terribly bland. I don't recall doing anything smart at all the entire round. It seems it all came down to who decided to skip the full solutions sooner.

»
3 years ago, # | +18

How does $$$O(N^2)$$$ TLE/MLE on A's hidden test set???

  • »
    »
    3 years ago, # ^ | +37

    Because $$$T$$$ is $$$100$$$ and $$$N$$$ can be as big as $$$10^4-1$$$. So $$$T\cdot N^2$$$ can be as big as $$$10^{10}$$$, which is too slow.

  • »
    »
    3 years ago, # ^ | Rev. 2, +8

    My $$$O(K)$$$ solution passed.

    • »
      »
      »
      3 years ago, # ^ | +19

      Same lmao

      If they didn't want it to pass, they shouldn't have given us 20 seconds!

      • »
        »
        »
        »
        3 years ago, # ^ | +27

        Or they should have given the usual $$$N\leq 10^5$$$ if they wanted $$$\mathcal{O}(N)$$$. Then no one would have even risked $$$\mathcal{O}(T\cdot N^2)$$$ for 20 seconds.

»
3 years ago, # | +37

One of the worst rounds I ever solved:
A is simply trash: you can't even easily fill the board to brute-force. I got the idea, but it's really hard to implement (look at tourist's 138 lines, for example). I thought that last year's problem with clocks was bad, but forget it, that was a brilliant task.
I was 15 minutes late, but when I opened the standings, I saw only about 5 people who had got AC. It's awful. The first problem should be easier.
B is trash too: there is no idea in the first subtask (just copy the code from the statement and count the diff), and the second is hard math with some handwaving proofs and praying that round and sqrt work fine:

bool correct = x * 1ll * x + y * 1ll * y <= R * 1ll * R;        // exact integer check
//bool correct = round(sqrt(x * 1ll * x + y * 1ll * y)) <= R;   // the statement's rounded check

Note that the result is different when using the NORMAL check of whether a point is inside the circle.

C: could be a nice problem. But how do you prove that N! works for the small test set??? I saw N = 10 and thought about permutations, but a moment later I found an example with equal distances that makes this solution run in N!^2. Solve it rigorously? Or just guess and get AC because there is nothing else?
D: the first subtask is nice. But the large one is some DP optimization with hard and handwaving proofs again.
I liked problem D, but one (the last!) out of four?
Also the use of subtasks was really strange: you either code some easy brute force or solve the full problem.

  • »
    »
    3 years ago, # ^ | -10

    For A I think I came up with a reasonably easy-to-implement solution. If N <= 5, just hardcode all solutions. For a larger grid of size N+2, all lengths can be achieved by going to the top-left corner of the inner grid of size N, either by not using shortcuts at all or by going immediately right and down; so we check whether we can still finish while avoiding shortcuts on this layer, and reduce the problem to the smaller one.

    Still didn't enjoy the task though.

    • »
      »
      »
      3 years ago, # ^ | Rev. 2, -15

      It's possible to code it fairly cleanly, without requiring any hardcoding of small $$$n$$$.

      Basically realize that the difference when we move from a cell to the "inner" adjacent cell depends only on

      1. The side length of the current subsquare.
      2. The side we're on (top / bottom / left / right).

      If you count these, for a given side length $$$s$$$ you get:

      • top: $$$4s - 5$$$
      • right: $$$4s - 7$$$
      • bottom: $$$4s - 9$$$
      • left: $$$4s - 11$$$

      Now we can realize that all odd numbers from $$$1$$$ to $$$4n - 5$$$ exist in this set. Since all skips are odd length, we can only save an even distance over the normal path.

      So now we can treat this as having a decreasing set $$$\{x, x - 1, x - 2, \ldots, 1\}$$$ (just multiplied by 2), and we want to select some of them such that their sum is $$$y$$$. Clearly the easy greedy of taking the largest one we can at any stage works fine.

      As for the actual cells we move through, we can realize the middle element of each initial layer works on any layer. Their value for the first layer is just $$$\frac{n + 1}{2}$$$ for the top side plus $$$n - 1$$$ for each of the remaining sides on the layer. Then calculating this as we move to inner layers is trivial using the previously calculated side length values.
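
A minimal sketch of just that greedy step (illustrative names; from the values $$$\{x, x-1, \ldots, 1\}$$$, each usable once, pick a subset summing to a target $$$y \le x(x+1)/2$$$ by always taking the largest value that still fits):

#include <bits/stdc++.h>
using namespace std;

// Returns the chosen values; the greedy always succeeds when y <= x * (x + 1) / 2:
// if y >= x we can safely take x, otherwise the single value y finishes immediately.
vector<long long> pickSavings(long long x, long long y) {
    vector<long long> taken;
    for (long long v = x; v >= 1 && y > 0; v--)
        if (v <= y) {
            taken.push_back(v);
            y -= v;
        }
    return taken;
}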

  • »
    »
    3 years ago, # ^ | +33

    Though I also did not like the problems AB, what you described was not the case for me:

    In problem A, after some thought, I ended up with something like 20 lines of meaningful code, which involved no casework (I do not consider printing IMPOSSIBLE when needed as "handling cases").

    In problem B there was no need to pray for precision; it was not so difficult to rewrite inequalities of the type round(sqrt(x)) < y until all sides are integers.

    Honestly, I do not know how to guess the solution to C without proving it. It probably depends on the solution itself. The only unproven thing about ABC I had during the contest was the fact that my solution for C was fast enough (Hungarian).

    You may refer to https://youtu.be/AaweAsTZT8o for details of my solutions, once it is uploaded.

»
3 years ago, # | +13

Such an $$$O(n^2)$$$ solution passed in I, O Bot:

// p01[i] = index of the '1' that the '0' at position i pairs with (or -1 if none);
// the inner loop jumps over already-matched blocks via p01[j] + 1, so the worst case
// is quadratic, but on the actual tests it was fast enough.
vector<int> p01(n+1, -1), g01(n+1, -1);
for(int i=n-2; i>=0; --i) {
	if(s[i] == 0) {
		for(int j=i+1; j<n; ) {
			if(s[j] == 1) {
				p01[i] = j;           // nearest free '1' to the right
				break;
			}
			if(p01[j] == -1) break;   // that '0' has no partner, so neither does i
			j = p01[j] + 1;           // skip the block already matched by j
		}
	}
}
»
3 years ago, # | 0

I had cleared Round 2 with 100 points, in my dream :)

»
3 years ago, # | +14

It could have been worse... (no pun intended)

Round 2 last year was actually much worse. There were 2 cakewalks, 284 full scores and solving 3/4 tasks was not enough to qualify.

  • »
    »
    3 years ago, # ^ | 0

    giga plus

  • »
    »
    3 years ago, # ^ | +62

    I thought this year's round 2 was much worse than last year's. At least last year, the problems were ok. This year, I did not enjoy any problem I attempted (for the same reasons that many other people have commented above). Halfway through this contest, I realized I wasn't even having fun anymore, even though I eventually qualified.

    • »
      »
      »
      3 years ago, # ^ | +8

      My contest experience is summed up by me repeatedly asking myself "What is happening???" for 150 minutes straight. This is despite having the right ideas for problems A and B early on and eventually implementing it. My family probably thought I was delirious. Well, they are right.

»
3 years ago, # | +111

The problems were so bad that after 2 minutes of the contest I lost all motivation to solve them. (Though, as jqdai0815 said, it could be worse.)

»
3 years ago, # | 0

I am astonished as to why highly experienced contestants could not solve both test sets for B. One could brute-force the solution locally to precompute all possible answers from $$$1$$$ to $$$10^5$$$ and simply look up the answer in an array in the actual submission.

  • »
    »
    3 years ago, # ^ | +18

    The limit on the source code is 100KB. You only get 1 byte per possible answer for precomputation.

    • »
      »
      »
      3 years ago, # ^ | 0

      Oh, I totally missed this point. Although you can still use some tricks to compress the data (Python's zlib, for example).

      • »
        »
        »
        »
        3 years ago, # ^ | +10

        Compressing doesn't help much when the numbers are mostly random. E.g., simply zipping a file containing all the answers only reduces the file size by about 2x, which is not enough for this problem.

    • »
      »
      »
      3 years ago, # ^ | Rev. 2, +10

      There is a nice precomputation solution by dacin21, which uses a clever trick (if I understand correctly, he only pasted prefix sums for n divisible by 20 and then computes the rest when needed).

      • »
        »
        »
        »
        3 years ago, # ^ | +8

        Yes, I used that approach too. I felt that computing the whole table was quite easy, but you have to compute the rest (the answer for a small interval) for each test as well; I think that part is pretty tedious to write.

    • »
      »
      »
      3 years ago, # ^ | Rev. 2, 0

      Each wrong circle, starting with $$$r = 2$$$, has either 4 or 8 more black cells than the previous circle; therefore, it can be encoded using 1 bit per layer. Here is the working code; I grouped bits by sextuplets encoded by [0-9A-Za-z.,].

»
3 years ago, # | +87

Shit round!! It wasn't even a CP round. People qualified for "global top 1000" using partial points which involved translating given pseudocode or some kind of stupid brute force.

»
3 years ago, # | +25

Horrible problemset

»
3 years ago, # | +94

"Round 2 is a different kind of a challenge, with no easy points on the table"

Interestingly enough, I disagree with "no easy points on the table" in both cases. Last year had a lot of full-scorers, and this year has at least B-small.

»
3 years ago, # | 0

During the contest: made a lot of mistakes, got bogged down in implementation, barely got through problems A and B, saw my rank was 2000, gave up... After the system tests: jumped up to rank 406.

Thanks, problem B! Finally getting that GCJ T-shirt

  • »
    »
    3 years ago, # ^ | +8

    Svlad_Cjelli I had the exact same experience! I was so despondent when my rank was 2000+ while struggling with problem B, thinking way over 1000 people already had full submissions. I kept trying to go for B in full, and when I submitted problem B and my rank was still 1800, I gave up hope. Totally shocked after the score reveal.

»
3 years ago, # | 0

Nice comeback bro

»
3 years ago, # | +3

Congrats on qualifying! Will be there next year too :)

»
3 years ago, # | +57

I was rather surprised that I succeeded on D large: my solution is quadratic time (basically the official solution for test set 1), just optimised to use linear space.

»
3 years ago, # | +30

Man. I get the whole visible/hidden thing adding excitement but this was a contest that just rewarded giving up on large solves. I was bitten in the backside for sticking too long with B large because it appeared highly likely I'd need it; turns out I could've just brute forced C small and been fine. I don't feel like brute force solving should be deciding things at this stage in the competition (though I wish I had done it).

  • »
    »
    3 years ago, # ^ | 0

    Brute force solving shouldn't, but understanding that you are better off reading all problems and submitting all of the easy subtasks probably should

    • »
      »
      »
      3 years ago, # ^ | Rev. 3, 0

      Disagree. That entirely depends on the contest, and you only know with hindsight how many large solves each question has. Last year (when I did qualify), had I followed that advice, I would have wasted valuable time submitting small subtasks and failed. This year, it transpired after the contest that such an approach would have sufficed, but when the hidden scoreboard said > 2000 solves on B, there was no way of knowing that.

      This contest rewarded giving up easily on large solutions. Last year's rewarded sticking at the complete problems. It feels like going to the effort of learning more complex techniques and honing implementation skills essentially counted for nothing — the cheap points won the day. To me that feels hollow, and not how a CP contest should be settled.

      • »
        »
        »
        »
        3 years ago, # ^ | 0

        It does depend on the contest and I completely agree with your second paragraph, but if you wanted to maximize your result you should've noticed that 10 + 11 easy points is greater than 16 probably harder points on a weird problem and you should've prioritized the easy ones.