EvenImage's blog

By EvenImage, history, 15 months ago, In English

Thank you for participating in this round! Problem I was authored by zhoukangyang, and the rest were authored by me.

Sorry for the weak pretests in problem B.

2061A - Kevin and Arithmetic

Hint 1
Solution

2061B - Kevin and Geometry

Hint 1
Hint 2
Hint 3
Solution

2061C - Kevin and Puzzle

Hint 1
Hint 2
Solution

2061D - Kevin and Numbers

Hint 1
Hint 2
Hint 3
Solution

2061E - Kevin and And

Hint 1
Hint 2
Hint 3
Solution

2061F1 - Kevin and Binary String (Easy Version)

2061F2 - Kevin and Binary String (Hard Version)

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution

2061G - Kevin and Teams

Hint 1
Hint 2
Hint 3
Solution

2061H1 - Kevin and Stones (Easy Version)

2061H2 - Kevin and Stones (Hard Version)

Hint 1
Hint 2
Hint 3
Solution

Draft in Chinese

2061I - Kevin and Nivek

Hint 1
Hint 2
Hint 3
Hint 4
Solution


»

Congrats jiangly on becoming jiangly :))


»

Fun fact for problem D: when I was testing this round, this problem was Div. 2 B with $$$n \le 500$$$. (=・ω・=)

»

Could someone please tell me why this code 302137216 for D got accepted while this one 302137051 didn't, even though the time complexity is the same?

»

If v[0] = 0, isn't it obvious that he is telling the truth, since there is nobody on his left?

»

E is a greed-bait.

»


Tourist -> <-

Congrats jiangly

»

Thanks to the authors for the amazing problems! I really enjoyed the round!

For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices $$$a$$$, $$$b$$$, $$$c$$$. Say the edge $$$a$$$-$$$b$$$ has color 1; then by induction there is a matching of size $$$\frac{n + 1}{3} - 1$$$ among the other vertices: if it has color 1 we are done, and otherwise the edges $$$a$$$-$$$c$$$ and $$$b$$$-$$$c$$$ must have color 1 as well (or we would get a 0-colored matching). But now we can make another choice for the vertex $$$c$$$, and since $$$a$$$-$$$b$$$ has color 1 we will still get that $$$a$$$-$$$c$$$ and $$$b$$$-$$$c$$$ have color 1. So every edge from $$$a$$$ or $$$b$$$ has color 1, as does every edge from $$$c$$$ for any $$$c$$$... and finally the whole graph has color 1, which is absurd.

I've found this helpful, since knowing the answer gives you the idea of getting one edge from every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.

»

C is a nice problem. Thank you

»

On problem D, I found a solution that actually builds A into B rather than the reverse, by recursively trying smaller elements to build into B. Definitely way more complicated than the intended solution, though.

»

Sorry for the weak pretest in problem B

Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.

»

I was thinking of the same idea for E but couldn't figure out that this is a convex function, and hence couldn't reach the conclusion that taking the $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there any other solution that doesn't use it?

  • »

You could do the same thing with brute force, up to storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted-list data structure, where initially you add all arr[i][0]. Repeatedly remove the maximum element, say arr[i][j], and insert arr[i][j+1] back into the multiset (since that move has now become available). Repeat k times and you get the final sum.

    Edit: nvm this is completely wrong

    • »

But without knowing that the function is convex, this greedy can't be proven correct. In a scenario where it isn't convex, there could be a case where for some $$$i_1$$$ the first improvement is not as good as the first improvement for some other $$$i_2$$$, but the second improvement for $$$i_1$$$ is very large. If $$$k=2$$$, the greedy algorithm will pick the first improvement for $$$i_2$$$ and then the first improvement for $$$i_1$$$, whereas the correct answer would be the first and second improvements of $$$i_1$$$.

  • »

Consider the list of bits that are turned off by using the AND operator multiple times. Say the first operation turns off the bit in position 8. After that, no set of operations should modify anything in bit positions 8 or above, because otherwise we would have had a better first move. Now, note that $$$2^x \ge \sum 2^y$$$ over any subset of $$$y \in \{0, 1, \dots, x-1\}$$$. Thus, convexity.
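Written out, the inequality used in the last step (for any subset $$$S$$$ of the bit positions below $$$x$$$) is:

```latex
2^x \;=\; 1 + \sum_{k=0}^{x-1} 2^k \;>\; \sum_{k=0}^{x-1} 2^k \;\ge\; \sum_{y \in S} 2^y,
\qquad S \subseteq \{0, 1, \dots, x-1\}.
```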

  • »

    1

    3 3 2

    15

    5 6 9

This is a trap for the "largest reductions" greedy: it goes 15 -> 5 -> 1 -> 0,

but we can do 15 -> 6 -> 0 optimally.

»

it's over

»

E: wtf, bruteforce?! Using the facts that a sum of convex functions is convex and that $$$2^b \gt \sum_{k=0}^{b-1} 2^k$$$ doesn't justify it being E, and I think it would've had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this simple nested loop should be fast when many others with the same number of repetitions aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.

Note that you don't need to prove convexity. Think of it as greedy decisions: operations on different $$$i$$$ are independent and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do with the $$$(k+1)$$$-th and on. This gives enough intuition, and then the bruteforce goes brrr.

  • »

I don't understand your comment. The complexity is of the order 10^8, and expecting it to pass is not at all unreasonable.

    • »

A complexity of order 10^8 can easily hit the TL. Constructing 1e7 2D vectors by randomly appending 3e7 elements takes 2 seconds on custom invocation because of cache and indirection; that's still an order of magnitude fewer loop iterations than this case.

There's a rule of thumb, yes, but it doesn't say 10^8, it says 10^7 (or 10^6 in older times, but CPU performance hasn't been increasing all that much), and it still comes with caveats: even if it's plenty in 99% of cases, there can be many hidden instructions behind C++ expressions. You can't predict real runtime from a number alone, without the specific code features like the ones I mentioned above.

Examples: At IOI 2012, I burned myself by reconstructing an std::queue ~10^6 times; literally adding the static keyword made the code several times faster. In 1975G - Zimpha Fan Club, a complexity in the order of 10^9 exceeded the TL simply because division with a dynamic modulus can't be optimised by the compiler and therefore uses divq, which has high CPI. Many times I've sped up complicated DFS processing of a tree by doing a simple preorder renumbering at the start and the end. Then there's this meme I made simply because it's such an efficient and simple optimisation.

      • »

        302209959 This code does $$$N \cdot 2^M \cdot 7$$$ operations, 7 per loop for no point at all. It still passes.

Your claims are bogus. Yes, indeed, $$$10^8$$$ isn't always safe. However, here it clearly should be. Bitwise AND is one of the fastest operations. Also, it's not like we are measuring asymptotic complexity here, but the exact number of operations.

        You could make an argument for popcount not being fast (even though in my experience, it is also fast), however that is simple to avoid.

Clearly, they wanted to cut (most) $$$O(N \cdot 2^M \cdot M)$$$ solutions. I don't support this decision, but neither am I against it. If you really failed to assume that the intended complexity would pass, I am sorry for you.

        There's a rule of thumb, yes, but that doesn't have 10^8, it has 10^7

        Speak for yourself, my rule of thumb is < 10^8 mostly safe, 10^8 and 10^9 unsure depends on operations.

        • »

          Fuck off with condescending statements like "your claims are bogus" or "I'm sorry for you", ok? Nobody appreciates passive aggressiveness. Stay direct and to the point.

          The exact number of operations isn't 10^8. My code is $$$O(N 2^M)$$$ with 2e8 loop executions and 7 instructions per loop, like this:

          	movl	(%rdx), %esi
          	andl	%edi, %esi
          	cmpl	%esi, %ecx
          	cmovg	%esi, %ecx
          	addq	$4, %rdx
          	cmpq	%rdx, %r8
          	jne	.L115
          

The code you linked has 36 instructions per equivalent loop iteration (not counting labels and no-ops). In addition, one of them on my system (likely different from CF) is call __popcountdi2@PLT. We're at 7e9 for the exact number of operations, a bit over an order of magnitude above the number of loop iterations, which is as expected. That's far above even your rule of thumb.

          Not all operations are equal. CPI matters, branch prediction matters, avoiding branching by proper assembly matters. AND is one of the fastest, but it's only one among many. Compare, branch and memory access are the important ones, not AND. Then there's a whole another factor — pipelining.

Correctly predicting that the code you linked would fit in the TL is nigh impossible, as the exact number of operations is well above even your optimistic rule of thumb and there are some slow operations. Predicting that my code would fit is easier, since it's intentionally written to be more constant-optimal, but still unreliable compared to other problems; it's much better to just write and measure. That's what I did.

          The issue is that this bruteforce is intended to pass, not that it can pass. Unintended real-time-efficient alternatives to an optimal algorithm happen often enough but this is a completely different situation.

          • »

            We're at 7e9 for exact number or a bit over an order of magnitude above the number of loop executions, which is as expected. That's far above even your rule of thumb.

Precisely my point. If numbers non-trivially orders of magnitude higher than your rule of thumb are still OK, maybe it is reasonable to reevaluate it?

The code is not designed to show you that it should pass, but that this can also pass, and thus the argument that the intended solution is not supposed to pass (or that we can't tell whether it should) is unreasonable. You are not supposed to predict that the code I linked passes, but the actual intended code should?

            The issue is that this bruteforce is intended to pass, not that it can pass.

It is not? The editorial clearly states $$$O(N \cdot 2^M)$$$ complexity, not $$$O(N \cdot 2^M \cdot M)$$$ [which is what my code is trying to mimic, though actually doing that TLEs without other optimizations, at least].

There are several problems I can link you to where the intended complexity is greater than the order of 10^8 operations per second, and sometimes not even with simple operations. (I am not saying this is always right, but in this example I see nothing wrong.) The only difference, I suppose, as you pointed out, is that they are harder. OK, so? Looking at submissions, it certainly doesn't seem the TL was tight.

Meanwhile, take a look at I. The AC solution in-contest has a complexity of $$$O(N \sqrt{N \log N})$$$ with $$$N = 3 \cdot 10^5$$$. Even with my estimates, it is a stretch to believe it. Also, btw, $$$10^9$$$ simple operations passing is not exactly a new thing.

You mentioned vector push-backs and division operations, which are somewhat well known to be costly. Comparing them with min and AND operations is ridiculous. It's like comparing indexed_set (ordered_set) and priority_queue. (OK, comparing division isn't that extreme.)

302227228 I put bitwise operators and max operations into a piece of code and ran it 10^9 times (10^4 testcases, each of size 10^5). I don't know how compilers work, so maybe it's just optimized.

            • »

Follow-up: for about 30 seconds in contest, I considered whether $$$N \cdot 2^M \cdot M$$$ would pass. If I had not gotten the optimization soon enough, I would probably have submitted it too.

I would not have been completely wrong either, since it does come close. Some others also did submit that complexity.

            • »

To be clear, by bruteforce I mean anything that checks all subsets of operations for each element. Not amortised, not merely $$$O(N 2^M)$$$ operations, but exactly $$$N \cdot 2^M$$$ repetitions of AND and min/max, with no optimisation of which subsets are checked. The AND values can be precomputed, and the popcount values can be precomputed instead of using an intrinsic in case the intrinsic is slow on the CF judge (a valid concern raised before, though it doesn't seem to be the case here), but those are implementation details with many possible effects; the core idea is still that we can afford to check everything as long as we don't do it in some stupid way.

              There are several problems I can link you to where intended complexity is greater than the order of 10^8 per second, and sometimes not even simple operations.

Also, btw, $$$10^9$$$ simple operations passing is not exactly a new thing.

There are also many problems where the intended complexity is much smaller than the order of 10^8. It's exactly part of my point that it massively depends on the problem; it's quite a high-level skill to predict specific code's real runtime beyond "DP tends to be fast, tree processing tends to be slower..." since it requires some understanding of hardware. I don't care that many operations pass here, but that it's artificially increasing the problem's difficulty and making it measure balls rather than skill, since it's much easier to just submit a bruteforce as a hail-mary approach than to have the required skills.

              Comparing them with min and AND operations is ridiculous

Again with the snide remarks. I'm comparing them on purpose to show that CPI matters a lot, not just the number of operations. You don't have a better reply than trying to ridicule it. Knowing which operations are slow and which are fast, again beyond general patterns like "float arithmetic is slow compared to int", is quite a high-level skill, and it ties into the argument I made above.

The code you sent is... running the same asm in the loop that I posted, plus or minus a few operations.

There is no rule-of-thumb number of operations that'd make someone decide that my submission will easily fit in the TL without also causing a lot of TLEs on many (though not all) other problems. That's not debatable; it's simply (10^9 > 10^8) == true. It requires additional knowledge, which I believe is largely lacking even among people who solved this problem. The problem should've been made either easier or harder, but it isn't like the actually hard algorithmic problems that require some of this skill, so decreasing the limit on $$$M$$$ and the score would've been a good decision.

If the intent was to prevent solutions with an additional log factor ($$$M$$$) from passing, then that goes against fairly common problemsetting guidance: we shouldn't try to separate strictly passing and strictly failing solutions by a single log factor, because it's too difficult, and this problem shall stand as another example of why that guidance is good.

          • »

For simple loops like this, you should really evaluate it as if the loop were unrolled. (Of course, I get a cost-free way of doing this via JIT.) This brings the operation count down to basically 4. In fact, it is lower because of out-of-order execution (there are no dependencies preventing the data from being loaded ahead of the current AND).

It is just a special rule and a special case that, for simple, independent, branchless, sequential-array-access solutions, 1e9 operations can be achieved.

It feels like you are just stating reasons why it could be hard for a normal person with too pessimistic a runtime estimate to recognize that this passes, without seeing that you also provided all the reasons why it easily passes: branching, memory, and instruction throughput/latency are beyond perfect in this solution. It is not that you can come to this conclusion without considering branching, memory, and instruction types; it is that doing so is very easy in this case, with lots of margin for error, both from first principles and from experience (if you have ever submitted any solution of a similar brute-force nature before; if you felt 1e8 was not fast enough in this case, you should have been surprised by runtimes on many occasions).

            • »

              For simple loops like this , you should really evaluate it as loops unrolled.

The assembly I posted isn't what I thought up; it's the exact assembly of my code. The jump at the bottom goes to the top. Sometimes loops can be unrolled, but not in this case. It's exactly 7 instructions per iteration. It runs in 600 ms along with $$$O(T 2^M M)$$$ preprocessing.

              In fact, it is lower because of executing ahead of time (there is no dependencies to not load the data ahead of the current AND).

              Yes, I mentioned pipelining.

A normal person doesn't have a runtime estimate. A normal person finds A somewhat challenging. All that you mention is additional, harder knowledge that just demonstrates how difficult it truly is to estimate running time without simply writing and running the code. In this case, it's not strictly needed: sequential memory access and the cheap operations AND and max mean this should be pretty fast and worth trying. However, that's not a very common CP skill, and confidently deciding that it will be really fast needs much more skill, so it boils down to "I solved ABCD, stuck on E, might as well try bruteforce, maybe it'll work". If it were a problem that fewer people try to seriously solve because there's a bunch of mid-level problems before it, that wouldn't be a concern.

              It is a statement that this is very easy to do on this case

              For me. For you. Not for 1400 that solved E and who knows how many that didn't try.

              if you have sent any similar solution and a similar brute force nature before

              I don't think I've done that for years. They're more of a hallmark of low quality ICPC subregionals.

              If you felt 1e8 is not fast enough in this case, you should have been surprised by runtimes on so many occasions.

I'm most often negatively surprised by runtimes. TLE would've been in line with my experience; I gave a few examples far above that do more complicated stuff but far fewer operations. Problems with 10^8 non-simple operations tend to TLE; problems with well over 10^9 simple operations tend to barely pass or TLE. It depends on the problem. In this case, I concluded that it could pass, but if it didn't, I knew how to optimise it, which isn't always an option.

  • »

I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.

I think these days it's easier to assume that a solution is fast enough than to use the proper idea in many problems. I was thinking $$$N \le 10^6$$$ and $$$M \le 5$$$ would have been a better choice too.

  • »

How much lower? I feel that even if the brute force were more evident, it would still be a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right (comment regarding the same)?

The part where you mention it is better to remove the largest bits first: isn't that indirectly the same argument that convexity makes? The convexity proof in the editorial basically says the same, that when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$, and the smaller bits are removed subsequently.

    • »

      I mean we don't need to formally prove convexity — or even anything remotely close to that. What I described is intuitively directing towards the proof in editorial, but isn't itself a proof, just intuition. The kind of thinking that makes sense but could easily overlook something. A solution can be presented in full to demonstrate a problem's difficulty or handwaved to make it seem easier — I know, I've done it.

      I think with more obvious constraints, C/D/E are basically interchangeable. C and D have basically equal number of solutions and there's always a dropoff for later problems, though it's not too steep early on.

    • »

All one needs to handwave is that when we do subsequent operations, the delta by which $$$a_i$$$ changes decreases. Here is a handwavy exchange argument.

Let's say we have a sequence of operations $$$A$$$, $$$A \& B_1$$$, $$$A \& B_1 \& B_2$$$, $$$A \& B_1 \& B_2 \& B_3$$$, and so on. Let $$$D_1 = (A \& B_1) - A$$$, $$$D_2 = (A \& B_1 \& B_2) - (A \& B_1)$$$, and so on.

If $$$D_3 < D_4$$$, it means using $$$B_3$$$ on $$$A \& B_1 \& B_2$$$ decreases the value less than using $$$B_4$$$ on $$$A \& B_1 \& B_2 \& B_3$$$. If this happens, why not just use $$$B_4$$$ instead of $$$B_3$$$? You will definitely remove all the bits that are turned off between $$$A \& B_1 \& B_2 \& B_3$$$ and $$$A \& B_1 \& B_2 \& B_3 \& B_4$$$, with potentially some more bits.

      Spoiler
  • »

I don't think it is bad to test understanding of big differences in constant factor, especially when that constant factor is as big as a factor of 10 away. I am very sure that an iteration of this kind with 1e9 operations would have passed (in C++) too. (Not all implementations are equal, though; e.g. bit counts have to be precomputed.)

    • »

      It's not too bad and it's a useful skill that helps from time to time in various problems anyway, but I don't think it should have a major role in not-too-hard problems. Sometimes you screw up implementation, don't pay attention to constant factor and your code is slow. Sometimes you can squeeze an inefficient algorithm through very efficient implementation, even with tricks of its own. If it's one of the last problems that requires constant optimisation on top of other stuff — that's fine, I'll just take it as a juicy bone. But in this case I feel like it's artificially inflating complexity and at the level it was placed, it's measuring weight of balls more than that specific skill.

It's a valid contest problem, but there ought to be a tendency to use such problems only rarely, and even then preferably at a higher level.

  • »

For my understanding: you don't even need the fact that some removal has a lower reduction than the sum of all previous reductions, right? Just having a lower reduction than the last removal is sufficient (implying that the sequence of reductions is simply non-increasing)?

  • »

    I agree with you. It's not the end of the world, but that's how I felt when I was solving E lol

»

D was a good exercise in the use of heaps.

»

Is the Codeforces problemset really this hard? I have to concentrate hard on B and C problems for a long time in order to arrive at the solution.

»

My D TLE'd due to hash collisions smh....

»

In case you could not understand the editorial solution for C:

Let dp[i][0] and dp[i][1] denote the number of valid prefixes ending at i where the i-th person is honest or a liar, respectively.

The (i-1)-th person and the i-th person can be:

  1. honest-honest; in this case the condition is that a[i-1] must equal a[i], since both are true.

  2. honest-liar; we do not need any condition in this case.

  3. liar-honest; in this case the condition is that the (i-2)-th person must be honest (no two adjacent liars), so a[i-2] + 1 = a[i].

Overall, dp[n-1][0] + dp[n-1][1] gives us the answer.

»

Hi, why does my 302120893 solution for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.

»

In problem D, I'm trying to understand the time complexity of this solution. This straightforward DFS using std::map runs in 300ms:

https://mirror.codeforces.com/contest/2061/submission/302153025

For each target x, I appear to be doing 2^(log x) recursive calls in the worst case, each with a log n map operation, but the actual runtime looks much better. What am I missing? Are the testcases just not strict enough?

Also, I don't understand why std::map is much faster here. If I use unordered_map with the exact same solution, it TLEs.

Edit: ok, nvm, I got it. The number of recursive calls is bounded by the input size, since the input gets consumed. And unordered_map is simply vulnerable to targeted collisions; it works fine with a custom hash function.

»

Problem C can be solved (overcomplicated) with 2-sat.

Let's figure out what the array should look like. It should look something like:

$$$0 \ \ 0 \ \ 0 \ \ \texttt{LIAR} \ \ 1 \ \ 1 \ \ 1 \ \ 1 \ \ \texttt{LIAR} \ \ 2 \ \ \texttt{LIAR} \ \ 3 \ \ 3$$$

The algorithm consists of iterating over the array from $$$1$$$ to $$$n$$$. Let's maintain a current variable $$$\texttt{sync}$$$, initially $$$0$$$. An element $$$a_i$$$ is out of sync if it does not equal $$$\texttt{sync}$$$. Naturally, a liar is found nearby and we do $$$\texttt{sync++}$$$. Formally, this means that at least one of $$$i$$$ and $$$(i-1)$$$ is a liar, so we can write $$$(p_{i-1} \ \lor \ p_i)$$$. Additionally, we write $$$(\neg p_{i-1} \ \lor \ \neg p_i)$$$, since at least one of them tells the truth. There are a few cases. For example, if $$$a_i \gt \texttt{sync}+1$$$, then $$$i$$$ is surely a liar, and we write down $$$(p_i \ \lor \ p_i)$$$.

A good way to check if a solution exists is to run a standard SCC 2-sat. I personally avoided doing so during the contest and chose to do tedious casework. Afterward, when you are sure at least one valid solution exists, find adjacent clause components. If there are intersections or singular clauses like $$$(p_i \ \lor \ p_i)$$$, the component has exactly $$$1$$$ way to be filled with LIARs. If the component has no intersections, such as in $$$(4,5), (6,7), (8,9)$$$, it has $$$k+1$$$ ways to be filled with LIARs, where $$$k$$$ is the total number of clauses in the component. Let me show an example:

$$$(\textbf{*},\ \cdot \ ), (\textbf{*},\ \cdot\ ), (\textbf{*},\ \cdot\ )$$$
$$$(\textbf{*}, \ \cdot\ ), (\textbf{*},\ \cdot\ ), (\ \cdot\ ,\textbf{*})$$$
$$$(\textbf{*},\ \cdot\ ), (\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*})$$$
$$$(\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*})$$$

Since the components are independent, just multiply the results of each.

Here is my submission: 302126337

»

FST contest

»

Maybe the solution of G can be intuitively understood by considering the induction proof here: https://math.stackexchange.com/questions/4603759/show-that-rmk-2-mk-2-3m-1

»
15 months ago, hide # |
 
Vote: I like it +8 Vote: I do not like it

Hi, could someone explain F2 in greater detail?

I don't really understand this "Let the distance between the i-th block and the nearest preceding 1-block be p. The number of 0s between blocks j and i cannot exceed p."

Thanks a lot

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

[302161510](https://mirror.codeforces.com/contest/2061/submission/302161510)

Can anyone help? Why is my code getting WA on test 6?

»
15 months ago, hide # |
Rev. 2  
Vote: I like it +25 Vote: I do not like it

Can someone explain F2 in detail?

»
15 months ago, hide # |
 
Vote: I like it +3 Vote: I do not like it

In E, why doesn't the greedy method for finding num(i, j) (going iteratively) work?

  • »
    »
    15 months ago, hide # ^ |
     
    Vote: I like it +6 Vote: I do not like it

    Not sure I can visualize this well, but imagine that the "best" operation has the two highest bits $$$A, B$$$ on (and nothing else), the second-best operation has only bit $$$A$$$ on plus some other bits $$$C, D$$$, and the third-best operation has bit $$$B$$$ plus other bits $$$E, F$$$.

    If you can use exactly two operations on that number, it's better to choose the second- and third-best and skip the top one, as together they have the bits $$$ABCDEF$$$, which is the most you can get with two operations.
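A concrete made-up instance of this (my own numbers, with $$$A \ldots F$$$ as bits $$$5$$$ down to $$$0$$$), showing per-step greedy losing to the best pair of AND operations:

```python
a   = 0b111111  # all six bits A,B,C,D,E,F set
op1 = 0b001111  # AND clears bits A,B -> biggest single-step reduction
op2 = 0b010011  # AND clears bits A,C,D
op3 = 0b101100  # AND clears bits B,E,F

# per-step greedy: twice apply the mask with the biggest immediate reduction
v = a
for _ in range(2):
    v = min(v & m for m in (op1, op2, op3))
print(v)              # 3 (bits E,F survive)

# the optimal pair skips the "best" single operation entirely
print(a & op2 & op3)  # 0 (all six bits cleared)
```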

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

It is possible to solve E without proving convexity, or really any property of the function g for that matter.
Submission

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Can someone explain the DP solution of C in a better way? Can't the last person be a liar? Why is that case not considered in the transition in the editorial?

»
15 months ago, hide # |
Rev. 2  
Vote: I like it +53 Vote: I do not like it

Just upsolved F2. It is one of the best problems I have seen. On the other hand, I have no clue how the editorial just skips to the following claim:

Let the distance between the i-th block and the nearest preceding 1-block be p. The number of 0s between blocks j and i cannot exceed p. There is a restriction: j≥li.

I do get there eventually, but the steps are definitely non-trivial, so I decided to write up my own solution for both F1 and F2, explaining the intermediate steps.

F1 Solution
F2 Solution
  • »
    »
    15 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Great explanation!

    The ending characters need to be handled properly (and evaluating if the ranges $$$(0,x), (x,n+1)$$$ are valid). One way is to brute-force on the final values of $$$S_1$$$ and $$$S_n$$$ but that too has a few cases (in my way at least).

    Could you elaborate on why it's necessary to "brute-force the final values of $$$S_1$$$ and $$$S_n$$$"? Couldn't we just add dummy characters (a "fixed" block) on both ends of $$$S$$$ and $$$T$$$? Thanks a lot.

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I solved A, B, C, D, F1. All the problems felt below 1600. Got a delta of +200.

»
15 months ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

Hello everyone, I am trying to solve problem D of this contest. My idea is to build two multisets from $$$a$$$ and $$$b$$$ (maps storing frequencies), and then, for every element of the multiset of $$$b$$$:

  • if this element is present in the multiset of $$$a$$$, remove it from $$$a$$$;
  • if this element is not present in $$$a$$$:
    • if the number is odd, let $$$x = num / 2 - 1$$$ and $$$y = x + 1$$$, then try to find $$$x$$$ and $$$y$$$ in the multiset of $$$a$$$; if either is present, remove it from $$$a$$$, otherwise add it to the multiset of $$$b$$$;
    • if the number is even, let $$$x = num / 2$$$ and $$$y = x$$$, and do the same as above;
    • if at any point $$$x$$$ or $$$y$$$ is less than the minimum element of the multiset of $$$a$$$, print "No".

At the end, if the multiset of $$$a$$$ is also empty, print "Yes".

According to my understanding, this should take $$$O(n \log(\max(b_i)))$$$, which should be feasible, but it is getting a TLE verdict. Can anyone tell me whether I am missing something important?

Thanks

UPD: I learned that this approach is the same as the editorial's.
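For comparison, here is a minimal sketch of the same reverse-splitting idea (my reading of it, with a hypothetical helper name `can_transform`, not anyone's actual submission). The key point for the time bound is that a piece of $$$b$$$ is split only while it is strictly larger than the largest unmatched element of $$$a$$$:

```python
import heapq

def can_transform(a, b):
    # Reverse view of the merge operation: split elements of b,
    # largest first, until they match the multiset a.
    if sum(a) != sum(b):
        return False
    ha = [-x for x in a]  # max-heap of still-unmatched elements of a
    hb = [-x for x in b]  # max-heap of current pieces of b
    heapq.heapify(ha)
    heapq.heapify(hb)
    while ha:
        x, y = -hb[0], -ha[0]
        if x == y:        # largest piece matches largest unmatched element
            heapq.heappop(ha)
            heapq.heappop(hb)
        elif x < y:       # no remaining piece can ever produce y
            return False
        else:             # x > y: x is forced to split into floor/ceil halves
            heapq.heappop(hb)
            heapq.heappush(hb, -(x // 2))
            heapq.heappush(hb, -((x + 1) // 2))
    return True

print(can_transform([4, 5], [9]))        # True
print(can_transform([3, 6], [9]))        # False
print(can_transform([2, 2, 2, 3], [9]))  # True
```

Each loop iteration either matches an element (at most $$$n$$$ times) or splits a piece, and a piece is only split while it exceeds the current target, which is where an $$$O((n+m)\log \max b_i)$$$-style bound comes from.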

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I've got a question about problem I. I implemented the $$$O(n\sqrt{n\log n})$$$ solution, but the second part ($$$j-(l-j) < -(r-l)$$$) seems not to be convex, because the transition point is not monotonic (only in very rare cases, but it does occur).

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

In my friends list, a pupil, a specialist, an expert (me before this contest ;-;), a candidate master and a master all got FST on B. B destroyed everyone equally XD.

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

In problem E, I thought that, at each move, I could greedily decide which magic to use on which number. I knew this code would receive TLE, but it is getting WA instead. Submission

Can anyone please help me out here? Is my assumption wrong?

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Can someone explain problem E in simpler words? I couldn't grasp the editorial.

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

An interesting fact: is jqdai0815 trying to hide the editorial of H? Why did he post it as a Google document in Chinese, when it can't be accessed from China and others can't read Chinese?

»
15 months ago, hide # |
 
Vote: I like it +4 Vote: I do not like it

Where is the editorial for the problems H1, H2 ?

»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

In problem E, why doesn't a plain greedy approach work for each $$$a_i$$$? For example, why is the following approach wrong, and why do we need to iterate over each subsequence of $$$b$$$?

vector<int> d;

for (int i = 0; i < n; i++) {
    int g = a[i];
    int h = g;
    while (true) {
        for (int j = 0; j < m; j++) {
            h = min(h, g & b[j]);
        }
        if (h == g) break;
        d.push_back(g - h);
        g = h;
    }
}
»
15 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

The Editorial is not good :(

»
12 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

After thinking for a while, I finally understood why the answer to Problem C is dp[n] + dp[n-1]. It's because when the n-th person lies, the number of valid cases equals the number of cases where the (n-1)-th person tells the truth.

»
9 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

For Div. 2 D, how do you derive that the time complexity is $$$O((n+m)\log(n+m))$$$ and not simply $$$O(n\log n)$$$, given that they're roughly on the same scale?