TheScrasse's blog

By TheScrasse, history, 17 hours ago, In English

Today I've taken part in AtCoder Regular Contest 186, solved problem E and spent the remaining 90 minutes not solving anything.

Then I've taken part in Codeforces Global Round 27 and spent 90 minutes solving 2035D - Yet Another Real Number Problem.

How to stop being stupid?

  • Vote: I like it
  • +132
  • Vote: I do not like it

»
16 hours ago, # |
Rev. 2   Vote: I like it +45 Vote: I do not like it

I'd gladly exchange my skill of solving easy problems for your stupidity

»
16 hours ago, # |
Rev. 2   Vote: I like it -8 Vote: I do not like it

lol xxD

»
16 hours ago, # |
  Vote: I like it -8 Vote: I do not like it

Same here. In today's global contest I spent 18 minutes solving A; that gave me emotional damage.

»
16 hours ago, # |
  Vote: I like it -10 Vote: I do not like it

I would like to understand your thought process.

Were you stuck on finding the core logic?

Or were you stuck with the implementation?

In my case, I found the logic for D in the first 30 minutes. But I had to think a lot during implementation to avoid dealing with large numbers, because there would otherwise have been integer overflows. I had to split each number into two parts: 1) its odd part, and 2) the power of 2 contained in that number. This took me more than 40 minutes to finish.
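
Roughly, the split I mean looks like this (a sketch, not my contest code; names are mine). Every positive integer $$$a$$$ can be written as $$$a = o \cdot 2^e$$$ with $$$o$$$ odd:

    #include <cstdio>

    // split a into (odd part, exponent of 2), i.e. a = odd * 2^e
    void split(long long a, long long &odd, int &e) {
        odd = a;
        e = 0;
        while (odd % 2 == 0) {
            odd /= 2;
            e++;
        }
    }

    int main() {
        long long odd; int e;
        split(96, odd, e);            // 96 = 3 * 2^5
        printf("%lld %d\n", odd, e);  // prints "3 5"
    }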

  • »
    »
    16 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    In the end you did it, that's what matters.

  • »
    »
    16 hours ago, # ^ |
      Vote: I like it -14 Vote: I do not like it

    You've been doing CP for the past 8-9 years sir, it's inspiring. Btw, can we connect on LinkedIn?

  • »
    »
    14 hours ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    I was stuck mainly because I noticed the stack is small (length $$$\leq 8$$$?) and I was trying to iterate over subsets to remove instead of just removing the last element.

»
14 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

I've taken part in Codeforces Global Round 27 and spent too much time solving 2035C, so I solved 2035D instead.

  • »
    »
    9 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    This actually seems like a good idea. I wasted a lot of time on C and then solved it in 20 minutes of clear thinking after the contest. So the tip is: if you're struggling to find good ideas for a problem, try another problem.

    • »
      »
      »
      112 minutes ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      my bug was the following

      usedNumbers[1<<i - 1] = true
      

      should've been

      usedNumbers[(1<<i) - 1] = true
      

      I'm DUMB
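
      For the record, the precedence rule that bit me: in C++ (and most languages with C-like precedence, e.g. Java or JavaScript), - binds tighter than <<, so 1<<i - 1 is parsed as 1 << (i - 1). A tiny check (the value of i is just for illustration):

      #include <cassert>

      int main() {
          int i = 3;
          assert((1 << i - 1) == 4);    // parsed as 1 << (i - 1)
          assert(((1 << i) - 1) == 7);  // the intended bitmask
      }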

»
13 hours ago, # |
Rev. 2   Vote: I like it +75 Vote: I do not like it

I think it's not uncommon for a problem's point value or number of solves to fail to reflect how difficult the problem is for you. At least you solved D in the end. :)

For example, I thought that problem A from today's ARC was harder than all the others, and that F from today's global round was comparable to or easier than D.

»
8 hours ago, # |
  Vote: I like it +11 Vote: I do not like it

I found D today to be crazy hard. Here is what my thought process was:

Firstly, no immediate properties came to mind. So I started with the next line of generic thinking:

start from an optimal configuration, (a decent amount of thinking later) figure out a condition for every pair,

then (a decent amount of thinking later) lift this condition to the global configuration,

then (even more thinking later) find out that a stack can indeed maintain the optimal global configuration and that it is correct to do so.

Perhaps the idea is that it is far easier if you just guess a solution and verify that it works. I am also curious how anyone can figure this out considerably more easily.

In contrast, E and F were basically trivial: they fall to the first statement, since an immediate property did come to mind. SQRT for E, monotone in steps of $$$+2n$$$ for F.

TLDR: I think 2035D is crazy hard and I used the maximum amount of thinking for it, not unlike problems in the >=2700 rating range.

  • »
    »
    8 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It's very strange because it was the exact opposite for me — D felt easy enough and E seemed extremely hard.

    • »
      »
      »
      8 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Maybe I should also describe my thought process for E, so we can compare and infer the difference?

      First, by intuition, I know that problems of this type usually fall to SQRT, and the constraints do look good for SQRT. To execute the SQRT, I just need to correctly understand what it means to have $$$k$$$ upgrades, or $$$k$$$ attacks, for small $$$k$$$.

      I then naturally (OK, I just realized this isn't actually that natural, but anyway) tried to look at the subcases of fixing the number of attacks and upgrades, where the unique best arrangement is clear. I also quickly saw that the damage dealt is monotone in both arguments. Together this is enough to give the SQRT + binary search solution.

      (I made a mistake in contest: it is only monotone for valid pairs of parameters.)

      Compared to D, far fewer "rewriting/calculations" and "problem-solving decisions" are needed. All the statements I used are short and simple, and so are their justifications. The properties I went looking for turned out to be exactly the right ones.

      • »
        »
        »
        »
        5 hours ago, # ^ |
        Rev. 3   Vote: I like it 0 Vote: I do not like it

        My thought process for the problems, as a CM:

        For D, I first thought we should carry all the 2's to the end and wrote the code. It failed. I understood why that happened. Then I thought of the problem as distributing 2's to elements to their right. I thought of DPs but couldn't find anything, since the number of 2's can be $$$n \log n$$$. Then I thought about a stack and realized that, to optimize the current answer, we can keep switching the 2's of the last element in the stack to the current element, or do nothing. I proved it as: the values in the stack are monotonically decreasing, so if the value we pushed is better, it only makes sense to pop things from the top of the stack. Got AC.
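
        A rough sketch of that stack greedy for the prefix-by-prefix version discussed in this thread (a reconstruction from memory, not my exact submission; the I/O format and the $$$10^9+7$$$ modulus are assumptions). Each stack entry keeps its odd part and the 2's it currently holds; a new element absorbs the 2's of every entry whose odd part it can already match or exceed, because moving all $$$c$$$ twos from a value $$$o \cdot 2^c$$$ onto a later value $$$x$$$ changes the sum by $$$(x-o)(2^c-1)$$$, which pays off exactly when $$$x \ge o$$$:

        #include <bits/stdc++.h>
        using namespace std;

        const long long MOD = 1'000'000'007;

        long long pw2(long long e) {                 // 2^e mod MOD
            long long r = 1, b = 2;
            for (; e > 0; e >>= 1, b = b * b % MOD)
                if (e & 1) r = r * b % MOD;
            return r;
        }

        // does odd * 2^cnt >= x hold?  (odd and x are both below 2^30 here)
        bool reaches(long long odd, long long cnt, long long x) {
            if (cnt >= 31) return true;
            return (odd << cnt) >= x;
        }

        int main() {
            int n;
            scanf("%d", &n);
            // stack entries: {odd part, number of 2's held, value mod MOD}
            vector<array<long long, 3>> st;
            long long sum = 0;                       // current prefix answer, mod MOD
            for (int i = 0; i < n; i++) {
                long long a, odd, cnt = 0;
                scanf("%lld", &a);
                for (odd = a; odd % 2 == 0; odd /= 2) cnt++;
                // take the 2's of every entry whose odd part the new value reaches
                while (!st.empty() && reaches(odd, cnt, st.back()[0])) {
                    auto [o, c, mv] = st.back();
                    st.pop_back();
                    sum = ((sum - mv + o) % MOD + MOD) % MOD;  // entry keeps only its odd part
                    cnt += c;                                  // its 2's move to the new element
                }
                long long mv = odd * pw2(cnt) % MOD;
                st.push_back({odd, cnt, mv});
                sum = (sum + mv) % MOD;
                printf("%lld ", sum);                // best sum for the prefix ending at i
            }
            printf("\n");
        }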

        For E, I was just dumb; I didn't think about anything and did random stuff hoping it would pass. I should have used my brain.

        For F, I thought the problem was an easy binary search + greedy problem. I reduced it to ranges by transforming the tree into its Euler tour. I wrote the code, then realized it fails because we can't skip any range; we have to do either -1 or +1. I couldn't find a fix for it.

        • »
          »
          »
          »
          »
          117 minutes ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          I see. For both D and F I rejected the simple wrong approaches very quickly because “it couldn’t be that easy” (so I only looked for disproofs and not proofs). I guess if I had followed your thought process, this point alone would probably have made things a lot smoother.

  • »
    »
    6 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I think for D, if you notice that it's never worse to operate only on pairs $$$(i,i+1)$$$, then you'd realize that adding an element $$$i$$$ simply means doing some operations $$$(*,i)$$$ on top of the previous optimal solution. With this I think it's quite easy to get the greedy stack solution.

    However, I found F really hard (even harder than G2, I think) because I had no idea about the period thing. Although I knew how to check a fixed value from the beginning, I was only able to notice the period after a long time. Was this some sort of known trick or something?

    • »
      »
      »
      2 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I don't quite understand what you mean for D. It seems that what you said makes the situation worse: now you can pass through some intermediate construction that is worse than the starting setup?

      Though I see that the formulation Igor_Parfenov provided below is indeed what I was looking for. I just wasn't able to find that conclusion, though the proof feels much easier once it is found.

      For F, I think the idea of “the problem feels probably monotone” -> “OK, it is not” -> “but it is monotone if we group differently” is sufficiently common. Though most of the time it is about grouping by odd/even due to parity constraints only. I think I saw this mostly in technical math problems, though.

»
6 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

Same :(

It seems like a common theme for me in contests to often approach easy problems the wrong way and lose a ton of time, or even get completely stuck.

»
6 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

In my last contest I solved C slower than F1&F2...

»
6 hours ago, # |
Rev. 2   Vote: I like it +8 Vote: I do not like it

I think it's just that A-D require an almost entirely different skillset compared to anything E and beyond.

So if you keep practicing the harder problems, you will naturally get a bit rusty on those.

»
3 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

Btw, how do you prove this greedy, which my solution relies on heavily?

How to solve the "simple" problem: assume the number $$$a_i$$$ is even. Then, before starting, we can divide it by two and remember one operation: choose a number $$$j \ge i$$$ and multiply $$$a_j$$$ by two. Let's apply all those divisions; in the end we have an array of only odd numbers and a set of suffixes, on each of which we may choose an element and multiply it by two.

Now, to maximize the final sum, use the following greedy: go from right to left (i.e. from small suffixes to big) and apply each multiplication to the currently largest number.

It sounds quite correct to me, but I can't prove it.
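
In code, the greedy looks roughly like this (a sketch, not my actual submission; I'm assuming a single array, odd parts below $$$2^{30}$$$, and answers taken modulo $$$10^9+7$$$). Since the current maximum only grows when it is doubled, it stays the maximum, so it's enough to track one champion as an (odd value, exponent) pair:

    #include <bits/stdc++.h>
    using namespace std;

    const long long MOD = 1'000'000'007;

    long long pw2(long long e) {                 // 2^e mod MOD
        long long r = 1, b = 2;
        for (; e > 0; e >>= 1, b = b * b % MOD)
            if (e & 1) r = r * b % MOD;
        return r;
    }

    int main() {
        int n;
        scanf("%d", &n);
        vector<long long> a(n);
        for (auto &x : a) scanf("%lld", &x);

        long long frozenSum = 0;                 // sum (mod MOD) of everything except the champion
        long long champOdd = 0, champExp = 0;    // current max = champOdd * 2^champExp
        for (int i = n - 1; i >= 0; i--) {
            long long odd = a[i], e = 0;
            while (odd % 2 == 0) odd /= 2, e++;
            // champion beats the new odd part iff champExp >= 31 or champOdd << champExp > odd
            bool champWins = champExp >= 31 || (champOdd << champExp) > odd;
            if (!champWins) {
                // the new odd part dethrones the champion; freeze the old one
                frozenSum = (frozenSum + champOdd * pw2(champExp)) % MOD;
                champOdd = odd;
                champExp = 0;
            } else {
                frozenSum = (frozenSum + odd) % MOD;   // this odd part never gets multiplied
            }
            champExp += e;                       // all doublings of this suffix go to the max
        }
        long long ans = (frozenSum + champOdd * pw2(champExp)) % MOD;
        printf("%lld\n", ans);                   // maximal sum for the whole array
    }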

  • »
    »
    3 hours ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

    Let the current max be $$$A$$$, and the current second max be $$$B$$$ ($$$A > B$$$).

    If we don't apply the current suffix's operations to $$$A$$$ but apply them to $$$B$$$ instead, we simply get a worse answer, because:

    1) all future suffixes can also do operations on $$$A$$$ and $$$B$$$;

    2) for a set of operations that can be done on both $$$A$$$ and $$$B$$$, it is more optimal to do them entirely on $$$A$$$ (since $$$A$$$ is the larger one).

    Thus, every operation that we applied to $$$B$$$ but could have applied to $$$A$$$ should be shifted from $$$B$$$ to $$$A$$$ to make the answer better.

    I don't know how formal this is, but I think it counts as a proof.

»
2 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

I am wondering the same thing :(

»
115 minutes ago, # |
  Vote: I like it 0 Vote: I do not like it

Bro suffering from success.

»
56 minutes ago, # |
  Vote: I like it 0 Vote: I do not like it

I'm actually wondering about problem E from the Global Round. I tried different ways to brute-force it, but I got either TL or WA. Then I additionally brute-forced around $$$\sqrt{zy/x}$$$, which is the optimal point of $$$f(u) = xu + yz/u$$$, where $$$u$$$ is the number of upgrades used. As you can see, this function is simplified and doesn't consider the restriction given by $$$k$$$. Surprisingly it still worked, and I don't know why. Maybe the optimal point doesn't shift that much, or maybe it's just hard to make strong tests for this problem.
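
For what it's worth, the unconstrained minimizer of that simplified function follows from AM-GM (or from setting the derivative to zero): $$$f(u) = xu + \frac{yz}{u} \ge 2\sqrt{xyz}$$$, with equality when $$$xu = \frac{yz}{u}$$$, i.e. at $$$u^* = \sqrt{\frac{yz}{x}}$$$. So brute-forcing a window around $$$u^*$$$ at least searches near the right spot for the unconstrained version; whether the $$$k$$$-constrained optimum stays close to it is the part that isn't obvious.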