
0x0002's blog

By 0x0002, history, 14 months ago, In English

I think I am ok with algorithm problems, but I am really bad at problems like greedy, constructive, and interactive ones. Problems A, B, and C in Div. 2 rounds are often like that, and I often can't find the key observation. How should I practise? Just by doing more problems of that kind, or something else?


»
14 months ago, # |

Send some examples of those problems and I can try giving you my opinions on them.

Just so you know, I also have trouble with some of those. In particular, there was a recent-ish Div. 2 round where I mind-solved every problem other than C basically in the time it took to read the statements, but I wasn't able to solve C. Most problems aren't that hopeless, though.

Also, if you're ok with algorithm problems, then please skip problems during contests. C might be too hard for you: it's supposed to be a problem that "everyone with common sense can approach", but the people who judge that often get it wrong, and D-F might be a direct algorithm/data structure problem.

  • »
    »
    14 months ago, # ^ |

    Thanks. For example, problems 1861C and 1861D.

    • »
      »
      »
      14 months ago, # ^ |

      I can hardly solve problems A-C in Div. 2 quickly and without penalties.

    • »
      »
      »
      14 months ago, # ^ |

      Of course, they are easy. I actually can't really understand why you think they are hard. Sometimes learning too many difficult algorithms can lead to situations like this. For example, in 1861D, if you just scan through the problem, you will see that there is a dividing point. But if you are already used to using complicated data structures, you may think in another way that completely doesn't work. Actually, I think it's strange that you think you are not good at these problems even though you have solved many of them, while sometimes I can solve them faster than you. (:

    • »
      »
      »
      14 months ago, # ^ |

      C is more a matter of understanding the problem and applying "imagine the worst case", or in this case "imagine the best case". Let's say you receive 1 when the array has size N: it means that for every 1 <= i < j <= N it's true that a[i] <= a[j] (it'd be sufficient to say that a[i] <= a[i+1]). If you receive 0 when the array has size N, it means the opposite, so there is some 1 <= i < j <= N such that a[i] > a[j] (it'd be sufficient to say that a[i] > a[i+1]). In other words, a 1 query means the prefix is sorted and a 0 query means the prefix isn't sorted.

      There's not much to do when you receive 1, but when you receive 0 there is some freedom in which i has a[i] > a[i+1]. The best case is a[N-1] > a[N], because then removing the element at position N makes the array lose the "unsorted" property. Now it's a matter of maintaining the size of the prefix that's sorted and the leftmost position that is surely unsorted. The only observation here is that "there's some freedom in which i has a[i] > a[i+1]" and what to do with that; the rest is understanding the problem formally.
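
      Not the author's code, just a rough sketch of that bookkeeping (my reading of 1861C, not the official editorial; I'm assuming each test case is a string of '+', '-', '1', '0' operations and the task is to decide whether a consistent array exists):

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      // Track three things while scanning the operation string:
      //   sz         - current array length
      //   sortedLen  - longest prefix currently known to be sorted
      //   unsortedAt - smallest length at which the array is known to be
      //                unsorted (0 = nothing known); on a '0' answer the bad
      //                pair is placed as far right as possible, i.e. at
      //                positions (sz-1, sz), which is the "best case" above.
      bool consistent(const string& ops) {
          long long sz = 0, sortedLen = 0, unsortedAt = 0;
          for (char c : ops) {
              if (c == '+') {
                  ++sz;
              } else if (c == '-') {
                  --sz;
                  sortedLen = min(sortedLen, sz);
                  if (unsortedAt > sz) unsortedAt = 0;  // the known bad pair was popped
              } else if (c == '1') {
                  if (unsortedAt != 0) return false;    // known unsorted, answer says sorted
                  sortedLen = max(sortedLen, sz);
              } else {                                  // c == '0'
                  if (sz <= 1 || sortedLen >= sz) return false;  // must be sorted
                  if (unsortedAt == 0) unsortedAt = sz; // remember the best-case bad pair
              }
          }
          return true;
      }

      int main() {
          int t;
          cin >> t;
          while (t--) {
              string s;
              cin >> s;
              cout << (consistent(s) ? "YES" : "NO") << "\n";
          }
      }
      ```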

      For D I thought for about 20 minutes and didn't manage to solve it, but after clearing my head and rereading the statement, the observations I had point to the positive suffix costing the number of i such that a[i] >= a[i+1] and the prefix costing 1 + the number of i such that -a[i] >= -a[i+1]; then it's a matter of "try every possibility". I had thought of "positive suffix and negative prefix" and "what matters is the border between adjacent pairs", but I hadn't realized that an operation will fix at most 1 pair (this is the key observation) and can possibly mess up the other endpoint; realizing that leads to the solution.

      I was blinded by my bias from having seen similar problems before, since an operation fixing at most 2 bad pairs is somewhat common in this kind of problem. I've solved some div1 D-E problems more effortlessly than this one, so it's a great example :P. From this problem, I'd say you can learn that in problems with subarray updates the borders are often the only things that change, and that in greedy problems, checking the "best and worst case of reducing the number of bad things" for each operation sometimes gives you the solution directly.
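
      And here's a rough sketch of the "try every possibility" counting for D (again my reading, not the official editorial): precompute the bad-pair counts for every suffix, sweep the split point, and charge 1 extra operation whenever the negative prefix is non-empty.

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          int t;
          cin >> t;
          while (t--) {
              int n;
              cin >> n;
              vector<long long> a(n);
              for (auto& x : a) cin >> x;

              // suf[k] = number of i >= k with a[i] >= a[i+1], i.e. bad pairs
              // lying entirely inside the (kept positive) suffix a[k..n-1]
              vector<long long> suf(n + 1, 0);
              for (int i = n - 2; i >= 0; --i)
                  suf[i] = suf[i + 1] + (a[i] >= a[i + 1] ? 1 : 0);

              long long best = suf[0];  // empty prefix: keep everything positive
              long long pre = 0;        // bad pairs inside the negated prefix a[0..k-1]
              for (int k = 1; k <= n; ++k) {
                  // the new internal pair (k-2, k-1) is bad after negation iff a[k-2] <= a[k-1]
                  if (k >= 2) pre += (a[k - 2] <= a[k - 1] ? 1 : 0);
                  // 1 operation to make the prefix negative, plus one per bad pair on each
                  // side; the pair straddling the split is never bad (negative < positive)
                  best = min(best, 1 + pre + suf[k]);
              }
              cout << best << "\n";
          }
      }
      ```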

»
14 months ago, # |

I think you should just train more on those problems. Greedy and constructive problems are sometimes easier than other problems, but that sometimes makes people neglect them. You are ok with those algorithms and you always get a better score than me, but those basics can't be ignored.

»
14 months ago, # |

It is the same as in math. In primary school, the problems are usually based on some weird setting; they are hard mostly because of the amount of calculation, while the "algorithm" students should use is not hard at all. Problems A-C are easy in terms of algorithms, but the way to make them more interesting and meaningful is to add some traps and zigzags. Those primary-school problems are sometimes even "harder" than the problems that I am doing now.