cdkrot's blog

By cdkrot, history, 6 years ago, In English

D2A authors: 300iq, cdkrot, developer: 300iq

D2B author: isaf27, developer: cdkrot

D1A author: isaf27, developer: isaf27

Jury's solution (isaf27): 40973089

D1B author: 300iq, developer: Flyrise

D1C author: pashka, developer: cdkrot

D1D author: tourist, developers: qoo2p5, VArtem

The first solution: 40971595 and the second solution: 40971634.

D1E author: isaf27, developer: isaf27

Jury's solution (by isaf27): 40973023

D1F author: GlebsHP, developers: demon1999, PavelKunyavskiy

Credits to all jury members who contributed to this round and EJOI: tourist, PavelKunyavskiy, niyaznigmatul, 300iq, GlebsHP, pashka, qoo2p5, VArtem, demon1999, Flyrise, ifsmirnov, isaf27, yeputons, cdkrot.

»
6 years ago, # |
  Vote: +210

Downvoted, thanks for explaining the "simple cases analysis" for Div1D

»
6 years ago, # |
Rev. 2   Vote: -12

During the round, I noticed that Div2 B says "In one operation you can select some i (1 ≤ i ≤ n) and replace element ai with ai & x". Since it says "some", I read it as allowing me to choose as many i's as I like, so I thought the maximum answer should be 1 instead of 2 (maybe it's just a translation mistake).

However, I got tons of Wrong Answer because of this. So, is it a mistake... or am I just too weak in English?

SORRY FOR MY POOR ENGLISH

»
6 years ago, # |
  Vote: +31

my div1 D solution

40970859

I coded so many lines...

»
6 years ago, # |
  Vote: +14

Some of the submissions in the editorial cannot be viewed.

»
6 years ago, # |
Rev. 12   Vote: +42

In Div2E/Div1C, here is a simpler solution:

Let dp[i][j][k] be the minimum cost such that we are at position i, we have built j houses in [1, i], and k indicates whether we build a house at position i (k = 0 or 1).

The transitions are then clear:

when k = 0, we can use dp[i][j][0] to update dp[i+1][j+1][1] and dp[i+1][j][0], i.e. whether we are going to build a house at position (i+1);

when k = 1, we can use dp[i][j][1] to update dp[i+2][j+1][1] and dp[i+2][j][0], i.e. whether we are going to build a house at position (i+2) (we cannot build at position (i+1), since position i already has a lovely house).

You can see the DP equation in my Code: 40965480

#define Mymin(a,b) if(a>b) a = b;   // relax towards the minimum
// loop macros used below: For(i,n): i = 1..n;  Forx(j,a,b): j = a..b;  For0(k,t): k = 0..t-1
int P( int pos , int goal ){        // cost to cut hill pos down to height goal
	if( a[pos] <= goal ) return 0;
	return a[pos] - goal;
}
dp[1][0][0] = 0;
dp[1][1][1] = 0;
int div2 = (n>>1) + (n&1);          // at most ceil(n/2) houses
For(i,n){
	Forx(j,0,div2){
		For0(k,2){
			#define N dp[i][j][k]
			if( dp[i][j][k] == INF ) continue;   // skip unreachable states
			if(!k){
				Mymin(dp[i+1][j+1][1],N+P(i,a[i+1]-1));
				Mymin(dp[i+1][j][0],N);
			}else{
				Mymin(dp[i+2][j+1][1],N+P(i+1,min(a[i],a[i+2])-1));
				Mymin(dp[i+1][j][0],N+P(i+1,a[i]-1));
			}
		}
	}
}

Complexity: O(n^2)

I used this algorithm during the contest and got accepted, and I think it's much clearer than the editorial's.

SORRY FOR MY POOR ENGLISH

  • »
    »
    6 years ago, # ^ |
      Vote: +16

    Yeah, that is another of the jury solutions; moreover, it is the first solution I prepared :)

    But I decided that the second solution, the one explained in the editorial, is simpler.

  • »
    »
    6 years ago, # ^ |
      Vote: +3

    In your solution, aren't we constructing j houses in [1, i] rather than [1, i)? When k = 0 you incremented j, i.e. dp[i+1][j+1][1]: the i-th element was not taken (since k = 0 for i), the (i+1)-th element was taken, and that was reflected in j being incremented.

  • »
    »
    6 years ago, # ^ |
    Rev. 4   Vote: +6

    Why don't we update dp[i+1][j][0] from dp[i][j][1]?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: 0

      We cannot know the height of (i+1) if you do that.

      dp[i+1][j][0] = dp[i][j][1] + ????(something you don't know)

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: 0

        So how do we transform dp[i][j][1] into dp[i+2][j][0] then ?

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: 0

          Since we have built a house at position i, it's clear that we cannot build at position (i+1). If we are not going to build at position (i+2), then the cost is just that of making mountain (i+1) lower than mountain i.

          See in my code:

          Mymin(dp[i+1][j][0],N+P(i+1,a[i]-1));

          Is it clear now? :)

  • »
    »
    6 years ago, # ^ |
      Vote: +11

    Thanks for your code and solution; I find it easier to understand than the tutorial. But the code would be more readable if you didn't use macros like "Forx" and "For". Anyway, thanks. :)

  • »
    »
    6 years ago, # ^ |
    Rev. 4   Vote: 0

    if(!k) Mymin(dp[i+1][j+1][1],N+P(i,a[i+1]-1));

    For this part, we are updating the (i+1)-th position considering the i-th position's state, but why don't you also consider position (i+2)'s value, since it can be greater than or equal to the value at position i? Shouldn't it be Mymin(dp[i+1][j+1][1],N+P(i,i+2,a[i+1]-1));?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: 0

      The cost of position i+2 is considered in the part

      Mymin(dp[i+2][j+1][1],N+P(i+1,min(a[i],a[i+2])-1));
      Mymin(dp[i+1][j][0],N+P(i+1,a[i]-1));
      

      What's more, if you write Mymin(dp[i+1][j+1][1],N+P(i,i+2,a[i+1]-1));, you will count the cost of position i+2 twice and get a wrong answer.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: 0

        I understood it later. Thanks for the tutorial; it's easier to understand than the editorial.

  • »
    »
    6 years ago, # ^ |
      Vote: 0

    Thanks for the explanation! I have a slight correction (please correct me if I'm wrong):

    when k = 1, we can use dp[i][j][1] to update dp[i+2][j+1][1] and **dp[i+2][j][0]**
    

    dp[i+2][j][0] should be dp[i+1][j][0]. This confused me a bit initially, but the correction might help others.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: +1

      It should be dp[i+2]. If you want to go to dp[i+1] from dp[i] with k = 1, it will be annoying to account for the cost of i+1, because we do not know whether we are going to build at position i+2. So use dp[i+2], for convenience.

»
6 years ago, # |
  Vote: 0

Editorial for Div. 2 B:

"Clearly, if it is possible then there are no more than 2 operations needed."

Can someone explain that? Thanks in advance.

  • »
    »
    6 years ago, # ^ |
    Rev. 3   Vote: 0

    Because applying an "and" with x to the same number twice does nothing new: (a&x)&x is equal to a&x.

    SORRY FOR MY POOR ENGLISH

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: +3

    An observation:

    if (value & x) = m,

    then (m & x) = m, and again (m & x) = m, and so on...

    (11000 & 10001) = (10000)

    (10000 & 10001) = (10000)

    (10000 & 10001) = (10000)

    ...and so on.

    So from a value V, at most one different number can be achieved.

    So it is enough to check whether two equal numbers can be achieved within two operations.

    sorry for my poor English.
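The observation above can be turned into a direct check. Here is a minimal sketch (the function name min_ops is my own, not from the editorial), assuming we try 0, 1, and 2 operations in order:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns the minimum number of operations to make two elements equal,
// or -1 if it is impossible. Since (a&x)&x == a&x, each element is
// changed at most once, so only 0, 1 or 2 operations need checking.
int min_ops(const vector<int>& a, int x) {
    int n = a.size();
    for (int i = 0; i < n; i++)          // 0 ops: a duplicate already exists
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j]) return 0;
    for (int i = 0; i < n; i++)          // 1 op: a[i]&x matches an untouched a[j]
        for (int j = 0; j < n; j++)
            if (i != j && (a[i] & x) == a[j]) return 1;
    for (int i = 0; i < n; i++)          // 2 ops: a[i]&x matches a[j]&x
        for (int j = i + 1; j < n; j++)
            if ((a[i] & x) == (a[j] & x)) return 2;
    return -1;
}
```

This sketch is O(n^2) for clarity; the same three checks can be done in linear time with hash sets.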

»
6 years ago, # |
Rev. 2   Vote: +19

In the tutorial for Div1F, I guess it should be d ≥ dp[A] instead of d ≤ dp[A]. Also, "O(2nn2)" seems like it was meant to be O(2^n · n^2) (wow, it seems to be a Codeforces rendering bug).

»
6 years ago, # |
  Vote: 0

Can someone explain Div2D more clearly?

  • »
    »
    6 years ago, # ^ |
      Vote: +18

    Maybe I can help if you ask an exact question.

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: +6

    Have you tried drawing the graph with nodes named r1, c1, r2, c2? You will see that a connected component accurately represents the table.

    Now you have K components; how do you connect them quickly? Well, you can just do K-1 merges between them, so the answer is K-1. (By "merge" I mean: choose a cell (r, c) such that r and c are not in the same component, and add (r, c) to connect them.)

    Actually vamaddur explained the intuition to me as a normal floodfill, done in a smarter way.
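The K-1 merging argument above can be sketched with a small DSU over n row-nodes and m column-nodes. This is a sketch with my own helper names (0-indexed rows and columns), not the jury code:

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {        // returns true if two components merged
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b;
        return true;
    }
};

// Row r is node r, column c is node n + c; each existing element (r, c)
// is an edge linking its row and column. The answer is components - 1.
int min_purchases(int n, int m, const vector<pair<int,int>>& elems) {
    DSU d(n + m);
    int comps = n + m;
    for (auto& e : elems)
        if (d.unite(e.first, n + e.second)) comps--;
    return comps - 1;
}
```

For an empty grid this gives n + m - 1, the cost of the L-shape construction.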

    • »
      »
      »
      6 years ago, # ^ |
        Vote: +1

      Submission for Reference

      Note that the comps part was just used for debugging. I can provide a more detailed explanation of my solution if someone requests it.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: 0

      What do you mean by "drawing out the graph with nodes named r1,c1,r2,c2"?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: +4

        So if we have elements in positions (r1, c1), (r1, c2), (r2, c1), where r1 ≠ r2 and c1 ≠ c2, then we can produce element (r2, c2).

        Draw an edge between (r1, c1), (r1, c2), (r2, c1), and notice that (r2, c2) is in the same connected component.

»
6 years ago, # |
Rev. 2   Vote: +1

Hey, I noticed something strange on Div2D/Div1B:

When you submit the O(n log n) DSU, it's actually faster than the O(n) one.

Strange, right?

My O(n log n) — 40990303 — 124 ms (path compression)

My O(n) — 40990296 — 155 ms (path compression + ranking)

I thought it must just be me. But I was curious, so I also edited vamaddur's solution to see if something similar would happen.

The same pattern appeared, even with a different implementation (he had some debugging stuff, etc.):

O(n) — 40963207 — 483 ms (path compression + ranking)

My edit of his O(n) into O(n log n) — 40990415 — 311 ms (removed ranking, only path compression)

Now, of course, I understand complexity is not everything; there is the constant factor. But I don't understand how the constant factor of the O(n) solution can outweigh the log n factor of the slower DSU.

I think this is strange; maybe it has something to do with the data? Why does the O(n log n) version run faster than the linear one?

  • »
    »
    6 years ago, # ^ |
      Vote: 0

    DSU is O(N log N), due to the find-father step going through at most log N different parents before arriving at the correct one.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: 0

      Path compression + ranking is actually O(n·a(n)), where a(n) roughly means you keep taking log2 until you reach 1. This is almost always treated as O(1), because for n = 2^16, a(n) = 4, so for most inputs a(n) won't be more than 4 or 5 operations.

      • »
        »
        »
        »
        6 years ago, # ^ |
        Rev. 2   Vote: 0

        However, I don't think that's a reason to treat InverseAckermann(n) as a complete constant...

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: 0

          It caps at 4 for this problem, so I just treat it as 4, which should be smaller than log2(n).

  • »
    »
    6 years ago, # ^ |
      Vote: 0

    I remember that path compression + ranking is not O(n)...

    Isn't it O(N * InverseAckermann(N))?

    BTW, maybe what makes ranking slower is the if-else statements and the extra RAM accesses for the rank array?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: 0

      My comment above.

      And yeah, I suspect something like that, but even if you take the maximum value a(4e5) = 4

      and compare O(4n) with O(n log n):

      log2(4e5) ≈ 18.6,

      so for 4n to lose against 18n, the added constant factor from the if-else statements and extra accesses would have to make each operation at least 4 or 5 times slower.

      Now I'm wondering whether that is actually the case, or whether the data is unbalanced. If it is the case, why not always just write the O(n log n) DSU with the better runtime?

      • »
        »
        »
        »
        6 years ago, # ^ |
        Rev. 2   Vote: 0

        As far as I know, the penalty for a branch misprediction in the CPU is huge, and it's hard for CPUs to predict complex conditional jumps like the if-else statements in ranking...

        (Sorry for the poor English... maybe the terminology isn't correct.)

        BTW, if we consider InverseAckermann(n) a constant because it's capped at 4, why can't I say that, since n is capped at 2e5, my algorithm is O(1)?

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
          Rev. 2   Vote: 0

          OK, implementing it with just if statements is still 155 ms.

          As for treating a(n) as a constant factor, that is typically done because it is easier to think about, so I just call it O(n).

          As for making something large a constant factor, it might be useful; I'm not sure.

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
              Vote: 0

            I think he means that the if statements are harder to branch-predict.

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
            Rev. 2   Vote: 0

            Well, actually I mean that the if statement in the ranking DSU, if(rank[x]>rank[y]), is expensive for the CPU (it uses two RAM references) and is harder than other branches (like loops) to predict, since the value of that expression is almost random. So the conditional jump in the ranking DSU (whether if or if-else) could be the performance bottleneck.

            Another thing that increases the constant factor could be cache misses during RAM access: since the rank array is over 1 MB, it cannot fit in cache entirely, which can be another reason.

            • »
              »
              »
              »
              »
              »
              »
              6 years ago, # ^ |
                Vote: 0

              So would using the conditional operator remove this bottleneck?

              • »
                »
                »
                »
                »
                »
                »
                »
                6 years ago, # ^ |
                Rev. 2   Vote: 0

                Maybe.

                Normally the ?: operator can compile to conditional-select instructions, which run faster than conditional jumps when the expressions have no side effects, because no branch prediction is involved. So you can try ?: to optimize that.

                Try this: prt[rank[x]>rank[y]?x:y]=rank[x]>rank[y]?y:x;

                But for the RAM accesses... I have no idea how to optimize that.
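For what it's worth, the ?:-based union suggested above can be written as a complete function like this. It's a sketch with hypothetical array names prt/rnk; note I attach the lower-rank root under the higher-rank one, and whether the compiler actually emits conditional moves depends on optimization flags:

```cpp
#include <bits/stdc++.h>

const int MAXN = 400005;
int prt[MAXN], rnk[MAXN];

void init(int n) {
    for (int i = 0; i < n; i++) prt[i] = i, rnk[i] = 0;
}

int find(int x) {                    // path compression
    return prt[x] == x ? x : prt[x] = find(prt[x]);
}

void unite(int a, int b) {
    int x = find(a), y = find(b);
    if (x == y) return;
    if (rnk[x] == rnk[y]) rnk[y]++;  // tie: y will become the root
    // attach the lower-rank root under the higher-rank one, via ?: selects
    prt[rnk[x] > rnk[y] ? y : x] = rnk[x] > rnk[y] ? x : y;
}
```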

»
6 years ago, # |
  Vote: 0

Can someone explain Div2B more clearly? I don't understand why at most 2 operations suffice to obtain an equal pair.

  • »
    »
    6 years ago, # ^ |
      Vote: 0

    Firstly, it's obvious that you'll apply the operation to any element at most once (because a&x == (a&x)&x). Say we apply it at position i. Suppose we can't find any element equal to a[i]&x in the array, so we apply the operation at some position j; that makes the count of operations 2. If we still can't find a match for a[j]&x but we do find a position k such that a[i]&x = a[k]&x, then the operation at position j was a waste. So at most 2 operations always suffice; the rest of the elements need not be tampered with.

»
6 years ago, # |
  Vote: 0

Which problems appeared in EJOI?

»
6 years ago, # |
  Vote: 0

HELP with Problem B. I don't understand the editorial at all.

  • »
    »
    6 years ago, # ^ |
      Vote: +3

    Ask an exact question, please :).

    • »
      »
      »
      6 years ago, # ^ |
        Vote: 0

      How is Problem B a greedy one?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: 0

        I wouldn't call problem B greedy.

        The problem is about detecting which one of 4 cases happens.

        But yes, some cases are detected greedily, like the case ans = 2.

»
6 years ago, # |
  Vote: +8

My solution for div1D, finally:

  • let's append undeletable 'a' and 'b' at the end of the original strings (trying both ways to do it) and group together consecutive identical letters
  • if both compressed strings have equal lengths, it has an easy greedy solution
  • the optimal solution will reach this situation in at most 2 steps
  • there are only a few types of operations we want to do in these two steps, it seems "move the first group from one string to the other" and "swap approx. half of one string with approx. half of the other string" are the only ones needed (6 types in total)
  • bruteforce these at most two steps, do greedy if possible afterwards, pick the best solution obtained this way

It has a large constant, but at least it should be easier to prove. It's still kind of a PITA to code.

»
6 years ago, # |
  Vote: +3

Wow, Div1B has an amazing solution in my opinion. What a great problem! It is so easy to code, yet so hard to solve.

  • »
    »
    6 years ago, # ^ |
      Vote: 0

    Can you help me with this problem, if possible with an elaborate explanation? Thanks!

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 2   Vote: 0

      The solution is above, so I'll try to explain how to get there intuitively.

      First, consider an empty n by m grid. Observe that the minimum number of elements you must buy is n+m-1 — you get that number by making an L shape. Why is it minimal? Well, you must have at least one element in every column and every row, and in a way they have to support each other. That is, simply making a diagonal touches each column and row, but you won't really be able to produce anything unless you have elements sharing a column and a row.

      But, if you do have 2 elements in the same row (say row 0), then they will support each other a lot! Suppose we have 2 elements in the same row, and suppose they're in columns 1 and 2. Then, if we put another element in column 1, we will control that row of column 2 automatically. So, if we can solve column 1, we can also solve column 2, and vice versa.

      So, in a way, row 0 links columns 1 and 2 together. Similarly, columns can link rows together. If we put an element in column 1, row 3, then solving row 0 will solve row 3 and vice versa.

      Thinking about this relationship, maybe you are inspired to draw these links as edges on a graph. Each element is a link, from row (whatever row it is) to column (whatever column it is). You end up with a bipartite graph. When two rows are in the same connected component, that means that for every solved element in the first row, there is a corresponding solved element in the second, and vice versa. If we can connect the entire graph, then we say that, if any element of a row is solved, the whole column is solved! And since we've connected the whole graph, we must have drawn connections to each column, so the whole graph must be solved!

      So, our goal is to make the described bipartite graph fully connected. To do this, count the number of connected components; then we want to add edges to connect them. We can always find an edge connecting 2 components here, so the answer is the number of connected components minus 1. To count them, we do a very simple dfs.

      Here is my solution for implementation details. http://mirror.codeforces.com/contest/1012/submission/41098552

»
6 years ago, # |
  Vote: 0

Another solution to Div1C (Div2E) is to extend the solution that you found for k and use it to find the solution for k+1.

Let's define used[i] = 1 if the i'th hill contains a flat, otherwise used[i] = 0.

To extend the solution from k to k+1, we need to build one extra flat, so we can make one of the following transformations to one of the contiguous subsequences:
1- Transform 0 0 0 → 0 1 0. That is, build a new house if its neighbors are not occupied.

2- Transform 0 01010 0 → 0 10101 0. This way we gain an extra 1. Of course, the length of this sequence is arbitrary.

Then you simply have to loop over the possible transformations and choose the transformation that minimizes the total cost.

However, keeping track of the total cost and the cost of the transformations turned out to be a little tedious, and the DP solution is definitely much simpler to code. Anyway, here is my submission if someone is interested: 41124194.

»
6 years ago, # |
Rev. 2   Vote: 0

I wrote a slow but correct dynamic programming solution for Div1D, and I found that if the length of either string is large, the decision-making appears to follow a regular pattern. You can find these patterns here (in function g(x, y, d), where x is the length of the first string, y is the length of the second string, and d indicates whether the first characters differ). Could anyone prove it or hack it?

»
4 years ago, # |
  Vote: 0

Please help: what is the complexity of this?

https://mirror.codeforces.com/contest/1012/submission/91080831

I thought it was O(n^3), but it got AC.