majk's blog

By majk, 5 years ago, In English

»
5 years ago, # |

Alternate solution for A: The answer is $$$\frac{m}{2}$$$ times the number of acyclic orientations of $$$G$$$. To count them, compute the chromatic polynomial in $$$O(n \cdot 3^n)$$$ by a DP where you partition the vertices into $$$1, 2, \dots, n$$$ independent sets, evaluate it at $$$-1$$$ and multiply by $$$(-1)^{n}$$$. (I never thought that I would use that theorem some day.) This gives $$$63$$$ points; to get $$$100$$$ you need to remove vertices of degree $$$\leq 1$$$ first.
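
For reference, a minimal sketch of that approach (my own code, with an assumed plain edge-list input, no modulus, and small $$$n$$$): partition the vertices into independent classes with a subset DP, evaluate the chromatic polynomial at $$$-1$$$, and flip the sign by $$$(-1)^n$$$ to get the number of acyclic orientations.

```cpp
// Sketch only: assumes small n, reads an edge list, ignores the modulus the real task would need.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<int> adj(n, 0);                      // adjacency bitmasks
    for (int e = 0; e < m; e++) {
        int u, v; cin >> u >> v; --u; --v;
        adj[u] |= 1 << v;
        adj[v] |= 1 << u;
    }
    int full = (1 << n) - 1;
    vector<char> indep(full + 1, 1);            // indep[S] = 1 iff S is an independent set
    for (int S = 1; S <= full; S++) {
        int v = __builtin_ctz(S);
        indep[S] = indep[S ^ (1 << v)] && !(adj[v] & (S ^ (1 << v)));
    }
    // part[j][S] = number of ways to partition S into exactly j non-empty
    // independent sets (each new class must contain the lowest vertex of S,
    // which avoids counting the same partition twice).  Total work: O(3^n * n).
    vector<vector<long long>> part(n + 1, vector<long long>(full + 1, 0));
    part[0][0] = 1;
    for (int j = 1; j <= n; j++)
        for (int S = 1; S <= full; S++) {
            int low = S & -S, rem = S ^ low;
            for (int sub = rem; ; sub = (sub - 1) & rem) {   // subsets of S containing `low`
                int T = sub | low;
                if (indep[T]) part[j][S] += part[j - 1][S ^ T];
                if (sub == 0) break;
            }
        }
    // Chromatic polynomial: P(x) = sum_j part[j][V] * x(x-1)...(x-j+1).
    // Number of acyclic orientations = (-1)^n * P(-1).
    long long P = 0, falling = 1;
    for (int j = 1; j <= n; j++) {
        falling *= -j;                          // (-1)(-2)...(-j)
        P += part[j][full] * falling;
    }
    long long acyclic = (n % 2 ? -P : P);
    cout << acyclic << '\n';                    // the task answer would be m/2 times this
}
```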

I knew the theoretical solution to C from maths, but I didn't want to implement it.

  • »
    »
    5 years ago, # ^ |

    We knew about this solution, but we didn't think it could pass the TL. Also, the fact that this is googleable is less of a concern for the onsite.

    We also considered slightly modifying the cost function or further limiting the set of allowed operations, but doing so complicated the statement too much.

  • »
    »
    5 years ago, # ^ |

    The value of the chromatic polynomial $$$C(x)$$$ at $$$x=-1$$$ can be computed in $$$O(2^n n^2)$$$ using subset power series.

  • »
    »
    5 years ago, # ^ |

    You know what you can do for 63 points? Add vertices in layers. The top layer is made up of all vertices with in-degree 0 (in the resulting DAG), the second layer of all vertices with in-degree 0 once those are removed, and so on. Each layer must contain no internal edges and each of its vertices must be connected to the previous layer; this means a sequence of layers uniquely defines a DAG. You can just generate all valid pairs of (this layer, previous layer) and do a DP where each state is (vertices in all layers so far, last layer). Much simpler and much more intuitive, since it just fixes the wrong idea "add vertices one by one and keep the set of those added so far".
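
    If it helps, here is a minimal self-contained sketch of that layered DP (my own code and input format, not the commenter's): a state is (vertices used so far, last layer); a new layer must be an independent set in which every vertex has a neighbour in the previous layer, and complete layer sequences are in bijection with DAGs.

```cpp
// Sketch only: small n, edge-list input, plain 64-bit counters.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<int> adj(n, 0);
    for (int e = 0; e < m; e++) {
        int u, v; cin >> u >> v; --u; --v;
        adj[u] |= 1 << v;
        adj[v] |= 1 << u;
    }
    int full = (1 << n) - 1;
    vector<char> indep(full + 1, 1);
    for (int S = 1; S <= full; S++) {
        int v = __builtin_ctz(S);
        indep[S] = indep[S ^ (1 << v)] && !(adj[v] & (S ^ (1 << v)));
    }
    // dp[(used, last)] = number of layer sequences covering `used`
    // whose most recent layer is `last`.
    map<pair<int, int>, long long> dp;
    for (int L = 1; L <= full; L++)
        if (indep[L]) dp[{L, L}] = 1;                    // the top layer is any independent set
    long long dags = 0;
    for (auto it = dp.begin(); it != dp.end(); ++it) {   // new states get strictly larger keys,
        auto [used, last] = it->first;                   // so they are visited later
        long long ways = it->second;
        if (used == full) { dags += ways; continue; }
        int rest = full ^ used;
        for (int L = rest; L > 0; L = (L - 1) & rest) {  // candidate next layer
            if (!indep[L]) continue;                     // no internal edges allowed
            bool ok = true;                              // every vertex of L needs a
            for (int t = L; t; t &= t - 1)               //   neighbour in `last`
                if (!(adj[__builtin_ctz(t)] & last)) { ok = false; break; }
            if (ok) dp[{used | L, L}] += ways;
        }
    }
    cout << dags << '\n';                                // the task answer would be m/2 times this
}
```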

»
5 years ago, # |

Problem C is usually known as the Wallace-Bolyai-Gerwien theorem.

Also problem A can be solved in $$$O(2^n n^2)$$$ time. Will write about it when I have some more time (hopefully).

  • »
    »
    5 years ago, # ^ |

    We're aware. The point is to solve it without knowledge of chromatic polynomials.

  • »
    »
    5 years ago, # ^ |

    I read your code for problem A, but I wasn't able to understand it.

    (Is it calculating $$$C(-1)$$$, where $$$C$$$ is the chromatic polynomial?)

    I have seen that many Chinese contestants are very strong at problems like problem A (efficient bitmask DP, squeezing $$$3^n \cdot \mathrm{poly}(n)$$$ down to $$$2^n \cdot \mathrm{poly}(n)$$$).

    Is there any material for studying this?

    • »
      »
      »
      5 years ago, # ^ |

      Yeah, the rough idea is that if you evaluate $$$C(c)$$$ for some positive $$$c$$$, you are convolving the independent-set bitmasks $$$c$$$ times. And when you evaluate $$$C(-1)$$$, you are actually inverting that.

      As for the details: assuming you know the FMT for OR-convolution, to do this kind of disjoint-OR convolution you only need to add one extra dimension (the popcount) to the arrays and also convolve along the popcounts. If an OR actually merged two overlapping sets, the resulting mask has fewer bits than the recorded popcount sum (some 1s are "missing"), so we get the correct result by reading only the positions whose recorded popcount equals the actual popcount of the mask. Finding the inverse isn't hard either: you just solve for the series that makes the product fit.
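
      In case it helps, here is a minimal self-contained sketch of that popcount trick (a disjoint-OR / subset convolution in $$$O(2^n n^2)$$$); all names here are mine, and the extra inverse/exp machinery needed to actually evaluate $$$C(-1)$$$ is not shown.

```cpp
// Sketch of subset (disjoint-OR) convolution via the "add a popcount dimension" trick.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// h[S] = sum over A | B = S, A & B = 0 of f[A] * g[B]
vector<ll> subset_convolution(int n, const vector<ll>& f, const vector<ll>& g) {
    int N = 1 << n;
    // fhat[k][S] keeps only masks of popcount k, then gets OR-zeta-transformed.
    vector<vector<ll>> fhat(n + 1, vector<ll>(N, 0)), ghat = fhat;
    for (int S = 0; S < N; S++) {
        fhat[__builtin_popcount(S)][S] = f[S];
        ghat[__builtin_popcount(S)][S] = g[S];
    }
    for (int k = 0; k <= n; k++)                 // zeta transform (sum over subsets)
        for (int b = 0; b < n; b++)
            for (int S = 0; S < N; S++)
                if (S >> b & 1) {
                    fhat[k][S] += fhat[k][S ^ (1 << b)];
                    ghat[k][S] += ghat[k][S ^ (1 << b)];
                }
    // Multiply pointwise in S, convolve along the popcount dimension.
    vector<vector<ll>> hhat(n + 1, vector<ll>(N, 0));
    for (int k = 0; k <= n; k++)
        for (int j = 0; j <= k; j++)
            for (int S = 0; S < N; S++)
                hhat[k][S] += fhat[j][S] * ghat[k - j][S];
    for (int k = 0; k <= n; k++)                 // inverse (Moebius) transform
        for (int b = 0; b < n; b++)
            for (int S = 0; S < N; S++)
                if (S >> b & 1) hhat[k][S] -= hhat[k][S ^ (1 << b)];
    vector<ll> h(N);
    for (int S = 0; S < N; S++)                  // read off only the correct popcount
        h[S] = hhat[__builtin_popcount(S)][S];
    return h;
}

int main() {                                     // tiny usage example
    int n = 2;
    vector<ll> f = {1, 2, 3, 4}, g = {5, 6, 7, 8};
    vector<ll> h = subset_convolution(n, f, g);
    // h[3] = f[0]*g[3] + f[1]*g[2] + f[2]*g[1] + f[3]*g[0] = 8 + 14 + 18 + 20 = 60
    for (ll x : h) cout << x << ' ';
    cout << '\n';
}
```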

      I think the awareness of these topics is partly because vfleaking (thanks, really!) wrote a well-written tutorial about formal power series (i.e. FWT, FMT, etc.) in his 2015 team paper (which every Chinese national team member has to write). These tricks then became widely known and popular in Chinese OI-style contests. You can try to read it, but I'm not aware of any translation.

»
5 years ago, # |

There is one more way to solve subtask 3 in Amusement Park. It is basically the same as for subtask 2, but with one optimization.

We can precompute all triangles in O(n^3). Then, when we build orientations in the recursive function, if the current edge forms a directed triangle with previously oriented edges, we cut off the current branch of the recursion. This helps when the number of edges is much larger than the number of vertices.
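
A hedged sketch of what that pruning might look like (my own framing, with the triangle test done on the fly rather than from a precomputed list): enumerate the $$$2^m$$$ orientations recursively, abandon a branch as soon as the freshly oriented edge closes a directed triangle, and still verify acyclicity at the leaves, since longer cycles are not caught by the triangle test.

```cpp
// Sketch only: counts acyclic orientations by brute force with triangle pruning.
#include <bits/stdc++.h>
using namespace std;

int n, m;
vector<pair<int,int>> edges;
vector<vector<int>> dir;               // dir[u][v] = 1 if the edge is currently oriented u -> v
long long acyclic_count = 0;

bool is_acyclic() {                    // straightforward topological-sort check
    vector<int> indeg(n, 0);
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++) indeg[v] += dir[u][v];
    vector<int> q;
    for (int v = 0; v < n; v++) if (!indeg[v]) q.push_back(v);
    int removed = 0;
    while (!q.empty()) {
        int u = q.back(); q.pop_back(); removed++;
        for (int v = 0; v < n; v++)
            if (dir[u][v] && --indeg[v] == 0) q.push_back(v);
    }
    return removed == n;
}

bool closes_triangle(int u, int v) {   // would orienting u -> v close a directed 3-cycle?
    for (int w = 0; w < n; w++)
        if (dir[v][w] && dir[w][u]) return true;
    return false;
}

void rec(int i) {
    if (i == m) { if (is_acyclic()) acyclic_count++; return; }
    auto [a, b] = edges[i];
    for (int t = 0; t < 2; t++) {
        int u = t ? b : a, v = t ? a : b;
        if (!closes_triangle(u, v)) {  // the pruning described in the comment
            dir[u][v] = 1;
            rec(i + 1);
            dir[u][v] = 0;
        }
    }
}

int main() {
    cin >> n >> m;
    edges.resize(m);
    dir.assign(n, vector<int>(n, 0));
    for (auto& [a, b] : edges) { cin >> a >> b; --a; --b; }
    rec(0);
    cout << acyclic_count << '\n';
}
```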

»
5 years ago, # |

Some details regarding our solution to A:

  • This solution can easily be extended to arbitrary costs of flipping edges and to more general costs of DAGs such as (sum of edge costs)^k, with an additional O(k^2) factor in the complexity. Can chromatic polynomials be extended in the same way?
  • The time complexity is approximately O(2.5^n * polynomial) in practice. I suspect the base is sqrt(6); for example, this bound is trivial if the minimum degree is O(n/2) and the maximum number of states induced (generated) by vertices n/2..n-1 being added is O(6^(n/2)). Can you prove that the presented algorithm is actually o(3^n)?
  • »
    »
    5 years ago, # ^ |

    How do you store the states for A? $$$O(3^n)$$$ memory is obviously too much. For my last submission, I stored a vector of pairs $$$(B,dp[A][B])$$$ for each $$$A$$$, but this only works because the nonzero states are sparse. Is there a better way to do this?

    • »
      »
      »
      5 years ago, # ^ |

      vector<unordered_map> is one option, but I used (see my solutions below) a simple vector<vector<state>>. I just push_back, and before processing the states for a given set of used vertices, I sort them and merge states with the same candidate set. This adds a logarithm to the complexity, but it's actually really fast.
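
      A tiny illustration of that sort-and-merge step; the state layout and names below are hypothetical, just to show the bookkeeping.

```cpp
// Hypothetical State layout; only the compress step reflects the comment above.
#include <bits/stdc++.h>
using namespace std;

struct State {
    unsigned candidates;   // bitmask of candidate vertices for this state
    long long ways;        // dp value accumulated for this candidate set
};

// Merge duplicate candidate sets before processing the states of one
// "used vertices" mask: sort by candidates, then sum each run of equal masks.
void compress(vector<State>& states) {
    sort(states.begin(), states.end(),
         [](const State& a, const State& b) { return a.candidates < b.candidates; });
    size_t out = 0;
    for (size_t i = 0; i < states.size(); ) {
        State merged = states[i];
        size_t j = i + 1;
        while (j < states.size() && states[j].candidates == merged.candidates)
            merged.ways += states[j++].ways;
        states[out++] = merged;
        i = j;
    }
    states.resize(out);
}

int main() {               // usage example
    vector<State> s = {{0b101, 2}, {0b011, 1}, {0b101, 5}};
    compress(s);
    for (auto& st : s) cout << st.candidates << ": " << st.ways << '\n';
    // prints "3: 1" and "5: 7"
}
```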

»
5 years ago, # |

My solutions:

»
5 years ago, # |

I think the checker for problem C is incomplete. In my submission on test 3 (https://mirror.codeforces.com/contest/1193/submission/57907084) I tried to print a tape operation with a rotated rectangle as the result, but the checker did not accept it. From the equivalent definition it seems that we can actually rotate figures. Could you please check this issue?

  • »
    »
    5 years ago, # ^ |

    If you just want to rotate, the 2 lines you print should be identical. The rotation is uniquely defined by the id of the original shape and by the new partial shape.

    • »
      »
      »
      5 years ago, # ^ |

      Sorry, I think I've given the wrong link. https://mirror.codeforces.com/contest/1193/submission/57911386 In this attempt I print two identical lines. What's the problem here?

      • »
        »
        »
        »
        5 years ago, # ^ |

        You're renumbering the vertices in the rotation. The checker doesn't look at all rotations of your shape and check if one of them satisfies rot(angle, orig_shape.x[i], orig_shape.y[i]) == (rotated_shape.x[i], rotated_shape.y[i]). It only checks if the shape you printed satisfies that.

        In other words, the checker follows a natural action you'd perform if you actually had a piece of paper, scissors and tape, and verifies that you know exactly what actions to perform, not just "I want this polygon to fit here". You pick a polygonal piece you just cut out with scissors and placed somewhere on the desk, and write an integer next to each vertex to make sure everything is labeled and you don't get confused, and you also write an id on each piece. Then, you rotate this piece by some angle; the same happens for other pieces. Then, you move those pieces in some ways and note the new coordinates of their vertices in the same way you numbered them before. Finally, you tape those pieces together, erase all numbers written on them and since you got a new piece, again, you number its vertices nicely in some counter-clockwise order and write an id on it.

        I don't even know the code of the checker. This is all knowledge of the statement (after reading through it in detail several times) and the checker log.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          Thanks a lot! Yeah, it makes more sense to me now. Unfortunately it wasn't so clear to me earlier, sorry. I thought that the order of the vertices mattered only for the mirroring case.

»
5 years ago, # |

During the actual contest, were contestants able to view the checker log for the example test cases of C? If not, it would have been quite frustrating for those stuck at 0 points ... (Also, why isn't the grader provided for local testing?)

  • »
    »
    5 years ago, # ^ |

    The on-site contestants had the checker freely available as a binary. Showing them just the checker log would be meaningless without also seeing the input, and we did not want to give them the test cases.

»
5 years ago, # |

Can somebody explain how the updates on the multiset are done in problem B? I tried looking at some submissions, but I don't understand why many people add and subtract values in the dp tables. In particular I was looking at this submission: 57885601

  • »
    »
    5 years ago, # ^ |

    The map stores the nonzero differences $$$dp[i][x] - dp[i][x-1]$$$.

  • »
    »
    5 years ago, # ^ |

    I'm not sure if that solution is the same as mine, but after looking into that submission, I think the idea is the same.

    In the official solution, the author mentions a data structure called a 'range tree', but for subtask 7 we don't have to use it. The idea of this solution comes from solving subtask 7.

    The idea is to keep the values in a multiset; the hard part is updating the root of a subtree in the set. Consider the non-decreasing array $$$C[u,t]$$$ from the official solution: when we update the value at the root of a subtree, one element of $$$C[u,t]$$$ is increased and the array is no longer non-decreasing. We fix that by resetting the values after that $$$t$$$.

    For example: 0 2 2 3 4 4 4 5 5 6 9 9 9

    may be changed into 0 2 2 3 7 4 4 5 5 6 9 9 9

    so we must reset it to 0 2 2 3 7 7 7 7 7 7 9 9 9

    To perform the reset, we can simply erase items from the multiset, and if the last item does not match exactly, we just subtract the remaining amount there.
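
    For what it's worth, here is one way this reset step could look if the non-decreasing array is stored as the map of differences mentioned in the reply above (a sketch of mine with made-up names, not necessarily what the linked submissions do; the exact problem-B semantics are only assumed here).

```cpp
// Sketch: the map stores the nonzero values of dp[x] - dp[x-1].
#include <bits/stdc++.h>
using namespace std;

// Add `w` to dp[d] and clamp every later entry up to the new dp[d],
// which keeps the array non-decreasing; amortised O(log) per erased key.
void add_at(map<long long, long long>& diff, long long d, long long w) {
    diff[d] += w;                              // dp[d], dp[d+1], ... all go up by w
    auto it = diff.upper_bound(d);
    while (it != diff.end() && w > 0) {
        if (it->second <= w) {                 // this difference is swallowed entirely
            w -= it->second;
            it = diff.erase(it);
        } else {                               // last affected key: subtract the rest
            it->second -= w;
            w = 0;
        }
    }
}

// dp[x] = sum of differences with key <= x (linear scan, just for the demo).
long long value_at(const map<long long, long long>& diff, long long x) {
    long long v = 0;
    for (const auto& [key, delta] : diff) { if (key > x) break; v += delta; }
    return v;
}

int main() {                                   // usage example
    map<long long, long long> diff;            // represents dp = 0 everywhere
    add_at(diff, 2, 3);                        // dp becomes 0 0 3 3 3 ...
    add_at(diff, 5, 1);                        // dp becomes 0 0 3 3 3 4 4 ...
    add_at(diff, 3, 2);                        // dp becomes 0 0 3 5 5 5 5 ...
    cout << value_at(diff, 4) << ' ' << value_at(diff, 5) << '\n';   // prints "5 5"
}
```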

    The misleading part is that this sounds like each update takes $$$\mathcal{O}(N)$$$, so updating every node would take $$$\Omega(N^2)$$$. But that's wrong: every deleted value must have been inserted beforehand, and since we insert only $$$\mathcal{O}(N \log{N})$$$ values (thanks to the small-to-large merging trick), the total cost of all erasures is also $$$\mathcal{O}(N \log{N})$$$. So the amortized time is $$$\mathcal{O}(\log{N})$$$ per operation, plus $$$\mathcal{O}(\log{N})$$$ to adjust the value of the last element; the update is $$$\mathcal{O}(\log{N})$$$ amortized.

    Therefore, the running time of the whole solution is $$$\mathcal{O}(N \log^{2}{N})$$$: small-to-large merging gives $$$\mathcal{O}(N \log N)$$$ operations, and each operation takes $$$\mathcal{O}(\log{N})$$$ on top.

    Not sure if this is correct or not, but it is my idea during the contest. My submission: 57889967

    • »
      »
      »
      5 years ago, # ^ |

      I'm the main author of the range/segment/interval tree idea. I didn't realise until the contest that it's solvable just with map/set, but yeah, it should work. The idea I had is more difficult to implement, but it was the most obvious to me: we just want a structure that implicitly stores an array of size $$$K$$$ and supports the operations get(index), set(index, value) and "take this array and add it to the structure of another array", with the realisation that we only need add(l..r, value) for the last operation.
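
      Just to illustrate that interface (this is not the author's segment-tree code; it is a swapped-in alternative): the three operations can also be served by a Fenwick tree over the difference array, each in $$$O(\log K)$$$.

```cpp
// Sketch: get(index), add(l..r, value) and set(index, value) via a Fenwick tree.
#include <bits/stdc++.h>
using namespace std;

struct RangeAddPointGet {
    int n;
    vector<long long> bit;                   // Fenwick tree over the difference array
    RangeAddPointGet(int n) : n(n), bit(n + 1, 0) {}
    void point_add(int i, long long v) {     // diff[i] += v  (1-based)
        for (; i <= n; i += i & -i) bit[i] += v;
    }
    void add(int l, int r, long long v) {    // a[l..r] += v
        point_add(l, v);
        if (r + 1 <= n) point_add(r + 1, -v);
    }
    long long get(int i) {                   // a[i] = diff[1] + ... + diff[i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += bit[i];
        return s;
    }
    void set(int i, long long v) { add(i, i, v - get(i)); }
};

int main() {                                 // usage example
    RangeAddPointGet t(10);
    t.add(2, 5, 3);                          // a = 0 3 3 3 3 0 0 0 0 0
    t.set(4, 10);                            // a = 0 3 3 10 3 0 0 0 0 0
    cout << t.get(3) << ' ' << t.get(4) << ' ' << t.get(6) << '\n';   // prints "3 10 0"
}
```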

»
5 years ago, # |

Here is my $$$O(3^n)$$$ solution for problem A.

It is well known that the final result is $$$\frac{m}{2}$$$ times the number of ways to orient the undirected graph $$$G$$$ into a DAG (i.e. the number of acyclic orientations). Let $$$dp[i]$$$ be the number of ways to build a DAG on the vertex set $$$i$$$, and let the boolean array $$$jud[i]$$$ say whether the set $$$i$$$ contains no internal edges (that is, if there are $$$u\in i$$$, $$$v\in i$$$ with an edge between $$$u$$$ and $$$v$$$, then $$$jud[i]=0$$$; otherwise $$$jud[i]=1$$$).

Consider how to calculate $$$dp[i]$$$. Enumerate the vertex set $$$i$$$. Since $$$i$$$ is the vertex set of a DAG, there must be some vertices with in-degree $$$0$$$. So enumerate a non-empty subset $$$j$$$ of $$$i$$$ and require all vertices of $$$j$$$ to have in-degree $$$0$$$. Clearly we need $$$jud[j]=1$$$ (if there were an edge between two vertices of $$$j$$$, one of its endpoints would have nonzero in-degree). The same DAG may be counted more than once this way, so we count by the inclusion-exclusion principle:

$$$ dp[i]=\sum_{j\subseteq i,\; j\neq \emptyset,\; jud[j]=1}(-1)^{|j|+1}dp[i\oplus j] $$$

Excuse my poor English...XD. Here is my code.
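
For completeness, a minimal self-contained sketch of this recurrence (the input format and the missing modulus are my own simplifications):

```cpp
// Sketch only: computes dp[full] = number of acyclic orientations for a small graph.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<int> adj(n, 0);
    for (int e = 0; e < m; e++) {
        int u, v; cin >> u >> v; --u; --v;
        adj[u] |= 1 << v;
        adj[v] |= 1 << u;
    }
    int full = (1 << n) - 1;
    vector<char> jud(full + 1, 1);               // jud[S] = 1 iff S is independent
    for (int S = 1; S <= full; S++) {
        int v = __builtin_ctz(S);
        jud[S] = jud[S ^ (1 << v)] && !(adj[v] & (S ^ (1 << v)));
    }
    vector<long long> dp(full + 1, 0);
    dp[0] = 1;
    for (int S = 1; S <= full; S++)              // dp over all vertex subsets
        for (int j = S; j > 0; j = (j - 1) & S)  // j = vertices forced to have in-degree 0
            if (jud[j]) {
                long long term = dp[S ^ j];
                dp[S] += (__builtin_popcount(j) & 1) ? term : -term;   // (-1)^{|j|+1}
            }
    cout << dp[full] << '\n';                    // the task answer would be m/2 times this
}
```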

  • »
    »
    5 years ago, # ^ |

    But why is the time complexity $$$O(3^n)$$$? I know it must be right because it runs like $$$O(3^n)$$$, but I can't prove it.

    • »
      »
      »
      5 years ago, # ^ |

      Well, it's actually not that hard to prove. If a set has $$$x$$$ elements, then it has $$$2^x$$$ subsets, so the total number of subsets over all sets is $$$\sum_{x=0}^{n}\binom{n}{x}2^{x}=(2+1)^{n}=3^{n}$$$ by the binomial theorem.

      Sorry for my English

  • »
    »
    5 years ago, # ^ |

    Here is an almost identical solution, but without using the fact that the answer is $$$\frac{m}{2}$$$ times the number of ways to construct a DAG. In this solution I calculate the answer for each subset using the same method, and the complexity is the same (but it's kind of slow, and there are some constant-factor optimizations in there).

    • »
      »
      »
      5 years ago, # ^ |

      I don't quite understand, though... Which value did you memoize? And how do you carry the number of ways across the transitions?

»
5 years ago, # |

Any estimate of when the test data for day 2 will be published?

»
5 years ago, # |

If I recall correctly, the invariant for 3D scissors-congruence is the Dehn invariant: the sum over all edges of (edge length) $$$\otimes$$$ (dihedral angle between the two faces containing that edge), with the angle taken modulo rational multiples of $$$\pi$$$.

»
5 years ago, # |

You can also solve B with square-root decomposition: run an Euler tour on the tree and find, for each vertex, the range of positions that makes up its subtree. You can build $$$\sqrt{N}$$$ segment trees, one per block of size $$$\sqrt{N}$$$ of the Euler tour, and simply query for the minimum $$$K$$$ value at each vertex to calculate the DP.

»
4 years ago, # |

How to solve B with HLD?

  • »
    »
    6 weeks ago, # ^ |

    Hi! If you sort the fruits by their day $$$d$$$, you get a straightforward $$$O(n^2)$$$ solution. For each new day $$$D$$$, we build $$$dp(u)$$$, the maximum total juiciness we can get from the subtree of $$$u$$$, from the bottom up: if $$$d(u) = D$$$ (the current day), then $$$dp(u) = w(u) + \sum dp(v)$$$ over the children $$$v$$$ of $$$u$$$.

    dfs(u): sum = 0; for each child v of u: dfs(v), sum += dp(v); dp(u) = sum; if d(u) == D then dp(u) += w(u)

    We can build HLD on top of this to optimize the code. Let $$$f(i)$$$ denote $$$dp(i)$$$ on the day $$$D = d(i)$$$. Note that once a node $$$i$$$ has been calculated, its value stays either $$$f(i)$$$ (when $$$d(i) = D$$$) or the sum of its children's values (after some days).

    Once $$$dp(i)$$$ is no longer just $$$f(i)$$$, i.e. when a new node $$$p$$$ in its subtree becomes ripe, $$$dp(i)$$$ gains $$$w(p)$$$. Build HLD and maintain a set. Time complexity: $$$O(n \log^2 n)$$$.