Comments

See this case:

8
00100110
10001101

You can see that if you clean the cell in (1, 3) (1-indexed, (row, col)) you will avoid doing some forced cleaning twice near the end.

If you are doing DP, probably you just need to add one more transition to take care of this.

On Tensei8869, Invitation to CODUO 2022, 4 years ago

I see "Form link is expired" when I try to register

Could you elaborate a bit more on this part?

Consider the directed graph with these conditions. One can see that this graph is acyclic, so there is a topological order of that graph which will satisfy all the conditions.

I.e., how does finding that there are no contradictions when labeling each edge as odd/even guarantee that we can ultimately remove all the edges in some order (in particular, according to the way it's done in the editorial's implementation)?

I didn't get Bellman-Ford to pass, but alternatively you can do Dijkstra instead by grounding your constraints on 0s instead of 1s, i.e.

  • If 1s must satisfy $$$B_{R_j} - B_{L_j - 1} \le X_j$$$, then 0s must satisfy $$$B_{R_j} - B_{L_j - 1} \le R_j - L_j + 1 - X_j$$$

The other two constraints must be satisfied as well:

  • If there are $$$B_i$$$ 0s, then there's at least that many 0s in $$$B_{i + 1}$$$, i.e. $$$B_i \le B_{i + 1} \Leftrightarrow B_i - B_{i + 1} \le 0$$$

  • There can be at most one extra zero added from $$$B_{i}$$$ to $$$B_{i + 1}$$$, i.e. $$$B_{i + 1} \le B_i + 1 \Leftrightarrow B_{i + 1} - B_i \le 1$$$.

Constructing a constraint graph (see link provided by Lain) and running Dijkstra (OK as there are no negative edges) leads to its distance array $$$dist$$$ being equal to $$$B$$$ (which is now a prefix-sum array on the number of 0s), from which it is easy to derive an answer. Also, $$$dist$$$ has the property that $$$B[n] - B[0]$$$ is maximized, and so the number of 0s is as large as possible.
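Roughly, the constraint graph plus Dijkstra looks like this. A minimal sketch on a toy instance of my own (not the linked submission): every constraint $$$B_v - B_u \le w$$$ becomes an edge $$$u \to v$$$ with weight $$$w$$$, and shortest distances from node 0 give a feasible $$$B$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

using Edge = pair<int, long long>;  // (to, weight)

// Dijkstra over the constraint graph: every constraint B[v] - B[u] <= w
// becomes an edge u -> v with weight w; dist[] from node 0 is then a
// feasible assignment of B (and pushes each B[v] as high as possible).
vector<long long> dijkstra(int n, const vector<vector<Edge>>& adj) {
    const long long INF = LLONG_MAX / 4;
    vector<long long> dist(n, INF);
    priority_queue<pair<long long, int>, vector<pair<long long, int>>, greater<>> pq;
    dist[0] = 0;
    pq.push({0, 0});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;  // stale entry
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

// Toy instance: nodes are prefix sums B[0..3] over 3 positions, with
// B[i] <= B[i+1], B[i+1] <= B[i] + 1, and one interval constraint
// B[3] - B[0] <= 2 ("at most two zeros among positions 1..3").
vector<long long> exampleB() {
    vector<vector<Edge>> adj(4);
    for (int i = 0; i < 3; i++) {
        adj[i + 1].push_back({i, 0});  // B[i] - B[i+1] <= 0
        adj[i].push_back({i + 1, 1});  // B[i+1] - B[i] <= 1
    }
    adj[0].push_back({3, 2});          // B[3] - B[0] <= 2
    return dijkstra(4, adj);
}
```

On this toy instance the resulting $$$B$$$ is $$$[0, 1, 2, 2]$$$: it grows as fast as the step constraints allow until the interval constraint caps it.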

Apparently this technique is well-known in Japan as 牛ゲー. For anyone reading this and wanting to learn more, you can also look up chapter 24.4 in CLRS.

My submission: https://atcoder.jp/contests/abc216/submissions/25456483

In problem F, to optimize the solution to $$$O(2^n \cdot n)$$$ using the first way described, how does one actually achieve that complexity? Wouldn't updating the products and rolling back (in $$$O(n)$$$) for each possible state of $$$f(i, S)$$$ calculated recursively (of which there are $$$(n + 2) \cdot 2^{n + 2}$$$ states) take $$$O(2^n \cdot n^2)$$$ in total?

You would need to keep the number of zeros/ones in each node, and then you can find the first prefix such that there are $$$\ge k$$$ zeros/ones.
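In case it helps anyone: if the counts are kept in a segment tree, "first prefix with at least $$$k$$$ ones" is a standard descent query. A generic sketch (my own structure, not necessarily what the parent comment had in mind):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree storing, per node, the number of ones in its range.
struct SegTree {
    int n;
    vector<int> cnt;
    SegTree(const vector<int>& a) : n(a.size()), cnt(4 * a.size()) { build(1, 0, n - 1, a); }

    void build(int node, int l, int r, const vector<int>& a) {
        if (l == r) { cnt[node] = a[l]; return; }
        int m = (l + r) / 2;
        build(2 * node, l, m, a);
        build(2 * node + 1, m + 1, r, a);
        cnt[node] = cnt[2 * node] + cnt[2 * node + 1];
    }

    // Smallest index i such that the prefix [0, i] contains k ones
    // (i.e. the position of the k-th one), or -1 if there are fewer than k.
    int kth(int k) { return cnt[1] < k ? -1 : kth(1, 0, n - 1, k); }

    int kth(int node, int l, int r, int k) {
        if (l == r) return l;
        int m = (l + r) / 2;
        if (cnt[2 * node] >= k) return kth(2 * node, l, m, k);   // enough ones on the left
        return kth(2 * node + 1, m + 1, r, k - cnt[2 * node]);   // otherwise descend right
    }
};
```

The descent visits one root-to-leaf path, so each query is $$$O(\log n)$$$ instead of a linear scan over prefixes.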

Here are some slides on the exponential formula that are also very useful.

Can you elaborate on what a convexity argument is?

Could you add yukicoder to the list of platforms?

I will need to go back to this problem again lol. Thanks for writing this down!

Here's a smaller case:

4 4
1 2
2 3
1 4
4 3

If we DFS from 1, and end up visiting in this order: 1 -> 2 -> 3 -> 4, when we are at 4, your solution would count that 4 can only have one color choice (since 1 and 3 have been visited), but since there is no edge between 1 and 3, nothing forbids them from having the same color.

If this DFS algorithm worked, then k-coloring would be solvable in polynomial time :)

Do we have an update on this contest?

Supposing we know $$$a$$$ (we do, these are the initial positions of the tokens) and $$$b$$$ (we don't, these are the final positions of the tokens): if the XOR (symmetric difference) of these two sets is $$$T$$$, then at least we know the answer exists, and $$$b$$$ is one such possibility.

The intuition here is the following:

  • When we move a token from position $$$a$$$ to position $$$b$$$, moving the token directly from $$$a$$$ to $$$b$$$ ($$$a \rightarrow b$$$) is equivalent (albeit with different cost) to $$$a \rightarrow INF \rightarrow b$$$ (since the path $$$a \rightarrow b$$$ is traversed once, the path $$$b \longleftrightarrow INF$$$ is traversed twice, so the edge colors in this latter part are unaffected)
  • Writing the move above ($$$a \rightarrow b$$$) in terms of applying "XOR" to a range, this is doing XOR $$$1$$$ in the range $$$[a, b)$$$, which is the same as doing both XOR $$$1$$$ in range $$$[a, INF)$$$ and XOR $$$1$$$ in range $$$[b, INF)$$$ (given the rationale described previously that these two are "equivalent").
  • Jumping ahead a bit, let's look at $$$T$$$. If for each position $$$t \in T$$$ we XOR 1 the range $$$[t, INF]$$$, and given that $$$|T|$$$ is even, you can notice that we will get the alternating 0 — 1 — 0 ... — 0 sequence of edge segments. (think about why this only works for even $$$|T|$$$).
  • Each of the XOR 1 $$$[t, INF]$$$ (for all $$$t \in T$$$) operations above can be done an odd number of times. You will still get the alternating 0 — 1 — 0 ... — 0 sequence of edge segments.
  • Remember how in our second bullet point we showed that doing the move $$$a \rightarrow b$$$ (XOR $$$1$$$ in the range $$$[a, b)$$$) is equivalent to $$$a \rightarrow INF \rightarrow b$$$ (XOR $$$1$$$ in range $$$[a, INF)$$$ and XOR $$$1$$$ in range $$$[b, INF)$$$)? That means that our desired set of XOR ranges (XOR 1 $$$[t, INF]$$$ (for all $$$t \in T$$$)) can correspond back to some $$$a$$$ and $$$b$$$. We know $$$a$$$, so we need to find $$$b$$$.

Now, from $$$a$$$ and $$$T$$$ we can recover $$$b$$$. Because:

  • $$$a \oplus b = T$$$ (from the previous paragraph)

  • $$$a \oplus T = b$$$.

It should be the case that $$$|a| \geq |b|$$$. Otherwise, we would need more final positions than the number of initial tokens we have.

Note that when we recover $$$b$$$ from $$$a \oplus T$$$, it could be that $$$|b| < |a|$$$, in which case some tokens in $$$a$$$ will have to be paired with each other.

Finally, we do the following DP. $$$dp[i][j]$$$: The minimum cost of matching the first $$$i$$$ positions in (sorted) $$$a$$$ with the first $$$j$$$ positions in (sorted) $$$b$$$. For each transition, we might decide to pair $$$(i, j)$$$ or pair $$$(i, i + 1)$$$ (since there are not enough $$$b$$$s). The answer is when everything has been paired ($$$dp[|a|][|b|]$$$).
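A minimal sketch of that DP, assuming the cost of a pairing is the distance between the two positions (the cost function and the names are my assumptions; the transitions follow the description above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[i][j]: minimum cost of resolving the first i tokens of sorted a,
// having consumed the first j target positions of sorted b.
// Transitions: token i-1 pairs with target j-1, or tokens i-1 and i-2
// pair with each other (when there are not enough b's).
// Returns a large INF value when no perfect pairing exists
// (e.g. when |a| - |b| is odd or negative).
long long matchCost(vector<long long> a, vector<long long> b) {
    sort(a.begin(), a.end());
    sort(b.begin(), b.end());
    int n = a.size(), m = b.size();
    const long long INF = LLONG_MAX / 4;
    vector<vector<long long>> dp(n + 1, vector<long long>(m + 1, INF));
    dp[0][0] = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= min(i, m); j++) {
            if (j >= 1 && dp[i - 1][j - 1] < INF)  // pair a[i-1] with b[j-1]
                dp[i][j] = min(dp[i][j], dp[i - 1][j - 1] + llabs(a[i - 1] - b[j - 1]));
            if (i >= 2 && dp[i - 2][j] < INF)      // pair a[i-1] with a[i-2]
                dp[i][j] = min(dp[i][j], dp[i - 2][j] + llabs(a[i - 1] - a[i - 2]));
        }
    return dp[n][m];
}
```

For example, with $$$a = \{1, 2, 7\}$$$ and $$$b = \{6\}$$$ the tokens at 1 and 2 pair with each other and the token at 7 goes to 6, for a total cost of 2.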

That helps, thanks!

Nice proof! There are two things I would like to ask about:

  1. Why is the sum $$$|Y_a| + |X_b| \geq n - 2$$$ and not $$$\geq n - 1$$$? (given that $$$k[b] - k[a]$$$ could be at least $$$0$$$).

  2. On this line: "we have that there are $$$n - 2$$$ nodes and the sum of elements of $$$Y_a$$$ and $$$X_b$$$ is bigger than $$$n - 2$$$". Why can we say that it is bigger than $$$n - 2$$$? Supposing the inequality you mentioned in 1. is $$$\geq n - 2$$$, would the statement be "bigger than or equal to $$$n - 2$$$"? (or "bigger than or equal to $$$n - 1$$$" if that's the correct right-hand side of the inequality). In that case, how can we show that there will be at least one element in common?

In problem D (Moving pieces on a line), can anyone provide some intuition/case why the following greedy solution is incorrect?

  1. Mark an edge with 1 if we need to make a change to it (i.e. let a token traverse that edge). Otherwise, mark it with 0 (no token needs to traverse the edge). Initially, each edge in the segments $$$[1, t_0 - 1]$$$, $$$[t_1, t_2 - 1]$$$, etc. is marked with 0. Edges in the segments $$$[t_0, t_1 - 1]$$$, $$$[t_2, t_3 - 1]$$$, etc. are marked with 1.

  2. Sort tokens by their position (in non-decreasing order). Iterate over each token in that order, and see if there's any edge to the left marked with 1. If there is at least one, move the token to the vertex left to the leftmost edge marked with 1. Make sure to flip/xor every edge that was traversed on that path. After this, "that leftmost edge" that was marked with 1 will now be marked with 0. Some other edges to the right of it might flip to 1 though. For each token that had to be moved, mark it as moved. Some might not be moved if there's no edge marked with 1 to the left of it.

  3. Repeat this process, but now sort them in non-increasing order: iterate over each token in this order, but now see if there's any edge to the right marked with 1. If so, take the rightmost such edge, move your token to the vertex just after it, and flip/xor every edge along the way.

At the end you check if every edge is marked with 0. If they are, there's a solution, otherwise there isn't. The cost of your solution is defined by how you decided to greedily move the tokens (as described above).

It's never possible for the set of odd-degree vertices to have odd size.

Why: think about including an edge. An edge is always incident to two vertices. When the edge is included, the parity of both is changed, so your set of odd-degree vertices changes in size by +2, 0, or -2. Size 0 is always possible (don't include any edge), and from there you can only move by +2, 0, or -2 as described.
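If you want to convince yourself mechanically, here is a brute-force check of that invariant over every edge subset of a small example graph of my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of odd-degree vertices induced by the edge subset `mask`
// (bit i of mask says whether edges[i] is included).
int oddCount(int n, const vector<pair<int, int>>& edges, unsigned mask) {
    vector<int> deg(n, 0);
    for (size_t i = 0; i < edges.size(); i++)
        if (mask >> i & 1) { deg[edges[i].first]++; deg[edges[i].second]++; }
    return count_if(deg.begin(), deg.end(), [](int d) { return d % 2 == 1; });
}

// Every subset of edges yields an even number of odd-degree vertices.
bool allSubsetsEven(int n, const vector<pair<int, int>>& edges) {
    for (unsigned mask = 0; mask < (1u << edges.size()); mask++)
        if (oddCount(n, edges, mask) % 2 != 0) return false;
    return true;
}
```

Toggling one edge changes two degrees, so the parity of the odd-degree count can never flip, exactly as argued above.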

Heh, I feel like problem E from ARC 115 (contest that took place ~2 hours before this one) is very similar to problem E here.

Sorry for reviving this. But in case anyone is interested, I drafted this compilation on the AND, OR and XOR convolutions sometime back in 2018: link (maybe chapter 3 and 4 could be interesting to someone).


$$$C = 2.5 \cdot 10^6$$$ is the maximum value an element can take.

+1, I would also appreciate it if someone could write down an additional solution sketch.


You can think of the pigeonhole principle.

Remember that there are always four distinct indices $$$x, y, z, w$$$ such that $$$a_x + a_y = a_z + a_w$$$ ($$$ = S$$$) when there are at least four distinct pairs of indices $$$(x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4)$$$ (indices may be repeated across them) such that $$$a_{x_1} + a_{y_1} = a_{x_2} + a_{y_2} = a_{x_3} + a_{y_3} = a_{x_4} + a_{y_4} = S$$$.

Let's imagine we are running our $$$\mathcal{O}(n^2)$$$ bruteforce and trying every pair; for every pair $$$(i, j)$$$ we compute its sum $$$a_i + a_j = S$$$ and increment the count on how many times we have seen the sum $$$S$$$.

Given that the maximum sum of any pair is $$$2C$$$, the "longest" it would take the brute force to see one sum $$$S$$$ occurring four times is around $$$3 \cdot (2C) = \mathcal{O}(C)$$$ pairs. Therefore, if a solution exists, it will be found in time $$$\mathcal{O}(C)$$$. If there's no solution (we tried all pairs), it was the case that (roughly) $$$n^2 \le 3 \cdot (2C)$$$ (or something close to this).
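As a rough sketch (not the editorial's code), the early-exit brute force can look like this: remember one pair per sum and stop as soon as a disjoint repeat shows up. One can argue that if two disjoint equal-sum pairs exist at all, a disjoint repeat is eventually encountered.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find distinct indices x, y, z, w with a[x] + a[y] == a[z] + a[w],
// or an empty vector if none exist. The pigeonhole argument above bounds
// how many pairs we can see before some sum repeats, so the loop exits
// after O(min(n^2, C)) pairs in practice.
vector<int> findEqualSumPairs(const vector<long long>& a) {
    map<long long, pair<int, int>> seen;  // sum -> first pair producing it
    int n = a.size();
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            long long s = a[i] + a[j];
            auto it = seen.find(s);
            if (it == seen.end()) { seen[s] = {i, j}; continue; }
            auto [x, y] = it->second;
            if (x != i && x != j && y != i && y != j)
                return {x, y, i, j};  // two disjoint pairs with equal sum
        }
    return {};
}
```

For $$$a = [1, 2, 3, 4]$$$ it returns indices $$$(0, 3, 1, 2)$$$, since $$$1 + 4 = 2 + 3$$$.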

On Imakf, Codeforces Round #706, 5 years ago

Turns out I wrote the wrong thing then ):

I think there's a typo in this line:

"And here you go, the average-case complexity for a k-d tree query is actually $$$O(3^d * N)$$$."

Shouldn't it be $$$O(3^d \cdot \log N)$$$?

I thought we had all of February 5th (until 23:59 UTC) at least to register... ):

You might also consider that participants (accidental or intentional) who are subject to rating changes but decided not to solve anything or quit the contest will result in rating inflation for those who did solve something.

Change int check() to long long check() and you get AC :)

A few typos in the first line of the solution explanation for problem D. It should be:

Since sequence $$$b$$$ is non-decreasing and sequence $$$c$$$ is non-increasing, we need to find $$$max(c_1, b_n)$$$.

Replace bool[] arrays with vector<bool>


It's happening :D

See here for a nice explanation by iica

For problem E, Magolor's solution differs from Kostroma's solution in this input:

4 4
1 1
1 2
2 3
2 2
1 1
1 2
2 1
2 0

Output should be "No", since reflection is not allowed.

Because we are assuming that b is the maximum distance to a corner cell, which is equal to the maximum number in the list.

On csacademy, Round #74 (Div. 2), 8 years ago

I received an e-mail claiming the round was to be held at 15:05 UTC. To my surprise, one contest hour had already passed when I was getting ready to participate :(

ITESM Campus Querétaro (México)

ivanzuki

fabo3000

barrons.guillermo.sal


I have a question regarding your definition of what an extreme point is and how it behaves in the code.

For the polygon stabbing function, you find both the right and left extreme points. Let's say we have the following polygon (I have labeled the vertices starting from 0 in counterclockwise order) and the following line:

Since the line vector is horizontal, pointing in the positive x-axis direction, I would expect the extreme point in this direction to be the point at index 2 (coordinates: (7,3)), as there is no vertex in the polygon further to the right (or, as they define it equivalently in geomalgorithms, no other point of the polygon projected onto the line is further in the direction of the line vector). For the same vector in the opposite direction, I would expect the extreme point to be the point at index 0 (coordinates: (1,1)).

I tried running such an example using the polygon stabbing function presented here, but I get index 3 as the extreme point when the line vector points in the positive x-axis direction, and index 0 when it points in the negative x-axis direction.

Could you enlighten me on what I'm misunderstanding?

Also, could we use dot product instead of cross product to check whether some vertex i is above some vertex j? (particularly in the case when the direction is fixed)
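For what it's worth, with a fixed direction the extreme vertex is just the argmax of the dot product with the direction vector, so a plain $$$O(n)$$$ scan works (the polygon below is a made-up example; only indices 0 and 2 match the coordinates quoted above):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct P { long long x, y; };

long long dot(P a, P b) { return a.x * b.x + a.y * b.y; }

// Index of the vertex furthest in direction d, i.e. the one with the
// largest projection onto d. Ties keep the first such vertex.
int extreme(const vector<P>& poly, P d) {
    int best = 0;
    for (int i = 1; i < (int)poly.size(); i++)
        if (dot(poly[i], d) > dot(poly[best], d)) best = i;
    return best;
}
```

The logarithmic-time versions that binary search over the hull need cross-product orientation tests, but "furthest in direction $$$d$$$" itself is a dot-product statement, so for a fixed direction this linear scan is enough.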

Why is it allowed to wait in the start point?

The statement says: "After the show starts the dog immediately begins to run to the right to the first bowl."

Am I missing something?

Yes, I omitted that detail. Let's say you initialize left = width, right = 0. For every column of interest, update them accordingly if column c ended up having an unenlightened cell:

left = min(left, c)
right = max(right, c)

First, let's enumerate the columns of interest, because we believe that we may find a cell that has not been enlightened in them. Let's call this set of columns $$$C = [c_1, c_2, \ldots, c_{k + 1}]$$$.

Notice that $$$c_1 = 1$$$, since it's the leftmost column in the grid and there's a chance there is at least one empty cell there. As for the rest of the columns of interest, we cannot try every single one because the width of the grid can be up to $$$10^9$$$, so we should reason about which columns to pick. Notice that, when we are doing our binary search and evaluating whether it's possible to enlighten the whole grid at a time $$$t$$$, the leftmost position an initially enlightened cell at $$$(r, c)$$$ will be able to reach is $$$(r, c - t)$$$. Therefore, column $$$c - t - 1$$$ may have an unenlightened cell, so we check whether that is the case. Let's do this for the remaining $$$k$$$ initially enlightened cells ($$$c_2, c_3, \ldots, c_{k + 1}$$$), so we have a total of $$$k + 1$$$ columns to try for the leftmost one. The same idea can be applied to the rightmost column and the top and bottom rows.

How do we check whether a column $$$c$$$ has an unenlightened cell? Let's iterate over every initially enlightened cell (there are $$$k$$$ of them). For each of them, at a time $$$t$$$, you want to know the range of cells in column $$$c$$$ it is responsible for enlightening. Say that initially enlightened cell is at $$$(r', c')$$$. The leftmost cell it's going to be able to enlighten is $$$c' - t$$$. If $$$c' - t$$$ is at or to the left of $$$c$$$, it's obviously going to reach it, and what vertical range does it cover in this column? It's going to be $$$[r' - t, r' + t]$$$, because as fire propagates in one direction, it also propagates in the rest of the directions, so if it propagated $$$t$$$ units to the left, each of those $$$t$$$ columns covered will also cover $$$t$$$ cells up and down. Store the range $$$[l, r]$$$ this initially enlightened cell spanned for column $$$c$$$ in a list. Repeat for the rest of the $$$k - 1$$$ enlightened cells.

Now the only remaining step is: sort the list by $$$l$$$ and, in case of a tie, by $$$r$$$. Calculate the total range it covers by keeping track of the last endpoint covered, so as to avoid counting some segment more than once, and check whether $$$cells_{covered} < height_{grid}$$$. If that is the case, there was at least one unenlightened cell in that column.
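The sweep in that last step can be sketched like this, assuming rows are 1-indexed (names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Total number of rows of [1, height] covered by the (possibly
// overlapping) row ranges; compare against height to detect a gap.
long long coveredCells(vector<pair<long long, long long>> ranges, long long height) {
    // Clip each range to the column and drop the empty ones.
    vector<pair<long long, long long>> v;
    for (auto [l, r] : ranges) {
        l = max(l, 1LL);
        r = min(r, height);
        if (l <= r) v.push_back({l, r});
    }
    sort(v.begin(), v.end());
    long long covered = 0, lastEnd = 0;  // last row already counted
    for (auto [l, r] : v) {
        long long from = max(l, lastEnd + 1);  // skip rows counted before
        if (from <= r) covered += r - from + 1;
        lastEnd = max(lastEnd, r);
    }
    return covered;
}
```

If the returned value is smaller than the grid height, the column has at least one unenlightened cell.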


We don't iterate from $$$i = 1$$$ to $$$\max(a)$$$, instead it's up to $$$\sqrt{\max(a)}$$$. When you want to divide some $$$a_i$$$ into $$$k$$$ sets, each set will be of size $$$\lceil a_i / k \rceil$$$. Therefore $$$k \le \sqrt{a_i}$$$ or $$$\lceil a_i / k \rceil \le \sqrt{a_i}$$$. Thus, it's possible to check every case by checking up to the root. Since on each iteration we iterate over our array of size $$$n$$$, the time complexity of the solution is $$$\mathcal{O}(n \sqrt{\max(a)})$$$

Can someone elaborate on the following from problem E's editorial?

Countries as dangos. Nice!

Nevermind what I wrote, I thought you were referring to problem A.

Sigh, I was solving the wrong problem A the whole time. I was solving "change at most one letter" instead of "exactly one letter", even if that was in bold in the problem statement :(


Just a moment after I posted, I realized sorting was enough to assign contiguous mice to a hole.

As for the hole capacities, yep, I meant that (and I missed it's as simple as that!)

Thank you :)


How do you deal with hole capacities and the fact that contiguous mice are not necessarily assigned to the same hole?

Xellos, could you reupload the image you used to explain the solution for problem A?

This is my solution: http://pastebin.com/ei4w5zRU

Thank you!

Can you elaborate a bit more on setting dp[x][0] = 1 and dp[x][k+1] = 1?

On arsijo, Codeforces Round #371, 10 years ago

I am having the same problem.