Geothermal's blog

By Geothermal, 6 hours ago, In English

Since the editorial for today's Div. 1 hasn't been published yet, I thought I'd share solution sketches. Some of these won't include all of the specific details, but hopefully they should communicate the main ideas. (I didn't compete officially, but I got AC on all of the problems using the below approaches, so I'm fairly confident in correctness.) Feel free to leave questions below and I'll respond if I have time.

A — A Wonderful Contest

If any problem has $$$100$$$ subtasks, the answer is yes: we can achieve an arbitrary score by solving some of the other problems fully and by solving the right number of subtasks of the $$$100$$$-subtask problem. Otherwise, there is no way to achieve a score of 1, so the answer is no.
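As a quick sketch (assuming, hypothetically, that the input reduces to the list of subtask counts per problem), the check is one line:

```python
def arbitrary_score_possible(subtask_counts):
    # Per the argument above: every score is reachable iff some problem
    # has exactly 100 subtasks. (The input format here is an assumption.)
    return any(s == 100 for s in subtask_counts)
```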

B — Artistic Balance Tree

Note that each of the reversal operations swaps odd positions with odd positions and even positions with even positions, as $$$u-i$$$ and $$$u+i$$$ differ by $$$2i$$$ and thus have the same parity. Thus, when we are to mark an odd index, we can only mark an element originally at an odd position, and vice versa when we must mark an even index. We can also observe that by choosing the right $$$(u, y)$$$, we can mark any element whose position has the required parity.

Handle the operations on odd indices and even indices separately. Within each group, we can take a greedy approach: if we need to mark $$$k$$$ numbers, we should mark either the $$$k$$$ largest numbers or, if there are fewer than $$$k$$$ positive numbers, we should mark all the positive numbers. (Note also that if there are no positive numbers, we must mark the largest remaining number as long as $$$k \gt 0$$$.)
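The greedy for one parity class might look like the following sketch. This is a hypothetical helper: the original problem's I/O and the exact meaning of marking are abstracted away, so treat the signature and return value as assumptions.

```python
def choose_marks(values, k):
    # Greedy from the sketch above: given the values available at one
    # parity and the number k of marks required there, return the values
    # to mark. (Hypothetical helper, not the full solution.)
    ordered = sorted(values, reverse=True)
    positives = [v for v in ordered if v > 0]
    if len(positives) >= k:
        return ordered[:k]          # the k largest (all positive)
    if positives:
        return positives            # fewer than k positives: take them all
    return ordered[:1] if k > 0 else []  # no positives: forced to take the largest
```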

C — Median Partition

The shared median must be equal to the median of the entire array. One way to see this: if there are $$$m$$$ subarrays with the same median, the array contains $$$\frac{n-m}{2}$$$ elements less than the shared median, $$$m$$$ copies of the shared median, and $$$\frac{n-m}{2}$$$ elements greater than the shared median, which implies that the shared median is the median of the whole array.

Let $$$dp_i$$$ be the maximum number of subarrays we can create using the first $$$i$$$ elements such that each subarray has median equal to the median of the entire array. Initially, $$$dp_0 = 0$$$, and we want to find $$$dp_n$$$. To transition from a given $$$i$$$, iterate over $$$j \gt i$$$ in increasing order while maintaining the number of elements among $$$a_{i+1}, \cdots, a_j$$$ that are less than and greater than the median. Using these counts, we can tell whether this subarray has median equal to the median of the whole array; if it does, we can set $$$dp_j = \max(dp_j, dp_i + 1).$$$ With $$$O(n)$$$ states and $$$O(n)$$$ transitions per state, this gives an $$$O(n^2)$$$ solution.
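A direct implementation of this DP, as a sketch under the assumption that subarrays have odd length and the median is the middle element in sorted order:

```python
def max_median_partition(a):
    # dp[i] = max number of valid subarrays covering the first i elements
    # (-1 means the prefix cannot be partitioned). M is the whole-array
    # median, assuming n is odd so the median is the middle sorted element.
    n = len(a)
    M = sorted(a)[(n - 1) // 2]
    dp = [-1] * (n + 1)
    dp[0] = 0
    for i in range(n):
        if dp[i] < 0:
            continue
        less = greater = 0
        for j in range(i, n):          # next subarray is a[i..j] (0-indexed)
            if a[j] < M:
                less += 1
            elif a[j] > M:
                greater += 1
            length = j - i + 1
            # an odd-length subarray has median M iff at most (length-1)/2
            # elements lie strictly on each side of M
            if length % 2 == 1 and less <= length // 2 and greater <= length // 2:
                dp[j + 1] = max(dp[j + 1], dp[i] + 1)
    return dp[n]
```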

D — Permutation Construction

The key claim is that we should place the elements in ascending order by suffix sum, i.e., we should place $$$1$$$ at the position $$$i$$$ such that the sum of $$$a_i, \cdots, a_n$$$ is minimized, and so on.

To understand why this is true, note that an inversion between positions $$$i$$$ and $$$j$$$ contributes $$$s_i - s_j$$$ to the summation, where $$$s_k$$$ is the sum of the suffix of $$$a$$$ starting at position $$$k$$$. Thus, $$$s_i$$$ appears with a positive sign for every $$$j \gt i$$$ with $$$p_j \lt p_i$$$ and with a negative sign for every $$$j \lt i$$$ with $$$p_j \gt p_i$$$. Suppose there are $$$m$$$ elements less than $$$p_i$$$ to the right of $$$i$$$. Then, there are $$$n-i-m$$$ elements greater than $$$p_i$$$ to the right of $$$i$$$, so there are $$$(n-p_i) - (n-i-m) = i+m-p_i$$$ elements greater than $$$p_i$$$ to the left of $$$i$$$. Thus, the total number of times $$$s_i$$$ is added to the sum is $$$m - (i +m - p_i) = p_i - i.$$$

The quantity we seek to maximize is therefore $$$\sum_i (s_i p_i - s_i i).$$$ The $$$\sum_i s_i i$$$ term is fixed, so all we can do is maximize $$$\sum_i s_i p_i$$$, which by the rearrangement inequality means assigning the largest values of $$$p_i$$$ where $$$s_i$$$ is largest, as desired.
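A sketch of the construction (1-indexed permutation values; ties in suffix sums can be broken arbitrarily, since equal $$$s_i$$$ contribute the same total regardless of which tied position gets the larger $$$p_i$$$):

```python
def construct_permutation(a):
    n = len(a)
    # s[i] = a[i] + a[i+1] + ... + a[n-1] (suffix sums, 0-indexed)
    s = [0] * n
    run = 0
    for i in range(n - 1, -1, -1):
        run += a[i]
        s[i] = run
    # place 1 where the suffix sum is smallest, 2 at the next smallest, etc.
    order = sorted(range(n), key=lambda i: s[i])
    p = [0] * n
    for rank, i in enumerate(order, start=1):
        p[i] = rank
    return p
```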

E — Seek the Truth

Take $$$a = 0$$$, then insert $$$f(0)$$$ into $$$S$$$. If $$$k = 1$$$, then $$$f(0) = 0$$$ and the size of $$$S$$$ will be 1; otherwise, $$$f(0) = c$$$ and the size of $$$S$$$ will be 2. This is enough to determine whether $$$k = 1$$$, i.e., whether the operation is AND.

If the operation is AND, insert $$$f(2^m)$$$ into $$$S$$$ for all $$$0 \leq m \lt n.$$$ If $$$c$$$ contains $$$2^m$$$ in its binary representation, $$$f(2^m) = 2^m$$$, so $$$|S|$$$ will increase; otherwise, $$$f(2^m) = 0$$$, so $$$|S|$$$ will remain the same. This is enough to determine $$$c$$$ in a total of $$$n+1$$$ queries.
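A local simulation of this bit-recovery step, mocking the hidden constant $$$c$$$ rather than talking to a real interactor:

```python
def recover_c_for_and(n, c):
    # Simulates the AND case: after the first query, S = {f(0)} = {0}.
    # Inserting f(2^m) = 2^m & c grows S exactly when bit m of c is set.
    S = {0}
    recovered = 0
    for m in range(n):
        before = len(S)
        S.add((1 << m) & c)   # insert f(2^m)
        if len(S) > before:   # |S| grew, so bit m of c is set
            recovered |= 1 << m
    return recovered
```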

Otherwise, since $$$S$$$ contains $$$0$$$ and $$$c$$$, we can use $$$n$$$ queries of the second type to binary search for $$$c$$$. Afterwards, all that remains is to determine whether the operation is OR or XOR. We split into two cases. First, if $$$c$$$ is a power of two, let $$$m$$$ be the bitwise OR of $$$c$$$ and some other power of two, say $$$2^k$$$. Then, we can insert $$$f(m)$$$ into $$$S$$$. This will equal $$$m$$$ if the operation is OR or $$$2^k$$$ if the operation is XOR. Since $$$m \gt 2^k$$$, we can use the second query to check if $$$m \in S$$$, which solves the problem.

If $$$c$$$ is not a power of two, let $$$2^k$$$ be a power of two contained in the binary representation of $$$c$$$. Insert $$$f(2^k)$$$ into $$$S$$$. This will equal $$$c$$$ if the operation is OR, but if the operation is XOR, it will equal neither $$$0$$$ nor $$$c$$$, so a new element will be added to $$$S$$$ if and only if the operation is XOR. Thus, checking the number of elements after this operation is sufficient to solve the problem.

F — Building Tree

Let's think about how to construct the new graph conceptually. Following Kruskal's algorithm, we should first connect all pairs of vertices that are connected without using an edge of weight $$$0$$$. Then, we should add edges connecting still-disconnected pairs of vertices that are connected without using an edge of weight $$$1$$$, and so on.

How can we do this? The rough idea is to group the edges of the original graph by weight and build a segment tree over the weights. We'll DFS through this tree and use a DSU with rollbacks to iteratively construct the connectivity state of the original graph with edges of weight $$$w$$$ excluded, for all $$$w$$$ in increasing order. As we do so, we'll add edges to the new graph.

Specifically, build a DSU with rollbacks storing the standard information as well as a "representative" for each vertex: conceptually, the representative is the connected component in the new graph corresponding to the current component of the original graph. Initially, vertices that appear in the new graph are their own representatives, and other vertices have no representative.
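A minimal DSU-with-rollback sketch (union by size, no path compression so unions can be undone; the per-component representative bookkeeping described above is omitted for brevity):

```python
class RollbackDSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.history = []            # root attached by each union (or None)

    def find(self, v):
        # no path compression: the forest must stay intact for rollbacks
        while self.parent[v] != v:
            v = self.parent[v]
        return v

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            self.history.append(None)
            return False
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a           # attach the smaller root under the larger
        self.size[a] += self.size[b]
        self.history.append(b)
        return True

    def rollback(self):
        # undo the most recent union (a no-op union is also popped)
        b = self.history.pop()
        if b is not None:
            a = self.parent[b]
            self.parent[b] = b
            self.size[a] -= self.size[b]
```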

Then, start at the root of the segment tree. When we're at a node corresponding to weights $$$[L, R],$$$ let $$$M = \lfloor \frac{L+R}{2} \rfloor$$$ and add all edges of weights $$$[M+1, R]$$$ to the DSU. As we do so, whenever we merge two components in the DSU, we should check whether their representatives are in different components of the new graph and, if so, add an edge between them in the new graph. Initially, the weight of the edges we're adding to the new graph will be $$$0$$$; we'll update this as we continue our DFS. Then, we should DFS into the node corresponding to weights $$$[L, M]$$$ and recurse. Afterwards, roll back the DSU to remove the edges of weights $$$[M+1, R]$$$, add the edges of weights $$$[L, M]$$$, and DFS into the node corresponding to weights $$$[M+1, R].$$$

Once we reach a leaf node of the segment tree, we've added all the edges we possibly can in the new graph without crossing edges in the old graph of the desired weight, so we should increase the weight of the edges we're adding to the new graph by 1 before continuing with our DFS.

At the end, we can check if the new graph is connected before outputting the answer. The total complexity is $$$O(n \log^2 n)$$$, since operations on DSU with rollbacks are $$$O(\log n)$$$ and each edge gets processed $$$O(\log n)$$$ times.

G — Statistics on Tree

This problem can be solved using small-to-large merging. Iterate over the LCA $$$v$$$ of the vertices in our desired pair, and note that our path can be split into up to two paths going down from the LCA. The size of the largest component after removing a path is then one of two quantities: the maximum subtree size left after removing the edges on one of the two downward paths (I'll call this maximum the "score" of each downward path), or the size of the component containing the LCA, all vertices outside the subtree of the LCA, and all vertices in the subtree of the LCA but outside the subtrees containing the downward paths.

Call the child of $$$v$$$ with the largest subtree size the "heavy child" and call its subtree the "heavy subtree", and call the subtrees of other children of $$$v$$$ "light subtrees". To handle paths containing a vertex in the heavy subtree, iterate over the vertices in the light subtrees. For a fixed such vertex $$$u$$$, the value of $$$u$$$ when paired with a vertex in the heavy subtree will be the maximum of the score of the path from $$$v$$$ to $$$u$$$, the score of the path in the heavy subtree, or $$$N$$$ minus the sum of the sizes of the heavy subtree and the light subtree containing $$$u$$$. This reduces to adding a pair with value $$$\max(k, x)$$$ for every heavy vertex with score $$$x$$$, where $$$k$$$ is the fixed part of the value determined by $$$u$$$. We can do this with careful bookkeeping: add the number of heavy vertices with score under $$$k$$$ to the answer for value $$$k$$$, and use a lazy approach to handle heavy vertices with score at least $$$k$$$.

To handle paths containing vertices in two distinct light subtrees, note that the value of such a path will always be $$$N$$$ minus the sizes of the two light subtrees (since the size of the heavy subtree alone will be greater than the score of any vertex in a light subtree). We can group the light subtrees by size and iterate over all pairs of sizes to handle this case. Since there can be at most $$$O(\sqrt{n})$$$ distinct subtree sizes, this step requires work at most $$$O(n \sqrt{n})$$$ summed over all $$$v$$$.
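The grouping-by-size step might be sketched as follows. This is a hypothetical helper: given the light subtree sizes under one LCA and $$$N$$$, it counts, for each achievable value $$$N - s_1 - s_2$$$, the number of pairs of distinct light subtrees producing it (the multiplicities from choosing a vertex inside each subtree are omitted).

```python
from collections import Counter

def light_pair_values(light_sizes, N):
    # group light subtrees by size; a pair drawn from two distinct light
    # subtrees of sizes s1, s2 yields a path of value N - s1 - s2
    cnt = Counter(light_sizes)
    sizes = sorted(cnt)
    values = Counter()
    for i, s1 in enumerate(sizes):
        for s2 in sizes[i:]:
            if s1 == s2:
                pairs = cnt[s1] * (cnt[s1] - 1) // 2
            else:
                pairs = cnt[s1] * cnt[s2]
            if pairs:
                values[N - s1 - s2] += pairs
    return values
```

Since the sizes sum to at most $$$n$$$, there are $$$O(\sqrt{n})$$$ distinct entries in `sizes`, which is what bounds the double loop.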

H — Counting Sort?

Fixing the starting array $$$b$$$, let $$$f(f(b)) = [d_1, d_2, \cdots, d_n].$$$ Note that $$$d_i$$$ is the number of nonzero elements occurring $$$i$$$ times in $$$b$$$. Thus, $$$\sum i d_i \leq n.$$$ With $$$n \leq 50$$$, there are only about $$$1.3 \cdot 10^6$$$ such arrays, meaning it's tractable to enumerate all of them and compute $$$g(b)$$$ for each of them.
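These arrays $$$d$$$ are in bijection with integer partitions of each $$$m \leq n$$$ (take $$$d_i$$$ to be the number of parts equal to $$$i$$$), so we can sanity-check the $$$1.3 \cdot 10^6$$$ figure:

```python
def count_d_arrays(n):
    # p[m] = number of partitions of m (allowing parts up to n suffices,
    # since m <= n); each partition corresponds to one array d with
    # sum of i * d_i equal to m
    p = [0] * (n + 1)
    p[0] = 1
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            p[m] += p[m - part]
    return sum(p)                    # arrays with sum of i * d_i <= n
```

For $$$n = 50$$$ this gives $$$1{,}295{,}971 \approx 1.3 \cdot 10^6$$$, matching the estimate above.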

Now, we need to count the number of valid $$$b$$$ satisfying $$$f(f(b)) = d$$$ for all possible $$$d$$$. To do so, let $$$dp_{i, d}$$$ be the number of ways to achieve array $$$d$$$ if we've assigned all occurrences of values $$$i+1, \cdots, n$$$ in $$$b$$$ (and $$$d$$$ is equal to $$$f(f(b))$$$ if we assume all unassigned values are set to $$$0$$$). Then, we can transition by iterating over the number of elements set to $$$i$$$. The number of elements we can choose from is equal to the number of elements of $$$r$$$ greater than or equal to $$$i$$$ minus $$$\sum i d_i.$$$

The total number of transitions from a given state is bounded above by $$$n - \sum i d_i$$$. Thus, if we let $$$cnt_k$$$ be the number of arrays $$$d$$$ satisfying $$$\sum i d_i \leq k$$$, the total number of steps in our DP is bounded by $$$n (\sum_{k = 1}^n cnt_k)$$$. That summation comes out to about $$$8 \cdot 10^6$$$ when $$$n = 50$$$, which is fast enough (with some care required to make sure the constant factor isn't too bad).

Afterwards, we need to handle cases where $$$g(b) \leq 2$$$ separately. This is straightforward since there are only $$$n+1$$$ such arrays in total (the zero array and all arrays with $$$1$$$ in one position and $$$0$$$ elsewhere).


»
5 hours ago

Thanks! I couldn’t sleep without solving D

»
4 hours ago

I think the solution to E can be simplified somewhat. Here is my solution:

First, we determine whether $$$k=1$$$. We use $$$a=0$$$, then add $$$f(0)$$$. $$$f(0)=0 \iff k=1$$$, so $$$|S|=1 \iff k=1$$$.

Now we guarantee $$$c\in S$$$. If $$$k\neq 1$$$, $$$f(0)=c$$$ already. Otherwise, $$$k=1$$$ and we add $$$f(2^n-1)=c$$$. We can simply binary search for $$$c$$$.

If $$$k=1$$$, we output both $$$k$$$ and $$$c$$$. Otherwise, we must determine whether $$$k=2$$$ or $$$k=3$$$. At this point, $$$S=\{0,c\}$$$. We have two cases.

  • If $$$c \neq 2^n-1$$$, then we add $$$f(2^n-1)$$$. $$$(2^n-1) \mid c =2^n-1$$$ and $$$(2^n-1) \oplus c \lt 2^n-1$$$, so one query of the second type suffices.
  • If $$$c=2^n-1$$$, observe that $$$k=2\implies f(x)=2^n-1$$$ regardless of $$$x$$$. Thus we can add any $$$f(x)$$$ where $$$(2^n-1)\oplus x \not\in S$$$ and $$$|S|$$$ will not change if $$$k=2$$$.
»
113 minutes ago

I think the solution to F can be simplified using Divide and Conquer with DSU Rollback, avoiding the explicit Segment Tree construction.

1. Definitions

Let $$$G = (V, E)$$$ be the original graph. Define the subset of edges restricted by weights in the range $$$[l, r]$$$:

$$$E_{[l, r]} = \{ e \in E \mid l \le w_e \le r \}$$$

2. D&C Invariant

Define a recursive procedure $$$\text{Solve}(l, r)$$$. Invariant: upon entering $$$\text{Solve}(l, r)$$$, the main DSU contains exactly the edges $$$E \setminus E_{[l, r]}$$$.

3. Transitions

For a state where $$$l \lt r$$$, let $$$mid = \lfloor \frac{l+r}{2} \rfloor$$$.

  • Left Branch: add edges $$$E_{[mid+1, r]}$$$ to the DSU, recurse $$$\text{Solve}(l, mid)$$$, then roll back the DSU.
  • Right Branch: add edges $$$E_{[l, mid]}$$$ to the DSU, recurse $$$\text{Solve}(mid+1, r)$$$, then roll back the DSU.

4. Base Case & Implicit Kruskal's

When $$$l = r = x$$$, we reach a leaf:

  • The DSU currently contains $$$E \setminus E_{[x, x]}$$$ (all edges except those with weight $$$x$$$).
  • Since the D&C traversal visits leaves strictly from left to right ($$$x = 0, 1, 2, \dots$$$), $$$x$$$ represents the strictly increasing $$$\text{mex}$$$ cost.
  • Maintain a separate $$$\text{DSU}_{new}$$$ for the new graph. Let $$$R(u)$$$ be the representative of vertex $$$u$$$ in the new graph. For every pair $$$(u, v)$$$ being merged in the main DSU, if $$$R(u) \neq \emptyset$$$ and $$$R(v) \neq \emptyset$$$, add an edge $$$(R(u), R(v))$$$ with cost $$$x$$$ to $$$\text{DSU}_{new}$$$.

5. Complexity

Each edge $$$e \in E$$$ is pushed to the DSU $$$\mathcal{O}(\log W)$$$ times, once per level of the recursion on the branches that exclude its weight. Using union by rank for rollbacks, the connectivity state is maintained in:

$$$\mathcal{O}(m \log W \log n)$$$

This formulation is mathematically equivalent to the editorial but eliminates the overhead of explicit segment tree nodes.